  • I would go a step further and say that any time one of these MAC systems has to resort to user interaction to do its job, it’s a straight up failure case: the system simply didn’t have enough information to do its job, ended up doing no better than a blanket “block everything” config, and is asking the user to do 100% of the heavy lifting of determining what should happen.

    So, when I hear

    If someone is lazy or not knowledgeable enough to make the right decision…No automated system can protect [them].

    I hear: “every access control system is fundamentally broken”. Which is fine, maybe that’s true; there’s a reason social engineering is so useful. So then all these systems should prioritize streamlining that failure case as much as possible: tell the user what is accessing what, when, and how, and then make it trivial to grant or deny access as quickly and easily as possible, whether temporarily (with well-defined limits, roughly sketched below), permanently, or even ephemerally (using CoW/containerization/overlay fs).

    Every other system you’re comparing SELinux to, AFAIK, handles this case better, which is why users tend to prefer them.

    For the record, I’m not arguing that SELinux is bad at the actual access control part, I’m only answering why people don’t like using it, which is how it handles the failure case part. Now it’s been a while since I’ve used SELinux and I’ve never used setroubleshooter, but if you tell me it actually streamlines all of this to be smoother than every other tool, then I’ll install it tonight!
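
    To make the “temporary grant with well-defined limits” idea concrete, here’s a rough Python sketch of the behavior I mean. The names and structure are entirely made up (this is not SELinux’s or any other tool’s actual API); it’s just the shape of the prompt flow I’d want:

    ```python
    import time

    # Hypothetical sketch -- not any real MAC tool's API, just the idea of
    # "grant this, but only within well-defined limits".
    ALLOW, DENY, ASK = "allow", "deny", "ask"

    class GrantCache:
        def __init__(self):
            # (subject, obj, action) -> (verdict, expiry timestamp or None for permanent)
            self._grants = {}

        def grant(self, subject, obj, action, verdict=ALLOW, ttl=None):
            """Record a decision; ttl in seconds makes it temporary, None makes it permanent."""
            expires_at = None if ttl is None else time.monotonic() + ttl
            self._grants[(subject, obj, action)] = (verdict, expires_at)

        def check(self, subject, obj, action):
            """Return a cached allow/deny if it's still live, otherwise fall back to asking."""
            entry = self._grants.get((subject, obj, action))
            if entry is None:
                return ASK
            verdict, expires_at = entry
            if expires_at is not None and time.monotonic() > expires_at:
                del self._grants[(subject, obj, action)]  # limit hit: back to asking the user
                return ASK
            return verdict

    # e.g. let a backup tool read ~/Documents for the next hour only
    cache = GrantCache()
    cache.grant("backup-tool", "/home/me/Documents", "read", ALLOW, ttl=3600)
    print(cache.check("backup-tool", "/home/me/Documents", "read"))  # allow
    print(cache.check("backup-tool", "/home/me/.ssh", "read"))       # ask
    ```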


  • How do you know when you’re letting through a valid access, an unnecessary one that could be a vulnerability, and an actively malicious one?

    I don’t think anyone is saying throw out all access control, they’re just saying SELinux adds too much unproductive friction for everyday usage. You said it takes 15m to troubleshoot. But that’s not a one-time thing, that’s 15m that scales with the number of new programs and updates you’re running. And 90% of people aren’t even going to be able to tell when they’re looking at a malicious access if they’re in the habit of always working around whatever blocks show up.



  • If you are familiar with the concept of an NP-complete problem, the weights are just one possible solution.

    The Traveling Salesman Problem is probably the easiest analogy to make. It’s as though we’re all trying to find the shortest path through a bunch of points (e.g. towns), and when someone says “here is a path that I think is pretty good”, that is analogous to sharing network weights for an AI. We can then all openly test that solution against other solutions and determine which is “best” (there’s a toy example after this comment).

    What they aren’t telling you is whether people traveling that path somehow benefits them (maybe they own all the gas stations along it, or maybe they’ve hired highwaymen to rob people on it). And figuring out whether that’s the case in a hyper-dimensional space is non-trivial.
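
    Here’s a toy Python version of the analogy (towns and coordinates made up). The point is that scoring and comparing a proposed tour, like evaluating shared weights, is cheap and open to everyone, while exhaustively searching for the guaranteed best answer is what blows up:

    ```python
    import itertools, math

    # Made-up towns; any (x, y) points would do.
    towns = {"A": (0, 0), "B": (1, 5), "C": (4, 1), "D": (6, 4), "E": (3, 7)}

    def tour_length(order):
        """Total round-trip distance visiting the towns in the given order."""
        legs = zip(order, order[1:] + order[:1])
        return sum(math.dist(towns[a], towns[b]) for a, b in legs)

    # Someone publishes a path they claim is "pretty good" -- anyone can verify that claim cheaply.
    proposed = ("A", "C", "D", "E", "B")
    print(f"proposed tour: {tour_length(proposed):.2f}")

    # Finding the guaranteed best tour means checking every ordering: fine for 5 towns,
    # hopeless at scale -- that's the part that stays hard, like searching the space of weights.
    best = min(itertools.permutations(towns), key=tour_length)
    print(f"best tour:     {tour_length(best):.2f} via {best}")
    ```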




  • It’s not sunk cost, dude. We agreed that $120 will get them 5 years of service that meets their needs. Even if they switch to jellyfin after 5 years, they still got their money’s worth.

    It’s only sunk cost if they are worse off than if they had switched earlier. I guess if you’re arguing that they would still have that $120 if they switched today, I would argue they should still pay that $120 toward jellyfin’s development. And that’s assuming they have time to switch to jellyfin AND it fits 100% of their use cases, either of which could be untrue.


  • Or Plex currently does everything they need it to, and $120 for 5+ years of keeping that going without any interruption of service is very reasonable. In the meantime, jellyfin will only get better and there might even be other options available by then.

    Stop trying to make the issue black and white, one-size-fits-all. There are perfectly legitimate reasons for people to use both Plex and Jellyfin.


  • So, when you say crippled kernel, do you actually mean you tweaked the kernel params/build to the point that it failed to boot? Or do you just mean you messed up some package config to the point that the normal boot sequence didn’t get you to a place you knew how to recover from, and you needed to reinstall from scratch?

    I think I’m past the point where I need to do a full reinstall to recover from my mistakes. As long as I get a shell, I can usually undo whatever I did. I have btrfs+timeshift also set up, but I’ve never had to use it.







  • It seems like the issue here is, users want to be spoken to in colloquial language they understand, but any document a legal entity produces MUST be in unambiguous “legal” language.

    So unless there’s a way to write a separate “unofficial FAQ” with what they want to say, they are limited to what they legally have to say.

    And maybe that’s a good thing. Maybe now they need to create a formal document specifying in the best legalese exactly what they mean when they say they “will never sell your data”, because if there’s any ambiguity around it, then customers deserve for them to disambiguate. Unfortunately, it’s probably not going to read as quick and catchy as an ambiguous statement.



  • Afaik the cookie policy on your site is not GDPR compliant, at least as it is currently worded. If all cookies are “technically necessary” for the function of the site, then I think all you need to do is say that. (I think for a wiki it’s acceptable to require clients to allow caching of image data, so your server doesn’t have to pay for more bandwidth.)


  • My recommendation would be to have two machines: new hw for all your services, and the old hw for your NAS. Each could run whatever OS you’re comfortable with. Most everything on the services machine could be in docker configs, including network mount points to the NAS. You might be able to get away with using the 1080TI in the services box depending on what all you want to do (AI stuff or newer stream transcoding requirements may require newer hw).

    Moving the data from the old NAS to a new one without new disks will be a challenge, yes.

    I have a TrueNAS box and used jails for services. I recently set up a debian box separately, and am switching from jails on truenas to docker on debian. Wish I had done this from the start.