• 0 Posts
• 92 Comments
Joined 2 years ago
Cake day: July 3rd, 2023

    1. You are going to find people who have done both. A lot of NAS devices run fairly low-powered CPUs, so separating things into two devices can get you more compute power than a single device. For example, an old-as-the-hills drive bay may cost next to nothing, and pairing it with your “last” desktop will get you a lot more storage and compute than a $1500 modern NAS, but it’ll take up more space, cost more in electricity to run, and make more fan noise. This is the route I went. A modern NAS should be able to run what you listed, though.
    2. TrueNAS SCALE is all about storage, but it also lets you run containers. Proxmox is all about virtualization, but you can then run a storage solution inside a VM or container. It’s not the kind of thing you’re going to get a single right answer for, because either way can work. Both are well-documented, capable solutions. I have tried both at times, but I had a lot more experience with Proxmox by the time I deployed TrueNAS, so I stuck with Proxmox and use a TrueNAS box (bare metal) for backups. It really is a matter of preference.
    3. If you have a MiniPC and NAS as separate devices, you will want to set up a network share, so the MiniPC can seed the copy that’s on the NAS. My seeding, Jellyfin, Plex, etc. all work against a virtual hard drive mounted in a separate container from the services. Each of the services sees that drive as a network share, despite being hosted on the same physical hardware.
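
    As a sketch of the share setup in point 3 (assuming the NAS exports the media directory over NFS — the hostname and paths here are placeholders, not anything from my actual setup), the mount on the MiniPC side could be a single /etc/fstab line:

    ```
    # /etc/fstab on the MiniPC -- mount the NAS media export at boot.
    # "nas.local" and both paths are placeholders for your own setup.
    nas.local:/mnt/tank/media  /mnt/media  nfs  defaults,_netdev,soft  0  0
    ```

    The _netdev option keeps the system from trying to mount the share before the network is up; an SMB/CIFS share would work the same way with a different filesystem type and credentials.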

  • I didn’t know anything about Docker when I set up my Nextcloud years ago, so I ran it as a snap on bare metal. Man, it’s gotten so much better! It used to really suck. Like, simple file transfers just didn’t work half the time, so I’d be retrying the same thing over and over… A few years ago, I literally migrated it from bare metal to a VM, but kept the exact same install. I have so much crap on it now, I think I’ll never bother switching it over to Docker, just because of the inconvenience. I know the snap version can run off a local hostname; you just have to add it to the trusted_domains setting. Might be the same in the Docker image?
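
    For reference, the trusted_domains change on the snap install is a one-line config edit via the occ tool the snap bundles (the hostname here is just an example):

    ```shell
    # Add a local hostname as the second trusted domain (index 0 is
    # usually localhost). "mynas.local" is a placeholder hostname.
    sudo nextcloud.occ config:system:set trusted_domains 1 --value=mynas.local
    ```

    On the Docker image the equivalent would be running occ inside the container, but I’d check the image docs rather than take my word for the exact invocation.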

  • I pretty much agree with all of this… I have Mint XFCE installed on a thumb drive. (Not an installER, an installED system.) I can boot it on basically any computer that still supports legacy BIOS boot, and I’ve done so on a Dell Venue Pro tablet (Atom CPU, 2 GB RAM). Had a bastard of a time getting it to boot, but it ran better than the onboard Windows 8.1. This was post-Covid. Of all the systems I’ve run it on, one didn’t have WiFi, and one took a bunch of messing around to get the audio to switch between speakers and headphones reliably. But keep in mind, this is the exact same copy of the OS across half a dozen systems. I’ve also upgraded it over five years or so…

  • Everyone is going to tell you to use dd: dd if=/dev/oldsdcard of=/dev/newsdcard

    Personally, I have actually eaten an entire system by getting the wrong /dev names for the input and output files.

    GParted lets you copy whole partitions and resize them, and it’s graphical. I have yet to destroy my computer using GParted, but I’ve definitely done so with dd. (I’m also an idiot though, so…) Edit: GParted will also let you resize the partition on the new SD card up to the bigger size! However, it is actually possible to break your system in GParted too, so make sure you aren’t deleting partitions and stuff in there.
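
    Since the dd footgun above is getting the /dev names wrong, a guard-railed version of the same clone might look like this (a sketch — /dev/sdX and /dev/sdY are placeholders you’d replace after checking lsblk, not real device names):

    ```shell
    #!/bin/sh
    # Sketch: clone one SD card to another, but refuse to run unless
    # both names are actually block devices. Placeholders below --
    # check "lsblk" and fill in the real devices before running.
    SRC=/dev/sdX   # source card (placeholder)
    DST=/dev/sdY   # destination card (placeholder)

    safe_clone() {
        src="$1"; dst="$2"
        if [ ! -b "$src" ]; then
            echo "source $src is not a block device" >&2
            return 1
        fi
        if [ ! -b "$dst" ]; then
            echo "destination $dst is not a block device" >&2
            return 1
        fi
        # bs=4M speeds up the copy, conv=fsync flushes writes to the
        # card, status=progress shows how far along you are.
        dd if="$src" of="$dst" bs=4M conv=fsync status=progress
    }
    ```

    This doesn’t stop you from swapping two valid devices, of course — nothing does, which is the whole argument for GParted’s point-and-click view of the disks.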