Curious to know what the experiences are for those who are sticking to bare metal. Would like to better understand what keeps such admins from migrating to containers, Docker, Podman, Virtual Machines, etc. What keeps you on bare metal in 2025?
Are you concerned about your self-hosted bare metal machine being a single point of failure? Or, are you concerned it will be difficult to reproduce?
Considering I have a full backup, all services are Arch packages and all important data is on its own drive, I’m not concerned about anything
I’m doing this on a couple of machines. Only running NFS, Plex (looking at a Jellyfin migration soon), Home Assistant, LibreNMS and some really small other stuff. Not using VMs or LXC due to low-end hardware (pi and older tiny pc). Not using containers due to lack of experience with it and a little discomfort with the central daemon model of Docker, running containers built by people I don’t know.
The migration path I’m working on for myself is changing to Podman quadlets for rootless, more isolation between containers, and the benefits of management and updates via Systemd. So far my testing for that migration has been slow due to other projects. I’ll probably get it rolling on Debian 13 soon.
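For anyone curious what that looks like, here is a minimal quadlet sketch (service name, image, paths and port are all placeholders); dropped into ~/.config/containers/systemd/ for a rootless setup, systemd generates the service from it:

    # ~/.config/containers/systemd/jellyfin.container  (example path, rootless user quadlet)
    [Unit]
    Description=Jellyfin media server (placeholder service)

    [Container]
    Image=docker.io/jellyfin/jellyfin:latest
    Volume=/home/user/jellyfin/config:/config
    Volume=/home/user/media:/media:ro
    PublishPort=8096:8096

    [Service]
    Restart=on-failure

    [Install]
    WantedBy=default.target

After a systemctl --user daemon-reload, it behaves like any other unit: systemctl --user start jellyfin.service to run it, journalctl --user -u jellyfin for logs.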
Containers run on “bare metal” in exactly the same way other processes on your system do. You can even see them in your process list FFS. They’re just running in different cgroups that limit access to resources.
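A quick way to see that for yourself, assuming a host with Docker installed (the image and container name here are just examples):

    docker run -d --name demo nginx:alpine
    pgrep -a nginx                                                 # the container's nginx processes show up on the host
    cat /proc/$(docker inspect -f '{{.State.Pid}}' demo)/cgroup    # ...they're just parked in their own cgroup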
Yes, I’ll die on this hill.
But, but, docker, kubernetes, hyper-scale convergence and other buzzwords from the 2010’s! These fancy words can’t just mean resource and namespace isolation!
In all seriousness, the isolation provided by containers is significant enough that administration of containers is different from running everything in the same OS. That’s different in a good way though, I don’t miss the bad old days of everything on a single server in the same space. Anyone else remember the joys of Windows Small Business Server? Let’s run Active Directory, Exchange and MSSQL on the same box. No way that will lead to prob… oh shit, the RAM is on fire.
…oh shit, the RAM is on fire.
The RAM. The RAM. The 🐏 is on fire. We don’t need no water let the mothefuxker burn.
Burn mothercucker, burn.
(Thanks phone for the spelling mistakes that I’m leaving).
kubernetes
Kubernetes isn’t just resource isolation; it encourages splitting services across hardware in a cluster. So you’ll get more latency than with VMs, but you get to scale the hardware much more easily.
Those terms do mean something, but they’re a lot simpler than execs claim they are.
I love using it at work. It’s a great tool to get everything up and running, kind of like Ansible. Paired with containerization, it can make applications more “standard” and easy to spin back up.
That being said, for a home server it feels like overkill. I don’t need my resources spread out so far. I don’t want to keep updating my Kubernetes and container setup with each new iteration. It’s just not fun (to me).
Move over, bud. That’s my hill to die on, too.
Learning this fact is what got me to finally dockerize my setup
Speak English, doctor! But really, is this a fancy way of saying it’s OK to Docker all the things?
I use Raspberry Pi 4 with 16GB SD-card. I simply don’t have enough memory and CPU power for 15 separate database containers for every service which I want to use.
Databases on SD cards are a nightmare for SD card lifetimes. I would really recommend getting at least a USB SSD instead if you want to keep it compact.
Your SD card will die suddenly someday in the near future otherwise.
Thank you for your advice. I do use an external hard drive for my data.
So, are you running 15 services on the Pi 4 without containers?
I see. Are you the only user?
No.
Is your favorite color purple?
For me it’s lack of understanding, usually. I haven’t sat down and really learned what Docker is/does. And when I tried to use it once, I ended up with errors (thankfully they all seemed contained by the Docker container), but I just haven’t gotten around to looking into it beyond seeing suggestions to install, say, Pihole in it. Pretty sure I installed Pihole outside of one. Jellyfin outside, copyparty outside, and something else I’m forgetting at the moment.
I was thinking of installing a chat app in one, but I put off that project because I got busy at work and it’s not something I normally use.
I guess I just haven’t been forced to see the upsides yet. But I’m always wanting to learn.
containerisation is to applications as virtual machines are to hardware.
VMs share the same CPU, memory, and storage on the same host.
Containers share the same binaries in an OS.
When you say binaries, do you mean locally stored directories, kind of like what Lutris or Steam would do for a Windows game? (Create a fake C:\ )
Not so much a fake one but overlay the actual directory with specific needed files for that container.
Take the Linux lib directory. It exists on the host and has Python 3.12 installed. Your Docker container may need Python 3.14, so an overlay directory is created that redirects calls to /lib/python to /lib/python3.14 instead of the regular symlinked /lib/python3.12.
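The mechanism underneath is an overlay filesystem. A rough standalone illustration (paths are made up, run as root):

    mkdir -p /tmp/lower /tmp/upper /tmp/work /tmp/merged
    echo "old" > /tmp/lower/python            # stands in for the host's version of a file
    echo "new" > /tmp/upper/python            # stands in for the image layer's version
    mount -t overlay overlay -o lowerdir=/tmp/lower,upperdir=/tmp/upper,workdir=/tmp/work /tmp/merged
    cat /tmp/merged/python                    # prints "new": the upper layer shadows the lower one

Container images stack layers like this, which is roughly how a container can see a different /lib/python than the host without copying the whole filesystem.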
So let’s say I theoretically wanted to move a Docker container to another device, or maybe I were reinstalling an OS or moving to another distro: could I, in theory, drag my local Docker container onto an external drive, throw my device in a lake, and pull that container off onto the new device? If so … what then? Do I link the startups, or is there a “docker config” where they are all able to be linked, and I can tell it which ones to launch on OS launch, user launch, with a delay, or whatnot?
For ease of moving containers between hosts, I would use a docker-compose.yaml to set how you want storage shared, what ports to present to the host, and what environment variables your application wants. Using WordPress as an example, this would be your starting point:
https://github.com/docker/awesome-compose/blob/master/wordpress-mysql/compose.yaml
All the settings for the database are listed under the db heading. You would have your actual database files stored in /home/user/Wordpress/db_data, and you would link /home/user/Wordpress/db_data to /var/lib/mysql inside the container with the line:
    volumes:
      - ./db_data:/var/lib/mysql
As the compose file will also be in /home/user/Wordpress/, you can drop the common path and just use the relative ./db_data.
That way, if you want to change hosts, you just copy the /home/user/Wordpress folder to the new server, run docker compose up -d, and boom, your server is up. No need to faff about.
Containers by design are supposed to be temporary, and the runtime data is recreated each time the container is launched. The persistent data is all you should care about.
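For reference, a minimal sketch of what that /home/user/Wordpress/compose.yaml could look like (images, passwords and the port are placeholders, loosely adapted from the awesome-compose example above):

    services:
      db:
        image: mariadb:11
        volumes:
          - ./db_data:/var/lib/mysql        # persistent data lives next to the compose file
        environment:
          - MYSQL_DATABASE=wordpress
          - MYSQL_USER=wp
          - MYSQL_PASSWORD=changeme
          - MYSQL_RANDOM_ROOT_PASSWORD=1
      wordpress:
        image: wordpress:latest
        ports:
          - "8080:80"
        environment:
          - WORDPRESS_DB_HOST=db
          - WORDPRESS_DB_USER=wp
          - WORDPRESS_DB_PASSWORD=changeme
          - WORDPRESS_DB_NAME=wordpress
        depends_on:
          - db

Copy the whole folder (compose.yaml plus db_data/) to another machine, run docker compose up -d there, and the stack comes back with its data.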
I thought about running something like proxmox, but everything is too pooled, too specialized, or proxmox doesn’t provide the packages I want to use.
Just went with Arch as the host OS and firejail or LXC for any processes I want contained.
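For anyone unfamiliar, the firejail side of that can be as small as a one-liner (the binary name below is just a placeholder):

    # sandbox a process with no network access and a throwaway home directory
    firejail --net=none --private some-untrusted-app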
I’ve never installed a package on proxmox.
I’ve BARELY interacted with the CLI on proxmox (I have a script that creates a nice Debian VM template, and occasionally have to really kill a VM). What would you install on proxmox?!
Firmware update utilities, host OS file system encryption packages, HBA management tools, temperature monitoring, and then a lot of the packages had bugs that were resolved with newer versions, but proxmox only provided old versions.
I’ve always done things bare metal since starting the selfhosting stuff before containers were common. I’ve recently switched to NixOS on my server, which also solves the dependency hell issue that containers are supposed to solve.
I’m running a TrueNAS server on bare metal with a handful of hard drives. I have virtualized it in the past, but meh, I’m also using TrueNAS’s internal features to host a jellyfin server and a couple of other easy to deploy containers.
So Truenas itself is running your containers?
Yeah, the more recent versions basically have a form of Docker as part of its setup.
I believe it’s now running on Debian instead of FreeBSD, which probably simplified the container setup.
I started hosting stuff before containers were common, so I got used to doing it the old fashioned way and making sure everything played nice with each other.
Beyond that, it’s mostly that I’m not very used to containers.
That I’ve yet to see a containerization engine that actually makes things easier, especially once a service does fail or needs any amount of customization.

I’ve two main services in Docker, Piped and WebODM, both because I don’t have the time (read: am too lazy) to write a PKGBUILD. Yet Docker steals more time than maintaining a PKGBUILD, with random crashes (undebuggable, as the docker command just hangs when I try to start one specific container), containers that don’t start properly after being updated/restarted by Watchtower, and debugging any problem with Piped being a chore, as logging in Docker is the most random thing imaginable. With systemd, it’s in journalctl, or in /var/log if explicitly specified or obviously useful (e.g. in multi-host nginx setups). With Docker, it could be a logfile on the host, on the guest, or stdout. Or nothing, because why log at all, when everything “just works”? (Yes, that’s a problem created by container maintainers, but one you can’t escape using Docker. Or rather, in the time you have, you could more easily properly(!) install it bare metal.)

Also, if you want to use unix sockets to more closely manage permissions and avoid roleplaying a DHCP and DNS server for ports (by remembering which ports are used by which of the 25 or so services), you’ll either need to customize the container, or just use/write a PKGBUILD or similar for bare metal stuff.
Also, I need to host a python2.7 django 2.x or so webapp (yes, I’m rewriting it), which I do in a Debian 13 VM with Debian 9 and Debian 9 LTS repos, as it most closely resembles the original environment, and is the largest security risk in my setups, while being a public website. So into qemu it goes.
And, as I mentioned, either stuff is officially packaged by Arch, is in the AUR or I put it into the AUR.
Do you host on more than one machine? Containerization / virtualization begins to shine most brightly when you need to scale / migrate across multiple servers. If you’re only running one server, I definitely see how bare metal is more straight-forward.
One main server, with backup servers being very easy to get up and running, either by full-restoring the backup, or installing and restoring specific services. As everything’s backed up to a Hetzner Storage Box, I can always restore it (if I have my USB sticks with the keyfiles).
I don’t really see the need for multiple running hosts, apart from:
- Router
- Workstation which has a 1070 in it, if I need a GPU for something. My 1U server only has space for a low profile and one slot GPU/HPC processor, and one of those would cost way more than its value over my old 1070 would be.
This is a big part of why I don’t use VMs or containers at home. All of those abstractions only start showing their worth once you scale them out.
You can customize and debug pretty easily, I’ve found. You can create your own Dockerfile based on one you’re using and add customizations there, and docker exec will get you into the container.

Personally, I have seen the opposite from many services. Take Jitsi Meet for example. Without containers, it’s like 4 different services, with logs and configurations all over the system. It is a pain to get running, as none of the services work without everything else being up. In containers, Jitsi Meet is managed in one place, and one place only. (When using docker compose,) all logs are available with docker compose logs, and all config is contained in one directory.

It is more a case-by-case thing whether an application is easier to set up and maintain with or without docker.
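As a concrete (hypothetical) sketch of the “create your own Dockerfile” point above: you take the upstream image as your FROM line and layer your own changes on top, then exec into the running container when you need to poke around. The filenames here are made up.

    # Dockerfile - hypothetical tweak on top of an upstream image
    FROM nginx:alpine
    COPY my-custom.conf /etc/nginx/conf.d/default.conf

    # then, on the host:
    #   docker build -t nginx-custom .
    #   docker exec -it <container-name> sh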
especially once a service does fail or needs any amount of customization.
A failed service gets killed and restarted. It should then work correctly.
If it fails to recover after being killed, then it’s not a service that’s fully ready for containerisation.
So, either build your recovery process to account for this… or fix it so it can recover.
It’s often why databases are run separately from the service. Databases can recover from this, and the services are stateless - it doesn’t matter how many you run or restart.

As for customisation, if it isn’t exposed via env vars then it can’t be altered.
If you need something beyond the env vars, then you use that container as a starting point and make your customisation a part of your container build process via a Dockerfile (or equivalent).

It’s a bit like saying “chisels are great. But as soon as you need to cut a fillet steak, you need to sharpen a side of the chisel instead of the tip of the chisel”.
It’s using a chisel incorrectly.

Exactly. Therefore, Docker is not useful for those purposes to me, as using Arch packages (or similar) makes it easier to fulfill my needs.
My NAS will stay on bare metal forever. Any complications there is something I really don’t want. Passthrough of drives/PCIe-devices works fine for most things, but I won’t use it for ZFS.
As for services, I really hate using Docker images with a burning passion. I’m not trusting anyone else to make sure the container images are secure - I want the security updates directly from my distribution’s repositories, I want them fully automated, and I want that inside any containers too. Having NixOS build and launch containers with systemd-nspawn solves some of it. The actual Docker daemon isn’t getting anywhere near my systems, but I do have one or two OCI images running. I’ll probably migrate to small VMs per service once I get new hardware up and running.
Additionally, I never found a source of container images I feel like I can trust long term. When I grab a package from Debian or RHEL, I know that package will keep working without any major changes to functionality or config until I upgrade to the next major. A container? How long will it get updates? How frequently? Will the config format or environment variables or mount points change? Will a threat actor assume control of the image? (Oh look, all the distros actually enforce GPG signatures in their repos!)
So, what keeps me on bare metal? Keeping my ZFS pools safe. And then just keeping away from the OCI ecosystem in general, the grass is far greener inside the normal package repositories.
A NAS as bare metal makes sense.
It can then correctly interact with the raw disks.

You could pass an entire HBA card through to a VM, but I feel like it should be horses for courses.
Let a storage device be a storage device, and let a hypervisor be a hypervisor.
Depends on the application for me. For Mastodon, I want to allow 12K-character posts, more than 4 poll choices, and custom themes - can’t do it with Docker containers. For PeerTube and Mobilizon, I use Docker containers.
Why could you not have that Mastodon setup in containers? Sounds normal afaik
I’ll chime in: simplicity. It’s much easier to keep a few patches that apply to local OS builds: I use Nix, so my Mastodon microVM config just has an extra patch line. If there’s a new Mastodon update, the patch most probably will work for it too.
Yes, I could build my own Docker container, but you can’t easily build it with a patch (for Mastodon specifically, you need to patch js pre-minification). It’s doable, but it’s quite annoying. And then you need to keep track of upstream and update your Dockerfile with new versions.
After many failures, I eventually landed on OMV + Docker. It has a plugin that puts the Docker management into a web UI and for the few simple services I need, it’s very straightforward to maintain. I don’t cloud host because I want complete control of my data and I keep an automatic incremental backup alongside a physically disconnected one that I manually update.
Cool, how are you managing your disks? Are you overall happy with OMV?
Very happy with OMV. It’s not crazy customizable, so if you have something specialized, you might run into quirks trying to stick to the Web UI, but it’s just Debian under the hood, so it’s pretty manageable. 4x1TB drives RAID 5 for media/critical data, OS drive, and a Service data drive (databases, etc). Then an external 4TB for the incremental and another external 4TB for the disconnected backup.
Awesome, thanks. Upgrade process has been seamless?
Haven’t had to do a full OS upgrade yet, but standard packages can be updated and installed right in the web UI as well.
TrueNAS is on bare metal as I have a dedicated NAS machine that’s not doing anything else, and it’s also not recommended to virtualize it. Not sure if that counts.
Same for the firewall (OPNsense), since it is its own machine.
Have you tried running containers on Truenas?
No because I run my containers elsewhere, not on the NAS
My two bare metal servers are the file server and the music server. I have other services in a Pi cluster.
The file server, because I can’t think of why I would need to use a container.
The music software is proprietary and requires additional complications to get it to work properly… or at all, in a container. It also does not like sharing resources and is CPU-heavy when playing to multiple sources.
If either of these machines dies, a temporary replacement can be sourced very easily (e.g. from the back of my server closet) and recreated from backups while I purchase new or fix/rebuild the broken one.
IMO the only reliable method for containers is a cluster, because if you’re running several containers on a device and it fails, you’ve lost several services.
Cool, care to share more specifics on your Pi cluster?