Curious to know what the experiences are for those who are sticking to bare metal. Would like to better understand what keeps such admins from migrating to containers, Docker, Podman, Virtual Machines, etc. What keeps you on bare metal in 2025?
My servers and NAS were created long before Docker was a thing, and as I am running them on a rolling release distribution there never was a reason to change anything. It works perfectly fine the way it is, and it will most likely run perfectly fine the next 10+ years too.
Well, I am planning, when I find the time to research a good successor, to replace my aging HPE ProLiant MicroServer Gen8 that I use as a home server/NAS. Maybe I will then set everything up cleanly and migrate the services to docker/podman/whatever is fancy then. But most likely I will just transfer all the disks and keep the old system running on newer hardware. Life is short…
You sure you mean bare metal here? Bare metal means no OS.
Warms me heart to see people in this thread thinking for themselves and not doing something just because other people are.
I consider them unnecessary layers of abstraction. Why do I need to fiddle with Docker Compose to install Immich, Vaultwarden etc.? Wouldn’t it be simpler if I could just run
sudo apt install immich vaultwarden
, just like I can do sudo apt install qbittorrent-nox
today? I don’t think there’s anything that prohibits them from running on the same bare metal; actually, I think they’d both run as well as in Docker (if not better, because of the lack of overhead)!

Both your examples actually include their own bloat to accomplish the same thing that Docker would. They both bundle the libraries they depend on as part of the build.
Idk about Immich, but Vaultwarden is just a Cargo project, no? Cargo statically links crates by default, but I think it can be configured to do dynamic linking too. The Rust ecosystem seems to favour static linking in general, just by convention.
Yes, that was my point: you (generally) link statically in Rust because that resolves dependency issues between the different applications you need to run. The cost is a slightly bigger, bloatier binary, but generally it’s a very good tradeoff because a slightly bigger binary isn’t an inconvenience these days.
Docker achieves the same for everything, including dynamically linked projects that default to using shared libraries which can have dependency nightmares, other binaries that are being called, etc. It doesn’t virtualize an entire OS unless you’re using it on macOS or Windows, so the performance overhead is not as big as people seem to think (disk space overhead, though… can get slightly bigger). It’s also great for dev environments because you can have different devs using whatever the fuck they prefer as their main OS and Docker will make everyone’s environment the same.
I generally wouldn’t put a Rust/Cargo project in docker by default since it’s pretty rare to run into external dependency issues with those, but might still do it for the tooling (docker compose, mainly).
It’s not just libraries in a Docker container, though.
True, Docker does it more thoroughly because any executables also get their own redundant copies. Running two different Node applications on bare metal, they can still disagree about the Node version, etc.
The actual old-school bloat-free way to do it is shared libraries of course. And that shit sucks.
Well, that is how I started out. Docker was not around yet (or not mainstream enough, maybe). So it is basically a legacy thing.
My main machine is a Frankenstein monster by now, so I am gradually moving. But since the days when I started out, time has become a scarce resource, so the process is painfully slow.
I’m using proxmox now with lots of lxc containers. Prior to that, I used bare metal.
VMs were never really an option for me because the overhead is too high for the low power machines I use – my entire empire of dirt doesn’t have any fans, it’s all fanless PCs. More reliable, less noise, less energy, but less power to throw at things.
Stuff like docker I didn’t like because it never really felt like I was in control of my own system. I was downloading a thing someone else made and it really wasn’t intended for tinkering or anything. You aren’t supposed to build from source in docker as far as I can tell.
The nice thing about proxmox’s lxc implementation is I can hop in and change things or fix things as I desire. It’s all very intuitive, and I can still separate things out and run them where I want to, and not have to worry about keeping 15 different services on the same version of whatever common dependencies they require.
Actually docker is excellent for building from source. Some projects only come with instructions for building in Docker because it’s easier to make sure you have tested versions of tools.
I’m a hobbyist who just learned how to self-host my own static website on a spare laptop over the summer. I went with what I knew and was comfortable with, which is a fresh install of Linux and installing everything from the apt package manager.
As I’m getting more serious, I’m starting to take another look at Docker. Unfortunately my OS package manager only has old, outdated versions of Docker, so I may need to reinstall with something like Ubuntu/Debian LTS server, something with more cutting-edge software in the repos. I don’t care much for building from scratch and navigating dependency roulette.
What OS are you using?
Linux Mint 22
I guess it isn’t the most user-friendly process, but you can add the official Docker repo and get an up-to-date version without compiling or anything. You just want to make sure to uninstall any previously installed Docker packages before you start.
https://linuxiac.com/how-to-install-docker-on-linux-mint-22/
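For reference, the gist of that guide is roughly the following (a sketch assuming Mint 22 tracks Ubuntu 24.04 “noble”; check the article for the exact, current steps):

# remove any distro-packaged Docker bits first
sudo apt remove docker.io docker-compose docker-doc podman-docker

# add Docker's official GPG key and apt repo
sudo apt install ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu noble stable" | sudo tee /etc/apt/sources.list.d/docker.list

# install current Docker Engine plus the compose plugin
sudo apt update
sudo apt install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin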
They can, but - if their current setup meets their needs - why? There ain’t nothing wrong with having a few simple spare laptops, each an isolated environment for a few simple home server tasks.
Don’t get me wrong - I too advocate for docker, particularly on new builds, or as a relatively turnkey solution to get started for novice friends, but the best setup is the one that works, and they sound like they got theirs where they want it.
…because that isn’t what they said. They said that they are getting more serious and now looking at Docker, but the outdated version in the Mint repo is preventing them from exploring that any further. So I offered a method that I know works without any of the “dependency roulette” that they were concerned about, while also giving a disclaimer that it isn’t exactly noob-friendly. 🤷♂️
Fair point. I think my eyes glossed over the part where they said they were taking a second look at Docker (but caught the rest about rebuilding the OS in general). My sincere apologies 😓😅
Ok, I’m arguing for containers/VMs, and granted I do this for a living… I’m a systems architect, so I build VMs and containers pretty much all the time at work… but having just one sorta beefy box at home that I can run lots of different things on is the way to go. Plus I like to tinker with things, so when I screw something up, I can get back to a known state so much easier.
Just having all these things sandboxed makes it SO much easier.
For me the learning curve of learning containers does not match the value proposition of what benefits they’re supposed to provide.
I really thought the same thing. But it truly is super easy, at least for plain containers like Docker. Not Kubernetes; that shit is hard to wrap your head around.
Plus if you screw up one service and mess everything up, you don’t have to rebuild your whole machine.
100% agree, my server has pretty much nothing except docker installed on it and every service I run is always in containers.
Setting up a new service is mostly zero risk, and apps can’t bog down my main file system with random log files, configs, etc. that feel impossible to completely remove.
I also know that if for any reason my server were to explode, all I would have to do is pull my compose files from the cloud and
docker compose up
everything and I am exactly where I left off at my last backup point.
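To make that concrete, a hypothetical restore on a fresh box could look something like this (assuming the compose projects live in a git repo, one directory per service, with data in bind mounts restored from backup; the repo URL and paths are placeholders):

# clone the compose files kept "in the cloud"
git clone https://example.com/me/compose-stacks.git
cd compose-stacks/vaultwarden

# restore this service's ./data directory from the latest backup, then:
docker compose up -d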
All I have is Minecraft and a Discord bot, so I don’t think it justifies VMs.
It’s just another system to maintain, another link in the chain that can fail.
I run all my services on my personal gaming PC.
Mainly that I don’t understand how to use containers… or VMs that well… I have an old MyCloud NAS and a little pucky PC that I wanted to run simple QoL services on… Home Assistant, Jellyfin, etc…
I got Proxmox installed on it, I can access it… I don’t know what the fuck I’m doing… There was a website that allowed you to just run scripts in a shell to install a lot of things… but now none of those work because it says my version of Proxmox is wrong (when it’s not?)… so those don’t work…
And at least VMs are easy(ish) to understand. Fake computer with OS… easy. I’ve built PCs before, I get it… Containers just never want to work, or I don’t understand wtf to do to make them work.
I wanted to run a Zulip or Rocket.chat for internal messaging around the house (wife and I both work at home, kid does home/virtual school)… wanted to use a container because a service that simple doesn’t feel like it needs a whole VM… but it won’t work…
I would give docker compose a try instead. I found Proxmox to be too much, when a simple yaml file (that can be checked into a repo) can do the job.
Pay attention to when people say things can be improved (secrets/passwords, rootless/podman, backups, etc.), and come back to them later.
Just don’t expose things to the internet until you understand the risks and don’t check in secrets to a public git repo and go from there. It is a lot more manageable and feels like a hobby vs feeling like I’m still at work trying to get high availability, concurrency and all this other stuff that does not matter for a home setup.
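As a rough illustration of the secrets point (image and variable names are just examples), you can keep passwords out of the compose file and out of the repo entirely:

# docker-compose.yml (safe to commit)
services:
  vaultwarden:
    image: vaultwarden/server
    env_file: .env        # secrets live here, not in the yaml

# .env (NOT committed; add it to .gitignore)
ADMIN_TOKEN=change-me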
I would give docker compose a try instead. I found Proxmox to be too much, when a simple yaml file (that can be checked into a repo) can do the job.
Proxmox and Docker serve different purposes. They aren’t mutually exclusive. I have 4 separate VMs in my Proxmox cluster dedicated specifically to Docker; all running Dockge, too, so the stacks can all be managed from one interface.
I get that, but the services listed by the other comment run just fine in docker with less hassle by throwing in some bind mounts.
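For example, a minimal sketch (image, port, and host paths are just placeholders) of Jellyfin with bind mounts so the config and media stay on the host:

services:
  jellyfin:
    image: jellyfin/jellyfin
    ports:
      - "8096:8096"
    volumes:
      - ./config:/config        # app config lives next to the compose file
      - /mnt/media:/media:ro    # host media library, mounted read-only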
The 4 VMs with dedicated Dockge instances are exactly the kind of thing I had in mind for people who want to avoid something that sounds more like work than a hobby when starting out. Building the knowledge takes time, and each product introduced reduces the likelihood of it being completed anytime soon.
Fair point. I’m 12 years into my own self-hosting journey, I guess it’s easy to forget that haha.
When I started dicking around with Docker, I initially used Portainer for a while, but that just had way too much going on and the licensing was confusing. Dockge is way easier to deal with, and stupid simple to set up.
It’s so simple that it takes so much less time. One day I may move to Podman, but I need to have the time to learn. I host Jellyfin.
In my own experience, certain things should always be on their own dedicated machines.
My primary router/firewall is on bare metal for this very reason.
I do not want to worry about my home network being completely unusable by the rest of my family because I decided to tweak something on the server.
I could quite easily run OpnSense in a VM, and I do that, too. I run Proxmox, and have OpnSense installed and configured to at least provide connectivity for most devices. (Long story short: I have several subnets in my home network, but my VM OpnSense setup does not, as I only had one extra interface on that equipment, so only devices on the primary network would work.)
And tbh, that only exists because I did have a router die, and installed OpnSense into my proxmox server temporarily while awaiting new-to-me equipment.
I didn’t see a point in removing it. So it’s there, just not automatically started.
Same here. In particular, I like small cheap hardware to act as appliances, and I have several Raspberry Pis.
My example is home assistant. Deploying on its own hardware means an officially supported management layer, which makes my life easier. It is actually running containers but I don’t have to deal with that. It also needs to be always available, so I use efficient “right sized” hardware and it works regardless of whether I’m futzing with my “lab”.
My example is home assistant. Deploying on its own hardware means an officially supported management layer, which makes my life easier.
If you’re talking about backups and updates for addons and core, that works on VMs as well.
For my use case, I’m continually fiddling with my VM config. That’s my playground, not just the services hosted there. I want home assistant to always be available so it can’t be there.
I suppose I could have a “production” VM server that I keep stable, separate from my “dev” VM server, but that would be more effort. Maybe it’s simply that I don’t have many services I want to treat as production, so the physical hardware is the cheapest and easiest option.
Why would I want to add overhead and complexity to my system when I don’t need to? I can totally see legitimate use cases for Docker, and for work purposes I use VMs constantly. I just don’t see a benefit to doing so at home.
Main benefit of Docker for home is Docker Compose, IMO. Makes it so easy to reuse your configuration.
Then check out IaC, for example with Terraform or Ansible.
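For instance, a rough Ansible sketch (host name, paths, and the stack name are made up) that pushes a compose project to the box and brings it up:

- hosts: homeserver                # inventory name is an assumption
  become: true
  tasks:
    - name: Copy the compose project to the server
      ansible.builtin.copy:
        src: stacks/jellyfin/
        dest: /opt/stacks/jellyfin/
    - name: Bring the stack up (or apply changes)
      ansible.builtin.command:
        cmd: docker compose up -d
        chdir: /opt/stacks/jellyfin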