I am working on setting up a home server but I want it to be reproducible if I need to make large changes, switch out hardware, or restore from a failure. What do you use to handle this?
Carefully
Snapshots, largely; almost everything is VMs and Docker containers. I also have one VM set aside for dev work to test configs before updating the prod boxes.
reproducible
You've tried writing bash scripts that set things up for you, haven't you? It's NixOS for you.
How do you manage your home server configuration?
Poorly, which is to say that I just let borgmatic back up all my compose files and hope for the best
Yep.
“I manage my server in yaml. Sometimes yml.”
I use snapshots: once a month an image is made of the entire drive, and I have Duplicati back everything up to the cloud. Whatever choice you make though, remember 3-2-1, and backups are useless unless tested on a regular basis. The test portion always gives me anxiety.
I’d really like to know if there’s any practical guide on testing backups without requiring like, a crapton of backup-testing-only drives or something to keep from overwriting your current data.
Like I totally understand it in principle just not how it’s done. Especially on humble “I just wanna back up my stuff not replicate enterprise infrastructure” setups.
You can use the QEMU utilities to convert your Linux disk image to VDI, which you can then import into VMware Workstation or VirtualBox:
qemu-img convert -f qcow2 -O vdi your-image.qcow2 your-image.vdi
One thing you might run into is that Ubuntu server images often use VirtIO drivers, so you may have to make adjustments for that. Or you may run into the need for other drivers that VMware Workstation or VirtualBox don't provide.
https://documentation.ubuntu.com/server/how-to/virtualisation/qemu/#qemu
https://systemadministration.net/converting-virtual-disk-images-qemu-img/
ETA: There is also StarWind V2V Converter.
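If your backup is a raw dd-style image instead of qcow2, qemu-img handles that too; roughly like this (file names here are just placeholders):
# check what format the source image actually is
qemu-img info your-backup.img
# convert a raw image straight to VDI for VirtualBox
qemu-img convert -f raw -O vdi your-backup.img your-backup.vdi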
With NixOS, you get a reproducible environment. When you need to change your hardware, you simply back up your data, apply your NixOS configuration on the new machine, and you get your previous environment back.
I use it to manage all my services.
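As a rough sketch of what a rebuild looks like (assuming the config lives in a git repo and uses flakes; the repo URL and hostname are placeholders):
# fresh install, then pull the config and rebuild from it
git clone https://example.com/you/nixos-config.git
cd nixos-config
sudo nixos-rebuild switch --flake .#myhost
# data gets restored separately from the backup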
I went the nuclear option and am using Talos with Flux to manage my homelab.
My source of truth is the git repo with all my cluster and application configs. With this setup, I can tear everything down and within 30 min have a working cluster with everything installed automatically.
Are you using self-hosted git? Which one?
I have a similar setup, and even though I am hosting git (Forgejo), I use a plain SSH remote as the git server for the source of truth that k8s reads.
This prevents an ouroboros dependency where Flux is pulling from the git repo on Forgejo, which is itself deployed by Flux…
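For reference, bootstrapping Flux against a plain SSH remote is roughly this (URL, branch, and path are placeholders, not my real layout):
flux bootstrap git \
  --url=ssh://git@backup-host/srv/git/homelab.git \
  --branch=main \
  --path=clusters/home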
Ansible!
Packer builds the Terraform-/OpenTofu-able templates to launch into the hypervisor, where Chef (eventually mgmtConfig) manages them from there until they die.
All that is launched by git. Fire and forget. Updates are cronned.
There are no containers. Don't got time to fuck about. If systemd weren't an absolute embarrassment I'd not worry about updates even as much as I do, which isn't much aside from the aforementioned cancer.
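Roughly, the Packer-to-Tofu part of the pipeline looks like this (template name made up for illustration):
packer build proxmox-template.pkr.hcl   # bake the template
tofu apply                              # launch it into the hypervisor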
Well I use Unraid, so I just back up my whole config folder along with the OS itself in case I need to flash it to a new USB. In other words, I just clone the whole thing. It means I can be up and running in a few minutes if everything was corrupted.
A data drive loss is pretty simple too: the array emulates the lost drive's data until I can get a new HDD in. That takes a little longer to fix, though.
I’m just using Unraid for the server, after many iterations (PhotonOS, VMware, baremetal Windows Server, …). After many OSes, partial and complete hardware replacements, and general problems, I gave up trying to manage the base server too much. Backups are generally good enough if hardware fails or I break something.
The other side of this is that I’ve moved to having very, very little config on the server itself. Virtually everything of value is in a docker container with a single (admittedly way too large) docker compose file that describes all the services.
I think this is the ideal way for how I use a home server. Your mileage might vary, but I’ve learned the hard way that it’s really hard to maintain a server over the very long term and not also marry yourself to the specific hardware and OS configuration.
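When something does die, recovery is basically: reinstall the OS and Docker, pull the compose file and app data back from backup, and start everything. Roughly (paths are placeholders, not my actual layout):
# restore config + data from the backup, then recreate every service
rsync -a /mnt/backup/appdata/ /srv/appdata/
docker compose -f /srv/appdata/docker-compose.yml up -d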
Terraform and Ansible. Script service configuration and use source control. Containerize services where possible to make them system-agnostic.
How do you decide what’s for Terraform and what’s for Ansible?
They’re good at different things.
Terraform is better at “here is a configuration file - make my infrastructure look like it” and Ansible is better at “do these things on these servers”.
In my case I use Terraform to create proxmox VMs and then Ansible provisions and configures software on those VMs.
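So a full rebuild ends up being roughly two commands (inventory and playbook names are just examples):
terraform apply                              # create the proxmox VMs
ansible-playbook -i inventory.ini site.yml   # configure what terraform just built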
NixOS for configuration and restic for data
I used to have a file with every CLI command and notes on how each thing was set up. When I had to reinstall it from scratch, it took all day going through lots of manual steps and remembering how it should all go.
Recently I converted the whole thing to Ansible. Now I can rebuild my entire system on a brand-new OS installation with one command that completes in minutes. It's all modular, and I can add new services easily, whether they are docker containers or scripts or whatever. If I ever break anything, it will reset everything to its intended state and leave it alone otherwise. And it's free and pretty easy to learn and start using.
Plus I use git along with it for version control, so I can always revert to any previous configuration instantly.
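The "one command" is literally just the playbook run; a dry run first shows what would change (file names are just how I happen to organize it):
ansible-playbook site.yml --check --diff   # preview, touches nothing
ansible-playbook site.yml                  # apply; unchanged services are left alone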
MicroOS is a decent choice, because it can cold boot off a configuration that uses Ignition and Combustion files. https://microos.opensuse.org/
And they have this file configurator so you don’t have to manually type all the syntax for your configs.
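For example, a Combustion file is just a shell script on a drive labeled "combustion"; something like this, very stripped down (from memory, so check it against the docs):
#!/bin/bash
# combustion: network
# minimal first-boot setup: set a root password hash and enable ssh
echo 'root:$6$replace.with.your.hash' | chpasswd -e
systemctl enable sshd.service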