It’s been a while, let’s go! Any major fuckups lately or smooth sailing?
I had to change the local DNS setup yesterday. I finally installed my wife Linux Mint and wanted to set her up with Vaultwarden real quick, which turned into an hour-long debug session: apparently CNAME entries for local hostnames don’t work the way I thought. It never came up over the past year because all my machines resolved them fine, but systemd-resolved on Mint refused to. I eventually deleted the entries in the Pi-hole and recreated them as A records pointing to the VM with the reverse proxy, hoping I won’t need to change the IP anytime soon. It’s always DNS!
In other news, I think I’ve moved all my local Dockerized services to Forgejo + Komodo now, and applying updates by merging Renovate MRs still feels super smooth. I just updated my Calibre-Web Automated with a single click. The only exception is Home Assistant, where I have yet to find a good split between what to throw in a Docker volume and what to check into git and bind-mount.
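Not the poster’s actual setup, but one common split I’ve seen for Home Assistant (image tag and paths are assumptions): bind-mount the whole /config directory from the git repo, then keep the mutable runtime state out of version control with .gitignore rather than trying to split it into a separate volume:

```
# sketch of a compose file -- names/paths are examples, not the author's
services:
  homeassistant:
    image: ghcr.io/home-assistant/home-assistant:stable
    network_mode: host
    volumes:
      - ./config:/config               # bind mount, checked into git
      - /etc/localtime:/etc/localtime:ro
    restart: unless-stopped
```

With a .gitignore inside ./config covering the runtime state (home-assistant_v2.db*, .storage/, *.log), the YAML configuration stays in git while the database and UI-managed state stay local.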
I finally figured out it was a bad stick of RAM in my server that has been causing random freezes and not some stupid mistake on my part. Thankfully it’s DDR3 so I can keep both of my kidneys and still afford the replacement.
Thankfully it’s DDR3
It’s one of the benefits of having older equipment. I use these guys for RAM purchases: https://www.memorystock.com/
Waiting for my new GL.iNet Ethernet KVM to arrive so I can connect it to my server…
At home, smooth sailing. At “work/uni”, we’re migrating everything to Ceph, and installing openSUSE with software RAID has been a pain in the arse for some reason.
Got hit with this recently
https://github.com/jellyfin/jellyfin/issues/15148
Just restored an old backup. Everything is behind a VPN and working, so I’ll give it a while and see if it gets sorted before resorting to swapping out the SQLite version for each update.
Ouchy!
Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:
DHCP: Dynamic Host Configuration Protocol, automates assignment of IPs when connecting to a network
DNS: Domain Name Service/System
HTTP: Hypertext Transfer Protocol, the Web
IP: Internet Protocol
nginx: Popular HTTP server
4 acronyms in this thread; the most compressed thread commented on today has 7 acronyms.
[Thread #51 for this comm, first seen 1st Feb 2026, 10:01] [FAQ] [Full list] [Contact] [Source code]
Wait I don’t understand how changing your CNAME to A records resolved your problem. Did your wife’s computer simply not resolve the CNAME records?
So I have my VMs behind an OPNsense box with DHCP, and the OPNsense also creates local DNS records like vm1.opnsense. The Pi-hole has conditional forwarding for .opnsense to the firewall, so I can resolve those names everywhere on the LAN.
I had CNAME records in the Pi-hole for my actual domain (e.g. lemmy.nocturnal.garden) pointing to vm1.opnsense, as a shortcut from inside the LAN that avoids going “outside” via the public IP.
Mint/resolved resolves the .opnsense domains when I look them up directly, but for a reason I didn’t fully understand, it doesn’t work with a CNAME entry pointing to them. So I gave up on the CNAME approach and created A records for each service, pointing directly at the VM’s IP.
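For reference, and possibly the explanation: Pi-hole stores local records as plain dnsmasq config (paths from Pi-hole v5; the IP below is a made-up example), and dnsmasq’s cname= option only reliably works when the target is a name dnsmasq itself knows locally, not one it merely reaches via conditional forwarding — which may be why the CNAME behaved inconsistently:

```
# /etc/pihole/custom.list -- local A records ("IP domain"); IP is an example
192.168.10.5 lemmy.nocturnal.garden

# /etc/dnsmasq.d/05-pihole-custom-cname.conf -- local CNAMEs
# per the dnsmasq man page, the cname= target must be a name known to
# dnsmasq from /etc/hosts, DHCP, or another cname -- a conditionally
# forwarded name like vm1.opnsense does not qualify
cname=lemmy.nocturnal.garden,vm1.opnsense
```

That limitation would explain why switching to plain A records fixed it: an A record needs no local knowledge of any target.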
I’m curious as to why you decided to set up Pi-hole when you already have OPNsense, and more so why your records are in Pi-hole and not OPNsense.
I’ve had the Pi-hole for years before the OPNsense, but also, the OPNsense is not the main router; it just sits in front of my homelab. The wifi etc. is handled by a FritzBox, which also acts as the WAN side for the OPNsense.
That way, everything else in the house still works if my homelab/OPNsense is down. The Pi-hole is on a Pi in the FritzBox LAN.
That sounds overly complicated. Why not have it all on the OPNsense instead of three different devices?
Is your OPNsense unstable? Otherwise, regarding network availability, you’re just introducing unnecessary failure points into the network.
The point of the OPNsense is that I can tinker with it without risking our home wifi. That needs to stay up for my wife, for our MQTT devices/Home Assistant, etc.
I don’t introduce points of failure to our home network, which is the critical part. If something on the OPNsense misbehaves, it only impacts my lab stuff. The FritzBox + Pi-hole combination has proven pretty stable over the years, though I’m considering getting a second Pi-hole device for high availability.
Ah right, I thought you were doing it like this
Internet -> Fritzbox + Pihole -> Opnsense -> Home Network
It makes sense now :D
Yeah that would be a bit convoluted :D
Moved all my Unraid ‘apps’ to Dockhand, and linked my Pangolin VPS with the Hawser agent. I had Dockge for a while on newer container deployments, but wanted something a bit more playful, Dockhand is it.
I degoogled my Gmail last year to Infomaniak, which was OK, but moved to Fastmail last week, which I now love! Setting up the custom domain pulled in the site’s favicon for the Fastmail account header, which made me smile too much for such a simple thing. I think I’ll be on Fastmail for the future. (Background syncing with the new Bichon email archiver.)
I’ve been thinking about infrastructure-as-code tools. Skimmed the very surface of OpenTofu and looked at the list of alternatives.
I’m in need of something that is both deployment automation and (implicit) documentation of the thing that I call “the zoo”. Namely:
- network definition
- machine definitions (VMs, containers) and their configuration
- inventory: keeping track of third party resources
Now I’m thinking about which tool would be right for the job while I’m still not 100% sure what the job is. I don’t like added complexity, and it’s quite possible this could become a dead end for me if I spend more time wrangling the tool than I gain in the end.
PS: If you haven’t already, please take a look at your OpenSSL packages. As of this week there are two new CVEs rated as high severity: https://openssl-library.org/news/vulnerabilities/index.html
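A quick first check (a sketch, not a full audit): Python exposes the OpenSSL version its own interpreter links against, which tells you at a glance what generation of the library is on the box. The distro’s libssl and openssl CLI packages are tracked separately by the package manager, so compare those against the fixed versions in the advisory as well.

```python
import ssl

# The OpenSSL this interpreter is linked against -- a rough indicator only;
# check your distro's security tracker for the packaged libssl version too.
print(ssl.OPENSSL_VERSION)       # human-readable version string
print(ssl.OPENSSL_VERSION_INFO)  # tuple, handy for programmatic comparison
```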
I am currently switching over from Debian/Rocky LXC containers on Proxmox to declaratively creating VMs via OpenTofu, then running nixos-anywhere, and then colmena for updates etc. Works great, and I should have done it sooner.
One problem: Tailscale. I encrypted the auth key via agenix, but the new NixOS hosts cannot read the file and fail to log in. The file is available, but I think the VMs can’t decrypt it. Needs further investigation.
I had a weird issue with a server SSD.
Six months ahead of its scheduled swap, it didn’t die; it just started reading and writing really sluggishly, making the whole server behave really weirdly. The disk’s SMART statistics looked healthy, and the self-tests passed with flying colors. Anyway, I had to swap it early and reinstall the OS.
The rest of my cluster temporarily took over running some pods and only saw downtime for a few pods that were dependent on some disks in the failing server.
I guess the incident has restarted my interest in distributed storage.
Blergh, how did you pinpoint it?
More luck than anything, really. The clues were that it only had 6 months of scheduled life left and that reading and writing felt slow. Everything else behaved normally, and buying a new disk was an educated guess that turned out to be correct.
My server mysteriously stopped working in December. After a scheduled restart, the OS wouldn’t load, so the fan was running on high for a few days while I was staying at a friend’s.
I checked the logs and couldn’t find anything suspicious. Loaded a previous backup that had worked, and still nothing loaded on startup. Tested the Pi 5 with a USB drive that had a fresh Alpine Linux install on it, and everything loaded up fine, so I was able to rule out any hardware issues. The HDD with the old OS mounted just fine on my laptop. I still have no idea what happened.
This happened a few days before my domain name expired and I was planning to change my domain name to something shorter. Decided to hold off on remaking my server from scratch until I finish a few other projects.
The other projects will help me manage my network-connected devices, so it’s all working towards a common goal. Fortunately, I am getting very close to finishing them. I am putting the final touches on my last project and should be done within a few days.
Next I’ll reinstall my Pi 4 with Home Assistant again to fix its networking issue. Only the terrarium grow lights are affected, and my gecko chose to hibernate outside of the terrarium this winter, so she’s unaffected (heat lamps are controlled by a separate, isolated device). After that I’ll fix my Pi 5 server, and this time go with Podman over Docker.
I finally installed my wife
Man…technology has come a long way.
Nothing here to write home about. A couple of minor tweaks to the network, and blocking even more unnecessary traffic. I’ve been on a mission to reduce costs in consumables such as electricity. I have a cron job that shuts everything down at a certain time in the evening, and I’m working on a WOL routine, fired by a cron job on my standalone pfSense box, to crank the server back up in the morning just before I get up. It seemed to be the lowest-hanging fruit, so I have it on priority. It just didn’t make sense to run the server idle for 10-12 hours. I don’t have any midnight mass downloads of Linux ISOs, nor do I make services available to other users, so it seemed a good place to start. I guess by purists’ standards it’s not a server anymore but an intermittent service, but it seems to be working for me. I’ll check consumption totals at the end of the month.
Other than that, I haven’t added anything new to the lineup, and I am just enjoying the benefits.
If you want to go all in, get a plug that measures energy! It also lets you directly see the effects of turning stuff on/off. My last server’s draw went up 3 W when I started using the second network interface! Let drives go to sleep, play with C-states, etc.
I had a post a while back about what I was doing to cut costs.
- TLP: adjusts CPU frequency scaling, PCIe ASPM, and SATA link power management
- Powertop: used to profile power consumption; also has an auto-tune feature (sudo powertop --auto-tune)
- cpufrequtils: used to manage the CPU governor directly
- logind.conf: can be used to put the whole server to sleep when idle
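The logind.conf item can be a two-line change; a sketch with example values (logind must actually consider the machine idle for this to trigger):

```
# /etc/systemd/logind.conf -- suspend the box after 30 minutes of idle
IdleAction=suspend
IdleActionSec=30min
```

Followed by restarting systemd-logind to pick up the change.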
After doing all of that, which does help during operational hours, I decided to save 10-12 hours of consumption by just shutting it down. The old “turn the light out if you’re not in the room” concept. Right now I’m manually booting the server, and it doesn’t take long to resume operations. But why not employ some automation and magic packets to fire it back up in the morning?
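For the magic-packet half, the protocol is simple enough to do without extra tools: a WOL packet is just 6 bytes of 0xFF followed by the target’s MAC repeated 16 times, sent as a UDP broadcast (the MAC below is a hypothetical placeholder). A minimal sketch:

```python
import socket

def build_magic_packet(mac: str) -> bytes:
    """A WOL magic packet: 6 bytes of 0xFF, then the 48-bit MAC x 16 (102 bytes)."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("expected a 48-bit MAC address")
    return b"\xff" * 6 + mac_bytes * 16

def wake(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Broadcast the packet on the LAN; the target NIC must have WOL enabled."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(build_magic_packet(mac), (broadcast, port))

# wake("aa:bb:cc:dd:ee:ff")  # hypothetical MAC of the server's NIC
```

Dropped into a cron job on the pfSense box (pfSense ships a Python interpreter), this would replace the manual morning boot.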
ETA: I do have a watt meter on the server.
Sounds good! Are you on SSD or HDD?
The OS lives on an SSD, and I have two aux drives. One is an HDD, but it’s a Samba share for Navidrome, so it’s not like it’s spinning constantly. Everything gets a 3-2-1 backup.
ETA: Now that you mention it, I guess I could employ a park(?) for the HDD before shutting down.
Finally killed my Discord account and moved my monitoring notifications to a self-hosted ntfy server. Works well.
Recently obtained a free circa-2017 Mac mini, which I installed Linux on to create a Docker hosting environment. Currently hosting Jellyfin, SearXNG, and Forgejo.
My much older NAS serves as the NFS drive for the Jellyfin media (formerly, I ran Plex directly on the NAS, but this was slow/unreliable as the NAS has only dual 1 GHz ARM cores).
One of the drives in the NAS died Thursday night, but it’s no serious issue, as it’s RAID 1. I wonder if the new load on it pushed it over the edge. (Also, I wonder if I could use the Mac mini’s SSD as a sort of cache in front of the NAS to reduce wear on it, if that would even help…)
Luckily I had some gift cards from recycling old tablets and phones, so I could get a replacement drive at minimal cost. I went with a cheap WD Blue drive instead of the 2.5x more expensive Seagate IronWolf drives I had used in the past. We will see how that fares over the next few years.
Upon replacing the drive yesterday, I found the one that failed had a 2017 manufacture date, so its life was 8 years (from when I initially populated the NAS). The other drive was replaced in 2021 (but it actually failed in 2020; I just left the NAS unused for a year at that time, so it had a life of 3 years). Some insight into the lifespan of the IronWolf drives.
Things I’d like to add soon:
- kiwix instance
- normalize my ebook/magazine collection
- set up downloading of my YouTube subscriptions into Jellyfin’s media directory so I can avoid the YouTube app/website
- something for music to ditch that subscription