Inspired by this comment to try to learn what I’m missing.
- Cloudflare proxy
- Reverse Proxy
- Fail2ban
- Docker containers on their own networks
To add some points, that I do:
- Proper logging: so I can notice when something unusual is going on
- Rootless Podman containers: harder to escalate privileges and gain root
- AppArmor: same, plus it can trigger suspicious log entries
As many others have said, not allowing inbound WAN connections into my LAN is an important step. I also run k3s on my server with Calico as the CNI and make heavy use of network policies to keep anything I’m running from misbehaving. That, along with easy ingress makes k3s worth it for me over Docker Compose. I use OpenWRT on my router and force certain devices to run through a VPN and block other devices from the internet entirely.
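A default-deny-plus-narrow-allow pattern is the usual way to use Calico network policies like the comment describes; this is only a sketch, and the namespace and labels here (`media`, `app: web`, `ingress-nginx`) are hypothetical placeholders, not from the comment:

```yaml
# Hypothetical: deny all traffic in the namespace by default...
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
  namespace: media          # example namespace
spec:
  podSelector: {}           # applies to every pod in the namespace
  policyTypes: [Ingress, Egress]
---
# ...then allow ingress to the web pods only from the ingress controller.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-ingress
  namespace: media
spec:
  podSelector:
    matchLabels:
      app: web
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: ingress-nginx
```

Calico enforces standard Kubernetes NetworkPolicy objects like these, so nothing Calico-specific is required for the basic case.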
If I need remote access, I just log into NPM, and I have certain URLs created for Plex, Sonarr, Radarr, etc. No issues so far.
Mainly, they aren’t on the internet at all.
My router (opnsense) has a wireguard server which is how I access things when out of the house.
I do have a minecraft server for my friends and I, but that VM is on its own network isolated from everything else.
Fail2ban config can get fairly involved in my experience. I’m probably not doing it the right way, as I wrote a bunch of web server ban rules: anyone trying to access wp-admin gets banned, for instance (I don’t use WordPress, and if I did, it wouldn’t be accessible from my public-facing reverse proxy).
I just skimmed my nginx logs and looked for anything funky and put that in a ban rule, basically.
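A minimal sketch of that kind of log-driven ban rule, assuming a standard nginx access log and fail2ban’s stock file layout; the filter name, regex, and ban times here are illustrative, not the commenter’s actual config:

```ini
# /etc/fail2ban/filter.d/nginx-probes.conf  (hypothetical filter)
[Definition]
# Match requests for paths that only scanners tend to ask for
failregex = ^<HOST> .* "(GET|POST) /(wp-admin|wp-login\.php|xmlrpc\.php)
ignoreregex =

# /etc/fail2ban/jail.d/nginx-probes.local  (hypothetical jail)
[nginx-probes]
enabled  = true
port     = http,https
filter   = nginx-probes
logpath  = /var/log/nginx/access.log
maxretry = 1        # one probe is enough to ban
bantime  = 86400    # one day, as an example
```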
Thanks!
Here’s the setup I followed. It seems like it might take away some manual work for you: https://m.youtube.com/watch?v=Ha8NIAOsNvo&t=1294s&pp=ygUIRmFpbDJiYW4%3D
Some I haven’t yet found in this thread:
- rootless podman
- container port mapping to localhost (e.g. 127.0.0.1:8080:8080)
- systemd services with many of their sandboxing features (PrivateTmp, …)
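A sketch of the last two items, with hypothetical image and service names. The compose mapping publishes the port on the loopback interface only, and the unit fragment shows a few of systemd’s sandboxing directives:

```yaml
# docker-compose sketch: only local processes (e.g. a reverse proxy
# on the same host) can reach the published port.
services:
  app:
    image: myapp:latest          # hypothetical image
    ports:
      - "127.0.0.1:8080:8080"    # loopback address : host port : container port
```

```ini
# /etc/systemd/system/myapp.service (hypothetical fragment)
[Service]
ExecStart=/usr/local/bin/myapp
DynamicUser=yes         # run as a transient unprivileged user
PrivateTmp=yes          # private /tmp, invisible to other services
ProtectSystem=strict    # read-only /usr, /boot, /etc
ProtectHome=yes
NoNewPrivileges=yes
```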
Does adding 127.0.0.1 make it so only that server can access it, or what? I’ve seen that but don’t understand it.
I assume #2 is just to keep containers/stacks able to talk to each other without piercing the firewall for ports that aren’t to be exposed to the outside? It wouldn’t prevent anything if one of the containers on that host were compromised, afaik.
It’s mostly to allow the reverse proxy on localhost to connect to the container/service, while blocking all other hosts/IPs.
This is especially important when using Docker, since it manipulates iptables directly and can bypass host firewalls such as ufw.
You’re right that it doesn’t increase security in case of a compromised container. It’s just about outside connections.
OK, yah, that’s what I was getting at.
Containers can talk to each other without any ports exposed at all, they just need to be added to the same docker network.
I was getting more at stacks on a host talking, i.e. you have a postgres stack with PG and pgAdmin, but want to use it with other stacks or k8s/swarm without exposing the PG port outside the machine. You are keeping other containers from interacting except on the allowed ports, and keeping those ports from being available off the host.
You can do that by joining the containers to the same docker network, you don’t need to expose ports even to localhost.
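A compose sketch of that: both containers join one user-defined network and publish nothing to the host, so pgAdmin reaches postgres by service name while nothing off-host can connect. The credentials are placeholders:

```yaml
services:
  postgres:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: changeme            # placeholder
    networks: [dbnet]
    # no "ports:" section — not reachable from the host or LAN
  pgadmin:
    image: dpage/pgadmin4
    environment:
      PGADMIN_DEFAULT_EMAIL: admin@example.com   # placeholder
      PGADMIN_DEFAULT_PASSWORD: changeme         # placeholder
    networks: [dbnet]
networks:
  dbnet: {}
```

Inside `dbnet`, pgAdmin connects to host `postgres` on port 5432 via Docker’s embedded DNS; the port never has to be published.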
I put up a sign that says, “No hackers allowed plz”
How has that been going?
“All your containers are belong to us.”
One thing I do is instead of having an open SSH port, I have an OpenVPN server that I’ll connect to, then SSH to the host from within the network. Then, if someone hacks into the network, they still won’t have SSH access.
I do the same, but with Wireguard instead of OpenVPN. The performance is much better in my experience and it sucks less battery life.
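One way to enforce the VPN-then-SSH pattern described above is to have sshd listen only on the tunnel address; `10.8.0.1` is a placeholder for whatever the WireGuard or OpenVPN interface address actually is:

```ini
# /etc/ssh/sshd_config (fragment)
# sshd binds only to the VPN interface address, so SSH is
# unreachable unless the tunnel is up.
ListenAddress 10.8.0.1
```

One caveat: sshd must start after the VPN interface exists, or it will fail to bind; a firewall rule allowing SSH only from the VPN subnet achieves the same effect without that ordering concern.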
Tailscale and being at my house are the only two ways in, so I feel those are pretty good for me.
- Fail2ban
- UFW
- Reverse Proxy
- IPtraf (monitor)
- Lynis (Audit)
- OpenVas (Audit)
- Nessus (Audit)
- Non standard SSH port
- CrowdSec + Appsec
- No root logins
- SSH keys
- Tailscale
- RKHunter
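The SSH-related items in that list (non-standard port, no root logins, keys only) boil down to a few sshd_config directives; the port number here is just an example:

```ini
# /etc/ssh/sshd_config (fragment)
Port 2222                    # non-standard SSH port (example value)
PermitRootLogin no           # no root logins
PasswordAuthentication no    # SSH keys only
PubkeyAuthentication yes
```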
In the context of the comment you referenced:
Definitely have the server on its own VLAN. It shouldn’t have any access to other devices that are not related to the services and I would also add some sort of security software.
If you have a public service that you expect to have multiple users on, you definitely should have some level of monitoring, whether that is just the application logs from the forum you want to host or, going further, some sort of EDR on the server.
Things I would do if I was hosting a public forum:
- Reverse proxy
- fail2ban
- dedicated server that does not have any personal data or other services that are sensitive
- complete network isolation with VLAN
- send application logs to ELK
- clamAV
And if the user base grows I would also add:
- EDR such as velociraptor
- an external firewall / ips
- possibly move from docker to VM for further isolation (not likely)
Use a cheap VLAN-capable switch to make an actual VLAN DMZ with the services’ router.
use non-root containers everywhere. segment services in different containers
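A Dockerfile sketch of a non-root container: create an unprivileged user and switch to it before the entrypoint. The base image choice and binary name are hypothetical:

```dockerfile
FROM alpine:3.20
# Create an unprivileged user and group (Alpine/busybox syntax)
RUN addgroup -S app && adduser -S -G app app
COPY --chown=app:app ./myapp /usr/local/bin/myapp   # hypothetical binary
USER app                                            # drop root before running
ENTRYPOINT ["/usr/local/bin/myapp"]
```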
Just tailscale really.
My services are only exposed to the tailscale network, so I don’t have to worry about other devices on my LAN.
A good VPN with MFA is all you really need if you are the only user.
Default block for incoming traffic is always a good starting point.
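Default-deny inbound can be expressed as a small nftables ruleset like this sketch; the WireGuard port is shown only as an example of an explicit opening:

```
# Minimal nftables sketch: drop inbound by default, allow loopback
# and established/related traffic.
table inet filter {
  chain input {
    type filter hook input priority 0; policy drop;
    iif "lo" accept
    ct state established,related accept
    # explicitly opened services go here, e.g.:
    # udp dport 51820 accept    # WireGuard (example)
  }
}
```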
I’m personally using CrowdSec to good results, but I still need to add some more to it, as I keep seeing failed attacks that should be blocked much quicker.
I expose some stuff through IPv6 only with my Synology NAS (I am CGNATed), and I have always wondered if I still need to use fail2ban in that environment…
My Synology has an auto-block feature that, from my understanding, is essentially fail2ban; what I don’t know is whether that feature covers all my exposed services or only Synology’s own.