tldr: I’d like to set up a reverse proxy with a domain and an SSL cert so my partner and I can access a few selfhosted services on the internet but I’m not sure what the best/safest way to do it is. Asking my partner to use tailscale or wireguard is asking too much unfortunately. I was curious to know what you all recommend.
I have some services running on my LAN that I currently access via tailscale. Some of these services would see some benefit from being accessible on the internet (ex. Immich sharing via a link, switching over from Plex to Jellyfin without requiring my family to learn how to use a VPN, homeassistant voice stuff, etc.) but I’m kind of unsure what the best approach is. Hosting services on the internet has risk and I’d like to reduce that risk as much as possible.
- I know a reverse proxy would be beneficial here so I can put all the services on one box and access them via subdomains but where should I host that proxy? On my LAN using a dynamic DNS service? In the cloud? If in the cloud, should I avoid a plan where you share CPU resources with other users and get a dedicated box?
- Should I purchase a memorable domain or a domain with a random string of characters so no one could reasonably guess it? Does it matter?
- What’s the best way to geo-restrict access? Fail2ban? Realistically, the only people that I might give access to live within a couple hundred miles of me.
- Any other tips or info you care to share would be greatly appreciated.
- Feel free to talk me out of it as well.
If security is one of your concerns, search for “TLS client certificates” (a.k.a. mutual TLS). TL;DR: you can create certificates to authenticate the client and configure the server to allow connections only from trusted devices. It adds extra security because attackers cannot leverage known vulnerabilities in the services you host: they are blocked at the TLS handshake, before they ever reach the application.
It is a little difficult to find good, up-to-date documentation, but I managed to make it work with nginx. The downside is that Firefox mobile doesn’t support them, but desktop Firefox and Chrome have no issues.
Of course you also want a server-side certificate; the easiest way is to get one from Let’s Encrypt.
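To give a flavor of the nginx side, here is a minimal sketch (the hostname, paths, and CA file are placeholders, not from the commenter's setup; the CA is one you create yourself to sign your client certs):

server {
    listen 443 ssl;
    server_name myservice.example.com;

    # server-side certificate from Let's Encrypt
    ssl_certificate     /etc/letsencrypt/live/myservice.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/myservice.example.com/privkey.pem;

    # client-side certificates: only devices holding a certificate
    # signed by this CA make it past the TLS handshake
    ssl_client_certificate /etc/nginx/client-ca.pem;
    ssl_verify_client on;

    location / {
        proxy_pass http://127.0.0.1:8080;
    }
}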
That’s interesting, I didn’t know that was a thing. I’ll look into it, thanks!
I remember that I started by following these two guides.
https://fardog.io/blog/2017/12/30/client-side-certificate-authentication-with-nginx/
https://stackoverflow.com/questions/7768593/
something I’m not sure is mentioned here: android (at least the version on my phone) accepts only a legacy format for certificates, and the error message when you try to import the new format is totally opaque. If you cannot import your certificate there, just check the openssl flags to change the export format.
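If it helps anyone else, this is roughly the flag in question on OpenSSL 3.x (file names are placeholders; -legacy re-enables the old PKCS#12 algorithms that older Android importers expect):

# default export, may be rejected by older Android versions
openssl pkcs12 -export -in client.crt -inkey client.key -out client.p12

# legacy-format export that older Android accepts
openssl pkcs12 -export -legacy -in client.crt -inkey client.key -out client.p12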
I’ve tried 3 times so far with Python/Gradio/Oobabooga and never managed to get certs to work, or to find a complete visual reference guide that demonstrates a working example like what I’m looking for on a home network. (Only really commenting to subscribe to watch this post develop, and to solicit advice :)
So far, I’ve played around with reverse proxies and SSL certs, and the easiest method I’ve found has been Docker. I just haven’t put anything in production yet. If you don’t know how to use Docker, learn; it’s so worth it.
Here is the tutorial I used and the note I left for myself. You’ll need a domain to play around with. Once you figure out how to get NGINX and certbot set up, replacing the helloworld container with a different one is relatively straightforward.
DO NOT FORGET, you must give certbot read/write permissions in the docker-compose.yml file, which isn't shown in this tutorial.

-----EXAMPLE, NOT PRODUCTION CODE----

  nginx:
    container_name: nginx
    restart: unless-stopped
    image: nginx
    depends_on:
      - helloworld
    ports:
      - 80:80
      - 443:443
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf
      - ./certbot/conf:/etc/letsencrypt:ro
      - ./certbot/www:/var/www/certbot:ro

  certbot:
    image: certbot/certbot
    container_name: certbot
    volumes:
      - ./certbot/conf:/etc/letsencrypt:rw
      - ./certbot/www:/var/www/certbot:rw
    command: certonly --webroot -w /var/www/certbot --keep-until-expiring --email *email* -d *domain1* -d *domain2* --agree-tos
I’d add that Traefik works even better with Docker because you tag your other containers that have web ports and Traefik picks that up from Docker and terminates the SSL connection for them. You don’t even have to worry about setting up SSL on every individual service, Traefik will take care of that even for services that don’t implement SSL.
You don’t even have to worry about setting up SSL on every individual service
I probably need to look into it more but since traefik is the reverse proxy, doesn’t it just get one ssl cert for a domain that all the other services use? I think that’s how my current nginx proxy is set up; one cert configured to work with the main domain and a couple subdomains. If I want to add a subdomain, if I remember correctly, I just add it to the config, restart the containers, and certbot gets a new cert for all the domains
Traefik basically has certbot built in so when you configure a new hostname on a service it automatically handles requesting and refreshing the cert for you. It can either request individual certificates for each hostname or a wildcard certificate (*.yourdomain.com) that covers all subdomains.
The neat trick is that in Docker you configure Traefik by adding Docker labels to the other containers you want to proxy. When you start up a container, Traefik automatically reads the config from the labels, does any necessary setup, then voilà, it’s ready to go!
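For illustration, a minimal compose sketch of that pattern (the domain, email, and whoami test service are placeholders, not anyone's actual setup; a wildcard cert would need the DNS challenge instead of the TLS challenge shown here):

services:
  traefik:
    image: traefik:v3.0
    command:
      - --providers.docker=true
      - --entrypoints.websecure.address=:443
      - --certificatesresolvers.le.acme.tlschallenge=true
      - --certificatesresolvers.le.acme.email=admin@example.com
      - --certificatesresolvers.le.acme.storage=/letsencrypt/acme.json
    ports:
      - 443:443
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro   # lets Traefik read container labels
      - ./letsencrypt:/letsencrypt                     # persists acme.json across restarts

  whoami:
    image: traefik/whoami
    labels:
      - traefik.enable=true
      - traefik.http.routers.whoami.rule=Host(`whoami.example.com`)
      - traefik.http.routers.whoami.entrypoints=websecure
      - traefik.http.routers.whoami.tls.certresolver=le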
nixos with nginx services does all proxying and ssl stuff, fail2ban is there as well
I know I should learn NixOS, I even tried for a few hours one evening but god damn, the barrier to entry is just a little too high for me at the moment 🫤
i guess you were able to install the os ok? are you using proxmox or regular servers?
i can post an example configuration.nix for the proxy and container servers that might help. i have to admit debugging issues with configurations can be very tricky.
in terms of security i was always worried about getting hacked. the only protection for that was to make regular backups of data and config so i can restore services, and to create a dmz behind my isp router with a vlan switch and a small router just for my services to protect the rest of my home network
i guess you were able to install the os ok? are you using proxmox or regular servers?
I was. It was learning the Nix way of doing things that was just taking more time than i had anticipated. I’ll get around to it eventually though
I tried out proxmox years ago but besides the web interface, I didn’t understand why I should use it over Debian or Ubuntu. At the moment, I’m just using Ubuntu and docker containers. In previous setups, I was using KVMs too.
Correct me if I’m wrong, but don’t you have to reboot every time you change your Nix config? That was what was painful. Once it’s set up the way you want, it seemed great but getting to that point for a beginner was what put me off.
I would be interested to see the config though
this is my nginx config for my element/matrix services
as you can see i am using a proxmox NixOS with an old 23.11 nix channel but i’m sure the config can be used in other NixOS environments
{ pkgs, modulesPath, ... }:
{
  imports = [ (modulesPath + "/virtualisation/proxmox-lxc.nix") ];

  security.pki.certificateFiles = [ "/etc/ssl/certs/ca-certificates.crt" ];

  system.stateVersion = "23.11";
  system.autoUpgrade.enable = true;
  system.autoUpgrade.allowReboot = true;

  nix.gc = {
    automatic = true;
    dates = "weekly";
    options = "--delete-older-than 14d";
  };

  networking.firewall.allowedTCPPorts = [ 80 443 ];

  services.openssh = {
    enable = true;
    settings.PasswordAuthentication = true;
  };

  users.users.XXXXXX = {
    isNormalUser = true;
    home = "/home/XXXXXX";
    extraGroups = [ "wheel" ];
    shell = pkgs.zsh;
  };

  programs.zsh.enable = true;

  security.acme = {
    acceptTerms = true;
    defaults.email = "[email protected]";
  };

  services.nginx = {
    enable = true;

    virtualHosts._ = {
      default = true;
      extraConfig = "return 500; server_tokens off;";
    };

    virtualHosts."XXXXXX.dynu.net" = {
      enableACME = true;
      addSSL = true;
      locations."/_matrix/federation/v1" = {
        proxyPass = "http://192.168.10.131:8008/";
        extraConfig =
          "client_max_body_size 300M;" +
          "proxy_set_header X-Forwarded-For $remote_addr;" +
          "proxy_set_header Host $host;" +
          "proxy_set_header X-Forwarded-Proto $scheme;";
      };
      locations."/" = {
        extraConfig = "return 302 https://element.xxxxxx.dynu.net/;";
      };
      extraConfig = "proxy_http_version 1.1;";
    };

    virtualHosts."matrix.XXXXXX.dynu.net" = {
      enableACME = true;
      addSSL = true;
      extraConfig = "proxy_http_version 1.1;";
      locations."/" = {
        proxyPass = "http://192.168.10.131:8008/";
        extraConfig =
          "client_max_body_size 300M;" +
          "proxy_set_header X-Forwarded-For $remote_addr;" +
          "proxy_set_header Host $host;" +
          "proxy_set_header X-Forwarded-Proto $scheme;";
      };
    };

    virtualHosts."element.XXXXXX.dynu.net" = {
      enableACME = true;
      addSSL = true;
      locations."/" = {
        proxyPass = "http://192.168.10.131:8009/";
        extraConfig = "proxy_set_header X-Forwarded-For $remote_addr;";
      };
    };

    virtualHosts."call.XXXXXX.dynu.net" = {
      enableACME = true;
      addSSL = true;
      locations."/" = {
        proxyPass = "http://192.168.10.131:8080/";
        extraConfig = "proxy_set_header X-Forwarded-For $remote_addr;";
      };
    };

    virtualHosts."livekit.XXXXXX.dynu.net" = {
      enableACME = true;
      addSSL = true;
      locations."/wss" = {
        proxyPass = "http://192.168.10.131:7881/";
        # proxyWebsockets = true;
        extraConfig =
          "proxy_http_version 1.1;" +
          "proxy_set_header X-Forwarded-For $remote_addr;" +
          "proxy_set_header Host $host;" +
          "proxy_set_header Connection \"upgrade\";" +
          "proxy_set_header Upgrade $http_upgrade;";
      };
      locations."/" = {
        proxyPass = "http://192.168.10.131:7880/";
        # proxyWebsockets = true;
        extraConfig =
          "proxy_http_version 1.1;" +
          "proxy_set_header X-Forwarded-For $remote_addr;" +
          "proxy_set_header Host $host;" +
          "proxy_set_header Connection \"upgrade\";" +
          "proxy_set_header Upgrade $http_upgrade;";
      };
    };

    virtualHosts."livekit-jwt.XXXXXX.dynu.net" = {
      enableACME = true;
      addSSL = true;
      locations."/" = {
        proxyPass = "http://192.168.10.131:7980/";
        extraConfig = "proxy_set_header X-Forwarded-For $remote_addr;";
      };
    };

    virtualHosts."turn.XXXXXX.dynu.net" = {
      enableACME = true;
      http2 = true;
      addSSL = true;
      locations."/" = {
        proxyPass = "http://192.168.10.131:5349/";
      };
    };
  };
}
this is my container config for element/matrix. podman containers do not run as root, so you have to get the file privileges right on the volumes mapped into the containers. i used top to find out which user the services were running as. you can see there are some settings in there where you can change the user if you are having permission problems.

{ pkgs, modulesPath, ... }:
{
  imports = [ (modulesPath + "/virtualisation/proxmox-lxc.nix") ];

  security.pki.certificateFiles = [ "/etc/ssl/certs/ca-certificates.crt" ];

  system.stateVersion = "23.11";
  system.autoUpgrade.enable = true;
  system.autoUpgrade.allowReboot = false;

  nix.gc = {
    automatic = true;
    dates = "weekly";
    options = "--delete-older-than 14d";
  };

  services.openssh = {
    enable = true;
    settings.PasswordAuthentication = true;
  };

  users.users.XXXXXX = {
    isNormalUser = true;
    home = "/home/XXXXXX";
    extraGroups = [ "wheel" ];
    shell = pkgs.zsh;
  };

  programs.zsh.enable = true;

  environment.etc = {
    "fail2ban/filter.d/matrix-synapse.local".text = pkgs.lib.mkDefault (pkgs.lib.mkAfter ''
      [Definition]
      failregex = .*POST.* - <HOST> - 8008.*\n.*\n.*Got login request.*\n.*Failed password login.*
                  .*POST.* - <HOST> - 8008.*\n.*\n.*Got login request.*\n.*Attempted to login as.*\n.*Invalid username or password.*
    '');
  };

  services.fail2ban = {
    enable = true;
    maxretry = 3;
    bantime = "10m";
    bantime-increment = {
      enable = true;
      multipliers = "1 2 4 8 16 32 64";
      maxtime = "168h";
      overalljails = true;
    };
    jails = {
      matrix-synapse.settings = {
        filter = "matrix-synapse";
        action = "%(known/action)s";
        logpath = "/srv/logs/synapse.json.log";
        backend = "auto";
        findtime = 600;
        bantime = 600;
        maxretry = 2;
      };
    };
  };

  virtualisation.oci-containers = {
    containers = {
      postgres = {
        autoStart = false;
        environment = {
          POSTGRES_USER = "XXXXXX";
          POSTGRES_PASSWORD = "XXXXXX";
          LANG = "en_US.utf8";
        };
        image = "docker.io/postgres:14";
        ports = [ "5432:5432" ];
        volumes = [ "/srv/postgres:/var/lib/postgresql/data" ];
        extraOptions = [ "--label" "io.containers.autoupdate=registry" "--pull=newer" ];
      };

      synapse = {
        autoStart = false;
        environment = {
          LANG = "C.UTF-8";
          # UID = "0";
          # GID = "0";
        };
        # user = "1001:1000";
        image = "ghcr.io/element-hq/synapse:latest";
        ports = [ "8008:8008" ];
        volumes = [ "/srv/synapse:/data" ];
        log-driver = "json-file";
        extraOptions = [
          "--label" "io.containers.autoupdate=registry"
          "--log-opt" "max-size=10m"
          "--log-opt" "max-file=1"
          "--log-opt" "path=/srv/logs/synapse.json.log"
          "--pull=newer"
        ];
        dependsOn = [ "postgres" ];
      };

      element = {
        autoStart = true;
        image = "docker.io/vectorim/element-web:latest";
        ports = [ "8009:80" ];
        volumes = [ "/srv/element/config.json:/app/config.json" ];
        extraOptions = [ "--label" "io.containers.autoupdate=registry" "--pull=newer" ];
        # dependsOn = [ "synapse" ];
      };

      call = {
        autoStart = true;
        image = "ghcr.io/element-hq/element-call:latest-ci";
        ports = [ "8080:8080" ];
        volumes = [ "/srv/call/config.json:/app/config.json" ];
        extraOptions = [ "--label" "io.containers.autoupdate=registry" "--pull=newer" ];
      };

      livekit = {
        autoStart = true;
        image = "docker.io/livekit/livekit-server:latest";
        ports = [
          "7880:7880"
          "7881:7881"
          "50000-60000:50000-60000/udp"
          "5349:5349"
          "3478:3478/udp"
        ];
        cmd = [ "--config" "/etc/config.yaml" ];
        entrypoint = "/livekit-server";
        volumes = [ "/srv/livekit:/etc" ];
        extraOptions = [ "--label" "io.containers.autoupdate=registry" "--pull=newer" ];
      };

      livekitjwt = {
        autoStart = true;
        image = "ghcr.io/element-hq/lk-jwt-service:latest-ci";
        ports = [ "7980:8080" ];
        environment = {
          LK_JWT_PORT = "8080";
          LIVEKIT_URL = "wss://livekit.xxxxxx.dynu.net/";
          LIVEKIT_KEY = "XXXXXX";
          LIVEKIT_SECRET = "XXXXXX";
        };
        entrypoint = "/lk-jwt-service";
        extraOptions = [ "--label" "io.containers.autoupdate=registry" "--pull=newer" ];
      };
    };
  };
}
you only need to reboot NixOS when something low-level has changed. i honestly don’t know where that line is drawn, so i reboot quite a lot when i’m setting up a Nix server and then hardly reboot it at all from then on, even with auto-updates running. oh, and if i make small changes to the services i just run
sudo nixos-rebuild switch
and don’t reboot
Tailscale is very popular among people I know who have similar problems. Supposedly it’s pretty transparent and easy to use.
If you want to do it yourself, setting up dyndns and a wireguard node on your network (with the wireguard udp port forwarded to it) is probably the easiest path. The official wireguard vpn app is pretty good at least for android and mac, and for a linux client you can just set up the wireguard thing directly. There are pretty good tutorials for this iirc.
A DNS name pointing to your home IP might in theory be an indication to potential hackers that there’s something there, but just having a live IP on the internet will already get you malicious scans. WireGuard doesn’t respond unless the incoming packet is properly authenticated, so it doesn’t show up in a regular scan.
Geo-restriction might just give a false sense of security. Fail2ban is probably overkill for a single udp port. Better to invest in having automatic security upgrades on and making your internal network more zero trust
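For reference, here's a client-side config sketch of that setup (the keys, subnets, port, and dyndns hostname are all placeholders):

[Interface]
PrivateKey = <client private key>
Address = 10.8.0.2/32
DNS = 1.1.1.1

[Peer]
PublicKey = <server public key>
Endpoint = myhome.dyndns.example:51820
# route only the home subnet(s) through the tunnel; everything else
# keeps using the phone's normal default gateway
AllowedIPs = 192.168.1.0/24, 10.8.0.0/24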
It doesn’t improve security much to host your reverse proxy outside your network, but it does hide your home IP if you care.
If your app can be exploited over the web and through a proxy, it doesn’t matter whether that proxy is on the same machine or across the network.
Either Tailscale or Cloudflare Tunnel is the most suitable solution, as other comments said.
For Tailscale, as you already set it up, just make sure you have an exit node where your services are. I had to do a bit of tinkering to make sure the IPs were resolved; it’s just an argument to the tailscale command.
But if you don’t want to use Tailscale because it’s too complicated for your partner, then Cloudflare Tunnel is the other way to go.
How it works is by creating a tunnel between your services and Cloudflare, kind of how a VPN would work. You usually use the cloudflared CLI, or go directly through Cloudflare’s website, to configure the tunnel. You should have your DNS imported to Cloudflare, by the way, because you have to do a binding such as: service.mydns.com -> myservice.local. Cloudflare can then resolve your local service and expose it at a public URL.
Just so you know, Cloudflare tunnels are free for some of that usage; however, Cloudflare holds the keys for your SSL traffic, so in theory they could have a look at your requests.
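Roughly, the cloudflared setup looks like this (the hostname and port are placeholders; the ingress rules live in ~/.cloudflared/config.yml):

cloudflared tunnel login
cloudflared tunnel create home
# point a public hostname at the tunnel
cloudflared tunnel route dns home immich.mydns.com
# ~/.cloudflared/config.yml then maps hostnames to local services:
#   ingress:
#     - hostname: immich.mydns.com
#       service: http://localhost:2283
#     - service: http_status:404
cloudflared tunnel run home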
best of luck with the setup!
On my home network I have Nginx Proxy Manager running Let’s Encrypt with my domain for HTTPS, currently only for Vaultwarden (I’m testing it for a bit before rolling it out or migrating wholly over to HTTPS). My domain is a ######.xyz that’s cheap.
For remote access I use Tailscale. For friends and family I give them a relay [a Raspberry Pi with nginx that proxies them over Tailscale] that sits on their home network; that way they need “something they have” [the relay] and “something they know” [login credentials] to get at my stuff. I won’t implement biometrics for “something they are”. This is post hoc justification though, and nonsense to boot: I don’t want to expose a port, a VPS has low WAF, and I’m not installing Tailscale on all of their devices, so a relay is an unhappy compromise.
For bonus points I run pihole to pretty up the domain names to service.swirl and run a homarr instance so no-one needs to remember anything except home.swirl, but if they do remember immich.swirl that works too.
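(For anyone wondering about the .swirl names: on Pi-hole v5 that can be as simple as local DNS records, e.g. in /etc/pihole/custom.list; the IPs below are made up:)

# /etc/pihole/custom.list : one "IP hostname" pair per line
192.168.1.50 home.swirl
192.168.1.50 immich.swirl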
If there are many ways to skin a cat, I believe I chose to use a spoon; don’t be like me. Updating each Dockge instance is a couple of minutes, and updating DietPi is a few minutes more, which, individually, is not a lot on my weekly/monthly maintenance respectively. But on aggregate… I have checklists. One day I’ll write a script that will ssh into a machine > update/upgrade the OS > docker compose pull/rebuild/purge > move on to the next relay… That’ll be my impetus to learn how to write a script.
That’ll be my impetus to learn how to write a script.
This part caught my eye. You were able to do all that other stuff without ever attempting to write a script? That’s surprising and awesome. Assuming you are running everything on a Linux server, I feel like a bash script run via a cronjob would be your best bet; no need to ssh into the server, just let it do it on its own. I haven’t tested any of this but I do have scripts I wrote that do automatic ZFS backups and scrubs; the order should go something like:
open the terminal on the server and type
mkdir scripts
cd scripts
nano docker-updates.sh
type something along the lines of this (I’m still learning docker so adjust the commands to your needs)
#!/bin/bash
# change into the directory that holds your docker-compose.yml
cd /path/to/compose-project
docker compose pull && docker compose up -d
docker image prune -f
save the file and then type
sudo chmod +x ./docker-updates.sh
to make it executable, and finally set up a cronjob to run the script at specific intervals. type
crontab -e
or
sudo crontab -e
(this is if you want to run the script as root but ideally, you just add your user to the docker group so this shouldn’t be needed) and at the bottom of the file type this and save, that’s it:
# runs script at 1am on the first of every month
0 1 1 * * /path/to/scripts/docker-updates.sh
this website will help you choose a different interval
For OS updates you basically do the same thing except the script would look something like: (I forget if you need to type “sudo” or not; it’s running as root so I don’t think you need it but maybe try it with sudo in front of both "apt"s if it’s not working. Also use whatever package manager you have if you aren’t using apt)
while in the scripts folder you created earlier
nano os-updates.sh
#!/bin/bash
apt update && apt upgrade -y
reboot
save and don’t forget to make it executable
then use
sudo crontab -e
(because you’ll need root privileges to update. this will run the script as root without requiring you to input your password) and at the bottom of the file add:

# runs script at 12am on the first of every month
0 0 1 * * /path/to/scripts/os-updates.sh
I did think about cron but, long ago, I heard it wasn’t best practice to update through cron because the lack of logs makes things difficult to see where things went wrong, when they do.
I’ve got automatic-upgrades running on stuff so it’s mostly fine. Dockge is running purely to give me a way to upgrade docker images without having to ssh. It’s just the monthly routine of “apt update && apt upgrade -y” *5 that sucks.
Thank you for the advice though. I’ll probably set cron to update the images with the script as you suggest. I have a “maintenance” homarr page as a budget uptime kuma so I can quickly look there to make sure everything is pinging at least. I made the page so I can quickly get to everyone’s dockge, pihole and nginx but the pings were a happy accident.
the lack of logs
That’s the best part: with a script, you can pipe the output of the updates into a log file you create yourself. I don’t currently do that; if something breaks, I just roll back to a previous snapshot and try again later, but it’s possible and seemingly straightforward.
This askubuntu link will probably help
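As a sketch, the piping can happen right in the crontab entry (the log path is just an example; pick somewhere your cron user can write):

# append stdout and stderr to a log file for later review
0 1 1 * * /path/to/scripts/docker-updates.sh >> /var/log/docker-updates.log 2>&1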
The biggest reason to use VPN is that some ISPs may take issue with you running a web server over a residential service when they see incoming HTTP requests to your IP. If you don’t want to require VPN, then Cloudflare tunnels are perfect for this and they also solve the need for dynamic DNS if you want to use static domain because your domain points to the Cloudflare edge servers and they route it to you wherever your tunnel endpoint is running.
Past that, Traefik is a great reverse proxy that can manage getting Let’s Encrypt SSL certificates for you, even with wildcard domains, and would still work fine with dynamic DNS.
Do you mind giving a high level overview of what a Cloudlfare tunnel is doing? Like, what’s connected to what and how does the data flow? I’ve seen cloudflare mentioned a few other times in the comments here. I know Cloudflare offers DNS services via their 1.1.1.1 and 1.0.0.1 IPs and I also know they somehow offer DDoS protection (although I’m not sure how exactly. caching?). However, that’s the limit of my knowledge of Cloudflare
Basically the Cloudflare tunnel client connects from the computer running your services (or proxy) out to Cloudflare’s edge servers and your DNS hostname is set to the IP of one of Cloudflare’s edge servers. Cloudflare acts like a reverse proxy by sending incoming SSL requests for your hostname to your tunnel client through their own network. The DNS record doesn’t expose your public IP and the Cloudflare tunnel client easily works behind firewalls, NAT, and doesn’t need a static IP because it connects outbound to Cloudflare’s network.
The biggest limitation is that this only works for SSL traffic because it can be routed by hostname in the SNI without needing a client on the client side. They do offer tunnels for other connections, but that requires their client running on both sides so it’s more like a traditional VPN again.
ISPs shouldn’t care unless it is explicitly prohibited in the contract. (I’ve never seen this)
I still wouldn’t expose anything locally though since you would need to pay for a static IP.
Instead, I just use a VPS with Wireguard and a reverse proxy.
Tailscale is completely transparent on any devices I’ve used it on. Install, set up, and never look at it again because unless it gets turned off, it’s always on.
I’ve run into a weird issue where, on my phone, Tailscale will disconnect and refuse to reconnect for a seemingly random amount of time, usually less than an hour. It doesn’t happen often but it is often enough that I’ve started to notice. I’m not sure if it’s a network issue or an app issue, but during that time, I can’t connect to my services. All that to say, my tolerance for that is higher than my partner’s; the first time something didn’t work, they would stop using it lol
So I have it running on about 20 phones for customers of mine that use Blue Iris with it. But these are all Apple devices, I’m the only one with Android. I’ve never had a complaint except one person that couldn’t get on at all, and we found that for some reason the Blue Iris app was blacklisted in the network settings from using the VPN. But that’s the closest I’ve seen to your problem.
I wonder if setting up a ping every 15 seconds from the device to the server would keep the tunnel active and prevent the disconnect. I don’t think Tailscale has a keepalive function like a WireGuard connection. If that’s too much of a pain, you might want to just implement WireGuard yourself, since you can set a keepalive value and the tunnel won’t go idle. Tailscale probably wants to reduce their overhead, so they don’t include a keepalive.
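For reference, the WireGuard setting being referred to is a one-liner in the client’s [Peer] section (25 seconds is the commonly suggested value):

[Peer]
# ...existing peer settings...
# send a packet every 25s so NAT/firewall mappings don't expire
PersistentKeepalive = 25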
Nginx Proxy Manager + LetsEncrypt.
Why is it too much to ask your partner to use WireGuard? I installed WireGuard for my wife on her iPhone; she can access everything on our home network as if she were at home, and she doesn’t even know that she is using a VPN.
A few reasons
- My partner has plenty of hobbies but sys-admin isn’t one of them. I know I’ll show them how to turn off wireguard to troubleshoot why “the internet isn’t working” but eventually they would forget. Shit happens, sometimes servers go down and sometimes turning off wireguard would allow the internet to work lol
- I’m a worrier. If there was an emergency, my partner needed to access the internet but couldn’t because my DNS server went down, my wireguard server went down, my ISP shit the bed, our home power went out, etc., and they forgot about the VPN, I’d feel terrible.
- I was a little too ambitious when I first got into self hosting. I set up services and shared them before I was ready and ended up resetting them constantly for various reasons. For example, my Plex server is on its 12th iteration. My partner is understandably wary of trying stuff I’ve set up. I’m at a point where I don’t introduce them to a service I set up unless accessing it is no different than using an app (like the Home Assistant app) or visiting a website. That intermediary step of ensuring the VPN is on and functional before accessing the service is more than I’d prefer to ask of them.
Telling my partner to visit a website seems easy, they visit websites every day, but they don’t use a VPN everyday and they don’t care to.
- I don’t think this is a problem with tailscale but you should check. Also you don’t have to pipe all the traffic through your tunnel. In the allowed IPs you can specify only your subnet so that everything else leaves via the default gateway.
- in the DNS server field in your WireGuard config you can specify anything, doesn’t have to be RFC1918 compliant. 1.1.1.1 will work too
- At the end of the day, a threat model is always gonna be security vs. convenience. Plex was used as an attack vector in the past, as most people don’t rush to patch it (and rightfully so, there are countless horror stories of PMS updates breaking the whole thing entirely). If you trust that you know what you’re doing, and trust the applications you’re running to treat security seriously (hint: Plex doesn’t), then go ahead, set up your reverse proxy of choice (easiest would be Traefik, but if you need more robustness then nginx is still king) and open 443 to the internet.
you’re talking to a community of admins that force their family to “use the thing”. they can’t understand why anyone can’t debug tech issues because they have surrounded themselves with people who can.
I get it, my wife isn’t technical at all. she gets online about once a week to check email. I couldn’t even begin to explain to her how to debug her connection problems past turn it off and on again.
so, to simplify things, she doesn’t connect to the home network outside of the home network. but I was able to teach her how to download movies/shows from Plex to her phone and I was able to explain why ads show up on her apps when she’s out of the house.
it’s not perfect, but it’s the best I can give her with her understanding of the technology. knowing the limitations of your user base is just as important as developing the tools they will use and how they will access them.
I get where the original commenter is coming from. A VPN is easy to use, why not have my partner just use the VPN? But like, try adding something to your routine that you don’t care about or aren’t interested in. It’s an uphill battle and not every hill is worth dying on.
All that to say, I appreciate your comment.
I use this: https://github.com/ZoeyVid/NPMplus. I use UniFi for geo-blocking.
Cloudflare
AWS
McDonald’s
Sears & Roebuck
Johnson & Johnson
I presume you’re referring to Cloudflare tunnel?
Yep, cloudflare tunnel / Zero trust.
Dead easy to set up.
I use traefik with a wildcard domain pointing to a Tailscale IP for services I don’t want to be public. For the services I want to be publicly available I use cloudflare tunnels.
I used to do a reverse proxy setup with Caddy, but now I self host a WireGuard VPN. It has access to Nextcloud on the same machine, and Home Assistant and Kodi on another. On our phones, WireGuard only has access to certain apps; the rest of the network traffic is normal. A nice simple setup.