Hello people, I recently rented a VPS from OVH and I want to start hosting my own PieFed instance and a couple of other services. I am running Debian 13 with Docker, and I have Nginx Proxy Manager almost set up. I want to set up subdomains so that social.my.domain goes to my PieFed instance, but how do I tell the machine to send PieFed traffic to that subdomain and Joplin traffic (for example) to another subdomain? Can I do that natively with nginx/Docker, or do I have to install another program? Thanks for the advice.


It’s called a Reverse Proxy. The most popular options are going to be Nginx, Caddy, Traefik, Apache (kinda dated, but easy to manage), or HAProxy if you’re just doing containers.
Yup. The reverse proxy takes http/https requests from the WAN, and forwards them to the appropriate services on your LAN. It will also do things like automatically maintain TLS certificates, so https requests can be validated. Lastly, it can usually do some basic authentication or group access stuff. This is useful to ensure that only valid users or devices are able to reach services that otherwise don’t support authentication.
So for example, let’s say you have a service called `ExampServ` running on `192.168.1.50:12345`. This port is not forwarded, and the service is not externally available on the WAN without the reverse proxy.

Now you also have your reverse proxy service, listening on `192.168.1.50:80` and `192.168.1.50:443`… Port 80 (standard for http requests) and 443 (standard for https requests) are forwarded to it from the WAN. Your reverse proxy is designed to take requests from your various subdomains, ensure they are valid, upgrade them from http to https (if they originated as http), and then forward them to your various services.

So maybe you create a subdomain of `exampserv.example.com`, with an A record pointing to your WAN IPv4 address. Any requests for that subdomain will hit ports 80 (for http) or 443 (for https) on your WAN. These http and https requests will be forwarded to your reverse proxy, because those ports are forwarded. Your reverse proxy takes these requests and validates them (by upgrading to https if it was originally an http request, verifying that the https request isn’t malformed, that it came from a valid subdomain, prompting the user for a username and password if that is configured, etc.)… After validating the request, it forwards the traffic to `192.168.1.50:12345`, where your ExampServ service is running.

Now your ExampServ service is available internally via the IP address, and externally via the subdomain. And as far as ExampServ is concerned, all of the traffic is LAN traffic, because it’s simply communicating with the reverse proxy on the same network. The service’s port is not forwarded directly (which would be a security risk in and of itself), it is properly gated behind an authentication wall, and the reverse proxy is ensuring that all requests are valid https requests with a proper TLS handshake. And (most importantly for your use case) you can have multiple services running on the same device, and each one simply uses a different subdomain in your DNS and reverse proxy rules.
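In Nginx terms (which is roughly what Nginx Proxy Manager generates for you under the hood), the setup above boils down to something like this sketch — the certificate paths and the `exampserv` names/addresses are just placeholders carried over from the example:

```nginx
# Redirect plain http to https
server {
    listen 80;
    server_name exampserv.example.com;
    return 301 https://$host$request_uri;
}

# Terminate TLS here and forward to the internal service
server {
    listen 443 ssl;
    server_name exampserv.example.com;

    ssl_certificate     /etc/letsencrypt/live/exampserv.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/exampserv.example.com/privkey.pem;

    location / {
        proxy_pass http://192.168.1.50:12345;
        # Pass the original host and client info through to the app
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```

Each additional service just gets its own `server` block with a different `server_name` and `proxy_pass` target — NPM’s “Proxy Hosts” form is essentially filling in exactly these fields for you.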
FWIW I don’t find Apache dated at all. It’s mature software, yes, but it’s also incredibly powerful and flexible, and regularly updated and improved. It’s probably not the fastest by any benchmark, but it was never intended to be (and for self-hosting, it doesn’t need to be). It’s an “everything and the kitchen sink” web server, and I don’t think that’s always the wrong choice. Personally, I find Apache’s little-known and perhaps misleadingly named Managed Domains (mod_md/MDomain) by far the easiest and clearest way to automatically manage and maintain SSL certificates. It’s really nice and worth looking into if you use Apache and are currently using any other solution for certificate renewal.
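For the curious, a minimal mod_md setup looks roughly like this (domain names are placeholders; this assumes mod_md and mod_ssl are loaded and port 80 is reachable for the ACME challenge):

```apache
# mod_md watches the MDomain names and fetches/renews
# Let's Encrypt certificates on its own
MDomain example.com www.example.com
MDCertificateAgreement accepted

<VirtualHost *:443>
    ServerName example.com
    # No SSLCertificateFile/SSLCertificateKeyFile needed;
    # mod_md supplies the certificate automatically
    SSLEngine on
</VirtualHost>
```

Compared to running certbot plus a reload hook on the side, the web server itself owns the whole certificate lifecycle, which is what makes it so low-maintenance.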
I’ll be honest with you here, Nginx kind of ate httpd’s lunch 15 years ago, and with good reason.
It’s not that httpd is “bad”, or not useful, or anything like that. It’s that it’s not as efficient and fast.
Apache did try to address this a while back (the event MPM), but it was too late. All the better features of nginx just kinda did httpd in, IMO.
Apache is fine, it’s easy to learn, and there’s a ton of documentation around for it, but its userbase is massively diminished, meaning less up-to-date information for new users to find in forums and the like.
Apache has the better open source tooling IMO.
I use both, but at work I prefer Apache simply for its relative ease of setting up our SSO solution. There is probably a tool for that in nginx as well, but it’s either proprietary or hard to find (and I did try to find it — setting up and learning Apache and then SSO was actually easier for me).
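If the SSO in question is OIDC-based (an assumption on my part), the Apache side is usually mod_auth_openidc, and the whole thing is about this much config — every URL, ID, and secret here is a placeholder:

```apache
# mod_auth_openidc: authenticate users against an OpenID Connect provider
OIDCProviderMetadataURL https://sso.example.com/.well-known/openid-configuration
OIDCClientID my-app
OIDCClientSecret change-me
OIDCRedirectURI https://app.example.com/oidc-callback
OIDCCryptoPassphrase change-me-too

<Location "/">
    # Any request below / requires a logged-in user
    AuthType openid-connect
    Require valid-user
</Location>
```

The nginx-side equivalents tend to be either the commercial `auth_jwt` module or third-party Lua/njs modules, which matches the “proprietary or hard to find” experience.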
What makes you say that? From my experience, HAProxy is a very competent, flexible, performant and scalable general-purpose proxy. It was already established when Docker came on the scene. The more container-oriented options would be Traefik (or Envoy).
HAProxy is not meant for complex routing or handling of endpoints. It’s a simple service for load balancing or plain proxying. All the others have better features otherwise.
More concretely…? What cursed endpoints is this too simple for?
https://docs.haproxy.org/3.2/configuration.html#http-request
https://docs.haproxy.org/3.2/configuration.html#7
https://docs.haproxy.org/3.2/configuration.html#11
What is an example of a “Better feature” relevant here?
For starters: Rails, PHP, and passthrough routing stacks like message handlers and anything that expects socket handling. It’s just not built for that, or for session management for such things if whatever it’s talking to isn’t doing so itself.
It seems like you think I’m talking smack about HAProxy, but you don’t understand its real origin or strengths and assume it can do anything.
It can’t. Neither can any of the other services I mentioned.
Chill out, kid.
One related story: I did have the arguable pleasure of operating a stateful, WebSockets/HTTP/2-heavy, horizontally scaled “microservice” API built with Rails and even more Ruby, as well as gRPC services written in other stuff. Pinning of instances based on auth headers and sessions, weighting based on subpaths, stuff like that. It was originally deployed with Traefik. When it went from “beta” stage to having to handle heavier traffic consistently and reliably on the public internet, Traefik did not cut it anymore, and after a few rounds of evaluation we settled on HAProxy, which we never regretted IIRC. A friend’s company had it in front of one of the country’s busiest online services at the time, a pipeline largely built in PHP, fronted with HAProxy. I have seen similar patterns play out at other times in other places.
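For a flavor of the “pinning and weighting” setup I mean, here is a stripped-down HAProxy sketch — all the hostnames, backends, paths and addresses are made up for illustration:

```haproxy
frontend fe_api
    bind :443 ssl crt /etc/haproxy/certs/
    # Route gRPC subpaths to their own backend
    acl is_grpc path_beg /grpc
    use_backend be_grpc if is_grpc
    default_backend be_rails

backend be_rails
    # Pin a client to the same Rails instance by hashing its auth header,
    # so stateful sessions keep hitting the same process
    balance hdr(Authorization)
    server rails1 10.0.0.11:3000 check
    server rails2 10.0.0.12:3000 check

backend be_grpc
    balance roundrobin
    server grpc1 10.0.0.21:50051 check proto h2
    server grpc2 10.0.0.22:50051 check proto h2
```

ACLs plus `use_backend`, `balance hdr(...)`, and per-server `weight` cover most of the “complex routing” cases people claim HAProxy can’t do.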
Outside of $work I’ve had them all running side by side or layered (should consolidate some but ain’t nobody got time for that) over 5+ years so I think I have a decent feel for their differences.
I’m not saying HAProxy is perfect, always the best pick, has the most features, or without tradeoffs. It does take a lot more upfront learning and tweaking to get what you need from it. But I can’t square your claims with lived experience, especially when you specifically contrast it with Traefik, which I would say is easy to get started with, has popular first-class support for containers, and loved by small teams - but breaks at scale and when you hit more advanced use-cases.
Not that anything either of us has mentioned so far is relevant whatsoever for a budding homelabber asking how to do domain-based http routing.
I think you are just baiting now.