Hello people, I recently rented a VPS from OVH and I want to start hosting my own PieFed instance and a couple of other services. I am running Debian 13 with Docker, and I have Nginx Proxy Manager almost set up. I want to set up subdomains so that when I go to social.my.domain it reaches my PieFed instance. But how do I tell the machine to send PieFed traffic to this subdomain and Joplin traffic (for example) to another domain? Can I use nginx/Docker natively for that, or do I have to install another program? Thanks for the advice.

  • Foofighter@discuss.tchncs.de
    link
    fedilink
    English
    arrow-up
    1
    ·
    7 minutes ago

    I’m not using Docker myself, but rather NPM and other services in Proxmox containers and VMs. The concept is the same though.

    NPM lets you define a proxy host, which needs to be the subdomain name; this tells NPM how to handle and serve requests to that domain. In your case this would be the full social.my.domain. Additionally, you need to set the local IP/port of the service you’re hosting. You can also use a local hostname instead, which makes it easier to move services to other IPs, though that probably doesn’t happen often.

    Finally, HTTPS (SSL/TLS) should be configured. This can be tricky if you don’t have specific instructions, but it should not be neglected!

  • kumi@feddit.online
    link
    fedilink
    English
    arrow-up
    2
    ·
    edit-2
    31 minutes ago

    The right nginx config will do this. Since you already have Nginx Proxy Manager, you shouldn’t need to introduce another proxy in the middle just for this.

    Most beginners find Caddy a lot easier to learn and configure compared to Nginx, BTW.

  • frongt@lemmy.zip
    link
    fedilink
    English
    arrow-up
    8
    ·
    4 hours ago

    how do I tell the machine to send piefed traffic to this subdomain and joplin traffic (for example) to another domain

    You don’t send traffic to domains. You point all the domains to one host, and on that host, set up a reverse proxy like nginx, caddy, or traefik, and then configure HTTP routing rules. That proxy can run in docker. I use traefik and it does all the routing automatically once I add labels to my docker-compose file.
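
    For illustration, the Traefik label-based approach looks roughly like this (the image names, domain, and container port are placeholders, not a real PieFed setup):

    ```yaml
    # docker-compose.yml sketch; service names, domain, and port are placeholders
    services:
      traefik:
        image: traefik:v3.0
        command:
          - "--providers.docker=true"
          - "--providers.docker.exposedbydefault=false"
          - "--entrypoints.websecure.address=:443"
        ports:
          - "443:443"
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock:ro

      piefed:
        image: example/piefed   # placeholder image
        labels:
          - "traefik.enable=true"
          - "traefik.http.routers.piefed.rule=Host(`social.my.domain`)"
          - "traefik.http.routers.piefed.entrypoints=websecure"
          - "traefik.http.services.piefed.loadbalancer.server.port=8080"
    ```

    With this, Traefik watches the Docker socket and picks up the routing rule from the labels; no separate proxy config file is needed for each new service.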

  • just_another_person@lemmy.world
    link
    fedilink
    English
    arrow-up
    25
    ·
    5 hours ago

    It’s called a Reverse Proxy. The most popular options are going to be Nginx, Caddy, Traefik, Apache (kinda dated, but easy to manage), or HAProxy if you’re just doing containers.

    • kumi@feddit.online
      link
      fedilink
      English
      arrow-up
      1
      ·
      28 minutes ago

      HAProxy if you’re just doing containers

      What makes you say that? From my experience, HAProxy is a very competent, flexible, performant, and scalable general-purpose proxy. It was already established when Docker came on the scene. The more container-oriented options would be Traefik (or Envoy).

    • cecilkorik@lemmy.ca
      link
      fedilink
      English
      arrow-up
      6
      ·
      4 hours ago

      FWIW I don’t find Apache dated at all. It’s mature software, yes, but it’s also incredibly powerful and flexible, and regularly updated and improved. It’s probably not the fastest by any benchmark, but it was never intended to be (and for self-hosting, it doesn’t need to be). It’s an “everything and the kitchen sink” web server, and I don’t think that’s always the wrong choice. Personally, I find Apache’s little-known and perhaps misleadingly named Managed Domains (mod_md/MDomain) by far the easiest and clearest way to automatically manage and maintain SSL certificates; it’s really nice and worth looking into if you use Apache and are using any other solution for certificate renewal.

      • just_another_person@lemmy.world
        link
        fedilink
        English
        arrow-up
        6
        ·
        3 hours ago

        I’ll be honest with you here: Nginx kind of ate httpd’s lunch 15 years ago, and with good reason.

        It’s not that httpd is “bad”, or not useful, or anything like that. It’s that it’s not as efficient and fast.

        The Apache project DID try to address this a while back, but it was too late. All the better features of nginx just kinda did httpd in, IMO.

        Apache is fine, it’s easy to learn, and there’s a ton of documentation around for it, but it has a massively diminished userbase, meaning less up-to-date information for new users to find in forums and the like.

  • nutbutter@discuss.tchncs.de
    link
    fedilink
    English
    arrow-up
    9
    ·
    5 hours ago

    In your DNS settings at your domain provider, add A and AAAA records for all the subdomains you want to use. Then, when someone hits port 443 using one of those domains, your Nginx Proxy Manager will decide which service to show the client based on the domain.
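
    As a sketch, the records could look like this in zone-file notation (the addresses here are documentation placeholders; use your VPS’s real IPs):

    ```
    social.my.domain.   300  IN  A     203.0.113.10
    notes.my.domain.    300  IN  A     203.0.113.10
    social.my.domain.   300  IN  AAAA  2001:db8::10
    ```

    Both subdomains point at the same VPS; the proxy tells the requests apart by hostname.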

    how do I tell the machine to send piefed traffic to this subdomain

    Configure your Nginx Proxy Manager. It should be using port 80 for HTTP, port 443 for HTTPS, and another port for its WebUI (81 is the default, iirc).

    So, if I type piefed.yourdomain.com in my address bar, the DNS tells my browser your IP, my browser hits your VPS on port 443, then Nginx Proxy Manager automatically sees that the user is requesting piefed, and will show me piefed.

    For the SSL certificates, you can either generate a new certificate for every subdomain, or use a wildcard certificate that works for all subdomains.

  • DecronymB
    link
    fedilink
    English
    arrow-up
    3
    ·
    edit-2
    5 minutes ago

    Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:

    Fewer Letters More Letters
    DNS Domain Name Service/System
    HTTP Hypertext Transfer Protocol, the Web
    HTTPS HTTP over SSL
    IP Internet Protocol
    NAT Network Address Translation
    SSL Secure Sockets Layer, for transparent encryption
    TLS Transport Layer Security, supersedes SSL
    VPS Virtual Private Server (opposed to shared hosting)
    nginx Popular HTTP server

    9 acronyms in this thread; the most compressed thread commented on today has 15 acronyms.

    [Thread #1001 for this comm, first seen 14th Jan 2026, 02:55] [FAQ] [Full list] [Contact] [Source code]

  • deadcade@lemmy.deadca.de
    link
    fedilink
    English
    arrow-up
    3
    ·
    5 hours ago

    The job of a reverse proxy like nginx is exactly this: take traffic coming from one source (usually port 443, HTTPS) and forward it somewhere else based on things like the (sub)domain. An HTTPS reverse proxy often also forwards the traffic as plain HTTP on the local machine, so the software running the service doesn’t have to worry about SSL.
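
    Hand-written nginx config equivalent to what NPM generates under the hood looks roughly like this (domains, ports, and certificate paths are placeholders):

    ```nginx
    server {
        listen 443 ssl;
        server_name social.my.domain;              # PieFed
        ssl_certificate     /etc/ssl/example/fullchain.pem;  # placeholder paths
        ssl_certificate_key /etc/ssl/example/privkey.pem;

        location / {
            proxy_pass http://127.0.0.1:8080;      # plain HTTP to the local container
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
    }

    server {
        listen 443 ssl;
        server_name notes.my.domain;               # Joplin, for example
        # ssl_certificate lines omitted for brevity
        location / {
            proxy_pass http://127.0.0.1:8081;
        }
    }
    ```

    nginx picks the `server` block whose `server_name` matches the requested hostname, which is the whole trick behind routing multiple subdomains to one machine.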

    Be sure to get yourself a firewall on that machine. VPSes are usually directly connected to the internet without NAT in between. If you don’t have a firewall, all internal services will be accessible from the outside: stuff like databases or the internal ports of the services you host.

    • kossa@feddit.org
      link
      fedilink
      English
      arrow-up
      3
      ·
      4 hours ago

      all internal services will be accessible

      What? Only when they are configured to listen on outside interfaces. Which, granted, they often are in the default configuration, but since OP uses Docker on that host, chances are kinda slim that they run some rando unconfigured database directly. And it would still be password- or authentication-protected in the default config.

      I mean, it’s never wrong to slap a firewall onto something, I guess. But OTOH, claims like “all services will be exposed and evil haxxors will take you over” are also a disservice.

      • deadcade@lemmy.deadca.de
        link
        fedilink
        English
        arrow-up
        2
        ·
        4 hours ago

        I’ve seen many default docker-compose configurations provided by server software that expose the ports of stuff like databases by default (which exposes them on all host interfaces). Even outside Docker, a lot of software has a default configuration of “listen on all interfaces”.

        I’m also not saying “evil haxxors will take you over”. It’s not the end of the world to have a service requiring authentication exposed to the internet, but it’s much better to only expose what should be public.

        • kossa@feddit.org
          link
          fedilink
          English
          arrow-up
          2
          ·
          3 hours ago

          Yep, fair. Those docker-compose files that just forward the ports to the host on all interfaces should burn. At least they should make them 127.0.0.1 forwards, I agree.
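
          The difference in compose syntax is just the bind address (postgres here is only an example service):

          ```yaml
          services:
            db:
              image: postgres:16          # example service
              ports:
                # - "5432:5432"           # binds 0.0.0.0: reachable from the internet on a VPS
                - "127.0.0.1:5432:5432"   # loopback only: reachable from the host itself
          ```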

    • a_person@piefed.socialOP
      link
      fedilink
      English
      arrow-up
      1
      ·
      5 hours ago

      What service would you recommend for a firewall? The firewall I use on my laptop is ufw; should I use that on the VPS, or is there a different service that works better?

      • kumi@feddit.online
        link
        fedilink
        English
        arrow-up
        1
        ·
        edit-2
        46 minutes ago

        Firewalld

        sudo apt-get install firewalld
        sudo systemctl enable --now firewalld  # SSH on port 22 stays open, but most other things are blocked by default
        sudo firewall-cmd --get-active-zones
        sudo firewall-cmd --info-zone=public
        sudo firewall-cmd --zone=public --add-port=1234/tcp
        sudo firewall-cmd --runtime-to-permanent
        

        There are some decent guides online. Also take a look in /etc/firewalld/firewalld.conf and see if you want to change anything. Pay attention to the part about Docker.

        You need to know about zones, ports, and interfaces for the basics. Services are optional. Policies are more advanced.

      • deadcade@lemmy.deadca.de
        link
        fedilink
        English
        arrow-up
        2
        ·
        4 hours ago

        UFW works well, and is easy to configure. UFW is a great option if you don’t need the flexibility (and insane complexity) that manually managing iptables rules offers.

        • kumi@feddit.online
          link
          fedilink
          English
          arrow-up
          1
          ·
          edit-2
          32 minutes ago

          The main problem with UFW, besides being based on legacy iptables (instead of the modern nftables, which is easier to learn and manage), is the config format. Keeping track of your changes over time is hard, and even with tools like Ansible it easily becomes a mess where things fall out of sync with what you expect.

          Unless you need iptables for some legacy system or have a weird fetish for it, nobody needs to learn iptables today. On modern Linux systems, iptables isn’t a kernel module anymore but a CLI shim that actually interacts with the nft backend.

          Misconfigured UFW resulting in getting pwned is very common. For example, with default settings, Docker will bypass UFW completely for incoming traffic.

          I strongly recommend firewalld, or rawdogging nftables, instead of ufw.

          There used to be limitations with firewalld, but policies have matured and replaced the deprecated “direct” rules, which, together with other general improvements, makes it a good default choice by now.
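
          For comparison, a minimal default-deny nftables ruleset can be this short (the open ports are just examples; adjust for your services):

          ```
          #!/usr/sbin/nft -f
          flush ruleset

          table inet filter {
              chain input {
                  type filter hook input priority 0; policy drop;

                  ct state established,related accept
                  iif "lo" accept
                  tcp dport { 22, 80, 443 } accept   # SSH + HTTP(S)
                  icmp type echo-request accept
                  icmpv6 type { echo-request, nd-neighbor-solicit, nd-neighbor-advert } accept
              }
          }
          ```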