Hi everyone.

I was trying to research about how to implement SSL on the traffic between my clients and the containers that I host on my server.

Basically, my plan was to use upstream SSL in HAProxy to achieve this, but for that to work, each individual container on my server would need to terminate SSL itself, and I don't think every container image ships with the necessary TLS libraries. That puts a halt on my idea of encrypting traffic upstream from my reverse proxy to my containers.
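
For context, this is roughly what the upstream-SSL idea looks like in HAProxy (a sketch only; the certificate paths, backend name, and address are hypothetical). It only works if the container itself serves HTTPS, which is exactly the sticking point:

```shell
# Hypothetical HAProxy snippet: terminate TLS from clients, then
# re-encrypt toward a backend container that can itself speak TLS.
cat >> /etc/haproxy/haproxy.cfg <<'EOF'
frontend www
    bind :443 ssl crt /etc/haproxy/certs/site.pem
    default_backend app

backend app
    # "ssl verify required" re-encrypts traffic to the container;
    # the container must present a certificate signed by ca.crt
    server app1 10.88.0.10:8443 ssl verify required ca-file /etc/haproxy/certs/ca.crt
EOF
```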

With that said, ChatGPT suggested I use Kubernetes with a service mesh like Istio. The idea was intriguing, so I started to read about it; but before I dive head-first into k3s (TBH it's overkill for my setup), is there any way to implement server-side encryption with Podman containers and a reverse proxy?

After writing all of this, I think I’m missing the point about a reverse-proxy being an SSL termination endpoint, but if my question makes sense to you, please let me know your thoughts!

Thanks!

  • @vegetaaaaaaa@lemmy.world · 3 · 1 year ago

    I’m missing the point about a reverse-proxy being an SSL termination endpoint

    Yes, that’s usually one of the jobs of the reverse proxy. Communication between the RP and an application container running on the same host is typically unencrypted. If you’re really paranoid about a rogue process intercepting HTTP connections between the RP and the application container, set up a separate container network for each application, and/or use unix sockets.
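
    A sketch of the "separate network per application" approach (network and image names hypothetical; joining multiple networks in one `podman run` needs Podman 4+):

    ```shell
    # One isolated podman network per app, so app containers can only
    # talk to the reverse proxy, never directly to each other.
    podman network create app1-net
    podman network create app2-net
    podman run -d --name app1 --network app1-net docker.io/library/nginx
    podman run -d --name app2 --network app2-net docker.io/library/httpd
    # The reverse proxy joins every app network and is the only shared path:
    podman run -d --name rp --network app1-net --network app2-net \
        -p 443:443 docker.io/library/haproxy
    ```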

    ChatGPT suggested I use Kubernetes

    wtf…

    • @MigratingtoLemmy@lemmy.world (OP) · 2 · 1 year ago

      Hey, thanks for your comment. Could you explain a bit more about how using Unix sockets would improve my security posture here (in terms of not having unencrypted traffic on the network)? I will think about creating separate namespaces in podman.

      Good thing I asked haha. Is the fact that I mentioned ChatGPT setting a wrong impression? I like to ask ChatGPT/Bing about questions like this; sometimes they give wonderful answers, sometimes not the best, like this one. I figured there must be an easier way to secure my traffic/restrict it as much as possible without jumping straight to k3s.

      Thanks!

      • Chewy · 4 · 1 year ago

        Nothing wrong with asking LLMs about topics; I’d even say it’s a good idea before asking directly on a forum. Just like searching before asking, asking an LLM before asking humans is good.

        And mentioning where you got the recommendation for k8s is also helpful. I’m not knowledgeable about k8s, but I guess the “wtf” was about the overkill of recommending k8s when simpler solutions exist.

        Unix sockets have permissions like any file, so it’s simple to restrict access to a user/group and thus to processes running as that user. If it’s unencrypted HTTP on a server, other processes could listen on localhost, but I’m unsure about that part.
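
        Restricting a socket like that is just ordinary file permissions; a sketch with hypothetical paths, user, and group names:

        ```shell
        # Only the owner (www-data) and members of group "rp" (e.g. the
        # reverse proxy) may connect to the application's unix socket.
        chown www-data:rp /run/myapp/app.sock
        chmod 660 /run/myapp/app.sock   # rw for owner and group, nothing for others
        ls -l /run/myapp/app.sock       # type "s" marks it as a socket
        ```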

        • @MigratingtoLemmy@lemmy.world (OP) · 1 · 1 year ago

          Sorry for replying this late; I wanted to read more about Unix sockets and podman before I got back. Thanks for your comment.

          I already responded to the other commenter with what I’ve understood and my plans, I’ll paste it here too:

          If I understand correctly, Unix sockets specifically allow two or more processes to communicate with each other, and are supported on Podman (and Docker).

          Now, the question is: how do I programmatically utilise sockets for containers to communicate with each other?

          I was considering a reverse proxy per pod as someone else suggested, since every podman pod has its own network namespace. Connecting between pods should likely be through the reverse proxies then. I just need to figure out how I can automate the deployment of such proxies along with the pods.

          Thanks again for your comment, and please let me know if I’m missing anything.

          • Chewy · 1 · 1 year ago

            Thanks for the long reply. Sadly I don’t know enough about unix sockets and docker/podman networking to help you.

            I’ve only used unix sockets with PostgreSQL and signald. For both I had to mount the socket into the container, and for Postgres I had to change the config to use unix sockets.

            • @MigratingtoLemmy@lemmy.world (OP) · 1 · 1 year ago

              I see. My use-case would probably be better served through a software bus implementation (how would I keep all of these containers attached to the bus? Isn’t that a security risk?), but handling everything through the network behind individual reverse-proxies might be the best idea in this case.

      • @vegetaaaaaaa@lemmy.world · 3 · 1 year ago

        Is the fact that I mentioned ChatGPT setting a wrong impression?

        Not at all, but the fact that it suggested jumping straight to k8s for such a trivial problem is… interesting.

        how using Unix sockets would improve my security posture here

        Unix sockets enforce another layer of protection by requiring the user/application writing to or reading from them to have a valid UID or be part of the correct group (the traditional Linux/Unix permission system). With plain localhost HTTP networking, by contrast, a rogue application could somehow listen on the loopback interface and/or exploit a race condition to bind the port and pretend to be the “real” application. Network namespaces (which container management tools use to create isolated virtual networks) mostly solve this problem. Again, basic unencrypted localhost networking is fine for a vast majority of use cases/threat models.
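
        The difference is easy to see from the client side (port, socket path, and app name are hypothetical here):

        ```shell
        # Any process on the host can hit a loopback HTTP port:
        curl http://127.0.0.1:8080/

        # A unix socket adds filesystem permissions on top; this fails
        # with "Permission denied" unless the calling user is allowed
        # to open the socket file:
        curl --unix-socket /run/myapp/app.sock http://localhost/
        ```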

        • @MigratingtoLemmy@lemmy.world (OP) · 1 · 1 year ago

          Hey, thanks for your comment. My apologies for replying this late; I wanted to read more about Unix sockets and podman before I got back.

          If I understand correctly, Unix sockets specifically allow two or more processes to communicate with each other, and are supported on Podman (and Docker).

          Now, the question is: how do I programmatically utilise sockets for containers to communicate with each other?

          I was considering a reverse proxy per pod as someone else suggested, since every podman pod has its own network namespace. Connecting between pods should likely be through the reverse proxies then. I just need to figure out how I can automate the deployment of such proxies along with the pods.

          Thanks again for your comment, and please let me know if I’m missing anything.

          • @vegetaaaaaaa@lemmy.world · 1 · 1 year ago

            how do I programmatically programmatically utilise sockets for containers to communicate amongst each other?

            Sockets are filesystem objects, similar to a file. So for 2 containers to access the same socket, the container exposing the socket must export it to the host filesystem via a bind mount/volume, and the container that needs read/write on this socket must be able to access it, also via a bind mount. The user ID or groups of the user accessing the socket must be allowed to access the socket via traditional unix permissions.
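
            As a sketch of that bind-mount arrangement (all paths and image names hypothetical; the app image is a stand-in for whatever exposes the socket):

            ```shell
            # Container A exposes its socket directory to the host;
            # container B mounts the same host directory to reach it.
            mkdir -p /srv/sockets/myapp
            podman run -d --name app \
                -v /srv/sockets/myapp:/run/app  docker.io/myorg/myapp
            podman run -d --name rp \
                -v /srv/sockets/myapp:/run/upstream:ro  docker.io/library/nginx
            # nginx inside "rp" can now proxy_pass to
            # unix:/run/upstream/app.sock, subject to unix permissions.
            ```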

            Again, I personally do not bother with this, I run the reverse proxy directly on the host, and configure it to forward traffic over HTTP on the loopback interface to the containers. [1] [2] [3] and many others lead me to think the risk is acceptable in my particular case. If I was forced to do otherwise, I would probably look into plugging the RP into the appropriate podman network namespaces, or running it on a dedicated host (VM/physical - this time using SSL/TLS between RP and applications, since traffic leaves the host) and implementing port forwarding/firewalling with netfilter.

            I have a few services exposing a unix socket (mainly php-fpm) instead of an HTTP/localhost port; in this case I just point the RP at these sockets (e.g. ProxyPass unix:/run/php/php8.2-fpm.sock). If the php-fpm process was running in a container, I’d just export /run/php/php8.2-fpm.sock from the container to /some/place/myapp/php.sock on the host, and target this from the RP instead.
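
            A fuller version of that ProxyPass idea, as an Apache config fragment (the socket path is the one from the comment; the conf file name is hypothetical):

            ```shell
            cat > /etc/apache2/conf-available/php-fpm-socket.conf <<'EOF'
            <FilesMatch "\.php$">
                # Hand PHP requests to php-fpm over its unix socket
                SetHandler "proxy:unix:/run/php/php8.2-fpm.sock|fcgi://localhost"
            </FilesMatch>
            EOF
            a2enconf php-fpm-socket && systemctl reload apache2
            ```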

            You need to think about what actual attacks could actually happen, what kind of damage they would be able to do, and mitigate from there.

            how I can automate the deployment of such proxies along with the pods

            That’s a separate question. I use ansible for all deployment/automation needs - when it comes to podman I use the podman_container and podman_generate_systemd modules to automate deployment of containers as systemd services. Ansible also configures my reverse proxy to forward traffic to the container (simply copy files in /etc/apache2/sites-available/...; a2enconf; systemctl reload apache2). I have not used pods yet, but there is a podman_pod module. A simple bash script should also do the trick to start with.
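
            A minimal sketch of that ansible setup (host group, container name, image, and port are hypothetical; the modules live in the containers.podman collection):

            ```shell
            cat > deploy-app.yml <<'EOF'
            - hosts: server
              tasks:
                - name: Run the app container, published on loopback only
                  containers.podman.podman_container:
                    name: myapp
                    image: docker.io/library/nginx
                    publish: ["127.0.0.1:8080:80"]

                - name: Generate a systemd unit so the container survives reboots
                  containers.podman.podman_generate_systemd:
                    name: myapp
                    dest: /etc/systemd/system
            EOF
            ansible-playbook deploy-app.yml
            ```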

            • @MigratingtoLemmy@lemmy.world (OP) · 1 · 1 year ago

              I would probably look into plugging the RP into the appropriate podman network namespaces, or running it on a dedicated host (VM/physical - this time using SSL/TLS between RP and applications, since traffic leaves the host) and implementing port forwarding/firewalling with netfilter.

              Could you detail how you would do this? Especially since the containers in my case do not support HTTPS (they do not have the libraries compiled, if I’m not wrong).

              Thank you for the clarification. I do not think I’ll be running malicious containers inside my pods, but I would like to contain unencrypted traffic as much as possible. Running an RP for every pod seems doable and since I reach containers through their loopback address inside the pod, this is reasonably safe for my use-case too.

              Could you confirm if one can reach one’s containers on the loopback address in a separate network namespace on podman? I was wondering about the differences between a pod and a network namespace on podman, and so far the only mention of something like this is that containers in pods share a “security context”. I don’t know enough to understand what this is since I haven’t read about pods in Kubernetes.

              Thanks, I was planning to use Ansible too.

              • @vegetaaaaaaa@lemmy.world · 2 · 1 year ago

                Could you detail how you would do this?

                I would re-read all docs about podman networking, different network modes, experiment with systemd PrivateNetwork option, re-read some basic about network namespaces, etc ;) I have no precise guide as I’ve never attempted it, so I would do some research, trial and error, take notes, etc, which is the stage you’re at.

                Edit: https://www.cloudnull.io/2019/04/running-services-in-network-name-spaces-with-systemd/, https://gist.github.com/rohan-molloy/35d5ccf03e4e6cbd03c3c45528775ab3, …

                Could you confirm if one can reach one’s containers on the loopback address in a separate network namespace on podman?

                I think each pod uses its own network namespace [1]. You should check the docs and experiment (ip netns, ip addr, ip link, ip route...).
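
                A quick experiment along those lines (pod name hypothetical; `ip netns list` shows named namespaces for rootful setups, rootless podman keeps its namespace private):

                ```shell
                # Containers in one pod share a single network namespace,
                # so they reach each other on 127.0.0.1.
                podman pod create --name mypod -p 8080:80
                podman run -d --pod mypod --name web docker.io/library/nginx
                podman run --pod mypod --rm docker.io/library/alpine \
                    wget -qO- http://127.0.0.1:80   # hits nginx via the shared netns
                ip netns list
                ```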

                I think it’s doable, but pretty much uncharted territory - at least the docs for basic building blocks exist, but I’ve never come across a real world example of how to do this. So if you go this way, you will be on your own debugging, documenting and maintaining the system and fixing it when it breaks. It will be an interesting learning experiment though, hope you can document and share the outcome. Good luck!

                • @MigratingtoLemmy@lemmy.world (OP) · 1 · 1 year ago

                  Thank you, I do realise that each pod uses its own namespace. I was asking whether containers that are part of different network namespaces (outside of their pods) could also reach each other via the loopback address.

  • Max-P · 3 · 1 year ago

    The mesh proxy would work, but it’s not easy to configure and offers somewhat little benefit, especially if everything is running on the same box. The way that’d work is: NGINX talks to the mesh proxy, which encrypts the traffic to the other mesh proxy in front of the target container, which then talks to the container unencrypted again. Your request passes through 3 containers and still ends up unencrypted on the last hop.

    Unless you want TLS between nodes and containers, you can skip the intermediate step and have NGINX talk directly to the containers plaintext. That’s why it’s said to do TLS termination: the TLS session ends at that reverse proxy.
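
    What TLS termination at the reverse proxy looks like, as an NGINX sketch (hostname, certificate paths, and upstream port are hypothetical): HTTPS stops at the proxy, and the hop to the container is plain HTTP.

    ```shell
    cat > /etc/nginx/conf.d/myapp.conf <<'EOF'
    server {
        listen 443 ssl;
        server_name app.example.org;
        ssl_certificate     /etc/nginx/certs/app.crt;
        ssl_certificate_key /etc/nginx/certs/app.key;

        location / {
            proxy_pass http://127.0.0.1:8080;   # plaintext hop to the container
        }
    }
    EOF
    ```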

  • @ithilelda@lemmy.world · 2 · 1 year ago

    If I’m understanding your question correctly, you are trying to use TLS on containers that may not have TLS libraries?

    There are two ways to do that. One is to rebuild every container yourself, modifying its services to support TLS. The other is to use a pod: you put your service container and a reverse proxy into the same pod, set up that reverse proxy correctly as an edge proxy terminating TLS, and expose only the reverse proxy’s port. That way, it will just look like a service with TLS enabled.

    Since you are considering TLS for everything, I assume you don’t care about overhead. Adding a reverse proxy in front of every container costs maybe 10-50 MB of additional memory, which won’t matter on modern systems.
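
    A sketch of that proxy-in-the-pod pattern (pod, container, hostname, and port are hypothetical; Caddy is just one choice of edge proxy): only the proxy’s TLS port leaves the pod, while the app stays plaintext on the pod-internal loopback.

    ```shell
    podman pod create --name app-pod -p 8443:8443
    podman run -d --pod app-pod --name app  docker.io/library/nginx   # serves :80 inside the pod
    podman run -d --pod app-pod --name edge \
        -v ./Caddyfile:/etc/caddy/Caddyfile:ro  docker.io/library/caddy
    # Caddyfile (also hypothetical) terminates TLS and forwards internally:
    #   https://app.example.org:8443 {
    #       reverse_proxy 127.0.0.1:80
    #   }
    ```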

    • @MigratingtoLemmy@lemmy.world (OP) · 1 · 1 year ago

      Thank you, this is an excellent idea. I will probably not run a pod for every container (technically I could, since Netavark is supported for rootless containers in Podman 4.0), but I will definitely have a few pods on my system, where I can use a reverse-proxy per pod. I just need to figure out how to automate it.

      Thanks again

      • @notfromhere@lemmy.one · 1 · 1 year ago

        Single-node k3s is possible and can do what you’re asking, but it has some overhead (hence your acknowledgment of overkill). One thing I think it gets right, and which would help here, is the reverse proxy service: it’s essentially a single entity holding the configuration for all of your endpoints. It’s managed programmatically, so additions or changes don’t need to be done by hand. It sounds like you need a reverse proxy to terminate the TLS, then ingress objects defined to route to individual containers/pods. If you try for multiple reverse proxies you will have a bad time managing all of that overhead. I strongly recommend going for a single reverse proxy setup unless you can automate the multiple-proxies setup.
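
        The single-reverse-proxy pattern works without k8s too; here is a sketch with HAProxy routing by SNI hostname to per-app loopback ports (all hostnames, ports, and cert paths hypothetical):

        ```shell
        cat >> /etc/haproxy/haproxy.cfg <<'EOF'
        frontend www
            bind :443 ssl crt /etc/haproxy/certs/
            use_backend app1 if { ssl_fc_sni -i app1.example.org }
            use_backend app2 if { ssl_fc_sni -i app2.example.org }

        backend app1
            server s1 127.0.0.1:8081    # plaintext hop to container 1
        backend app2
            server s2 127.0.0.1:8082    # plaintext hop to container 2
        EOF
        ```

        Adding an endpoint then means appending one `use_backend` rule and one backend block, which is easy to template with ansible instead of managing one proxy per app.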

  • @DecronymAB · 1 · 1 year ago

    Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:

    Fewer Letters   More Letters
    HTTP            Hypertext Transfer Protocol, the Web
    HTTPS           HTTP over SSL
    SSL             Secure Sockets Layer, for transparent encryption
    TLS             Transport Layer Security, supersedes SSL
    k8s             Kubernetes container management package

    5 acronyms in this thread; the most compressed thread commented on today has 9 acronyms.

    [Thread #264 for this sub, first seen 6th Nov 2023, 14:30]