I’m a retired Unix admin. It was my job from the early '90s until the mid '10s. I’ve kept somewhat current ever since by running various machines at home. So far I’ve managed to avoid using Docker at home even though I have a decent understanding of how it works - after I stopped being a sysadmin in the mid '10s I still worked for a technology company and did plenty of “interesting” reading and training.

It seems that more and more stuff that I want to run at home is being delivered as Docker-first and I have to really go out of my way to find a non-Docker install.

I’m thinking it’s no longer a fad and I should invest some time getting comfortable with it?

  • @ShittyBeatlesFCPres@lemmy.world

    I’m gonna play devil’s advocate here.

    You should play around with it. But I’ve been a Linux server admin for a long time and — this might be unpopular — I think Docker is unimportant for your situation. I use Docker daily at work and I love it. But I didn’t bother with it for my home server. I’ll never need to scale it or deploy anything repeatedly or where I need 100% uptime.

    At home, I tend to try out new things and my old docker-compose files are just not that valuable. Docker is amazing at work where I have different use cases but it mostly just adds needless complexity on a home server.

    • Great Blue HeronOP

      That’s exactly how I feel about it. Except (as noted in my post…) the software availability issue. More and more stuff I want is “docker first” and I really have to go out of my way to install and maintain non docker versions. Case in point - I’m trying to evaluate Immich so I can move off Google photos. It looks really nice, but it seems to be effectively “docker only.”

      • GreyBeard

        The advantage of docker, as I see it for home labs, is keeping things tidy, ensuring compatibility, and making it easy to manage and back up setup configs, app configs, and app data. It is all very predictable and manageable. I can move my docker compose and data from one host to another in literal seconds. I can, likewise, spin up and down test environments in seconds too. Obviously the whole scaling thing that people love containers for is pointless in a homelab, but many of the things that make it scalable also make it easy to manage.
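
        For example, a move like that might look roughly like this, assuming the app's data lives in bind mounts next to the compose file (the stack and host names here are made up for illustration):

        ```
        # on the old host: stop the stack so the data is consistent
        docker compose down
        # copy the compose file and the bind-mounted data to the new host
        rsync -a ~/stacks/myapp/ newhost:~/stacks/myapp/
        # on the new host: pull images and start everything back up
        ssh newhost 'cd ~/stacks/myapp && docker compose up -d'
        ```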

      • @Tsubodai@programming.dev

        I’m probably the opposite of you! Started using docker at home after messing up my raspberry pi a few too many times trying stuff out, and not really knowing what the hell I was doing. Since then I’ve moved to a proper NAS, with (for me, at least) plenty of RAM.

        Love the ability to try out a new service, which is kind of self-documenting (especially if I write comments in the docker-compose file). And just get rid of it without leaving any trace if it’s not for me.
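
        As a rough illustration of the “self-documenting” bit, a commented compose file for a hypothetical test service might look like this (the image, port and names are just placeholders):

        ```
        services:
          whoami:
            # tiny service I'm evaluating; delete this block to undo everything
            image: traefik/whoami:latest
            ports:
              - "8080:80"   # reachable on the NAS at port 8080
            restart: unless-stopped
        ```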

        Added portainer to be able to check on things from my phone browser, grafana for some pretty metrics and graphs, etc etc etc.

        And now at work, it’s becoming really, really useful, and I’m the only person in my (small, scientific research) team who uses containers regularly. While others are struggling to keep their fragile python environments working, I can try out new libraries, take my env to the on-prem HPC or the external cloud, and I don’t lose any time at all. Even “deployed” some little utility scripts for folks who don’t realise that they’re actually pulling my image from the internal registry when they run it. A much, much easier way of getting a little time-saving script into the hands of people who are forced to use Linux but don’t have a clue how to use it.
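
        For what it’s worth, the “deployment” for a little script can be as small as this (the image and script names are invented for the example):

        ```
        # Dockerfile wrapping a single Python utility script
        FROM python:3.12-slim
        COPY convert_data.py /usr/local/bin/convert_data.py
        ENTRYPOINT ["python", "/usr/local/bin/convert_data.py"]
        ```

        Push that to the internal registry and hand colleagues a one-line alias for something like docker run --rm -v "$PWD":/data -w /data registry.internal/tools/convert-data input.csv, and they never have to touch a Python environment themselves.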

    • @Shdwdrgn@mander.xyz

      This is kinda where I’m at as well. I have always run my home services each in their own VM. There’s no fuss to set up a new one, and if I want to move it to a different server I just copy the *.img file over and launch it. Sure, I run a lot of internet services across my various machines, but it all just works, so I don’t understand what purpose there would be in converting all the custom configurations over to docker. It might make sense if I was trying to run all my services directly on the bare metal, but who does that?

      • Terrasque

        VMs have much bigger overhead, for one. And VMs are less reproducible too. If you had to set up a VM again, do you have all the steps written down? Every single step? Including that small “oh right” thing you always forget? A Dockerfile is basically just a list of those steps, written in a way a computer can follow. And every time you build an image in docker, it just plays that list and gives you the resulting file system ready to run.
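
        A minimal sketch of what that list of steps looks like (the nginx example is mine, not anything specific):

        ```
        # Dockerfile: the setup steps, replayed from scratch on every build
        FROM debian:bookworm-slim
        RUN apt-get update && \
            apt-get install -y --no-install-recommends nginx && \
            rm -rf /var/lib/apt/lists/*
        COPY nginx.conf /etc/nginx/nginx.conf
        # the small "oh right" step you always forget is now written down forever
        RUN mkdir -p /var/cache/nginx
        CMD ["nginx", "-g", "daemon off;"]
        ```

        Running docker build -t mysite . replays that list and gives you an image you can run anywhere.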

        It’s incredibly practical in some cases. Let’s say you want to try a different library or upgrade a component to a newer version. With VMs you could do it live, but you risk not being able to go back. You could make a copy or a checkpoint, but that’s rather resource intensive. With docker you just change the Dockerfile slightly and build a new image.

        The resulting image is also immutable, which means that if you restart the docker container, it’s like reverting to the first VM checkpoint after a finished install, throwing out any cruft that has gathered. You can exempt specific files and folders from this, if needed. So all the cruft and changes that have accumulated get thrown out, except the data folder(s) for the program.
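
        The exemption is done with volumes or bind mounts. A rough sketch (the image name and paths are made up):

        ```
        # everything inside the container is throwaway, except what's mapped in with -v
        docker run -d --name app -v /srv/app-data:/data ghcr.io/example/app:latest
        # recreate it from the image: internal cruft is gone, /srv/app-data survives
        docker rm -f app
        docker run -d --name app -v /srv/app-data:/data ghcr.io/example/app:latest
        ```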

        • @Shdwdrgn@mander.xyz

          I’m not sure I understand this idea that VMs have a high overhead. I just checked one of my servers: there are nine VMs running everything from chat channels to email to web servers, and the server is 99.1% idle. And this is on a PowerEdge R620 with low-power CPUs; it’s not like I’m running something crazy-fast or even all that new. Hell, until the beginning of this year I was running all this stuff on PowerEdge 860s, which are nearly 20 years old now.

          If I needed to set up the VM again, well I would just copy the backup as a starting point, or copy one of the mirror servers. Copying a VM doesn’t take much, I mean even my bigger storage systems only use an 8GB image. That takes, what, 30 seconds? And for building a new service image, I have a nearly stock install which has the basics like LDAP accounts and network shares set up. Otherwise once I get a service configured I just let Debian manage the security updates and do a full upgrade as needed. I’ve never had a reason to try replacing an individual library for anything, and each of my VMs run a single service (http, smtp, dns, etc) so even if I did try that there wouldn’t be any chance of it interfering with anything else.

          Honestly, from what you’re saying here, it just sounds like docker is made for people who previously ran everything directly under the main server installation and frequently had upgrades of one service breaking another service. I suppose docker works for those people, but the problems you are saying it solves are problems I have never run into over the last two decades.

          • Terrasque

            Nine. How much RAM do they use? How much disk space? Try running 90, or 900. Currently, on my personal hobby kubernetes cluster, there are 83 different instances running. Because of the low overhead, I can run even small tools in their own container, completely separate from the rest. If I run, say… a postgresql server… spinning one up takes 90 MB of disk space for the image, and about 15 MB of RAM.
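
            If you want to check numbers like that yourself, something along these lines works (exact figures will vary by image and version):

            ```
            docker run -d --name pg-test -e POSTGRES_PASSWORD=test postgres:16-alpine
            docker image ls postgres:16-alpine    # size of the image on disk
            docker stats --no-stream pg-test      # live RAM/CPU usage of the container
            docker rm -f pg-test                  # clean up when done
            ```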

            I worked at a company that did - among other things - hosting, and was using VMs for easier management and separation between customers. I wasn’t directly involved in that part day to day, but I was friends with the main guy there. It was tough to manage. He was experimenting with automatically creating and setting up new VMs, stripping them of unused services and files, and having different sub-scripts for different services. This was way before docker, but already then admins were looking in that direction.

            So aschually, docker is kinda made for people who run things in VMs, because that is exactly what they were looking for and duct-taping things together for before docker came along.

            • @Shdwdrgn@mander.xyz

              Yeah I can see the advantage if you’re running a huge number of instances. In my case it’s all pretty small scale. At work we only have a single server that runs a web site and database so my home setup puts that to shame, and even so I have a limited number of services I’m working with.

              • Terrasque

                Yeah, it also has the effect that when starting up, say, a new postgres or web server is one simple command that takes a few seconds and a few MB of disk and RAM, you end up doing it for smaller stuff too.

                Instead of setting up one nginx for multiple sites, you run one nginx per site and have the settings for that as part of the site repository. Or when a service needs a DB, you just start a new one for that. And if that file analyzer ran in its own image instead of being part of the web service, you could scale that separately… oh, and it needs a redis instance and a rabbitmq server, that’s two more containers that serve just that web service. And so on…

                Things that were a huge hassle before, like separate mini VMs for each sub-service and unique sub-services for each service, don’t just become practical but easy. You can define all the services and their relations in one file, and docker will recreate the whole stack with all services with one command.
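
                A sketch of what such a file can look like, using the example services from above (the web image and names are purely illustrative):

                ```
                # docker-compose.yml - the whole stack comes back with: docker compose up -d
                services:
                  web:
                    image: example/webapp:latest    # hypothetical application image
                    depends_on: [db, redis, rabbitmq]
                    ports:
                      - "8080:8080"
                  db:
                    image: postgres:16
                    environment:
                      POSTGRES_PASSWORD: example
                    volumes:
                      - dbdata:/var/lib/postgresql/data
                  redis:
                    image: redis:7
                  rabbitmq:
                    image: rabbitmq:3
                volumes:
                  dbdata:
                ```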

                And then it also gets super easy to start more than one of them, for example for testing, or if you have a different client… which is how you easily reach a hundred instances running.

                So instead of a service you have a service blueprint, which can be used in service stack blueprints, which allows you to set up complex systems relatively easily. With a granularity that would traditionally be insanity for anything other than huge, serious big-company deployments.

                • @Shdwdrgn@mander.xyz

                  Well congrats, you are the first person who has finally convinced me that it might actually be worth looking at even for my small setup. Nobody else has been able to even provide a convincing argument that docker might improve on my VM setup, and I’ve been asking about it for a few years now.

                  • Terrasque

                    It’s a great tool to have in the toolbox. Might take some time to wrap your head around, but coming from VMs you already have most of the base understanding.

                    From a VM user’s perspective, some translations:

                    • Dockerfile = a script to set up a VM from a base distro, and create a checkpoint that is used as a base image for starting up VMs
                    • A container is roughly similar to a running VM. It runs inside the host OS, jailed, which accounts for its low overhead.
                    • When a container is killed, every file system change gets thrown out. Certain paths and files can be mapped to host folders / storage to keep data between restarts.
                    • Containers run on their own internal network. You can specify ports to nat in from host interface to containers.
                    • Most service setup is done by specifying environment variables for the container, or mapping in a config file or folder.
                    • Since the base image is static, and config is per container, one image can be used to run multiple containers. So if you have a postgres image, you can run many containers from that image and specify different config for each instance (see the example after this list).
                    • Docker compose is used for multiple containers and their relationships. For example a web service with a DB, static file server, and redis cache. Docker compose also handles things like setting up a unique network for the containers, storage volumes, logs, internal name resolution, unique names for the containers and so on.
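
                    To make the ports / environment / volumes points above concrete, here is a rough example of running two independent containers from the same postgres image (the names, ports and paths are invented):

                    ```
                    # one image, two containers, each with its own config, port and data
                    docker run -d --name blog-db -p 5433:5432 \
                      -e POSTGRES_PASSWORD=secret1 \
                      -v /srv/blog-db:/var/lib/postgresql/data postgres:16
                    docker run -d --name wiki-db -p 5434:5432 \
                      -e POSTGRES_PASSWORD=secret2 \
                      -v /srv/wiki-db:/var/lib/postgresql/data postgres:16
                    ```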

                    A small tip: you can “exec” into a running container, which will run a command inside that container. Combined with interactive (-i) and terminal (-t) flags, it’s a good way to get a shell into a running container and have a look around or poke things. Sort of like getting a shell on a VM.
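
                    In practice that looks something like this (“db” is just a stand-in for whatever your container is called):

                    ```
                    # open an interactive shell inside the running container named "db"
                    docker exec -it db /bin/bash
                    # minimal images (e.g. alpine-based) often only have /bin/sh
                    docker exec -it db /bin/sh
                    ```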

                    One thing that’s often confusing for new people is image tags. Partially because it can mean two things. For example “postgres” is a tag. That is attached to an image. The actual “name” of an image is its SHA sum. An image can have multiple tags attached. So far so good, right?

                    Now, let’s get complicated. The actual tag, the full tag for “postgres”, is actually “docker.io/postgres:latest”. You see, every tag is a URL, and if it doesn’t have a domain name, docker uses its own. And then we get to the “:latest” part. Which is called a tag. Yup. All tags have a tag. If one isn’t given, it’s automatically set to “latest”. This is used for versioning and different builds.

                    For example postgres has tags like “16.1”, which points to the latest 16.1.x version image, built on the postgres maintainers’ preferred distro. “16.1-alpine” points to the latest Alpine-based 16.1.x version. “16” is the latest 16.x.x version, “alpine” the latest alpine-based version, be it 16 or 17 or 18… and so on. You can find more details on the image’s Docker Hub page.
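
                    In command form, those resolutions look like this (tag names taken from the examples above):

                    ```
                    docker pull postgres                # really docker.io/postgres:latest
                    docker pull postgres:latest         # same image as the line above
                    docker pull postgres:16             # newest 16.x.x build
                    docker pull postgres:16.1-alpine    # newest 16.1.x build, on Alpine
                    ```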

                    The images on docker hub are made by … well, other people. Often the developers of that software themselves, sometimes by docker, sometimes by random people. You can make your own account there, it’s free. If you do, and you make an image and push it, it will be available as shdwdrgn/name - if an image doesn’t have a user component, it’s maintained / sanctioned by docker.

                    You can also run your own image repository service, as long as it has https with a valid cert. Then it will be yourdomain.tld/something
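
                    The tag-is-a-URL idea is also how pushing works, roughly (the “mytool” name is made up; the account and domain come from the examples above):

                    ```
                    # push to Docker Hub under your own account
                    docker build -t shdwdrgn/mytool:1.0 .
                    docker push shdwdrgn/mytool:1.0
                    # or tag and push the same image to a self-hosted registry
                    docker tag shdwdrgn/mytool:1.0 yourdomain.tld/mytool:1.0
                    docker push yourdomain.tld/mytool:1.0
                    ```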

                    So that was a brief introduction to the strange world of docker. Docker is a for-profit company, btw. But the image format is standardized, and there are fully open source ways to make and run images too. Off the top of my head, podman and Kubernetes.

                • @MaximilianKohler@lemmy.world

                  Instead of setting up one nginx for multiple sites you run one nginx per site and have the settings for that as part of the site repository.

                  Doesn’t that require a lot of resources since you’re running (mysql, nginx, etc.) numerous times (once for each container), instead of once globally?

                  Or, per your comment below:

                  Since the base image is static, and config is per container, one image can be used to run multiple containers. So if you have a postgres image, you can run many containers on that image. And specify different config for each instance.

                  You’d only have two instances of postgres, for example: one for all docker containers and one global/server-wide? Still, that doubles the resources used, no?