Hi all. I was curious about some of the pros and cons of using Proxmox in a home lab setup. It seems like overkill for most home labs, but I feel like there may be something I’m missing. Let’s say I run my home lab on two or three small machines: the main server is an x86 i5 box with 16 GB of memory, and the others are ARM SBCs with 8 GB each, all with ample storage. Wouldn’t Proxmox be overkill here, eating up more system resources than just running base Ubuntu, Debian, or another server distro on them all and running the needed services from binaries or Docker? It seems like the memory overhead of Proxmox itself, plus the containers on top, would just kill the available memory and CPU. Am I wrong in thinking that Proxmox is better suited to a machine with 32 GB or more of memory and a reasonably powerful CPU?

  • @DeltaTangoLima@reddrefuge.com

    It all depends on how you want to homelab.

    I was into low-power homelabbing for a while - half a dozen Raspberry Pis - and it was great. But I’m an incessant tinkerer. I like to experiment with new tech all the time, and am always cloning various repos to try out new stuff. I was reaching the limit of what I could achieve with Docker alone, and I really wanted to virtualise my firewall/router. There were other drivers too. I wanted to cut the streaming cord, and saving that monthly spend helped justify what came next.

    I bought a pair of ex-enterprise servers (HP DL360s) and jumped into Proxmox. I now have an OPNsense VM for my firewall/router, and host over 40 Proxmox CTs, running (at a guess) around 60-70 different services across them.

    I love it, because Proxmox gives me full separation of each service. Each one has its own CT. Think of that as me running dozens of Raspberry Pis, without the headache of managing all that hardware. On top of that, Docker gives me complete portability and recoverability. I can move services around quite easily, and can update/rollback with ease.

    Finally, the combination of the two gives me a huge advantage over bare metal for rapid prototyping.

    Let’s say there’s a new contender that competes with Immich, promising a really cool feature no one else has thought of for a self-hosted personal photo library. I have Immich hosted on a CT, using Docker, hiding behind Nginx Proxy Manager (also on a CT), and accessible via photos.domain on my home network.

    I can spin up a Proxmox CT from my custom Debian template, use my Ansible playbook to provision Docker and all the other bits, access it in Portainer and spin up the latest and greatest Immich competitor, all within mere minutes. Like, literally 10 minutes max.
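
    For anyone curious what that looks like at the CLI level, here’s a rough sketch of the spin-up side. The VMID, template filename, resource sizes, and playbook/inventory names below are made-up examples for illustration, not my actual setup:

        # Create a CT from a pre-built Debian template on "local" storage
        # (VMID 130 and the template filename are hypothetical)
        pct create 130 local:vztmpl/debian-12-custom.tar.zst \
          --hostname photos-test \
          --cores 2 --memory 2048 \
          --net0 name=eth0,bridge=vmbr0,ip=dhcp \
          --unprivileged 1 \
          --features nesting=1   # nesting lets Docker run inside the CT

        pct start 130

        # Provision Docker and all the other bits with Ansible
        # ("provision-docker.yml" is a placeholder playbook name)
        ansible-playbook -i inventory.yml provision-docker.yml --limit photos-test

    From there it’s just a matter of opening Portainer and bringing up the new stack.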

    I have a play with the competitor for a bit. If I don’t like it, I just delete the CT and move on. If I do, I can point my photos.domain hostname (via Nginx Proxy Manager) at the new service and start using it full-time. Importantly, I can still keep my original Immich CT in place - maybe shut down, maybe not - just in case I discover something I don’t like about the new kid on the block.
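
    The keep-or-kill part is just a couple of pct commands too (VMIDs again made up for illustration):

        # Keep the old Immich CT (say VMID 120) around, but powered off
        pct shutdown 120

        # Changed my mind about the new contender? Bring the old one back
        pct start 120

        # Or, once I’m confident, destroy the experiment entirely
        pct destroy 130 --purge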

    That’s a simplified example, but hopefully illustrates at least what I get out of using Proxmox the way I do.

    The main con for me is cost. The initial cost of the hardware, and the ongoing cost of powering beefier kit like this. I’m about to invest in some decent centralised storage (been surviving with a couple of li’l ARM-based NASes) so I can get true HA with my OPNsense firewall (and a few other services), so that’s more cost again.