Hey all, I’ve been doing a bunch of research on self-hosting over the last few weeks, as I’d love to lean on more open source projects for my daily productivity and entertainment. My main goal is to back up all my personal documents, photos, and videos (around 1 TB so far over ~5 years, so not too demanding) and host a few services for accessing files on local storage (Immich, Jellyfin) plus some personal services (Paperless-ngx, Home Assistant, morss). Although I’m not afraid to mess around learning Docker, I’d like to balance low maintenance against relatively low long-term cost, so that no issue ever takes more than a day to restore access to my files/backups. I’d rather save that time for the fun stuff, like endlessly configuring HA automations.

All that said, I figure a decent solution would be to run a local NAS in RAID 6, with a cold storage HDD I swap in whenever I transfer a batch of files from my camera for local backup, and a remote backup at my parents’ home or maybe eventually on another friend’s NAS. The main thing I’m wondering right now is whether a prebuilt NAS (Synology, Asustor, etc.) is worth it compared to a custom-built system for simple maintenance, reliable and low-bandwidth remote backup and recovery, and solid file sharing options for friends and family. I’ve heard SFTPGo is a great project for file transfers if going custom-built, so I’m not too worried about that last point, but it’d still be a nice bonus to not have to worry about another service.

My greatest fear is having to explain to my parents what a terminal is, so I’d like something reliable at a good price that I can hopefully maintain without crossing that bridge. I also know most prebuilt NAS systems aren’t as cost-effective or flexible for hosting a bunch of services, so if I did go with a prebuilt, I would probably pick up a micro PC like a NUC or an old Dell OptiPlex to network with the NAS for Immich, and maybe use some of its internal storage to keep movies to stream with Jellyfin (unless there’s a limitation I’m not considering). Any advice?

  • @GottaRiskIt@lemmy.world · 16 points · 1 year ago

    I went down the rabbit hole a while back. I have the space, so I went with an old Dell R720 rack server with 24 cores/48 threads and something like 128 GB of RAM for $300 off eBay.

    I flashed the RAID controller to IT mode using this guide.

    The perk of going this route is that I can run Unraid, which has an awesome web interface for creating Docker containers and content servers.

    At the same time, you get the ability to add drives over time without having to rebuild your array. I started with a cache drive, a parity drive, and a single storage drive. Over time I have added an additional parity drive and six more storage drives.

    With this setup (and similar ones) you can also use SAS drives. Used helium-filled enterprise drives run around $80 for 10 TB.

    I run a Plex server with mostly 4K content, game servers, WordPress, Pi-hole, media grabbers (the *arr stack), a seedbox, a home NAS, and countless other containers, basically 24/7.

    It works incredibly well, especially for the price, but it is large. If you have the space, I highly recommend it. I run mine in an insulated crawlspace lol.

    • @skybox@lemm.ee (OP) · 2 points · 1 year ago

      Damn that R720 sounds like a great all-in-one solution. Is the power draw manageable?

      Also, whoa, helium-filled drives? What’s the lifespan/risk on those if a chunk of their lifespan has already been used up?

      • thejevans · 6 points · 1 year ago

        My R720xd is fully loaded with 12 HDDs, 2 SFP+ DACs, 2 SSDs, 2 SD cards, 128 GB RAM, and 2 of the higher-end CPUs available for the platform. Running ESXi with a bunch of VMs including TrueNAS, pfSense, and Plex + the *arr stack, I average about 250-320 W, and it’s loud as hell.

      • @GottaRiskIt@lemmy.world · 1 point · 1 year ago

        Average draw is similar to the other poster’s: 200-300 watts under load and about 140 watts idle. My server is really only super loud on boot; the noise level is a non-issue for me.

        For most of the drives I get, I scan the SMART data, and most have nearly no usage on them. The drives are cheap enough, and with parity running I’m not too worried about data loss. I have been running the server for 2.5 years now and have yet to lose a drive.

        I run these drives, and with an SSD cache I really have zero complaints. Like I said, my main priority was 4K video, and they handle streaming even the largest files without issue, although I try my best to avoid transcoding and use a Shield to minimize that.
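
        If anyone wants to run the same check on a used drive before trusting it, it really is just a couple of SMART attributes; here’s a rough sketch of what I mean (the device path is a placeholder, and it assumes smartmontools is installed):

        ```python
        #!/usr/bin/env python3
        """Rough sketch: sanity-check a used drive via its SMART data.
        Assumes smartmontools is installed; /dev/sdX is a placeholder."""
        import json
        import subprocess

        DEVICE = "/dev/sdX"  # replace with the drive you're inspecting

        # -x dumps all SMART info; --json makes the output easy to parse.
        # No check=True: smartctl uses non-zero exit codes as status flags.
        out = subprocess.run(
            ["smartctl", "-x", "--json", DEVICE],
            capture_output=True, text=True,
        )
        data = json.loads(out.stdout)

        hours = data.get("power_on_time", {}).get("hours")
        healthy = data.get("smart_status", {}).get("passed")
        print(f"{DEVICE}: power-on hours = {hours}, SMART health passed = {healthy}")
        ```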

  • @MonkCanatella@sh.itjust.works · 10 points · 1 year ago

    You already sound like you’re down the rabbit hole! If I could restart, I would probably do a DIY server instead of a Synology NAS. It’s just really satisfying how much stuff you can offload onto the NAS itself, and Synology is notorious for using weak components for that type of thing. Transcoding in particular can be weak depending on the model, which may give you some regret if you want to host content with Jellyfin. That said, having a Synology NAS and a NUC could be a great solution. Or you could just build a DIY box in a Jonsbo case that can handle most anything you throw at it and stays extensible as you go further down the rabbit hole. All things considered, if I could redo it all, I would go this route.

    Some advantages: you can upgrade the CPU, slot in a GPU for transcoding and other types of work, upgrade the RAM, use NVMe drives as a volume without hacks, probably find a board with native 10G networking to avoid using up a PCIe slot, and you get more PCIe slots overall.

    Disadvantages: easier to footgun, and no SHR, but you seem set on RAID 6 anyway.

    • @skybox@lemm.ee (OP) · 4 points · 1 year ago

      The things that attracted me most to Synology are that they have pretty braindead-simple software, I assume their systems have decent power management given the low hardware specs, and Hybrid Backup, Snapshot Replication, and Active Backup for Business seem to be a solid set of remote backup options that I couldn’t find simple, non-proprietary alternatives for. Plus, it would be nice to have a separate NUC or OptiPlex, since I don’t know if running a NAS off one would be the best idea, but they’re cheap and have great power management (I think I saw a 200 W 80+ Platinum PSU in an OptiPlex with an i5-7500, which seems like great value on its own). Ultimately I’m just not sure if there’s a way to combine the pros of each of those solutions while avoiding the annoyance of maintaining two systems and trusting Synology’s hardware and software to keep my system running smoothly long-term.

      Also, honestly, I just picked RAID 6 because I heard most people prefer RAID levels that tolerate more than one disk failure. Is SHR any good even though it’s proprietary?

      • @MonkCanatella@sh.itjust.works · 2 points · 1 year ago

        > Hybrid Backup, Snapshot Replication, and Active Backup for Business seem to be a solid set of remote backup options which I couldn’t find simple, non-proprietary alternatives for

        I assumed there were some great open source alternatives to all of these. That’s surprising, to be honest. And yes, Synology is very simple, and this has pros and cons. It doesn’t mean your shit can’t get rocked: I had some issue with certificates, and it took two weeks of downtime to get back up and running.

        NUCs provide fairly good value for the machine, but ultimately you don’t avoid any of the work by adding a Synology to the mix. But if it seems like a good value, why not pick one up for a rainy day?

        I only use SHR-1, which has one parity drive. This is for a 6-bay. It’s just as performant as regular RAID. The benefits are being able to add new drives without wiping all the data first, and being able to mix drive sizes. In regular RAID, if you have multiple drive sizes, each drive is cut down to the size of the smallest drive in the array (at least from what I know), so e.g. two 8 TB drives plus a 4 TB drive are all treated as 4 TB drives.

        • @skybox@lemm.ee (OP) · 1 point · 1 year ago

          I haven’t looked very hard, so there could be backup services I’m missing. So far I’ve found restic/autorestic and duplicati, but I’m not sure what their differences in purpose are, or the pros/cons between them.
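
          From skimming the docs, restic at least looks simple enough to script; this is roughly what I’m picturing (the repo location, source paths, and password handling are all made-up placeholders, not a recommendation over Duplicati):

          ```python
          #!/usr/bin/env python3
          """Rough sketch of a restic backup job. Repo path, sources, and
          password handling are placeholders for illustration only."""
          import os
          import subprocess

          REPO = "/mnt/backup/restic-repo"             # hypothetical repository location
          SOURCES = ["/mnt/photos", "/mnt/documents"]  # hypothetical data to protect

          # Real setups should use a password file or keyring instead of this.
          env = dict(os.environ, RESTIC_PASSWORD="change-me")

          # Initialise the repository once; restic just errors if it already exists,
          # which is why check=True is left off here.
          subprocess.run(["restic", "-r", REPO, "init"], env=env)

          # Take a snapshot of the source directories.
          subprocess.run(["restic", "-r", REPO, "backup", *SOURCES], env=env, check=True)

          # Thin out old snapshots, keeping 7 daily and 4 weekly copies.
          subprocess.run(["restic", "-r", REPO, "forget", "--prune",
                          "--keep-daily", "7", "--keep-weekly", "4"], env=env, check=True)
          ```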

          Also, I’ve heard Unraid has a flexible storage solution, which would be nice since I’d like to just upgrade as I go instead of planning substantial disk upgrades. Are there also solutions for that on custom-built systems, besides SHR?

  • @space@lemmy.dbzer0.com · 10 points · 1 year ago

    In terms of performance and flexibility, building your own is better. Depends on what you want out of it.

    If all you want is an easy-to-set-up NAS with no bells and whistles, get a Synology. If you want to build a server that also acts as a NAS, or you want to be in control of the software, build your own.

    You don’t even need server hardware. I used an older desktop computer with an HBA card. It’s also less noisy and much smaller.

  • @jozza@lemmy.world · 8 points · 1 year ago

    There are a few people saying that a Synology NAS may not do everything you’d ever want, but there’s an underlying assumption there that you should run everything on a single device. There’s value in isolating functions to their own dedicated devices, especially when the alternative means a guaranteed compromise.

    • @shrugal@lemm.ee · 2 points · 1 year ago

      What compromise are you talking about? My NAS runs everything I need just fine, and I don’t think adding another device would improve anything.

      The only limiting factors I can think of are performance or memory constraints, but since I don’t use all the services at the same time there is no issue.

        • @shrugal@lemm.ee · 1 point · 1 year ago

          That only makes sense when you’re talking about adding redundancy imo, because multiple devices also add more sources of failure. Personally I’d rather have everything fail all at once every 20 years (with backups, of course) than have something different breaking all the time.

  • @SK4nda1@lemmy.ml · 7 points · 1 year ago

    Buy a NAS; you’ll be up and running much quicker. Then build a separate server for your services instead of hosting them on the NAS. Look for low-powered Intel NUCs and run Portainer or Proxmox, or both. Use rsync or NFS to back up the relevant data to the bought NAS, and use infrastructure-as-code/GitOps to configure the NUC.

    • @thirdBreakfast@lemmy.world · 3 points · 1 year ago

    I went this route - a Synology NAS and a couple of HP Mini G2 800s running Proxmox for my compute loads - and I would recommend that arrangement for someone just getting started in self-hosting. Get going quickly and safely, and put your effort into the cool stuff.

      That said, I’ve drunk the ZFS kool-aid and have learned enough along the way to consider moving to TrueNAS or similar on some sort of low power setup in the future. I’m in no hurry.

      • @SK4nda1@lemmy.ml · 3 points · 1 year ago

        Jeaaaah I made the mistake of building everything myself. 1.5 years and counting and I have no working environment due to free time constraints.

  • Dandroid · 5 points · 1 year ago

    I have a prebuilt Synology. Self-hosting on it is doable, but I found it very limiting because of all the packages that don’t exist for its custom distro. Eventually I got a new gaming PC and converted my old one to a more standard Linux distro because of this.

    This was back before I knew anything about Docker. You could probably get around some of the package limitations by using Docker. In fact, I have done this: I am using rsnapshot in a container to back up my server, because rsnapshot is not available on Synology.

  • Decronym (bot) · 3 points · 1 year ago

    Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:

    ESXi: VMware virtual machine hypervisor
    NAS: Network-Attached Storage
    NUC: Next Unit of Computing, Intel’s brand of small computers
    PSU: Power Supply Unit
    RAID: Redundant Array of Independent Disks (for mass storage)
    SATA: Serial AT Attachment (interface for mass storage)
    SSD: Solid State Drive (mass storage)


  • @CCatMan@lemmy.one · 3 points · 1 year ago

    I ended up going with a Synology NAS as I didn’t need a high-performance CPU and wanted a turnkey solution. For what you get hardware-wise, it’s low value, but if you factor in software and support, it works out to OK value.

    You mentioned your parents will be using this. What services are you hoping to host? Outside network access is another rabbit hole.

    Check the hardware requirements of the services you plan to host, but from what it sounds like, you would likely be better served with a decent 8th-gen Intel PC, with the storage either in a 4-bay NAS or internal to the PC.

    I suggested 8th-gen Intel as a minimum for video transcoding (if needed).

    • @skybox@lemm.ee (OP) · 1 point · 1 year ago

      My parents won’t necessarily be using the NAS; I’d just be using some kind of system (maybe even just a Raspberry Pi) as a remote backup target with a WireGuard tunnel to my local NAS. But if a drive fails, I’d be about 700 miles away from it.

      In a perfect world, I’d just ship a new drive to my parents, tell them to unplug the failing one and plug in the new one, and then handle the rest remotely or automatically myself, but I assume that’s a pipe dream.
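
      Software-wise, what I’m picturing for the off-site push is something dead simple over the tunnel, roughly like this (the peer address, user, and paths are all placeholders I made up):

      ```python
      #!/usr/bin/env python3
      """Rough sketch of a nightly off-site push over the WireGuard tunnel.
      The peer address, SSH user, and paths are made-up placeholders."""
      import subprocess

      REMOTE = "backup@10.8.0.2"   # hypothetical Pi at my parents' place, reachable via the tunnel
      SOURCES = ["/mnt/photos", "/mnt/documents"]
      DEST = "/mnt/usb-backup/"

      for src in SOURCES:
          # -a preserves permissions/timestamps, -z compresses for the slow uplink;
          # each source ends up as its own subdirectory under DEST.
          subprocess.run(["rsync", "-az", src, f"{REMOTE}:{DEST}"], check=True)
      ```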

      • @howlingecko@sh.itjust.works · 1 point · 1 year ago

        I built a NAS over 5 years ago. It runs Unraid and is configured with dual parity (tolerates two drive failures). If a drive were to go bad: shut down the NAS, slide the drive out, slide the new drive in, power back up, and the rest could be done remotely (via your WireGuard tunnel).

        Unraid is capable of hosting your VMs and/or Docker containers as well. I have Syncthing running in a container, paired with a remote machine (also running Syncthing), and they sync backups.

        One of the main perks of UnRaid is that you can mix and match drive sizes. You just have to make sure that your largest capacity drive(s) are your parity drive(s).

  • Yote.zip · 3 points · 1 year ago

    I would go custom and use hardware that you can reconfigure and reuse in the future. If you pick up a Synology now and wind up feeling restricted by it in 2 years, it might become useless e-waste. If you have anything lying around, put that to use while you’re getting your feet wet - you probably don’t know what hardware configuration you’ll end up wanting in a year, and you don’t want to underbuy/overbuy.

    You can also test self-hosting without any real hardware by spinning up a VM and passing in “fake” hard drives. Try setting up a RAID 6 in this fashion and see what happens. After you’ve played around enough, you can just export all your Docker data etc. onto real hardware.
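
    One low-effort variation on the “fake” drives idea (no hypervisor layer needed) is to use sparse files and loop devices on any Linux box; a rough sketch, assuming mdadm is installed and you have root, with sizes and device names that are purely arbitrary:

    ```python
    #!/usr/bin/env python3
    """Rough sketch: create sparse "fake" disks and assemble a practice RAID 6.
    Assumes a Linux system with mdadm installed and root privileges; sizes,
    paths, and /dev/md0 are arbitrary choices for experimenting only."""
    import subprocess

    DISKS = [f"/tmp/fake-disk{i}.img" for i in range(4)]  # RAID 6 needs at least 4 members

    loop_devs = []
    for path in DISKS:
        # A 1 GiB sparse file: takes almost no real space until written to.
        subprocess.run(["truncate", "-s", "1G", path], check=True)
        # Attach the file to the next free loop device and record its name.
        out = subprocess.run(["losetup", "--find", "--show", path],
                             check=True, capture_output=True, text=True)
        loop_devs.append(out.stdout.strip())

    # Build the array; mdadm will warn that these aren't real disks, which is fine.
    subprocess.run(["mdadm", "--create", "/dev/md0", "--level=6",
                    f"--raid-devices={len(loop_devs)}", *loop_devs], check=True)
    print("practice RAID 6 assembled from:", loop_devs)
    ```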

    I haven’t used any of the prebuilt things so I’m not sure how user-friendly they are compared to normal solutions, but I’d find it hard to believe that they offer anything truly unique in terms of being accessible for normies. Assuming you’re going to be the only one taking care of the NAS administration, there’s likely an accessible webUI for every public service you want to offer to your friends/family.

  • TheHolm · 2 points · 1 year ago

    I used QNAP NASes for more than 10 years. It was a great product, but not anymore; feature bloat took its toll. It can do a lot, but does it badly. So if you go for a prebuilt, avoid QNAP. Build your own.

  • @hoodlem@hoodlem.me · 2 points · 1 year ago

    I went with a Synology and have been very happy with it. Easy to use, very nice GUI, yet quite powerful with the features provided.

    From there I moved on to a NUC. I used to host several things through Docker on the Synology, but I’m now moving many of those things to the NUC.

  • @shrugal@lemm.ee · 2 points · 1 year ago

    If you enjoy researching, tinkering, and customizing everything exactly how you envision it, then build a custom one. If you “just” want to use the thing and run some Docker containers, then buy a NAS. From what you wrote, I think a NAS is what you are looking for, especially the low-maintenance part. Just make sure it’s not the most basic model, so it actually has the power to run what you need.

    The one great thing about Synology NAS is that most things are right there in the UI or package center. You can just install them without researching 100 different alternatives, and configure them in the UI instead of config files. What’s not there can be installed just like on a custom server, because it is just a regular server after all. You also get good customer support if something doesn’t work, especially useful when you’re not as knowledgeable in everything yet.

  • thejevans · 1 point · 1 year ago

    If power usage and/or noise are concerns, I would steer clear of enterprise gear.

    I started out with a Synology NAS, which died and took my data with it because of their proprietary software RAID. I think you don’t need to worry about that these days, but I haven’t looked into it much. I haven’t gone back to a prebuilt NAS since.

    Currently, my production setup consists of a Dell R720xd that runs pretty much everything, and a Dell R710 that runs as a backup TrueNAS server. It’s loud, sucks back about 550 W, produces a ton of heat, and takes up a good deal of space once you add in the rack-mount switch and UPS. I just moved pretty far away, and I decided to move my homelab to my dad’s house instead of taking it with me.

    My plan is to migrate to a more reasonable setup incrementally. I’m currently building a Proxmox VE host out of my old gaming PC (Ryzen 2700X + GTX 1060). I added 2x 10 TB drives, made a mirrored ZFS pool, and I’m running an OpenMediaVault VM to share it on the network. I have another VM for Home Assistant, another for Matrix/Jitsi/Etherpad, another for Jellyfin/the *arr stack/SABnzbd with the GPU passed through for transcoding, another for SWAG/Paperless-ngx/Immich, and a final one for the MASH Ansible playbook. And I have a small fanless AliExpress PC running pfSense as a router/gateway.

    The “ideal” final setup is basically to build another machine to put TrueNAS on, replacing my OpenMediaVault setup. I’m aiming for a total average power draw under 100 W.

    My suggestion, given my experience with different hardware, is to scrape together whatever you can for cheap, run Proxmox with OpenMediaVault, and build the VMs for services whose data you don’t care much about first, then build a dedicated NAS running TrueNAS. The NAS doesn’t have to be fancy; it doesn’t need ECC RAM. You could probably build a competent, compact NAS for about $400 without HDDs. Once you have the NAS, build out services like Nextcloud, Immich, and Paperless-ngx, where losing the data would suck. And then think about a backup solution for that data.