I posted a few days ago asking how to set up my storage for Proxmox on my Lenovo M90q, which I had since settled. Or so I thought. The Lenovo has space for two NVMe drives and one SATA SSD.

There seems to be a general consensus that you shouldn’t use consumer SSDs (even NAS SSDs like WD Red) for ZFS, since there will be lots of writes, which in turn will wear out the SSD quickly.

There is conflicting information out there: some say it’s fine and a few GB of writes per day is nothing to worry about, while others warn of several TB of writes per day.

I plan on using Proxmox as a hypervisor for homelab use, with one or two VMs running Docker, Nextcloud, Jellyfin, Arr-Stack, TubeArchivist, PiHole and such. Static data (files, videos, music) will not be stored on ZFS, only the VM images themselves.

I did some research and found a few SSDs with good write endurance (see the table below, with a rough endurance estimate after it) and settled on two WD Red SN700 2TB drives in a ZFS mirror. Those drives are rated for 2500 TBW. For file storage, I’ll just use a Samsung 870 EVO with 4 TB and 2400 TBW.

SSD      Capacity  TBW (TB)  Price
980 PRO  1 TB      600       68
         2 TB      1200      128
SN700    500 GB    1000      48
         1 TB      2000      70
         2 TB      2500      141
870 EVO  2 TB      1200      117
         4 TB      2400      216
SA500    2 TB      1300      137
         4 TB      2500      325
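
For a rough sense of scale, here’s a minimal sketch of the arithmetic behind those TBW ratings (the daily write rates are assumptions for illustration, not measurements from my system):

```python
# Rough endurance estimate: years until a drive's TBW rating is reached
# at a constant average write rate. Write rates are illustrative assumptions.

def years_until_tbw(tbw_tb: float, writes_gb_per_day: float) -> float:
    """Years until the rated TBW is used up at a constant daily write rate."""
    tbw_gb = tbw_tb * 1000  # TBW is specified in decimal terabytes
    return tbw_gb / writes_gb_per_day / 365

# GB/day: light, moderate, heavy, and the feared multi-TB/day case (all assumed)
for rate in (10, 50, 300, 2000):
    print(f"SN700 2TB (2500 TBW) at {rate:>4} GB/day: "
          f"{years_until_tbw(2500, rate):6.1f} years")
```

Even a few hundred GB per day leaves decades of headroom on a 2500 TBW drive; it’s really only sustained multi-TB-per-day write loads that would use it up within a few years.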

Is that good enough? Would you rather recommend enterprise-grade SSDs? And if so, which M.2 NVMe models would you recommend? Or should I just stick with ext4 as a file system, losing ZFS’s data-integrity features and the ability to take snapshots?

I’d love to hear your thoughts about this, thanks!

  • SayCyberOnceMore
    8 points · 2 years ago

    I’m kinda repeating things already said here, but there’s a couple of points I wanted to highlight…

    Monitor the SMART health: enterprise and consumer drives both fail, and it’s good to know in advance (there’s a small monitoring sketch at the end of this comment).

    Plan for failure: something will go wrong… might be a drive failure, might be you wiping it by accident… just do backups.

    Use redundancy: several cheapo rubbish drives in a RAID / ZFS / BTRFS pool are always better than one “good” drive on its own.

    Main point: build something and destroy it to see what happens, before you build your “final” setup - experience is always better than theory.

    I built my own NAS and was going with ZFS until I messed around with it… in the end I went with BTRFS because of my skills, the tools I use, etc. BTRFS just made more sense to me, so I know I can repair it.

    And test your backups 🎃
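
    On the SMART point, a minimal sketch of such a check, assuming smartmontools is installed and an NVMe drive at the placeholder path /dev/nvme0:

    ```python
    # Sketch: read NVMe wear indicators from smartctl's JSON output.
    # Assumes smartmontools is installed; /dev/nvme0 is a placeholder device path.
    import json
    import subprocess

    def nvme_health(device: str = "/dev/nvme0") -> dict:
        # smartctl uses non-zero exit codes for warnings, so don't raise on them
        out = subprocess.run(["smartctl", "-a", "-j", device],
                             capture_output=True, text=True, check=False)
        data = json.loads(out.stdout)
        log = data["nvme_smart_health_information_log"]
        return {
            "smart_passed": data.get("smart_status", {}).get("passed"),
            "percentage_used": log["percentage_used"],  # vendor wear estimate, 0-100+
            "tb_written": log["data_units_written"] * 512_000 / 1e12,  # 1 unit = 512,000 bytes
            "media_errors": log["media_errors"],
        }

    if __name__ == "__main__":
        print(nvme_health())
    ```

    Something like this in a daily job that alerts when percentage_used climbs or media_errors goes non-zero covers the “know in advance” part.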

    • @Pete90@feddit.deOP
      2 points · 2 years ago

      I’m currently playing around in VMs even before I order my drives, just to see what I can do. Next up is simulating a root drive failure and practising how to replace it. I also want to test rolling back from snapshots.

      The data that I really need and can’t replace is redundant anyway: one copy on my PC, one on my external HDD, one on my NAS and one on a system at my sister’s place. That’s four copies across several media (one of them cold), with one off-site. :)

  • AnonStoleMyPants
    7 points · 2 years ago

    Don’t sweat it.

    I remember looking into this as well about a year ago. I found the same info and started looking into SSDs, consumer and enterprise grade, and after all that I realised that most of it is just useless fussing about. Yes, it is an interesting rabbit hole, in which I probably spent a week. In the end, one simple thing nullifies most of it: you can track writes per day and SSD health. It’s not like you need to somehow guess when the drives will fail. You don’t. Keep track of the health and writes per day and you’ll get a good sense of how your system behaves. Run that for six months and you are infinitely wiser when it comes to this stuff.
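
    A minimal sketch of what that tracking could look like, run once a day from cron or a systemd timer (the device path and state-file location are placeholders, and it assumes smartmontools’ JSON output as in the sketch further up):

    ```python
    # Sketch: log NVMe data_units_written once a day and report GB/day since
    # the previous run. Paths below are placeholders, not a recommendation.
    import json
    import subprocess
    import time
    from pathlib import Path

    DEVICE = "/dev/nvme0"                          # placeholder device
    STATE = Path("/var/lib/ssd-writes/last.json")  # placeholder state file

    def data_units_written(device: str) -> int:
        out = subprocess.run(["smartctl", "-a", "-j", device],
                             capture_output=True, text=True, check=False)
        log = json.loads(out.stdout)["nvme_smart_health_information_log"]
        return log["data_units_written"]

    def main() -> None:
        now, units = time.time(), data_units_written(DEVICE)
        if STATE.exists():
            prev = json.loads(STATE.read_text())
            days = max((now - prev["time"]) / 86400, 1e-6)
            gb = (units - prev["units"]) * 512_000 / 1e9  # 1 data unit = 512,000 bytes
            print(f"{DEVICE}: {gb / days:.1f} GB/day over the last {days:.1f} days")
        STATE.parent.mkdir(parents=True, exist_ok=True)
        STATE.write_text(json.dumps({"time": now, "units": units}))

    if __name__ == "__main__":
        main()
    ```

    A few months of those numbers tells you your real write rate instead of a forum estimate.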

    • @Pete90@feddit.deOP
      3 points · 2 years ago

      That rabbit hole is interesting, but also deep and scary. I’m trying to challenge myself by setting up Proxmox, as so far I’ve only used Raspberry Pis and OpenMediaVault. So when I saw those stories about drives dying after six months, I was a bit concerned, especially because I can’t yet verify the truth of those stories, since I’d call myself an advanced novice if I’m being generous.

      I’ll track drive usage and wear and see what my system does. Good point; that way I can get rid of the guesswork. Thanks a lot!

  • @NAK@lemmy.world
    5 points · 2 years ago

    I’ll agree with the other commenter here.

    Also, there may not be much difference between consumer and enterprise drives. The reason the enterprise ones cost more is the better warranty, not because they have different components.

    Monitor the drives (modern drives are pretty good at predicting when they are dying) and replace them if necessary.

    • @Pete90@feddit.deOP
      1 point · 2 years ago

      Yeah, concerning TBW there wasn’t a huge difference between the consumer and enterprise drives that I looked at, something like 2500 TBW vs. 3500 TBW (unless you go for those unaffordable drives, then yes). I’ll monitor the drives, and if I see rapidly increasing wear, I can still switch to another file system. The whole reason I bought the Lenovo is to set up a second machine and experiment while I still have a running “production” system. Thank you!

  • @DecronymAB
    1 point · 2 years ago (edited)

    Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:

    Fewer Letters  More Letters
    NAS            Network-Attached Storage
    RAID           Redundant Array of Independent Disks for mass storage
    SSD            Solid State Drive mass storage

    3 acronyms in this thread; the most compressed thread commented on today has 4 acronyms.

    [Thread #259 for this sub, first seen 2nd Nov 2023, 14:30]