So, I got persuaded to switch from a “server that does everything” to “compute server + storage server”.

The two are connected via a DAC on an Intel X520 network card.

Compute is 10.0.0.1, storage is 10.255.255.254, and I left the usable hosts in the middle for future expansion.
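
For reference, the link itself is just static addresses on the X520 ports; a minimal sketch, assuming the ports show up as enp3s0f0 (the interface name is a placeholder, yours may differ):

# compute server
ip addr add 10.0.0.1/8 dev enp3s0f0
ip link set enp3s0f0 up

# storage server
ip addr add 10.255.255.254/8 dev enp3s0f0
ip link set enp3s0f0 up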

Before I start to use it, I’m wondering if I chose the right protocols to share data between them.

I set up NFS and iSCSI.

With iSCSI I create an image on the storage server, export that image to the compute server, format it as btrfs, and use it like a native drive. The files are not accessible anywhere else.
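
In case it helps, the iSCSI flow looks roughly like this; a sketch using targetcli/open-iscsi, where the IQN, paths, and size are placeholders for my real values (ACLs/auth omitted):

# storage server: back a LUN with a file image and export it (LIO/targetcli)
targetcli backstores/fileio create name=img0 file_or_dev=/tank/images/img0.img size=100G
targetcli /iscsi create iqn.2024-01.local.storage:img0
targetcli /iscsi/iqn.2024-01.local.storage:img0/tpg1/luns create /backstores/fileio/img0

# compute server: discover, log in, then treat the new disk as local
iscsiadm -m discovery -t sendtargets -p 10.255.255.254
iscsiadm -m node -T iqn.2024-01.local.storage:img0 -p 10.255.255.254 --login
mkfs.btrfs /dev/sdX    # whichever /dev/sd* device the login created
mount /dev/sdX /iscsi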

With NFS I just mount the share, and the files can also be accessed from any other computer.
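
The NFS side is the usual export-plus-mount; a minimal sketch with /tank/share as a placeholder export path:

# storage server: /etc/exports
/tank/share 10.0.0.0/8(rw,sync,no_subtree_check)
# reload the export table
exportfs -ra

# compute server
mount -t nfs 10.255.255.254:/tank/share /nfs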

Speed:

I tried to time how long it takes to fill a dummy file with zeroes.

/iscsi# time sh -c "dd if=/dev/zero of=ddfile bs=8k count=250000 && sync"
250000+0 records in
250000+0 records out
2048000000 bytes (2.0 GB, 1.9 GiB) copied, 0.88393 s, 2.3 GB/s

real    0m2.796s
user    0m0.051s
sys     0m0.915s
/nfs# time sh -c "dd if=/dev/zero of=ddfile bs=8k count=250000 && sync"
250000+0 records in
250000+0 records out
2048000000 bytes (2.0 GB, 1.9 GiB) copied, 2.41414 s, 848 MB/s

real    0m3.539s
user    0m0.038s
sys     0m1.453s
/sata-smr-wd-green-drive-for-fun# time sh -c "dd if=/dev/zero of=ddfile bs=8k count=250000 && sync"
250000+0 records in
250000+0 records out
2048000000 bytes (2.0 GB, 1.9 GiB) copied, 10.1339 s, 202 MB/s

real    0m46.885s
user    0m0.132s
sys     0m2.423s

What I see from these results:

The slow SATA drive goes at 1.6 gigabit/s according to dd, but then the machine needs a long time to acknowledge the operation: dd returns once the data is in the page cache, and the remaining ~37 s of real time is the sync flushing it out to the SMR drive.

NFS transferred it at 6.8 gigabit/s, which is what I expected from an NVMe array. The same command run locally on the storage server gives similar speed.

iSCSI transfers at 18.4 gigabit/s, which is not possible with my drives or the 10 gigabit link. There’s probably no zero-detection trickery involved: since the iSCSI disk is a local btrfs as far as the kernel is concerned, dd just writes into the page cache and returns, and the sync is what actually pushes the data over the wire (hence the 2.8 s real time vs dd’s 0.88 s).
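
To take the page cache out of the measurement, the same test can be re-run so the timing includes the flush, or so the cache is bypassed entirely; just the flags, not re-run results:

# count the final flush in dd's own timing
time dd if=/dev/zero of=ddfile bs=8k count=250000 conv=fdatasync

# or bypass the page cache with direct I/O (bigger blocks work better here)
time dd if=/dev/zero of=ddfile bs=1M count=2000 oflag=direct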

The biggest advantage of NFS is that I can share a whole directory and get direct access to the files. Also, sharing another disk image via iSCSI requires a service restart, which means I have to take down the compute server.

But with iSCSI I am the owner of the disk, so I can do whatever I want: no need to worry about permissions, I am root, chown all the stuff.

So… after this long introduction and explanation, what protocol would you use for each of the following?

  • /var/lib/mysql - a database. Inside a disk image shared via iSCSI, or via NFS?

  • Virtual machine images. Copy them inside another image that is then shared via iSCSI? Maybe NFS is much better for this case; otherwise, with iSCSI I would have a single giant disk image that contains other disk images…

  • Lots of small files, like a WordPress install. Maybe NFS would add too much overhead? But it would be much easier to back up as an NFS share instead of a disk image (see the sketch below).
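
On that last backup point, the difference in a nutshell; a sketch with /tank/… as placeholder paths on the storage server:

# NFS export: the storage server sees individual files, incremental backup is trivial
rsync -a --delete /tank/share/wordpress/ /backups/wordpress/

# iSCSI image: the storage server sees one opaque blob; rsync has to read the
# whole image every run (or the image has to be snapshotted/mounted first)
rsync -a /tank/images/img0.img /backups/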

  • Nine · 8 points · 1 year ago

    Are we just gonna not talk about OP using 10/8? 😂

    • StarDreamer · 5 points · edited · 1 year ago

      What someone does with their 16,777,216 private IPv4 addresses is none of our business…

      Now just connect all of that with dumb L2 switches and watch those broadcasts fly!

    • @ezjohnson@lemmy.ml · 4 points · 1 year ago

      “future expansion” - if OP adds an average of 10 servers every day for the next ~4600 years they’ll run out of address space.

    • @Markaos@lemmy.one · 3 points · 1 year ago

      I mean, there is the whole 127/8 for localhost, kinda hard to beat that with crazy allocations. And OP still has the 172.16.0.0/12 and 192.168.0.0/16 private networks available even if they refuse to further divide them.