How to know when to pick iSCSI

Hi folks,

I am doing some rejiggering of my homelab, and only recently realized NFS forces sync writes… this probably explains why performance has tanked recently as I moved my Linux VMs and hosts over to NFS from SMB in my homelab.

But this has got me thinking: for things like my NVR, which saves data to my NAS, this probably shouldn’t be NFS… but maybe it should be iSCSI? Only that single machine needs access to the data, so there’s no need to add networking overhead beyond what is needed to move the bits over the line.

Same with my Proxmox backup server: only that machine needs access to the data, so would this also be a good candidate for iSCSI?

TL;DR: how do you pick iSCSI over SMB or NFS?

Also of note, my system is currently only spinning rust, but I will soon be adding SAS SSDs: one for SLOG to help with the sync writes from NFS, and one as metadata-only L2ARC. Both of these should help some of my current performance issues, but I am wondering if I should be considering iSCSI for some of these devices.

As far as I am aware, NFS cannot force sync writes; it can only request them. The ultimate authority is the sync setting on the dataset: Standard, Always or Disabled.

Note that, having said that, it is sensible to use sync writes for VMs and databases, as the potential loss of up to 5 seconds of writes in the event of a sudden power outage is potentially catastrophic.
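To make that concrete, the dataset’s sync property is what decides; a quick sketch of checking and overriding it (the pool/dataset name tank/vms here is a placeholder, not from the thread):

```shell
# Show the current sync policy for a dataset
zfs get sync tank/vms

# Honour client sync requests (the default)
zfs set sync=standard tank/vms

# Force every write to be synchronous, regardless of what the client asks
zfs set sync=always tank/vms

# Ignore sync requests entirely (fast, but recent writes can be lost on power failure)
zfs set sync=disabled tank/vms
```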


The biggest difference between iSCSI and the other two is that iSCSI is virtual block storage, while NFS and SMB are file storage. That means that the client machine is going to be responsible for formatting the “disk”, in most cases the NAS won’t be able to see its contents, and you’d only be able to share any given disk with one client at a time.

VM images are the only place where iSCSI makes much sense to me, FWIW, and then only if you have a single hypervisor.


VMFS formatted iSCSI volumes can be mounted by multiple ESXi servers at the same time.


I typically see the iSCSI use-case being the need for higher IOPS or if you have a specific need for something to be presented and accessed as a block device (like the VM scenario dan previously mentioned).

Your NVR is very unlikely to see any benefit from that and I doubt your PBS will either.

The “solo user” argument isn’t really a reason to go iSCSI; one can just as well be a solo user of something shared with NFS or SMB.


If anything, iSCSI is going to be even worse than NFS.
For regular shares, you could disable sync writes.
For VMs, the default NFS behaviour seems appropriate. Consider adding a SLOG.

Would it be worse? I am definitely misunderstanding how iSCSI vs NFS work if that is the case… which is probably the case.

I will be getting a SLOG to alleviate some of this for sure. I will also be curious to see how much activity the SLOG actually gets once I have it installed.

NFS, like SMB, shares files: the client requests or sends files; the server is responsible for managing reads and writes to the underlying filesystem, and can thus optimise its operations.
iSCSI presents a virtual raw device: the client manages its own filesystem on the device, reading and writing blocks (including metadata blocks for the filesystem it uses); the server doesn’t know anything about the nature of the blocks it handles and has to “act dumb” with them. It also adds its own overhead, as ZFS will add its own metadata to all blocks, both data and client metadata.
iSCSI requires more resources, in particular RAM (recommended min. 64 GB).
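On ZFS, the usual way to back an iSCSI extent is a zvol, which is exactly that kind of raw virtual device. A sketch, with hypothetical pool names and sizes (not from the thread):

```shell
# Create a sparse (-s) 200 GiB zvol to export as an iSCSI extent
zfs create -s -V 200G tank/vols/vm1

# A smaller volblocksize can suit small-block VM workloads;
# note it can only be set at creation time, not changed later
zfs create -s -V 200G -o volblocksize=16K tank/vols/vm2
```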


Ah. Makes sense. So NFS + SLOG would be the way to go, assuming the writes are actually synchronous. Or alternatively I could turn off sync.

NFS defaults to sync writes, due to its historical use. As said, “integrity over performance” is the right call for VMs. So adding an Optane drive for SLOG should help.
NVR can be NFS with sync=disabled.

Note that VMs typically involve many small transactions, for which raidz is quite inefficient, and mirrors are recommended. If all storage is on the 10-wide raidz2, you may reconsider your pool layout.
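For reference, a mirror-based layout for VM storage might look like this; device names are placeholders, and this is only a sketch, not a recommendation for your specific disks:

```shell
# Two mirrored vdevs striped together: better IOPS for the many small
# transactions VMs generate than a single wide raidz vdev
zpool create vmpool mirror sda sdb mirror sdc sdd
```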

I have a few SAS SSDs on their way now. Definitely not as performant as Optane, but they should be a good upgrade. One will be the SLOG, one will be metadata-only L2ARC.

All mass storage does go to the Z2 array, but the only thing constantly writing is the NVR. The other VMs don’t really access mass storage often, and the VMs themselves live on my Proxmox boot SSDs.
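For what it’s worth, attaching those two roles once the SSDs arrive could look something like this (pool and device names are hypothetical):

```shell
# Add one SSD as a SLOG (separate intent log) device
zpool add tank log sde

# Add the other SSD as L2ARC
zpool add tank cache sdf

# Restrict the L2ARC to caching metadata only (set per dataset or pool-wide)
zfs set secondarycache=metadata tank
```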