iSCSI Performance: Slow Reads

I want to preface this with some background, since I know this question gets posted all over the place. I have done a lot of reading, I know ZFS, TrueNAS, and iSCSI quite well, and I have read many of the posts on the topic, including “The path to success for block storage” on iSCSI setups. I also believe this system is set up well for iSCSI performance, but maybe I’m ignorant.

I’ve been doing some performance validation on an iSCSI share I have set up and connected to a Windows 10 VM. The VM can write to the iSCSI share at nearly 1 gigabyte (yes, byte) per second, close to the line speed of the 10GbE link on the hypervisor host. Reads, however, commonly sit in the 30-60 megabytes per second range, which feels low for this setup.
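If it helps to reproduce, a sequential test of that shape would look roughly like this in fio (a sketch only: the E: drive letter and file name are placeholders, and I’m assuming fio with the windowsaio engine on the VM):

```
rem sequential write, 1M blocks, against the iSCSI disk (E: and file name are placeholders)
fio --name=seqwrite --filename=E\:\fio-test.bin --ioengine=windowsaio ^
    --rw=write --bs=1M --iodepth=16 --numjobs=1 --size=10G ^
    --direct=1 --runtime=60 --time_based

rem same shape for the read side
fio --name=seqread --filename=E\:\fio-test.bin --ioengine=windowsaio ^
    --rw=read --bs=1M --iodepth=16 --numjobs=1 --size=10G ^
    --direct=1 --runtime=60 --time_based
```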

I’m not looking for god levels of performance, but this is a bigger disparity than I’d expect. It might even be fine for our use, but it still threw me off.

The NAS itself runs an EPYC 7281 with 256GB of RAM and 16 Micron 5300 SATA SSDs, and it’s been nothing but a monster for our use case. It’s not the fastest thing in the universe, but it does great, and I sincerely doubt the CPU is the issue here.

The pool is set up as 8 mirrored VDEVs with no L2ARC or SLOG, and it’s around 40% full.
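For completeness, the layout and fill level are easy to confirm from the TrueNAS shell (tank is a placeholder for the real pool name):

```
# show the 8 mirror vdevs and per-vdev usage (tank is a placeholder)
zpool list -v tank
# pool fill level and fragmentation
zpool get capacity,fragmentation tank
```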

Here are the configurations (verification commands follow the list):

  • Dataset record size: 128 KiB
  • zvol Sync: Always
  • iSCSI extent logical block size: 512 B
  • LUN RPM: SSD
  • zvol size: 500 GiB
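The ZFS side of those settings can be double-checked from the shell; the extent logical block size and LUN RPM live in the TrueNAS iSCSI configuration screens rather than in ZFS (dataset and zvol names below are placeholders):

```
# record size on the parent dataset (tank/iscsi is a placeholder name)
zfs get recordsize tank/iscsi
# sync policy, size, and volume block size on the zvol (placeholder name)
zfs get sync,volsize,volblocksize tank/iscsi/zvol0
```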

The main point of this post is to find out whether I’m missing something super obvious. I’ve done setups like this before and don’t recall seeing such a massive disparity, even with Sync = Always.

IOPS-wise, performance isn’t as far apart: I saw about 17k read IOPS and 13k write IOPS with 4k random I/O at a queue depth of 8 and 8 threads.
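In fio terms that workload is shaped roughly like this (same caveats as the sequential sketch: placeholder file name, windowsaio assumed):

```
rem 4k random reads, queue depth 8, 8 jobs; swap --rw=randwrite for the write side
fio --name=randread --filename=E\:\fio-test.bin --ioengine=windowsaio ^
    --rw=randread --bs=4k --iodepth=8 --numjobs=8 --direct=1 ^
    --size=10G --runtime=60 --time_based --group_reporting
```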