Super fast disks, slow performance

I have a pair of Samsung 990 Pro 4TB M.2 as a mirrored VDEV, not even my primary pool.

Running fio on the latest TrueNAS SCALE, I get around 5K x 5K IOPS read x write.

On the same VDEV I have a ZVOL, mounted under VMware as a VMFS volume via iSCSI over a pair of 10G connections; there I get roughly 20K x 20K, never over 30K.

Command being used:

sudo fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=random_read_write.fio --bs=16k --iodepth=64 --size=8G --readwrite=randrw --numjobs=8

Any thoughts? I should be able to get something around 400K IOPS without much effort at all; even the stick itself has a hero number of 1.6M IOPS.

Let me back down for a sec… I forgot I was running a SMART long test on both disks in this mirrored VDEV. While that was running I was seeing something around 5K total no matter what I was doing.
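If anyone wants to check the same thing on their end, smartctl should report whether a self-test is still in progress (the device path below is just an example, substitute your actual /dev node):

sudo smartctl -a /dev/nvme0    # the self-test log near the end of the output shows progress/completion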

I'll let that complete and then post an update.

The other thing is your RAM size. You probably want to run benchmarks larger than your ARC size, or your read numbers may be skewed by cache hits.
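A rough sketch of what I mean (the arcstats path is standard on SCALE; the dataset name is just an example):

# Check how big ARC currently is, so your fio --size stays well above it
awk '/^size/ {printf "%.1f GiB\n", $3/1024/1024/1024}' /proc/spl/kstat/zfs/arcstats

# Or take ARC mostly out of the picture for the test dataset while benchmarking
sudo zfs set primarycache=metadata tank/fio-test
# ... run fio ...
sudo zfs set primarycache=all tank/fio-test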

When you post back, let us know your full setup details so you get better comments.

System is a Supermicro SYS-110P-WTR with 256GB ECC RAM and Intel Xeon 4310.

The disks I am using for testing are Samsung 990 Pro M.2 4TB (2) via an ASUS bifurcation card; only 2 of the 4 slots are being used. Mirrored VDEV, recordsize set at 512K, compression ZSTD, no dedupe, SHA256.

I also have the bays loaded with WD Red 4TB SATA SSDs and a pair of Micron 1.92TB NVMe U.3 drives: RAID-Z2 for the SATA SSDs, with the U.3 disks as a mirrored metadata VDEV and small blocks at 256K. Recordsize at 512K, compression ZSTD, no dedupe, SHA256.
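For reference, the dataset settings above correspond to something like this (pool/dataset names here are placeholders, not my actual pools):

# NVMe mirror test dataset
sudo zfs set recordsize=512K fast/test
sudo zfs set compression=zstd fast/test
sudo zfs set checksum=sha256 fast/test
sudo zfs set dedup=off fast/test

# SATA SSD RAID-Z2 pool with the U.3 mirror as a special VDEV;
# records at or below 256K land on the special VDEV
sudo zfs set special_small_blocks=256K bulk/data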

Running the same test shows 2.5K read, 2.5K write, which seems abnormally slow. Copying data to/from the pool doesn't show a similar result, so I wonder if this is an fio issue, or whether it's because I'm running it in the shell off the web UI (though I get the same over SSH).

Think I found the issue: --readwrite=randrw vs --readwrite=rw.
Went from around 2.5K R/W to over 400K R/W.

randrw is geared more towards database-style random access, where rw is sequential IO. Even though these are M.2 SSDs, it seems the way ZFS operates leans heavily towards sequential IO, which somewhat limits the potential of faster SSDs.
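For comparison, the only change between the slow and fast runs was the --readwrite flag; everything else is the original command:

# Random mixed read/write -- the run that sat around 2.5K IOPS each way
sudo fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=random_read_write.fio --bs=16k --iodepth=64 --size=8G --readwrite=randrw --numjobs=8

# Sequential mixed read/write -- the run that went over 400K
sudo fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=random_read_write.fio --bs=16k --iodepth=64 --size=8G --readwrite=rw --numjobs=8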
