So, it’s block storage. I have very limited experience with zvols… But the experts say that block storage on raidz is a huge no-no[1].
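To illustrate why small blocks on raidz are considered a no-no: a rough sketch of the space overhead, assuming 4K sectors (ashift=12) and the usual raidz allocation rules (parity sectors per stripe row, then padding to a multiple of parity+1). The pool width and geometry here are made-up examples, not your pool; the numbers are illustrative, not exact.

```python
import math

def raidz_alloc_sectors(block_bytes, ndisks, parity, ashift=12):
    """Rough count of sectors allocated for one block on a raidz vdev.

    Assumption: data sectors, plus `parity` sectors per stripe row,
    then padding up to a multiple of (parity + 1). Illustrative only.
    """
    sector = 1 << ashift                       # 4K sectors for ashift=12
    data = math.ceil(block_bytes / sector)     # data sectors needed
    rows = math.ceil(data / (ndisks - parity)) # stripe rows used
    total = data + rows * parity               # add parity sectors
    pad = (-total) % (parity + 1)              # pad to multiple of p+1
    return total + pad

def efficiency(block_bytes, ndisks, parity):
    data = math.ceil(block_bytes / 4096)
    return data / raidz_alloc_sectors(block_bytes, ndisks, parity)

# Hypothetical 10-wide raidz2, nominal space efficiency 8/10 = 80%:
print(f"{efficiency(16 * 1024, 10, 2):.1%}")   # 16K zvol blocks -> 66.7%
print(f"{efficiency(128 * 1024, 10, 2):.1%}")  # 128K records    -> 76.2%
```

With 16K blocks the pool behaves space-wise like a mirror, and every block write touches only a slice of the vdev, which also hurts IOPS scaling.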
AIUI, fio benchmarks file storage. I’m not an expert, but with --iodepth=32 --bs=128k it looks more like a random workload. IMO, to test sequential throughput you should use something like --iodepth=2 --bs=1M.
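For reference, the two workloads could be written as a fio job file. This is a sketch with standard fio options; the filename path is a placeholder for your setup, and direct=1 may not be supported on older ZFS versions, so drop it if fio errors out:

```ini
; random-ish workload, similar to the original flags
[random-test]
ioengine=libaio
direct=1
rw=randwrite
bs=128k
iodepth=32
size=10G
filename=/mnt/tank/fio-test

; sequential throughput test
[throughput-test]
ioengine=libaio
direct=1
rw=write
bs=1M
iodepth=2
size=10G
filename=/mnt/tank/fio-test
```

Run one job at a time with `fio --section=throughput-test jobs.fio` so the results don’t mix.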
However, even if fio showed you the desired values, this workload is very different from actually using zvols. For one, zvols have a 16K volblocksize by default (the zvol equivalent of recordsize). Is there any reason why you decided to use iSCSI instead of SMB on Windows?
How are these drives physically connected and interfacing with your server?
Also, what NIC / switch ports are these connected on, for both sides? Definitely 1G+ ports, etc.
If, for example, you’re using a SATA splitter for this, then I wouldn’t be surprised to see this sort of speed: most commonly they take one or two SATA connections and literally split them across all these drives, so every drive ends up sharing bandwidth it expects to have in full.
Also, can you confirm what you get with sustained SMB tests? E.g. copy a large file (10 GB+) to the server, then a lot of small files (10 MB or less), and see if the speeds drop hard or differ from the block storage speeds.
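If you want repeatable numbers rather than eyeballing the Explorer copy dialog, here’s a minimal Python sketch for that test. The file sizes and paths in the comments are placeholders, and `Z:` is assumed to be the mapped SMB share:

```python
import os
import shutil
import time

def make_file(path, size, chunk=1 << 20):
    """Write `size` bytes of incompressible (random) data to `path`,
    so ZFS compression can't inflate the apparent throughput."""
    with open(path, "wb") as f:
        remaining = size
        while remaining > 0:
            n = min(chunk, remaining)
            f.write(os.urandom(n))
            remaining -= n

def timed_copy(src, dst):
    """Copy src -> dst and return throughput in MiB/s."""
    start = time.monotonic()
    shutil.copyfile(src, dst)
    elapsed = time.monotonic() - start
    return os.path.getsize(src) / (1 << 20) / max(elapsed, 1e-9)

# Example usage (paths are placeholders):
#   make_file(r"C:\temp\big.bin", 10 * 2**30)             # one 10 GiB file
#   print(timed_copy(r"C:\temp\big.bin", r"Z:\big.bin"))  # MiB/s to the share
```

For the small-file half of the test, call `make_file` in a loop with 10 MB sizes and sum the `timed_copy` results.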
Also, I did this test and tried to copy a large file into the volume, and it seems to work a lot faster. It’s not stable and fluctuates a little bit, but all in all the throughput is high: a 10 GB file is copied in 5-10 seconds.
I don’t know your use case, nor what you are trying to achieve. You only said that it is a “backup system”. Not what software, how the storage is mounted, what files you will write and at what sizes, or whether only write speeds are important, or maybe read speeds are even more important because you need fast restores…
If possible I would attach the storage via NFS instead of iSCSI.
Welp, you can just map a network drive (your SMB share). I think it would work OK in most cases, unless your backup software really requires block storage.
AFAIK, SMB is a first-class citizen on Windows (compared to NFS).