Expected SATA-III transfer speeds to a 1 x RAIDZ2 | 8-wide array, with or without SLOG

I currently have a dedicated 10-bay SATA-III enclosure with a 10 Gbps USB 3.2 Gen 2 connection, populated with an 8-wide RAIDZ2 vdev. Writing over SMB to TrueNAS from macOS Sonoma, with a dedicated SATA-III SLOG, I seem to max out at a sustained 400 MB/s for large-file transfers of around 50 GB per file over a 2.5 Gbps Ethernet link. That seems slow to me, since I am getting speeds faster by at least an order of magnitude writing the same transfers over SMB to a separate OWC enclosure running SoftRAID, using the same 2.5 Gbps links but targeting a different host. I attribute some of the gain there to the transfer being Mac-to-Mac, and possibly to automatic Wi-Fi + Ethernet pathing that I never explicitly configured.

My math says 2.5 GbE equates to 312.5 MB/s before adjusting up or down for jumbo frames, protocol compression, or packet and protocol overhead, so I can't explain either the 400 MB/s to my TrueNAS server or the GB/s speeds I'm getting to my SoftRAID server over the same switch. I just feel like the performance of TrueNAS shouldn't be worse than the performance of SoftRAID over the same 2.5 GbE switch if all else is equal.
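For reference, the back-of-the-envelope numbers I'm working from (the overhead factor is just my assumption):

```sh
# Theoretical line rate of 2.5 GbE, ignoring all overhead
echo "2.5 * 1000 / 8" | bc -l     # 312.5 MB/s
# Assuming roughly 5-8% Ethernet + TCP + SMB overhead
echo "312.5 * 0.93" | bc -l       # ~290 MB/s realistic ceiling
```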

The speed-testing utilities of TrueNAS appear limited. Running hdparm -Tt only tests a single disk, and even running lsusb -t in a container against the TrueNAS host system only provides basic information about the attached buses without testing performance. I'm unsure how to accurately assess the actual throughput my system should offer through the 10 Gbps connection or across the vdev, to determine what my raw performance with my current hardware really is, or what it should be.
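The closest thing I've found to a general-purpose benchmark is fio, which I believe ships with TrueNAS SCALE; a minimal sketch, assuming a scratch dataset at /mnt/tank/testds (both names are placeholders for my pool):

```sh
# Sequential 1 MiB writes to the pool, flushed at the end so caching
# doesn't inflate the number (O_DIRECT is unreliable on ZFS, hence
# --direct=0 combined with --end_fsync=1)
fio --name=seqwrite --directory=/mnt/tank/testds \
    --rw=write --bs=1M --size=10G --numjobs=1 \
    --ioengine=posixaio --direct=0 --end_fsync=1

# Sequential reads of the same file; use a --size larger than RAM
# (or re-import the pool first) so ARC caching doesn't skew the result
fio --name=seqread --directory=/mnt/tank/testds \
    --rw=read --bs=1M --size=10G --numjobs=1 --ioengine=posixaio
```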

I know that running RAIDZ2 will have an impact due to the higher level of parity, and perhaps the SLOG (currently implemented as a two-disk stripe across 4 TB SATA-III SSDs) could be slowing me down rather than speeding things up. However, I'd still expect an 8-wide vdev to offer a peak rate of 1-2 GB/s or more with such a wide array.
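One thing I can at least watch is whether the SLOG is in the data path at all ("tank" is a placeholder for my pool name):

```sh
# Per-vdev throughput, refreshed every 5 seconds, while a transfer runs.
# If the log devices show little or no write activity, the SLOG isn't
# being exercised by these SMB transfers at all.
zpool iostat -v tank 5
```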

So, this is really a multi-part question.

  1. Assuming no other hardware limitations, what sort of speeds should I actually expect from an 8-wide SATA-III based vdev?
  2. If it should be higher than 400 MB/s, what tools does TrueNAS provide to test individual disk, vdev, and dataset performance?
  3. Other than telling me not to use a USB enclosure, and acknowledging that the problem could certainly be a backplane that uses a SATA-III port multiplier rather than multiple controllers (it's a Sabrent 10-bay enclosure, and that detail is not clearly spelled out in its specifications), how can I determine whether the bottleneck is the USB connection, the SLOG, the SATA controllers, individual disk performance, the additional parity, or something else altogether?
  4. Assuming it’s not the backplane or the USB port/cable limiting my speeds, would removing the SLOG or reconfiguring the vdevs provide a meaningful speed boost? It doesn’t need to be optimal, but certainly needs to be fast enough to read 50 GB files in tens of seconds rather than minutes.
  5. If TrueNAS is already performing optimally and I'm just seeing an artificial speed-up on Mac-to-Mac transfers for some reason, how can I determine whether upgrading the switch from 2.5 GbE to 10 GbE would make any difference for vdev performance? Since the Linux host has a 10 Gbps connection to the enclosure, there should be some way of testing local (rather than network) read/write performance directly on the TrueNAS server; I've sketched my best guess at such a test after this list.
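Here is the local test I have in mind for question 5 (and for isolating the USB funnel in question 3); the device names are examples, and the reads are non-destructive:

```sh
# Read 4 GiB from every enclosure disk at once and compare the summed
# rate against 8x a single disk's rate. If the total plateaus near
# ~1 GB/s (10 Gbps USB) or lower, the shared link or a port multiplier
# is the funnel rather than the disks themselves.
for d in sda sdb sdc sdd sde sdf sdg sdh; do
  dd if=/dev/$d of=/dev/null bs=1M count=4096 iflag=direct &
done
wait
```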

For comparison, I looked at the iXsystems Mini X+ to see whether its hardware specifications defined any performance expectations, but couldn't find any published read/write speeds for that setup either. The Mini X+ is roughly 3x the cost of my current setup with only half the 3.5" HDD capacity, and it's not obvious whether such a system would actually be any faster. Anyone's experience with the performance envelope of the Mini X+ would be a welcome addition to my troubleshooting and analysis.

Any multi-disk USB enclosure HAS to be using some type of port multiplier. Whether that is part of the USB chip or a SATA port multiplier, it is a funnel. Even if the enclosure uses a USB hub feeding multiple single-disk USB-to-SATA chips, that is still effectively a USB port multiplier.

USB-attached storage is not a reliable method for attaching ZFS data storage devices. We have seen both pool corruption and full pool loss from people using USB-attached data storage. Part of the issue is that ZFS wants in-order writes, which some hardware RAID and USB enclosures won't guarantee. That said, it is possible you could go the entire life of your TrueNAS server without problems.

A ZFS SLOG (Separate intent LOG, aka the ZIL moved off the data vdevs) is not a write cache. It is used for synchronous writes only, which generally means iSCSI, NFS, database storage, and/or VM storage. I don't know whether macOS issues synchronous writes over SMB, but I vaguely recall someone saying something about it.
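One way to check, assuming "tank/share" stands in for your actual SMB dataset:

```sh
# If sync writes aren't being issued, the SLOG sits idle for SMB anyway
zfs get sync tank/share

# Temporary experiment (unsafe across a power loss, so only for testing):
# disable sync, re-run the transfer, then restore. No speed change means
# the SLOG was never the bottleneck.
zfs set sync=disabled tank/share
# ...re-run the transfer...
zfs set sync=standard tank/share
```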

All that said, check that UASP is available on your enclosure and is actually being used. That can improve throughput by 10-20% and reduce CPU overhead as well. I glanced at the web site for the device, but it does not mention UASP. Perhaps you can find it mentioned in the manual or a forum post.
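If I remember the output format correctly, you can confirm it from the shell:

```sh
# A UASP-attached enclosure shows Driver=uas on its device line;
# the slower bulk-only fallback shows Driver=usb-storage instead
lsusb -t
```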

If you are getting 400 MB/s over 2.5 Gb/s networking, you are already at, or indeed slightly above, the 312.5 MB/s line rate, so the network link is your ceiling. Not sure why you think it should be higher.
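If in doubt, take the disks out of the picture entirely; assuming iperf3 is available on both ends (it ships with TrueNAS SCALE, and Homebrew has it for macOS):

```sh
# On the TrueNAS box, run a server:
iperf3 -s

# On the Mac ("truenas.local" is a placeholder hostname), test a single
# stream, then four parallel streams:
iperf3 -c truenas.local
iperf3 -c truenas.local -P 4
```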

One last comment. ZFS is not the fastest file system, volume manager, and RAID scheme out there. It is designed for data integrity first and foremost, so there are other solutions out there that are faster.


Mmh, how is the OWC enclosure connected?

OWC does mention the word Mac quite a lot; maybe SoftRAID is optimized for macOS/Apple hardware out of the box more than more "generic" hardware?

Thanks for the thoughtful reply. Here are a few responses and counterpoints.

  • Both the server and the enclosure support UASP.
  • An enclosure can support multiple SATA and USB buses, but the limited TrueNAS diagnostics don't make it easy to determine whether this one has more than one (I've sketched an attempt after this list).
  • I currently can't tell whether the enclosure has a single controller for the whole backplane. Even if it does, such enclosures can in principle offer 10, 20, or 40 Gbps per controller, although I certainly suspect this one maxes out at 10.
  • A USB speed of 10 Gbps easily exceeds the SATA-III speed of any single disk.
  • With 20 Gbps USB and 40 Gbps USB4 available elsewhere, even though this enclosure supports a max of 10 Gbps to the host, it seems like controller bandwidth and per-disk SATA-III speeds (as opposed to striping speed-ups) are the real constraint.
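Here's my attempt at mapping disks to controllers from the TrueNAS shell; the columns are standard lsblk fields, though I'm not certain how much the USB bridge hides:

```sh
# TRAN shows the transport (usb vs sata) and HCTL the SCSI host each
# disk hangs off. Eight disks sharing one SCSI host suggests a single
# bridge chip; distinct hosts suggest multiple controllers.
lsblk -d -o NAME,TRAN,HCTL,MODEL

# List the USB bridge devices themselves
lsusb
```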

If 400 MB/s is reasonable for the hardware and link, the only other improvement I could think of would be link aggregation. My server has dual NICs, and my switch has a hardware dual-port LAGG option. I'm not sure whether TrueNAS allows an active/active LAGG configuration when the switch hardware supports it, or whether it allows bonding modes other than failover, but that seems like a potential way to double my speed even with the current hardware (assuming a single SMB session can actually be spread across both links, which I gather LACP's per-flow hashing may not allow).

Maybe; but SoftRAID runs on Windows too. My guess is that Thunderbolt 3 to a multi-controller backplane is at work, but the OWC ThunderBay Flex 8 isn't always clear about its hardware internals. It's the same 2.5 GbE, though.