I currently have a dedicated 10-bay SATA-III enclosure with a 10 Gbps USB 3.2 Gen 2 connection, populated with an 8-wide RAIDZ2 vdev. Using SMB to TrueNAS from macOS Sonoma, with a dedicated SATA-III SLOG, I seem to max out at a sustained 400 MB/s for large-file transfers of roughly 50 GB per file over a 2.5 Gbps Ethernet link. That seems slow to me, since I get speeds at least an order of magnitude faster writing the same transfers over SMB to a separate OWC enclosure running SoftRAID on a different host, using the same 2.5 Gbps links. I attribute some of that gain to the transfers being Mac-to-Mac, and possibly to automatic Wi-Fi + Ethernet pathing I didn't explicitly configure.
My math says 2.5 GbE equates to 312.5 MB/s without adjusting up or down for jumbo frames, protocol compression, or packet and protocol overhead, so I don’t know how to explain either the 400 MB/s to my TrueNAS server or the GB/s speeds I’m getting to my SoftRAID server over the same switch. I just feel like the performance of TrueNAS shouldn’t be worse than the performance of SoftRAID over the same 2.5 GbE switch if all else is equal.
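For reference, here's my back-of-the-envelope math as a quick sanity check (my own figures, not from any spec):

```shell
# Raw line rate of a 2.5 GbE link, before adjusting for jumbo frames,
# compression, or packet/protocol overhead:
# 2.5 Gbit/s / 8 bits per byte = 312.5 MB/s
awk 'BEGIN { printf "%.1f MB/s\n", 2.5e9 / 8 / 1e6 }'
# Real-world SMB throughput lands somewhere below that once Ethernet/IP/
# TCP/SMB overhead is subtracted, so a sustained 400 MB/s over a single
# 2.5 GbE link shouldn't be possible without a second path in play.
```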
The speed-testing utilities in TrueNAS appear limited. Running `hdparm -Tt` only tests a single disk, and even running `lsusb -t` in a container against the TrueNAS host only reports basic information about the attached buses without measuring performance. I'm unsure how to accurately assess the throughput my system should offer through the 10 Gbps connection, or across the vdev, to determine what my raw performance with my current hardware really is, or what it should be.
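The best local check I've come up with so far is a plain `dd` write-and-read on a dataset, which at least bypasses SMB and the network entirely. The path here is a placeholder, and this is a rough sketch rather than a proper benchmark:

```shell
# Rough local sequential-throughput check that bypasses SMB and the network.
# TESTDIR is a placeholder; on TrueNAS, point it at a dataset on the pool
# under test, e.g. /mnt/<pool>/<dataset>.
TESTDIR=${TESTDIR:-/tmp}
# Use urandom rather than /dev/zero: with compression enabled on the
# dataset, zeroes compress away and wildly overstate write speed.
dd if=/dev/urandom of="$TESTDIR/ddtest.bin" bs=1M count=256 conv=fdatasync
# Read it back (as root, run "echo 3 > /proc/sys/vm/drop_caches" first so
# the file isn't served straight from ARC/page cache):
dd if="$TESTDIR/ddtest.bin" of=/dev/null bs=1M
rm -f "$TESTDIR/ddtest.bin"
```

`dd` prints its own MB/s figure to stderr at the end of each run; the write pass with `conv=fdatasync` includes the time to flush to disk, so it shouldn't be inflated by write caching.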
I know that running RAIDZ2 will have an impact due to the higher level of parity, and perhaps the SLOG (currently implemented as a two-disk stripe across 4 TB SATA-III SSDs) is slowing me down rather than speeding things up. Even so, I'd still expect an 8-wide vdev to offer a peak rate of 1-2 GB/s or more.
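One check I plan to try on the SLOG theory: as I understand it, SMB writes are normally asynchronous, so a SLOG should only be in the write path at all if sync writes are being forced. I believe the relevant dataset property can be inspected and temporarily toggled like this (`tank/media` is a placeholder dataset name, and this is a diagnostic, not a recommendation to leave sync off):

```shell
# Is sync being forced on the dataset? (tank/media is a placeholder.)
zfs get sync tank/media
# Temporarily disable sync writes, rerun the SMB transfer, and compare.
# If the speed doesn't change, the SLOG isn't the bottleneck.
zfs set sync=disabled tank/media
# ... rerun the 50 GB transfer here ...
# Restore the default afterwards; don't leave sync=disabled in place.
zfs set sync=standard tank/media
```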
So, this is really a multi-part question.
- Assuming no other hardware limitations, what sort of speeds should I actually expect from an 8-wide SATA-III based vdev?
- If it should be higher than 400 MB/s, what tools does TrueNAS provide to test individual disk, vdev, and dataset performance?
- Setting aside the advice to simply not use a USB enclosure (while acknowledging the problem could well be a backplane that uses a SATA-III port multiplier rather than multiple controllers; it's a Sabrent 10-bay enclosure, and its specifications don't clearly say), how can I determine whether the bottleneck is the USB connection, the SLOG, the SATA controllers, individual disk performance, the additional parity, or something else altogether?
- Assuming it’s not the backplane or the USB port/cable limiting my speeds, would removing the SLOG or reconfiguring the vdevs provide a meaningful speed boost? It doesn’t need to be optimal, but it certainly needs to be fast enough to read 50 GB files in tens of seconds rather than minutes.
- If TrueNAS is already performing optimally and I’m just seeing an artificial speed-up when doing Mac-to-Mac transfers for some reason, how can I determine whether upgrading the switch from 2.5 GbE to 10 GbE would make any difference for the vdev performance? Since the Linux host has a 10 Gbps connection to the enclosure, there should be some way of testing local (rather than network) read/write performance directly on the TrueNAS server.
For comparison, I looked at the iXsystems TrueNAS Mini X+ to see whether its hardware specifications implied any performance expectations, but I couldn’t find published read/write figures for that setup either. The Mini X+ is roughly 3x the cost of my current setup with only half the 3.5" HDD capacity, and it’s not obvious whether such a system would actually be any faster. Anyone’s experience with the performance envelope of the Mini X+ would be a welcome addition to my troubleshooting and analysis.