TrueNAS SCALE slow SMB read/write speeds

Greetings. I think I’m having issues with read/write speeds on the following TrueNAS SCALE build:

 - ElectricEel-24.10.0.2
 - i7-7700K CPU @ 4.20GHz
 - 2x16GB of non-ECC RAM (I know)
 - 2x1TB disks in an SMB share (a WDC_WD10EFRX-68JCSN0 and an ST1000DM010-2EP102). Sync set to Standard on the dataset. Pool is mirrored.
 - 2x3TB disks in an NFS share (a WDC_WD40EFRX-68N32N0 and a WDC_WD40EFRX-68N32N0). Sync set to Standard on the dataset. Pool is mirrored.
 - A genuine Intel i350-based NIC

My problem is that the max read/write speeds I can get on both of those shares (NFS and SMB) are under 90MB/s. I did the following test: I uploaded a folder of images (~17GB) to both shares, then downloaded it again immediately after the upload completed. The download ran at the same speed as the upload - around 85MB/s. I feel like I am not taking advantage of the memory installed in that system at all. The test was performed from a Windows 10 Pro workstation connected to a gigabit switch on the same network as the NAS.

I am aware that what I have provided is little to nothing, so in case anyone wants to shed some light here, I can pull up whatever logs/information is necessary.

  1. It is unclear to me whether you are talking about MB (megabytes) or Mb (megabits), but the distinction is critically important.

  2. Just because you have a Gigabit switch does NOT mean that the end-to-end speed is Gigabit. You will actually need to check on both TrueNAS and your PC that the negotiated link speed of the NICs is 1Gb full duplex (example commands after this list).

  3. You need to separate out network performance from disk performance in order to work out where any bottleneck is. First you should use iperf to check the performance of the network end-to-end. Once you are certain that the network is not the issue, we can look at the disk performance.

  4. Benchmarking disk performance meaningfully is much, much more complicated than you might think, because ZFS has a lot of memory-based performance optimisations. If you have 32GB of memory, then a 17GB benchmark probably needs to be several times larger to be meaningful (see the fio sketch after this list). There is also more network chit-chat for multiple files than for one big file, and (of course) you need to be mindful that the data has to be read or written by your PC as well, so the disk performance there may be the bottleneck rather than TrueNAS.

  5. The key information missing from your spec is an explicit statement of pool design - it looks like you have two pools (one for SMB, one for NFS), but it is unclear whether these are mirrored (hopefully) or striped (oops). We need this type of information to be able to predict what your disk performance might be from the specs. Good news - all your disks appear to be CMR, though the ST1000DM010 is a consumer Barracuda disk rather than a NAS IronWolf. Note: whilst this pool design may be great for testing, it is NOT a good design for production performance.

  6. All that said, the disks probably each have a sustained data rate of c. 200MB/s, or c. 1.5Gb/s. Assuming the pools are striped (a bad choice for production), the maximum sustained write speed to each pool would be c. 3Gb/s.

  7. You are absolutely right to mention the Sync (write) settings on the datasets - synchronous writes (without an SLOG) will absolutely kill your write performance to HDD, so you need to be absolutely certain that you are doing asynchronous writes (you can check the dataset setting with the zfs commands after this list). SMB from Windows is always asynchronous, but I am not sure what Sync=Standard does for SMB from Linux or Mac, or for NFS.
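Following up on item 2, here is a quick way to check the negotiated link speed on both ends (only a sketch - the interface name enp3s0 is an assumption, so substitute whatever your NIC is actually called):

On TrueNAS SCALE (shell):

# Show the negotiated speed and duplex for the NIC
ethtool enp3s0 | grep -E 'Speed|Duplex'

On Windows (PowerShell):

# Show the link speed reported by each adapter
Get-NetAdapter | Format-Table Name, Status, LinkSpeed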
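For item 4, if you want to take the network out of the picture entirely, you can benchmark the pool itself from the TrueNAS shell with fio (shipped with SCALE, as far as I know). This is only a sketch, and /mnt/tank/smbshare is an assumed dataset path - point it at your own dataset, and keep the size comfortably larger than your 32GB of RAM so the ARC cannot hide the disks:

# Sequential write test, 64GiB, flushed to disk at the end
fio --name=seqwrite --directory=/mnt/tank/smbshare --rw=write --bs=1M --size=64G --numjobs=1 --end_fsync=1

# Sequential read test (fio lays out its own 64GiB file first if one does not already exist)
fio --name=seqread --directory=/mnt/tank/smbshare --rw=read --bs=1M --size=64G --numjobs=1

Remember to delete the test files afterwards.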
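And for item 7, the dataset's sync setting is easy to inspect from the shell; tank/smbshare below is again an assumed pool/dataset name:

# Show the current sync setting for the dataset
zfs get sync tank/smbshare

# As a temporary experiment ONLY (sync=disabled risks losing in-flight writes on power loss):
zfs set sync=disabled tank/smbshare
# ...and put it back afterwards
zfs set sync=standard tank/smbshare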

So, my advice: start by running iperf and see what your end-to-end network speed is.
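The basic invocation is simple (a sketch - "truenas" here stands in for whatever hostname or IP address your NAS answers on):

On the TrueNAS side:

# Start an iperf3 server listening on the default port 5201
iperf3 -s

On the Windows client:

# 10-second TCP throughput test towards the NAS
.\iperf3.exe -c truenas

# Add -R to reverse the direction and test NAS-to-client throughput
.\iperf3.exe -c truenas -R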


Thanks a lot for the reply!

All pools are mirrored, I am referring to MB (megabytes), and I’ll need to get back to the iperf3 topic once I learn how to use the tool. I’ve added these details to my original post for clarity.

85MB/s = c. 700Mb/s.

Since you are on a 1Gb network, this is pretty respectable. What are you expecting?

Assuming minimal seeks, mirrors should have c. 1.5Gb/s sustained write (assuming asynchronous writes) and 3Gb/s sustained reads.
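As a rough back-of-envelope check on the network side: 1Gb/s ÷ 8 = 125MB/s of raw line rate, and after Ethernet/IP/TCP framing and SMB protocol overhead the practical ceiling for a single SMB transfer over gigabit is usually somewhere around 110-115MB/s. So 85MB/s is already within sight of the wire speed, not miles below it.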

Here are the results from the iperf3 test:

Server side (TrueNAS Scale):

iperf3 -s
-----------------------------------------------------------
Server listening on 5201 (test #1)
-----------------------------------------------------------
Accepted connection from fe80::c055:7074:21a0:1781, port 61294
[  5] local fe80::a236:9fff:fe97:1c68 port 5201 connected to fe80::c055:7074:21a0:1781 port 61191
[ ID] Interval           Transfer     Bitrate         Jitter    Lost/Total Datagrams
[  5]   0.00-1.00   sec   107 MBytes   902 Mbits/sec  0.058 ms  0/13757 (0%)  
[  5]   1.00-2.00   sec   110 MBytes   922 Mbits/sec  0.063 ms  0/14063 (0%)  
[  5]   2.00-3.00   sec   110 MBytes   924 Mbits/sec  0.062 ms  0/14093 (0%)  
[  5]   3.00-4.00   sec   109 MBytes   912 Mbits/sec  0.063 ms  0/13917 (0%)  
[  5]   4.00-5.00   sec   110 MBytes   919 Mbits/sec  0.066 ms  0/14023 (0%)  
[  5]   5.00-6.00   sec   110 MBytes   925 Mbits/sec  0.062 ms  0/14120 (0%)  
[  5]   6.00-7.00   sec   110 MBytes   921 Mbits/sec  0.053 ms  0/14054 (0%)  
[  5]   7.00-8.00   sec   109 MBytes   918 Mbits/sec  0.066 ms  0/14006 (0%)  
[  5]   8.00-9.00   sec   109 MBytes   918 Mbits/sec  0.063 ms  0/14006 (0%)  
[  5]   9.00-10.00  sec   109 MBytes   914 Mbits/sec  0.072 ms  22/13963 (0.16%)  
[  5]  10.00-10.02  sec  1.77 MBytes   922 Mbits/sec  0.078 ms  0/227 (0%)  
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Jitter    Lost/Total Datagrams
[  5]   0.00-10.02  sec  1.07 GBytes   917 Mbits/sec  0.078 ms  22/140229 (0.016%)  receiver
-----------------------------------------------------------
Server listening on 5201 (test #2)
-----------------------------------------------------------

Client side (Windows 10 Pro system):

.\iperf3.exe -b 0 -u -c truenas
Connecting to host truenas, port 5201
[  4] local fe80::c055:7074:21a0:1781 port 61191 connected to fe80::a236:9fff:fe97:1c68 port 5201
[ ID] Interval           Transfer     Bandwidth       Total Datagrams
[  4]   0.00-1.00   sec   109 MBytes   916 Mbits/sec  13980
[  4]   1.00-2.00   sec   110 MBytes   922 Mbits/sec  14060
[  4]   2.00-3.00   sec   110 MBytes   924 Mbits/sec  14100
[  4]   3.00-4.00   sec   109 MBytes   911 Mbits/sec  13900
[  4]   4.00-5.00   sec   110 MBytes   920 Mbits/sec  14030
[  4]   5.00-6.00   sec   110 MBytes   925 Mbits/sec  14120
[  4]   6.00-7.00   sec   110 MBytes   922 Mbits/sec  14060
[  4]   7.00-8.00   sec   109 MBytes   917 Mbits/sec  13990
[  4]   8.00-9.00   sec   110 MBytes   919 Mbits/sec  14020
[  4]   9.00-10.00  sec   109 MBytes   915 Mbits/sec  13970
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Jitter    Lost/Total Datagrams
[  4]   0.00-10.00  sec  1.07 GBytes   919 Mbits/sec  0.078 ms  22/140229 (0.016%)
[  4] Sent 140229 datagrams

iperf Done.

I expect at least the reads/downloads from my TrueNAS to come close to that barrier, since my pools are mirrored and I’ve given the system 32GB of memory for cache. I have yet to see anything over ~100MB/s going to or coming from the NAS.
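If it helps, I can also pull ARC statistics from the shell to show whether that memory is actually being used as cache - for example (assuming the standard OpenZFS tools are present on SCALE):

# Summarise ARC size and hit rates
arc_summary | head -n 40

# Or watch ARC hits/misses live, one line per second
arcstat 1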