Super slow SMB speed

Hello everyone, I just finished building my TrueNAS SCALE server, and for some reason the SMB transfer speed is super low, only around 20 MB/s. The system has 4× WD Red Pro 18TB drives in a RAIDZ1 config, and it is connected with both a 1GbE and a 2.5GbE port. While I am transferring a file I can see in the network monitoring section that around 300 Mb/s of data is coming in, yet the transfer speed stays super low.

This is the iperf result
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-10.00 sec   176 MBytes   148 Mbits/sec   sender
[  5]   0.00-10.02 sec   174 MBytes   145 Mbits/sec   receiver
[  7]   0.00-10.00 sec   166 MBytes   139 Mbits/sec   sender
[  7]   0.00-10.02 sec   164 MBytes   137 Mbits/sec   receiver
[  9]   0.00-10.00 sec   167 MBytes   140 Mbits/sec   sender
[  9]   0.00-10.02 sec   164 MBytes   137 Mbits/sec   receiver
[ 11]   0.00-10.00 sec   167 MBytes   140 Mbits/sec   sender
[ 11]   0.00-10.02 sec   165 MBytes   138 Mbits/sec   receiver
[SUM]   0.00-10.00 sec   676 MBytes   567 Mbits/sec   sender
[SUM]   0.00-10.02 sec   666 MBytes   558 Mbits/sec   receiver

iperf Done.

And this is the dd result
admin@truenas[~]$ sudo dd if=/dev/zero of=/mnt/storage bs=30G count=10 oflag=direct iflag=fullblock
10+0 records in
10+0 records out
322122547200 bytes (322 GB, 300 GiB) copied, 140.277 s, 2.3 GB/s
For some reason with the dd test I cannot bypass the cache.

This is the zpool status
pool: Main storage
state: ONLINE
config:

    NAME                                      STATE     READ WRITE CKSUM
    Main storage                              ONLINE       0     0     0
      raidz1-0                                ONLINE       0     0     0
        efa6297d-3bcc-4780-9476-db0ee4f84723  ONLINE       0     0     0
        2fbda198-f257-45d4-929c-cf7f8ea94ccd  ONLINE       0     0     0
        1cd7aec1-f6ff-490b-a206-5a4e61391b20  ONLINE       0     0     0
        fc1206db-d67a-41b3-80f0-3d60f4c516fe  ONLINE       0     0     0

errors: No known data errors

pool: Single HDD
state: ONLINE
config:

    NAME                                    STATE     READ WRITE CKSUM
    Single HDD                              ONLINE       0     0     0
      2d5cc16e-1f6c-4d69-a44f-2158fa6e2a67  ONLINE       0     0     0

errors: No known data errors

pool: boot-pool
state: ONLINE
scan: scrub repaired 0B in 00:00:12 with 0 errors on Fri Nov 15 06:45:13 2024
config:

    NAME         STATE     READ WRITE CKSUM
    boot-pool    ONLINE       0     0     0
      nvme0n1p3  ONLINE       0     0     0

errors: No known data errors

We need better hardware details and your network setup. The brand and model of the NIC really helps, along with the connection speeds. Make sure you keep MB and Mb straight when posting. The models of the hard drives may also make a difference - we are looking for all drives to be CMR rather than SMR tech. Details on how you are testing between the devices help as well.
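You can pull the exact drive models straight from the TrueNAS shell, for example (the device name here is a placeholder and will differ on your system):

# Model and size of every physical disk:
lsblk -d -o NAME,MODEL,SIZE

# Full SMART identity for one drive, including the exact model string:
sudo smartctl -i /dev/sda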

The system uses an N5105 board with 4-port Intel i226/i225 2.5GbE LAN, an M.2 NVMe slot, 6× SATA 3.0, 2× DDR4 slots and 1× PCIe 4.0, and it has 64GB of DDR4 RAM. The dd test and the zpool status were run from the TrueNAS web shell. The iperf test was conducted between my Linux machine and the TrueNAS server; they are connected through an ASUS wired mesh system, and I am getting almost no drop-off in internet speed between nodes. The transfer speed over SMB is around 26 MiB/s. In the interface monitoring tab I am seeing data received at 250 MB/s. All the drives are WD Red Pro, which are CMR drives.

The Intel i225 has been problematic. The Intel i226 has been okay and is a recommendation on the forums. Your iperf test is slow and nowhere near a 1Gbps connection speed; I would expect 800-900 Mbps. It looks like a network issue.

Here are my iperf3 results, Windows 10 to TrueNAS. I have 1Gbps and 10Gbps networks. I did have to run the 10G test twice, as the first run was really low - it reported only about 1.3Gbps.

1 Gig: [iperf3 screenshot]

10 Gig fiber: [iperf3 screenshot]

iPerf suggests to me that your network is OK - though on the 1Gb I am a bit worried about the retry counts.

However, your method for attempting to benchmark the disk performance is completely flawed. ZFS is a complex file system with lots of performance enhancements, and if you want to benchmark it then you absolutely need to understand both (a) how ZFS works and will process your benchmark, and (b) exactly what your benchmarking tool is writing and how it will interact with ZFS. In the dd case you are using, here are all the factors that will make your results meaningless:

  1. Are you doing synchronous or asynchronous writes with dd? The impact of this simple choice is MASSIVE.

  2. Are you doing compression on your dataset? What will compression do to your writes? Will it even write ANYTHING to disk at all?

  3. How much memory do you have, and how much of it is going to be used for queuing data for writes? (This is not, as you state, a write cache, though it has some similar characteristics.) How does this compare to the size of the data you are writing?

  4. Only after all these have had their impact will you actually write to disk.

  5. Even then, you need to be VERY careful about what stats you use as the measurement.

In your specific case where you are writing literally zeros, this is EXTREMELY compressible with the default dataset compression, to the point that in some cases literally NOTHING is actually written to disk, because ZFS may create a sparse file where completely empty blocks are just noted as empty and not written and don’t use disk space.
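If you want to check whether anything actually landed on disk, a couple of quick checks along these lines will show it (the pool name is taken from your zpool status, and /mnt/storage is the output file from your dd command):

# Is compression enabled? (it is typically on by default)
zfs get -r compression "Main storage"

# Compare the file's apparent size with what it actually occupies on disk.
# If du reports far less than ls, the zeros were compressed or stored sparsely.
ls -lh /mnt/storage
du -h /mnt/storage

# Watch real write activity hitting the vdevs while the test is running:
sudo zpool iostat -v "Main storage" 1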

But even if data is actually being written to disk, the dd-reported throughput is meaningless, both as a measurement of actual disk writes and as a realistic measure of what you will get in practice with genuine (random-like) data.
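If you want a rough but meaningful number for raw pool write speed, something along the lines of the fio run below is a better starting point than dd from /dev/zero, because fio's default buffers are pseudo-random and will not compress away, and the final fsync makes the result reflect what actually reached the disks. The dataset path and size here are only examples, and this assumes fio is available in your shell - point it at the dataset you actually share over SMB:

# Sequential 1MiB writes, ~16GiB of effectively incompressible data,
# with an fsync at the end so the reported bandwidth is real disk writing:
fio --name=seqwrite --directory="/mnt/Main storage" \
    --rw=write --bs=1M --size=16G --numjobs=1 \
    --ioengine=psync --end_fsync=1 --group_reporting

# Remember to delete the test file (seqwrite.0.0) afterwards.

If fio shows the pool writing far faster than ~26MiB/s, the bottleneck is the network or the SMB path rather than the disks.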

Are you commenting about the iperf test for the OP? I just did the two screenshots so the OP could get an idea of what the two different network speeds test at compared to the original post.

Oops - yes I was. So ignore the bit about network tests being good.

OP needs to run iPerf to see whether it might be a network issue first. If it isn’t we can look at his disk setup.

I did run iPerf; here is the result:
[SUM] 0.00-10.00 sec 676 MBytes 567 Mbits/sec sender
[SUM] 0.00-10.02 sec 666 MBytes 558 Mbits/sec receiver

It is slower than the 1G rate I am supposed to get; however, I suppose it does not really explain why my SMB is so slow - I am only getting 26 MiB/s. I tried different types of files, and the speed is even slower with smaller files.

If you want to describe your end-to-end network between TrueNAS and your Windows box, we can try to work out why it is slower than you would expect.

Did you leave the Pool & Dataset options at the defaults? [Screenshot snip from the CORE version]
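If a screenshot is awkward, the same settings can be read from the shell, for example (pool name taken from the zpool status above; -r includes child datasets):

# Dataset properties most likely to affect SMB write throughput:
zfs get -r compression,recordsize,sync,atime "Main storage"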

It is a Linux machine connected to an ASUS router (node); that node is then connected through a MoCA adapter back to the main ASUS router, and the NAS server is connected to the main ASUS router. The iperf is slower than 1G for sure, but that still does not explain why the transfer speed is only 26 MiB/s. Thank you in advance.

Yes, I left everything at the defaults except the SMB2/3 option, which I turned on.

Try working through the Joes Rules to Asking for Help. There are sections on Drive Speed Testing and Network Issues. You can run those and post all the text from the shell window using Preformatted Text (</>), right above where you type your replies. We also need to know which pool and disks you were testing. The other odd item is your mention of MoCA networking: if only the Linux box and TrueNAS are active on that segment we may be okay, but otherwise everything is sharing that coax cable.

This is not sufficient detail. I will assume that this is all hardwired Ethernet - if not, please say so. Specifically:

  1. What speed is the ASUS node router?

  2. What speed is the ASUS main router?

  3. What link negotiation speed is the TrueNAS NIC reporting?

  4. What link negotiation speed is the Linux NIC reporting?

  5. Do the ASUS switches have a UI, and if so what link speeds are they reporting?

  6. Do either TrueNAS or Linux report any significant number of network retries in their NIC stats?

  7. Is there any other significant traffic on your LAN at the same time (e.g. torrents, downloads, uploads, media streaming of any type)?
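On the TrueNAS and Linux ends, the negotiated link speed and the error/drop counters can be read with something like the following (replace eth0 with the actual interface name; running ip link will list them):

# Negotiated speed and duplex for one NIC:
sudo ethtool eth0 | grep -E 'Speed|Duplex'

# Per-interface error, drop and overrun counters:
ip -s link show eth0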

With regard to the tests, I agree that 26 MiB/s (roughly 220 Mb/s) is significantly less than 1Gb, and significantly less than the iPerf numbers, but diagnosing and solving these things can only be done by reviewing each link in the performance chain. And networking is much easier to look at than disk I/O, for the reasons I have previously explained.