SMB transfer capped at about 25 MB/s

Hi Everyone,

I’ve just installed TrueNAS on a Dell OptiPlex with an HBA card.
I have a 14 TB mirror and one NVMe SSD available.

I’ve enabled SMB on both and started testing by mapping the shares on two Win 11 PCs (over Wi-Fi and Ethernet) and moving data around, but transfers are always capped at about 25 MB/s.

I also tried from my phone using Samsung’s My Files app (over Wi-Fi, of course); the transfer was just as slow.

What I’ve done so far:

Summary of Testing and Results

1. Initial Problem:

  • SMB transfer speed is consistently around 25 MB/s for both NVMe (SSD) and HDD storage on TrueNAS.

2. Core System & Network Checks:

  • TrueNAS Installation Type: Confirmed as Bare Metal, not a Virtual Machine. (Eliminates VM overhead as a cause).
  • TrueNAS CPU & RAM Usage During Transfer: Confirmed to be minimal. (Eliminates CPU/RAM as a bottleneck).
  • Network Link Speed (TrueNAS side): Checked via sudo ethtool <interface_name>.
    • Result: Speed: 1000Mb/s, Duplex: Full. (Confirms TrueNAS NIC is operating at Gigabit speed).
  • Raw Network Throughput (iPerf3 test): Performed a client-to-TrueNAS iPerf3 test (commands sketched below).
    • Result: 858 Mbits/sec (sender), 941 Mbits/sec (receiver). (Confirms your entire network path, from client to TrueNAS, is operating at full Gigabit speeds, ruling out network cables, client NIC, and switch as the bottleneck).

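For reference, the iPerf3 runs were along these lines (a sketch: 192.168.1.10 stands in for the TrueNAS IP and the 30-second duration is arbitrary, not necessarily the exact invocation used):

iperf3 -s                           # on TrueNAS: run as the server
iperf3 -c 192.168.1.10 -t 30        # on the Win 11 client: client -> TrueNAS throughput
iperf3 -c 192.168.1.10 -t 30 -R     # reverse mode: TrueNAS -> client throughput
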
3. Internal TrueNAS Storage Performance (dd command tests):

  • NVMe (SSD) Write Speed: ~354-355 MB/s
  • HDD Write Speed: ~352 MB/s
  • NVMe (SSD) Read Speed: ~4.7 GB/s (4700 MB/s)
  • HDD Read Speed: ~4.7 GB/s (4700 MB/s) (no way this is true, so not sure why dd reported this speed)
  • Inter-Pool Transfer (SSD to HDD): ~566 MB/s
  • Inter-Pool Transfer (HDD to SSD): ~1.6 GB/s (1600 MB/s) (again, no way this is true, so not sure why dd reported this speed)

While creating the random data files, the speed was about 350 MB/s on both the HDD and the SSD.

sudo dd if=/dev/urandom of=/mnt/ssd/test_file_4gb_ssd.bin bs=1M count=4096 status=progress
4246732800 bytes (4.2 GB, 4.0 GiB) copied, 12 s, 354 MB/s
4096+0 records in
4096+0 records out
4294967296 bytes (4.3 GB, 4.0 GiB) copied, 12.1363 s, 354 MB/s

sudo dd if=/dev/urandom of=/mnt/hdd/test_file_4gb_hdd.bin bs=1M count=4096 status=progress
4221566976 bytes (4.2 GB, 3.9 GiB) copied, 12 s, 352 MB/s
4096+0 records in
4096+0 records out
4294967296 bytes (4.3 GB, 4.0 GiB) copied, 12.2088 s, 352 MB/s

sudo dd if=/dev/urandom of=/mnt/ssd/ssd_write_test.bin bs=1M count=4096 status=progress
4260364288 bytes (4.3 GB, 4.0 GiB) copied, 12 s, 355 MB/s
4096+0 records in
4096+0 records out
4294967296 bytes (4.3 GB, 4.0 GiB) copied, 12.0984 s, 355 MB/s

sudo dd if=/mnt/ssd/test_file_4gb_ssd.bin of=/dev/null bs=1M status=progress
4096+0 records in
4096+0 records out
4294967296 bytes (4.3 GB, 4.0 GiB) copied, 0.908066 s, 4.7 GB/s

sudo dd if=/dev/urandom of=/mnt/hdd/hdd_write_test.bin bs=1M count=4096 status=progress
4224712704 bytes (4.2 GB, 3.9 GiB) copied, 12 s, 352 MB/s
4096+0 records in
4096+0 records out
4294967296 bytes (4.3 GB, 4.0 GiB) copied, 12.1991 s, 352 MB/s

sudo dd if=/mnt/hdd/test_file_4gb_hdd.bin of=/dev/null bs=1M status=progress
4096+0 records in
4096+0 records out
4294967296 bytes (4.3 GB, 4.0 GiB) copied, 0.912368 s, 4.7 GB/s

sudo dd if=/mnt/ssd/test_file_4gb_ssd.bin of=/mnt/hdd/copied_ssd_to_hdd.bin bs=1M status=progress
4149215232 bytes (4.1 GB, 3.9 GiB) copied, 7 s, 593 MB/s
4096+0 records in
4096+0 records out
4294967296 bytes (4.3 GB, 4.0 GiB) copied, 7.584 s, 566 MB/s

sudo dd if=/mnt/hdd/test_file_4gb_hdd.bin of=/mnt/ssd/copied_hdd_to_ssd.bin bs=1M status=progress
3622830080 bytes (3.6 GB, 3.4 GiB) copied, 2 s, 1.8 GB/s
4096+0 records in
4096+0 records out
4294967296 bytes (4.3 GB, 4.0 GiB) copied, 2.74129 s, 1.6 GB/s

4. Client PC SMB Configuration:

  • SMB Signing: Checked via Get-ItemProperty in PowerShell.
    • Result: EnableSecuritySignature: 1 (client signing enabled), RequireSecuritySignature: 0 (client does not require signing).
    • Interpretation: Client is configured to use signing, which should work with TrueNAS’s default/auto settings.

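As a server-side counterpart to that client check, the effective Samba signing settings on TrueNAS can be dumped from a shell, roughly like this (a sketch; testparm -v includes default values, so this simply filters for the signing parameters):

testparm -sv 2>/dev/null | grep -i signing    # look for the "server signing" / "client signing" lines
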
I didn’t have such issues with SMB on my RPi 4 with the same Win 11 machines; it more or less saturated the 1 Gbps link.

I’ve mounted the SMB share on the RPi and I get about 100 MB/s:
4294967296 bytes (4.3 GB, 4.0 GiB) copied, 39.7005 s, 108 MB/s

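For anyone wanting to reproduce that Pi-side test, it was essentially a CIFS mount plus dd; a rough sketch (server IP, share name, mount point and username are placeholders, and the exact dd direction/options may have differed):

sudo apt install cifs-utils     # provides mount.cifs on Debian / Raspberry Pi OS
sudo mkdir -p /mnt/truenas
sudo mount -t cifs //192.168.1.10/ssd /mnt/truenas -o username=myuser,vers=3.0
sudo dd if=/mnt/truenas/test_file_4gb_ssd.bin of=/dev/null bs=1M status=progress    # sequential read over SMB
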
I’m out of ideas and I’d be very grateful if someone could help :slight_smile: It does look like Win 11 and One UI 7 don’t work well with that particular share.

EDIT and SOLUTION:

Setting the datasets to sync=disabled and back to STANDARD fixed the issue

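From a shell, that toggle is just the ZFS sync property (a sketch; ssd is an example pool/dataset name, and the same change can be made per dataset in the GUI):

sudo zfs set sync=disabled ssd     # treat all writes as async (testing only)
# ...re-test the SMB transfer...
sudo zfs set sync=standard ssd     # restore the default behaviour
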
Those auxiliary parameters are nonsense. AI suggestion? This doesn’t look like a bug, but you really haven’t provided enough info about how you’re actually testing or what you’re testing other than “moving data around”.

true, I edited the post, thanks :slight_smile:

Yes, Gemini suggested that, and I’ve seen other posts suggest it too, but of course there’s no auxiliary parameters option in the GUI on TrueNAS SCALE, so I added them to a file as Gemini suggested, using the command below in CLI mode:

sharing smb update 2

That’s just Gemini ingesting voodoo that gets reposted on the internet.

AI is the ultimate example of garbage in, garbage out: it has literally zero intelligence to discriminate between garbage and genuine facts, and a large number of garbage inputs carries greater weight than a few real facts from reputable sites or reputable experts. (For example, HoneyBadger’s advice should carry way more weight than mine.)

It seems like you have established that it (probably) isn’t a disk bottleneck, so it’s most likely a network issue. You need to run iperf to establish your actual network speed (as opposed to what you think it should be).

Finally, a few comments as an aside:

  1. Congrats on using /dev/urandom instead of /dev/zero. Writing all zeros engages a ZFS optimisation which would skew the results.

  2. However, your read tests appear to be reading from ARC, so no congrats there. You need to set ARC to cache only metadata if you want to test disk read speeds (see the sketch after this list).

  3. Your speed graph does NOT show the normal network speed to start with followed by a reduction to disk speeds once the ARC write memory allocation is fully used. So this is an immediate pointer to a network issue - and you need to start with iperf to either identify or eliminate the lower level network layers as the issue before looking at SMB.

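A rough way to re-run a read test without ARC serving the data (a sketch; ssd is an example dataset name, and if the test file is already cached it’s worth writing a fresh file after changing the property):

sudo zfs set primarycache=metadata ssd     # ARC keeps only metadata for this dataset
sudo dd if=/mnt/ssd/test_file_4gb_ssd.bin of=/dev/null bs=1M status=progress
sudo zfs set primarycache=all ssd          # restore normal caching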

One thing you need to check is that sync=standard is set for the dataset you are writing to. If sync=always is set, then poor performance is not surprising.

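A quick way to check that from a shell (dataset names here are just examples):

sudo zfs get -r sync ssd hdd     # shows the sync value and its source for the pools and child datasets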

I actually found Gemini quite useful, and it did suggest an iperf3 test quite early on. In the end I just asked it to give me a summary of what I’d tried so far.

The network test did perform as expected for a 1 Gbps link:
"
Raw Network Throughput (iPerf3 test): Performed client-to-TrueNAS iPerf3 test.
Result: 858 Mbits/sec (sender), 941 Mbits/sec (receiver). (Confirms your entire network path, from client to TrueNAS, is operating at full Gigabit speeds, ruling out network cables, client NIC, and switch as the bottleneck)."

I’ll have a look at the sync option.

Thank you for the suggestion :handshake:

EDIT:

@Protopia No joy; sync is set to standard on both datasets. Thank you again for the suggestion!

Purely as a test, try setting SYNC to Disabled, re-run the test, and then set SYNC back to Standard. Something on the client side may be asking for a SYNC write. This setting will treat all writes as ASYNC, which is not what you want for normal operations.

I have recently been testing SYNC / SYNC with SLOG / ASYNC all via SMB. I’ll post results when complete.

Yep, that did it. I got 113 MB/s read and write to the SSD dataset :sunglasses:

Thank you! I think I’ll leave it disabled.
The chance of the box losing power is minimal, and I won’t be writing data every single day; mostly reading.

I wouldn’t recommend that.

For testing and troubleshooting purposes, that’s fine. For data integrity? I would leave it at sync=standard.

With SMB shares by default, metadata writes are done synchronously, even though data is written asynchronously. Setting the dataset to sync=disabled forces (“tricks”) metadata to be written asynchronously, which could cause issues with file integrity.

Actually, I’ve put it back to standard and rebooted TrueNAS just in case, and it’s now 110 MB/s read and write… I think there may be a bug somewhere, triggered by who knows what.

All is good now. Thank you, everyone! I’m really grateful :handshake:

I have exactly the same issue. Shares are limited to about 25 MiB/s. Switching SYNC off and on did not help.

You need to post details of your system setup: hardware, pool layout, OS version, how you are testing, and what you have tried. We can only go off what you post.
