TrueNAS SCALE slow SMB copy performance?

Hello all!

I’m encountering some issues with file transfer speed.

Setup:
Xeon E-2244G
128GB RAM
M1015 HBA
8x Seagate Exos X16 (16TB CMR drives) in RAID-Z2 (5 data + 2 parity, 1 hot spare)
Intel X520 10GbE network
Dragonfish 24.04.2

Desktop:
R9 5950X
64GB RAM
Intel X520
Samsung 980 Pro NVMe (PCIe 4.0)
Windows 11 24H2

Problem?
Transferring 150GB at 250-300MB/s is not what it used to be…?

Read/write on that SMB share is capped at 300MB/s (which is suspiciously close to SATA 2 speeds).

When did this happen?
I really don’t know, but I presume it happened when upgrading from Bluefin. I would have noticed if it had been any longer.

As my internal tests on my desktop confirm, the NVMe drive is performing well:
[image: NVMe benchmark results]

The HDD speeds are somewhere between 140MB/s and 250MB/s read/write per drive, so I should be able to saturate that 10GbE link.
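Back-of-the-envelope, assuming sequential transfers stripe across the five data disks: 5 × 140-250MB/s ≈ 700-1,250MB/s, and 10GbE tops out at about 1.25GB/s raw (roughly 1.1GB/s after protocol overhead), so saturating the link should at least be within reach on paper.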

Now, when I configured the server back in 2022, I got the SMB share running and saturating the 10GbE connection both ways. (running TrueNAS CORE v?)

This thread came up and it describes exactly what I’m experiencing; however, changing the setting he mentions to something else doesn’t fix the problem or change the transfer speeds at all.

Or perhaps I didn’t go about it correctly:

Here I tried ‘no presets’ and ‘Multi-Protocol Share’; it was originally set to ‘private SMB Datasets and Share’.

All the same speeds.
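(One way to sanity-check this, assuming shell access on the NAS: dump the effective share options with testparm, which ships with Samba.)

testparm -s
# prints the loaded smb.conf including per-share parameters,
# so any preset change should be visible here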

Investigating and playing around further didn’t help…
(deactivated SMART testing, backups, and active connected clients; restarted the server; restarted the SMB service; checked various things on my desktop…)
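To rule out the pool itself, a quick local write test on the NAS is on my list (a sketch; the dataset path is a placeholder, and /dev/zero compresses to almost nothing under the default lz4 compression, so this overstates real throughput unless compression is off for the test dataset):

dd if=/dev/zero of=/mnt/tank/test/file.bin bs=1M count=20480 conv=fdatasync
# writes 20GB and flushes to disk before reporting a rate; reading it
# back with dd to /dev/null is inflated by ARC caching for files
# smaller than RAM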

I did check the connection speed between the desktop and the server:

Same results when reversing.
So perfect.
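(For reference, this is the kind of test I mean, assuming iperf3 on both ends; <nas-ip> is a placeholder:)

iperf3 -s                    # on the NAS
iperf3 -c <nas-ip> -P 4      # on the desktop, 4 parallel streams
iperf3 -c <nas-ip> -P 4 -R   # same test with the NAS sending instead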

Is TrueNAS SCALE having issues with the SMB protocol? I do see this problem pop up many times on the forum.

Thank you all!

Your Samba config needs to be tuned. I can’t really get into it right now (I’ve been in the hospital for 7 days and rehab for 5 with a broken knee, and simply don’t have the energy), but IMO Samba is notoriously slow, especially under Linux.

Generally, if something is consistently beneficial, we put it into the default configuration. Most of what I’ve seen from user tunings is either unsafe parameter changes (such as disabling sync) or hard-coded socket options (which arguably shouldn’t be done at the application level; the kernel should be doing this).
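To illustrate the socket-option case (values here are arbitrary examples, not a recommendation), this is the kind of line people hard-code into smb.conf:

socket options = TCP_NODELAY IPTOS_LOWDELAY SO_RCVBUF=131072 SO_SNDBUF=131072
# pins fixed kernel buffer sizes onto Samba’s sockets; modern kernels
# auto-tune these, so forcing them can hurt as easily as help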

Sad to hear that.
Thank you for taking the time to reply while dealing with something so painful.

So if I remember correctly, Core was written on Debian and Scale is now Linux, and that could account for the loss in performance? (apart from tuning SMB)

Have a speedy recovery!

Meanwhile, I upgraded to ‘ElectricEel-24.10.0.2’; alas, the best case got me 325MB/s, maybe +5% over before, but still way off saturating the 10GbE connection. (The update notes listed many SMB enhancements/optimizations, and I did notice more stable throughput: less of a sine-wave bounce, so there’s that.)

Although everything went fine during the upgrade, I now see these errors appearing in the shell:

Nov 29 20:05:40 nas kernel: sr 10:0:0:0: [sr0] CDROM not ready. Make sure there is a disc in the drive.
Nov 29 20:05:49 nas kernel: EDID block 0 is all zeroes

The EDID one I understand, and it can be ignored. (The iGPU is used for Plex.)
And the virtual CD drive needs to be flushed or deleted. (A remnant of the installer, I presume.)

But neither of these should have anything to do with the slow SMB performance.
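Next step is to watch per-disk activity during a copy to see whether the drives or something upstream is the bottleneck (‘tank’ stands in for the actual pool name):

zpool iostat -v tank 1
# reports per-vdev and per-disk bandwidth every second; if the disks sit
# well below their 140-250MB/s while a copy runs, the limit is elsewhere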

The hunt continues!

Core was based on FreeBSD. Scale is based on Debian Linux.

You are getting what appears to be normal performance for your pool setup.

You may be able to read from your pool and come close to 10Gb/s, but not on writes or mixed read/write, with a 7-wide RAID-Z2 vdev.
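Rough numbers to make the point: a 7-wide Z2 has 5 data disks, so the theoretical sequential ceiling is around 5 × 140-250MB/s ≈ 700-1,250MB/s, against the roughly 1.1GB/s sustained that 10GbE needs. But a single RAID-Z2 vdev delivers roughly the IOPS of one disk, and writes also pay for parity and allocation overhead, so sustained write speeds in the 300MB/s range from one vdev are not unusual.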