How to get better performance out of TrueNAS 25.04.2.4

I am getting at best 260 MB/s download and 193 MB/s upload. Is there any way I can configure TrueNAS to perform faster? I am on a 10Gb network with everything connected via a switch.

I have tried MTU 9000, but that did not make much of a difference, other than making my WiFi laggy.
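If you ever revisit jumbo frames, it is worth confirming that an MTU of 9000 actually survives end to end, since every device in the path has to support it. A quick check from a Linux client might look like this (the IP is just a placeholder for the TrueNAS address):

# 8972 = 9000 minus 20 bytes of IP header and 8 bytes of ICMP header;
# -M do forbids fragmentation, so the ping fails if any hop cannot pass a full jumbo frame.
ping -M do -s 8972 -c 4 192.168.1.141

# Rough Windows equivalent:
#   ping -f -l 8972 192.168.1.141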

TrueNAS - 25.04.2.4

CPU - AMD Ryzen 3 3200G

RAM - 32GB 2400MHz (2x16GB)

MB - Gigabyte A520I AC

GFX - Vega

5x Seagate IronWolf 10TB NAS HDDs in RAIDZ2

Network - Intel X540-T1 RJ45

Power supply - Be quiet 300W

Those are very respectable numbers.

I would try to find out where your bottleneck is: maybe going to SSD/NVMe drives, maybe it is the wiring, …

It is unlikely that it is your software. But if you really care, maybe go native and run nothing but a bare FreeBSD distro + Samba: it cannot get much faster than that.

That’s probably the limit of your HDD pool. If you want more speed you’d need more vdevs.

How are you testing (sharing protocol / local, type/size/quantity of files)?
Depending on your testing methodology your results can vary pretty wildly, though what you have currently is certainly nothing to scoff at.
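One way to take the network and SMB out of the picture is to test the pool locally on the server first. A rough sketch on the TrueNAS shell (the dataset path is just an example; disable compression on the test dataset, otherwise the /dev/zero data compresses away and the numbers are meaningless):

# Write a file larger than RAM so the ARC cannot cache all of it, then read it back.
dd if=/dev/zero of=/mnt/tank/test/bigfile bs=1M count=65536 status=progress   # ~64 GiB sequential write
dd if=/mnt/tank/test/bigfile of=/dev/null bs=1M status=progress               # sequential read

If the local numbers are far above what you see over SMB, the bottleneck is the network or the protocol rather than the disks.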

Is the client you’re testing from on wifi? (Your comment re. wifi being laggy with jumbos enabled would suggest so).

If so, those figures are very good. I’d be happy with that.

Just to be clear: 10Gb/s throughput is pretty hard to achieve with consumer equipment. The bottlenecks are all inside your server hardware: the disks, RAM, PCIe bus and network card.

The testing machine is connected via Ethernet, and I am reading the throughput from HWiNFO64 during a Macrium Reflect complete backup of ~120GB. The WiFi issue was to do with another machine on the network buffering frequently when the TrueNAS MTU was changed from the default.

Consensus seems to be that my figures are okay, so I guess I will accept that.

Thank you for all your replies.

Of course it depends on the type of files you are transferring. I don't know how your backup program does it.

But for large, sequential reads I would expect higher numbers.

Do tests with iperf.
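For reference, a minimal iperf3 run looks like this (the IP is a placeholder for the TrueNAS address):

# On the TrueNAS box:
iperf3 -s

# On the client, test both directions:
iperf3 -c 192.168.1.141        # client -> server
iperf3 -c 192.168.1.141 -R     # reverse: server -> client
iperf3 -c 192.168.1.141 -P 4   # four parallel streams, in case a single stream cannot fill 10GbE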

My 3-wide RAIDZ1 with 8TB drives reaches 550 MB/s for large sequential reads.
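If you want to see what the pool itself is sustaining while a transfer runs, zpool iostat gives a live view (the pool name is just an example):

# Per-vdev read/write throughput, refreshed every second:
zpool iostat -v tank 1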


My knowledge of Macrium Reflect is zero, but judging from their website I assume it uploads images via SMB.
If you're on Windows you may be interested in this:

I found myself reading his entire writeup a couple of years ago. It was very informative, as I'm not super familiar with Windows internals myself.
I was hoping I could find an old comment of mine; no luck, but I'm pretty sure there's a before/after floating around here somewhere where I applied these changes and my SMB performance improved significantly.

(Note: I'm unsure if this is still relevant, as I have been off Windows for a year or so now, but it's worth a look seeing as Microsoft does seem to break something with every update.)


That was interesting; shame the script no longer works on Windows 11 24H2 and onwards. I checked my device configuration in Windows and EEE was already disabled, and apparently TrueNAS does not enable it by default.
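For anyone who wants to double-check EEE on the TrueNAS side, ethtool can report it; the interface name below is just an example, and not every NIC or driver supports the query:

# Show Energy-Efficient Ethernet status for the 10GbE interface:
ethtool --show-eee enp1s0

# If it turns out to be enabled and you want to try disabling it:
ethtool --set-eee enp1s0 eee off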

I was curious whether it was still working on later versions.

Before:

After:

This is from a completely fresh Windows 11 25H2 VM. Looks like there is still a pretty significant uplift in sequential performance from the base configuration.

I had a similar problem. Did you try running iperf3 between the devices first to see if you get 9.6 Gb/s or so? You also have to have your X540 cards plugged into PCIe x8 or x16 slots to get max bandwidth. Hope it helps some!

I have not run iperf3 yet.

I think the network card is a PCIe 2.0 x8 plugged into a PCIe 4.0 x16 slot.
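You can confirm the negotiated link width and speed from the TrueNAS shell; the device address below is an example, so find the card's address with the first command:

# Locate the X540 and compare its maximum (LnkCap) against the negotiated (LnkSta) PCIe link:
lspci | grep -i ethernet
sudo lspci -vv -s 01:00.0 | grep -E 'LnkCap|LnkSta'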

iperf3 will verify whether your network can reach the stated speed. If it's OK, then the issue is local.

Interesting development.

Using Tailscale, the iperf3 performance I get is:

@Ubuntu-PC1:~$ iperf3 -c 100.***.**.***
Connecting to host 100.***.**.*** port 5201
[ 5] local 100.***.**.*** port 58050 connected to 100.***.**.*** port 5201
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 476 MBytes 3.99 Gbits/sec 0 4.19 MBytes
[ 5] 1.00-2.00 sec 547 MBytes 4.59 Gbits/sec 0 4.19 MBytes
[ 5] 2.00-3.00 sec 527 MBytes 4.42 Gbits/sec 0 4.19 MBytes
[ 5] 3.00-4.00 sec 519 MBytes 4.36 Gbits/sec 0 4.19 MBytes
[ 5] 4.00-5.00 sec 522 MBytes 4.38 Gbits/sec 0 4.19 MBytes
[ 5] 5.00-6.00 sec 518 MBytes 4.34 Gbits/sec 0 4.19 MBytes
[ 5] 6.00-7.00 sec 527 MBytes 4.42 Gbits/sec 0 4.19 MBytes
[ 5] 7.00-8.00 sec 504 MBytes 4.23 Gbits/sec 0 4.19 MBytes
[ 5] 8.00-9.00 sec 518 MBytes 4.34 Gbits/sec 0 4.19 MBytes
[ 5] 9.00-10.00 sec 520 MBytes 4.36 Gbits/sec 0 4.19 MBytes


[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 5.06 GBytes 4.34 Gbits/sec 0 sender
[ 5] 0.00-10.01 sec 5.06 GBytes 4.34 Gbits/sec receiver

iperf Done.

Direct to TrueNAS

@Ubuntu-PC1:~$ iperf3 -c 192.168.1.141
Connecting to host 192.168.1.141, port 5201
[ 5] local 192.168.1.240 port 37664 connected to 192.168.1.141 port 5201
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 1.08 GBytes 9.24 Gbits/sec 23 2.15 MBytes
[ 5] 1.00-2.00 sec 1.07 GBytes 9.21 Gbits/sec 5 2.15 MBytes
[ 5] 2.00-3.00 sec 1.07 GBytes 9.20 Gbits/sec 0 2.15 MBytes
[ 5] 3.00-4.00 sec 1.07 GBytes 9.20 Gbits/sec 0 2.15 MBytes
[ 5] 4.00-5.00 sec 1.07 GBytes 9.19 Gbits/sec 11 2.16 MBytes
[ 5] 5.00-6.00 sec 1.07 GBytes 9.20 Gbits/sec 0 2.17 MBytes
[ 5] 6.00-7.00 sec 1.07 GBytes 9.22 Gbits/sec 10 2.17 MBytes
[ 5] 7.00-8.00 sec 1.07 GBytes 9.22 Gbits/sec 0 2.18 MBytes
[ 5] 8.00-9.00 sec 1.07 GBytes 9.21 Gbits/sec 0 2.18 MBytes
[ 5] 9.00-10.00 sec 1.07 GBytes 9.18 Gbits/sec 0 2.20 MBytes


[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 10.7 GBytes 9.21 Gbits/sec 49 sender
[ 5] 0.00-10.00 sec 10.7 GBytes 9.21 Gbits/sec receiver

iperf Done.

I should still be getting more than 280 MB/s even with Tailscale. I don't know why Tailscale is only getting half the available bandwidth. TrueNAS is the exit node for my phone's internet connection. Is this causing TrueNAS to send data to itself?

If your network is connected locally via a switch, why are you using Tailscale? A VPN will certainly slow it down. FWIW, I rarely get full 10Gb speeds reading and writing from my TrueNAS box, due to the slower hard drives and overhead.

I just use Tailscale for everything, which I now realise I shouldn't. I have disconnected the old share and mapped the drive directly to TrueNAS, and the difference is considerable.

I would be interested to know why Tailscale performs so badly on a local network. I thought the software would have done better, as my server is nowhere near CPU-bound; in fact I never see server CPU usage go above 50% under load.
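It might be worth checking whether the two machines were actually talking to each other directly over Tailscale or being bounced through a DERP relay; even a direct WireGuard tunnel adds encryption and MTU overhead, but a relayed path is far slower. Something like this (the peer name is just an example):

# Shows each peer and whether the connection is direct or via a relay:
tailscale status

# Probe a single peer; the output reports direct vs. relayed and the latency:
tailscale ping truenas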


That looks very reasonable! I tend to use the TrueNAS dashboard Ethernet chart to see what kind of real-time bandwidth I'm getting.