Low TrueNAS read/write performance

Platform: TRUENAS-MINI-3.0-X+
Version: Dragonfish-24.04.2
Please bear with me, I am new to TrueNAS.

I have a newly installed server with a couple of SMB shares and a Gigabit network.

I observe that the read/write performance to an SMB share (via Ethernet) from a Windows workstation is what I’d expect (around 800 Mbit/s).
The iperf3 performance test from an Ubuntu workstation (via Wi-Fi) looks as expected too:
@ubuntu:~$ iperf3 -c myserverIP
Connecting to host myserverIP, port 5201
[  5] local myclientIP port 54916 connected to myserverIP port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec  93.4 MBytes   784 Mbits/sec    0   4.84 MBytes
[  5]   1.00-2.00   sec  78.8 MBytes   661 Mbits/sec   41   4.10 MBytes
[  5]   2.00-3.00   sec  93.8 MBytes   786 Mbits/sec    0   4.38 MBytes
[  5]   3.00-4.00   sec  75.0 MBytes   629 Mbits/sec   48   2.26 MBytes
[  5]   4.00-5.00   sec  70.0 MBytes   587 Mbits/sec   48   3.33 MBytes
[  5]   5.00-6.00   sec   101 MBytes   849 Mbits/sec    0   3.49 MBytes
[  5]   6.00-7.00   sec  97.5 MBytes   818 Mbits/sec    0   3.61 MBytes
[  5]   7.00-8.00   sec   101 MBytes   849 Mbits/sec    0   3.71 MBytes
[  5]   8.00-9.00   sec   106 MBytes   891 Mbits/sec    0   3.78 MBytes
[  5]   9.00-10.00  sec  96.2 MBytes   807 Mbits/sec    0   3.83 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec   913 MBytes   766 Mbits/sec  137             sender
[  5]   0.00-10.02  sec   906 MBytes   759 Mbits/sec                  receiver
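
For completeness: iperf3 measures client-to-server (upload) throughput by default; the download direction can be checked with the reverse flag (same placeholder host as above):

@ubuntu:~$ iperf3 -c myserverIP -R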

However, when I run a transfer test over SSH from that same Ubuntu workstation, the upload speed peaks at around 800 Mbit/s for a couple of seconds, then drops to around 24 Mbit/s.
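
(For anyone who wants to reproduce that test, something like the line below is a reasonable sketch; user and host are placeholders, and writing to /dev/null on the server keeps the NAS disks out of the measurement. dd prints the achieved throughput when it finishes.)

dd if=/dev/zero bs=1M count=1024 | ssh myuser@myserverIP 'cat > /dev/null'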
File transfers using Nemo and the SMB share average/peak at less than 240 Mbit/s.

Please help me obtain the transfer speeds that I get with iperf3 also with Nemo / my file manager and SMB shares.

Thanks.

Seriously? Nothing? :face_with_raised_eyebrow:

Did you activate and access the SMB share via Nemo itself? Or did you first mount it “properly” via fstab, mount, or systemd-automount?
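
For reference, a “proper” mount via the kernel cifs client looks roughly like this (host, share, user, and mount point are placeholders):

sudo mkdir -p /mnt/share
sudo mount -t cifs //myserverIP/share /mnt/share -o username=myuser,vers=3.1.1

or as an /etc/fstab entry:

//myserverIP/share  /mnt/share  cifs  credentials=/etc/smb-credentials,vers=3.1.1  0  0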

The underlying disks and vdev layout(s) also matter greatly. Dataset properties too.
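
Both can be inspected from the TrueNAS shell, e.g. (pool and dataset names are placeholders):

zpool status tank
zfs get compression,recordsize,sync tank/share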

Thanks.
However, as I stated, file transfers from my Windows machine do achieve the expected performance, so I have so far ruled out a configuration issue on the disk/vdev side.
I don’t know what “properly” mounting means or why fstab/mount would make a difference.

Never mind, then.

I have no idea why I would bring up what I did.

There’s absolutely no difference between accessing an SMB share via GVFS or KIO in Linux, as opposed to using fstab, mount, or systemd-automount with the cifs kernel module.

My apologies.

So, you are comparing a network speed test (iperf3) with an SSH file transfer test? Is that right?

The Windows PC uses a wired Ethernet connection, and the Ubuntu PC uses Wi-Fi?

As my problem description does not seem to be clear, I’ll try again:

  1. I achieve expected transfer performance from a Windows PC via Ethernet.
    So I exclude a bad configuration on the NAS from my debugging efforts.
  2. I run the iperf3 test from my Ubuntu laptop (via Wi-Fi). This shows me that the expected transfer rates are possible via Wi-Fi, which allows me to exclude bad Wi-Fi performance from the equation.
  3. The SSH transfer test is strange: for a couple of seconds I get the expected throughput, and for the rest of the time the throughput is really bad.
    I ran this test to see what other transfer types can achieve in terms of performance.
  4. What I am really interested in is the file transfer, or even directory listing, performance from my Ubuntu laptop, which stays way under the iperf3 test results (a rough way to time such a transfer is sketched after this list).
    So my tests eliminated bad NAS configuration and bad Wi-Fi performance, and my question is:
    How can I achieve on my Ubuntu laptop the transfer speeds (or something approaching them) that I see are possible with Windows or simple speed tests?
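
For reference, a rough way to time such a transfer, comparing the Nemo/GVFS path against a kernel cifs mount (host, share, and paths below are placeholders; the GVFS location varies by distro and session):

# through a kernel cifs mount (see the mount example earlier in the thread)
dd if=/dev/zero of=/mnt/share/testfile bs=1M count=1024 conv=fsync

# through the FUSE path that GVFS exposes for Nemo's mount
dd if=/dev/zero of="$XDG_RUNTIME_DIR/gvfs/smb-share:server=myserverIP,share=share/testfile" bs=1M count=1024 conv=fsync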

No ideas? Really? Kinda shocking.

We’re all stumped. I think it’s safe to assume that Ubuntu is not a compatible client to transfer files to and from a TrueNAS server.

Your tests absolutely verified this.

SFTP and SSHFS don’t seem to work well with TrueNAS from a performance standpoint, at least when compared to SMB. There can be all kinds of reasons for this, so I won’t try to guess, but if you’re trying to maximize your performance:

  1. Configure your TrueNAS and client devices to use jumbo frames, if supported.
  2. Use SMB over protocol 2/3, and be sure to disable SMB version 1 on your client.
  3. Exclude non-compressible files from compression attempts on your client.
  4. Tune your encryption and signing options, if necessary.

You simply won’t get the same throughput over SSH as you will over SMB, if SMB is properly tuned. That said, you may be able to set your SFTP or SSHFS options to use ECC or other tunables to optimize for throughput rather than latency. Unless SMB isn’t an option for you, it’s probably not worth the time to try to figure out why SSH is so much less performant, but at least you know you’re not imagining it. :slight_smile:
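
For example, options along these lines can be worth experimenting with (a sketch, not a recipe; cipher availability depends on the OpenSSH builds on both ends, and file, user, and host are placeholders):

# pick a fast AEAD cipher and make sure compression stays off
scp -c aes128-gcm@openssh.com -o Compression=no bigfile myuser@myserverIP:/mnt/tank/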

Thanks @CodeGnome.

I only ran the SSH tests for performance comparison.
I use SMB, and that is where performance is low from my Ubuntu laptop.

As to your suggestions:

  • Configure your TrueNAS and client devices to use jumbo frames, if supported.

I heard about that before and read articles where people lost access to their NAS once they activated jumbo frames, so I am really hesitant to try this. What’s more, I cannot activate jumbo frames on my Wi-Fi connection.
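
(For the record, a client-side MTU change can be tested without touching the NAS; interface name and host below are placeholders, and every switch in the path must support jumbo frames too:

sudo ip link set dev eth0 mtu 9000    # temporary; reverts on reboot
ping -M do -s 8972 myserverIP         # 8972 = 9000 minus 20 (IP) and 8 (ICMP) header bytes

If the ping fails, something in the path cannot pass 9000-byte frames.)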

  • Use SMB over protocol 2/3, and be sure to disable SMB version 1 on your client.

I think SMB1 is not supported / is deactivated on TrueNAS? In that case the client cannot use it either, no?
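
(Which dialect actually gets negotiated can be checked from the client rather than guessed; host and user below are placeholders:

smbclient -m SMB3 -L //myserverIP -U myuser    # lists shares while capping the dialect at SMB3

On a kernel cifs mount, the dialect can be pinned with the vers= option, e.g. vers=3.1.1.)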

  • Exclude non-compressible files from compression attempts on your client.

I do not understand what you are trying to achieve here: the client, i.e. my Ubuntu laptop, does not compress files that it transfers via SMB to my NAS, does it?
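
(If the suggestion was about transfer tools rather than SMB itself, the relevant knobs would be compression flags in tools like rsync or scp, which can hurt throughput for already-compressed data. Hypothetical invocations; file, user, and host are placeholders:

rsync -av --progress bigfile.mkv myuser@myserverIP:/mnt/tank/    # no -z, so no compression
scp -o Compression=no bigfile.mkv myuser@myserverIP:/mnt/tank/   # compression is off by default anyway

Neither goes over SMB, though, so this may well not be what was meant.)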

  • Tune your encryption and signing options, if necessary.

I have no idea where and how I could/should do that.
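
(On the SMB side these are mostly client mount options; a sketch of things to experiment with rather than a recommendation, with host, share, and user as placeholders:

# sec=ntlmssp authenticates without forcing packet signing; sec=ntlmsspi forces signing;
# adding 'seal' additionally requests SMB3 encryption
sudo mount -t cifs //myserverIP/share /mnt/share -o username=myuser,vers=3.1.1,sec=ntlmssp

Whether any of these help depends on what the protocol negotiation mandates, so measure before and after.)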