However, when I run a transfer test using SSH from that same Ubuntu workstation, the upload speed peaks at around 800 Mbit/s for a couple of seconds and then drops to around 24 Mbit/s.
File transfers using Nemo and the SMB share average/peak at less than 240 Mbit/s.
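For reference, the SSH test is essentially just streaming data through ssh and reading the throughput off dd; a minimal sketch of that kind of test (user@nas is a placeholder):

```
# Push 1 GiB of zeros over ssh; dd prints the average throughput at the end.
# user@nas stands in for the actual account and host.
dd if=/dev/zero bs=1M count=1024 | ssh user@nas 'cat > /dev/null'
```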
Please help me achieve, with Nemo / a file manager and SMB shares, transfer speeds like the ones I get with iperf3.
Thanks.
However, like I stated, file transfers from my Windows machine do achieve the expected performance, so for now I have ruled out a configuration issue on the disk/vdev side.
I don’t know what “properly mounting” means, or why mounting via fstab or a systemd .mount unit would make a difference.
There’s absolutely no difference between accessing an SMB share via GVFS or KIO in Linux and mounting it via fstab, mount, or systemd-automount with the cifs kernel module.
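For concreteness, the kernel-cifs alternative under discussion looks something like this; the share, mountpoint and credentials file are placeholders:

```
# One-off mount with the in-kernel cifs client instead of GVFS/KIO:
sudo mount -t cifs //nas/share /mnt/nas \
    -o credentials=/etc/nas-credentials,vers=3.1.1,uid=1000,gid=1000

# Equivalent /etc/fstab entry:
# //nas/share  /mnt/nas  cifs  credentials=/etc/nas-credentials,vers=3.1.1,uid=1000,gid=1000  0  0
```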
As my problem description does not seem to be clear, I’ll try again:
I achieve the expected transfer performance from a Windows PC via Ethernet.
So I exclude a bad configuration on the NAS from my debugging efforts.
I run the iperf3 test from my Ubuntu laptop (via Wi-Fi). This shows me that the expected transfer rates are possible over Wi-Fi, which allows me to exclude bad Wi-Fi performance from the equation.
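(For the record, the test is the standard client/server pair; “nas” is a placeholder hostname:)

```
# On the NAS side, run the server:
iperf3 -s

# On the Ubuntu laptop, measure throughput towards the NAS for 30 seconds;
# add -R to measure the download direction as well.
iperf3 -c nas -t 30
```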
The SSH transfer test is strange: for a couple of seconds I get the expected throughput, and for the rest of the transfer the throughput is really bad.
I ran this test to see what other transfer types can achieve in terms of performance.
What I am really interested in is the file transfer, or even directory listing, performance from my Ubuntu laptop, which stays way under the iperf3 test results.
So my tests eliminated bad NAS configuration and bad Wi-Fi performance, and my question is:
How can I achieve, on my Ubuntu laptop, the transfer speeds (or something approaching them) that Windows and the raw speed tests show are possible?
SFTP and SSHFS don’t seem to work well with TrueNAS from a performance standpoint, at least when compared to SMB. There can be all kinds of reasons for this, so I won’t try to guess, but if you’re trying to maximize your performance:
Configure your TrueNAS and client devices to use jumbo frames, if supported.
Use SMB over protocol 2/3, and be sure to disable SMB version 1 on your client.
Exclude non-compressible files from compression attempts on your client.
Tune your encryption and signing options, if necessary (see the mount sketch after this list).
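A rough sketch of what the protocol and encryption points can look like on a Linux client; the share, mountpoint and credentials file are placeholders, and whether “seal” helps or hurts throughput depends on your hardware:

```
# Force a modern SMB dialect and make the encryption choice explicit.
sudo mount -t cifs //nas/share /mnt/nas \
    -o credentials=/etc/nas-credentials,vers=3.1.1,seal
# 'seal' turns SMB3 encryption on; drop it if you'd rather trade
# wire encryption for raw throughput.
```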
You simply won’t get the same throughput over SSH as you will over SMB, if SMB is properly tuned. That said, you may be able to set your SFTP or SSHFS options (cipher selection, for example) to optimize for throughput rather than latency. Unless SMB isn’t an option for you, it’s probably not worth the time to try to figure out why SSH is so much less performant, but at least you know you’re not imagining it.
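If you do stay on SSH-based transfers, the knobs look roughly like this; the host and paths are placeholders, and the cipher named here is just one example of a fast AEAD cipher:

```
# Mount over sshfs while steering OpenSSH towards throughput:
# pick one fast cipher and keep compression off for incompressible data.
sshfs user@nas:/mnt/tank /mnt/nas-ssh \
    -o Ciphers=aes128-gcm@openssh.com,Compression=no
```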
I only ran ssh tests for performance comparison.
I use SMB, and that is where performance is low from my Ubuntu laptop.
As to your suggestions:
Configure your TrueNAS and client devices to use jumbo frames, if supported.
I heard about that before and read articles where people lost access to their NAS once they activated jumbo frames, so I am really hesitant to try this. What’s more, I cannot activate jumbo frames on my Wi-Fi connection.
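From what I’ve read, you can at least check whether the path would carry jumbo frames before committing to them; “nas” is a placeholder hostname:

```
# Ask for 9000-byte frames without fragmentation:
# 8972 bytes of ICMP payload + 28 bytes of headers = 9000-byte MTU.
ping -c 3 -M do -s 8972 nas
```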
Use SMB over protocol 2/3, and be sure to disable SMB version 1 on your client.
I think SMB1 is not supported / is deactivated on TrueNAS? In that case the client cannot use it either, no?
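(This is how I would verify which dialect the client actually negotiates; “nas” and “myuser” are placeholders:)

```
# Listing shares while capping the protocol: if the connection only
# worked over SMB1, forcing SMB3 here would fail.
smbclient -L //nas -m SMB3 -U myuser

# For an already-mounted share, the negotiated vers= shows up here:
grep cifs /proc/mounts
```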
Exclude non-compressible files from compression attempts on your client.
I do not understand what you are trying to achieve here. The client, i.e. my Ubuntu laptop, does not compress files that it transfers via SMB to my NAS, does it?
Tune your encryption and signing options, if necessary.
I have no idea where and how I could/should do that.
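(After some searching, it seems these are client-side knobs rather than anything TrueNAS-specific; a sketch of where they live, with placeholder names:)

```
# Per mount: sec= selects the auth flavour (the 'i' variant, sec=ntlmsspi,
# additionally requests packet signing); 'seal' requests SMB3 encryption.
sudo mount -t cifs //nas/share /mnt/nas -o vers=3.1.1,sec=ntlmssp

# Globally, for smbclient and friends, in the client's /etc/samba/smb.conf:
# [global]
#     client min protocol = SMB3
#     client signing = auto
```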