I have read numerous posts on the old forum, here, and elsewhere on the internet, and have concluded that slow network speeds can be caused by Realtek NICs, bad cables, full storage, or a misconfiguration. To rule those out before asking for help, I can confirm that:
I have Cat 6 cables between my PC and my TrueNAS SCALE (24.04.2) server (8 x 3 TB Red CMR drives in RAIDZ2).
Both are connected via Intel i226 NICs, with a TP-Link SG105-M2 5-port 2.5GbE desktop switch in between.
I am getting up to 70 Mb/s transfer speed from the source files, which are on a 3.5" HDD (SMR) connected via USB 3, yet I get around 170 MB/s from this hub to my PC (note the units).
What can I review to rule out a config issue? I don't think it is a hardware issue.
This was more of an issue on CORE, not so much on SCALE (not to say that Realtek NICs are brilliant, but they have come a long way).
What is “this hub”? The USB 3.0-connected SMR disk?
What I don’t actually understand here is what the issue is. When you say “source files” do you mean you’ve got this SMR disk hooked up to a client machine, and you’re then transferring files from that external disk (over what I assume is SMB) to the TrueNAS server?
Have you tested this from another source medium? Have you checked your network performance across the link using iperf3?
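If you haven't used iperf3 before, a quick way to test (the 192.168.1.10 address below is just a placeholder for your server's IP): run it in server mode from a shell session on the TrueNAS box, then point the Windows iperf3 client at it.
iperf3 -s
iperf3 -c 192.168.1.10 -t 30
iperf3 -c 192.168.1.10 -t 30 -R
The first command runs on the server, the second on the Windows client, and -R tests the reverse direction (server sending to client). On a 2.5GbE link you'd hope to see somewhere around 2.3 Gbit/s.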
Yes, your assumptions are correct. However, when moving files from the external HDD (SMR) on the USB 3.0 hub to the client Windows machine, I get over 170 MB/s. This drops significantly, to around 8 MB/s, when transferring to the TrueNAS server across the 2.5GbE network. So the USB 3.0 hub can provide much higher throughput, but the transfer across the network is massively reduced. I haven't done further testing yet, as I am waiting for the files to finish transferring.
So it's not the network, as these are the results I get from the iperf3 tests. Not sure why I get 170 MB/s to the local machine but only 8 MB/s across the network, though… strange.
I'm not sure how to answer your first question regarding the dataset configuration. I have one pool, with the dataset details as follows:
I am using the robocopy command from within Windows, copying a folder on the USB 3-connected HDD to the TrueNAS dataset over the 2.5GbE network. The command I'm using is:
robocopy i:\Ben \\TRUENAS\Ben /z /s
I haven't tried the last idea. My confusion is that USB -> Windows SSD = 170 MB/s, whereas USB -> TrueNAS (over 2.5GbE LAN) = 7 MB/s, and I'm wondering what about my setup would cause the network transfer to be so slow.
OK, I have worked it out. Following your guidance, it logically had to be the robocopy command itself rather than the hardware or TrueNAS. After two minutes of research I found that removing the restartable-mode switch (/z) increased the throughput from 58 Mb/s to nearly 2,000 Mb/s (2 Gb/s). Amazing improvement.
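For anyone finding this later, that just means dropping /z from the original command:
robocopy i:\Ben \\TRUENAS\Ben /s
(Robocopy also has a /mt switch for multithreaded copying, e.g. /mt:16, which might squeeze out a bit more, but I haven't tested that here.)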
I thought this might be important for future readers.
Thanks again for your help, much appreciated; I probably wouldn't have worked that out myself.
For the record (all puns intended), you can eke out a bit more performance by setting the recordsize to 1 MiB, since your dataset is being used as a backup target and I doubt you're doing many in-place modifications to the files on the server itself.
Your results still show that the external USB drive is shaving off some speed, regardless. (As is expected for USB drives.)
Is there a command to check the actual recordsize value on the TrueNAS? It's actually my file server, with a single VM at the moment, but in future I will be using it for more apps once 24.10 has been released and shown to be stable. The external HDDs are my backup (cold storage).
Do you mean that TrueNAS is virtualized on a server, and that this server is only running a single VM instance of TrueNAS SCALE?
Or that TrueNAS SCALE is the server itself, and you have a VM (of something) running on it? (If so, are you implying that the target destination of your backups is the VM itself? I doubt this, since you mentioned SMB, but I'm asking because you brought up that you have a “VM”.)
The “recordsize policy” is set (or inherited) on a per-dataset level. The default is 128 KiB.
You can check / set it in the GUI from a dataset’s configuration.
Keep in mind that setting a new recordsize policy will not affect existing data blocks.
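If you'd rather check from the command line, a shell session on the server will show it directly (substitute your own pool/dataset name for tank/Ben, which is just an example here):
zfs get recordsize tank/Ben
zfs get -r recordsize tank
zfs set recordsize=1M tank/Ben
The first shows the policy for a single dataset, the second for every dataset in the pool, and the third changes it; as noted above, only blocks written after the change use the new recordsize.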