Hello everyone, I just finished building my TrueNAS SCALE server, and for some reason the SMB transfer speed is super low, only around 20mb/s. The system has 4 WD Red Pro 18TB drives in a RAIDZ1 config. It is connected with a 1Gb and a 2.5Gb Ethernet port. While I am transferring a file I can see in the network monitoring section that about 300mb/s of data is coming in, yet the copy speed itself is just super low.
We need better hardware details and your network setup. The brand and model of your NIC really helps, along with connection speeds. Make sure you keep MB and Mb straight when posting. Hard drive models may make a difference too; we are looking for all drives to be CMR and not SMR tech. Details on how you are testing between devices also help.
The system uses an N5105 board (4-port i226/i225 2.5GbE LAN, M.2 NVMe, 6x SATA 3.0, 2x DDR4, 1x PCIe 4.0) with 64GB of DDR4 RAM. The dd test and zpool status were run from the TrueNAS web portal shell. The iperf test was conducted between my Linux machine and the TrueNAS server; they are connected through an ASUS wired mesh system, and I am getting almost no drop-off in internet speed between nodes. The transfer speed over SMB is around 26 MiB/s. In the interface monitoring tab I am seeing data received at 250MB/s. All the drives are WD Red Pro, which are CMR drives.
The Intel i225 has been problematic; the Intel i226 has been okay and is the recommendation on the forums. Your iperf test is slow and not near a 1Gbps connection speed - I would expect 800-900 Mbps. Looks like a network issue.
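If you want to rule out a single-stream limitation, it is worth re-running iperf3 in both directions with parallel streams - something like the following, assuming iperf3 is installed on both ends (swap in your actual NAS IP):

```
# On the TrueNAS box (the Shell in the web UI works):
iperf3 -s

# On the Linux client, replacing 192.168.1.10 with your NAS IP:
iperf3 -c 192.168.1.10 -P 4        # 4 parallel streams, client -> server
iperf3 -c 192.168.1.10 -P 4 -R     # reverse direction, server -> client
```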
Here are my iperf3 results, Windows 10 to TrueNAS. I have 1Gbps and 10Gbps networks. I did have to run the 10G test twice, as the first run was really low and reported only about 1.3Gbps.
iPerf suggests to me that your network is OK - though on the 1Gb I am a bit worried about the retry counts.
However, your method for attempting to benchmark the disk performance is completely flawed. ZFS is a complex file system with lots of performance enhancements, and if you want to benchmark it then you absolutely need to understand both A) how ZFS works and will process your benchmark and B) exactly what your benchmarking tool is writing and how it will interact with ZFS. In the dd case you are using, here are all the factors that will make your results meaningless…
Are you doing synchronous or asynchronous writes with dd? The impact of this simple choice is MASSIVE.
Are you doing compression on your dataset? What will compression do to your writes? Will it even write ANYTHING to disk at all?
How much memory do you have, and how much of it is going to be used for queuing data for writes? (This is not, as you state, a WRITE CACHE, though it has some of the same characteristics.) How does this compare to the size of the data you are writing?
Only after all these have had their impact will you actually write to disk.
Even then, you need to be VERY careful about what stats you use as the measurement.
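As one example, rather than trusting the number dd prints, you can watch what the pool itself is actually doing in a second shell while the test runs - `tank` here is just a placeholder for your pool name:

```
# Live per-vdev read/write throughput, refreshed every second
zpool iostat -v tank 1
```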
In your specific case, where you are writing literal zeros, the data is EXTREMELY compressible with the default dataset compression - to the point that in some cases literally NOTHING is actually written to disk, because ZFS may create a sparse file in which completely empty blocks are simply noted as empty, never written, and use no disk space.
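You can see this for yourself. Something along these lines (the path is a placeholder for a dataset mounted at /mnt/tank/test) will produce a huge apparent file that occupies almost no space:

```
# Write 10GiB of zeros - dd will report an impressive "throughput"
dd if=/dev/zero of=/mnt/tank/test/zeros.bin bs=1M count=10240

ls -lh /mnt/tank/test/zeros.bin   # apparent size: 10G
du -h  /mnt/tank/test/zeros.bin   # space actually used: next to nothing with default compression
```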
But even if data is actually being written to disk, the dd-reported throughput is meaningless, either as a measurement of actual disk writes or as a realistic measure of what you will get in practice with genuine (random-like) data.
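If you want a dd number that at least means something, the rough idea is: write incompressible data, with compression off, to a throwaway dataset, and force the writes to reach the disks. A sketch, with example names only:

```
# Create a throwaway dataset with compression off and synchronous writes forced
zfs create -o compression=off -o sync=always tank/ddtest

# Write 10GiB of random data. Note that /dev/urandom itself can be the
# bottleneck, so treat the result as a floor rather than a precise figure.
dd if=/dev/urandom of=/mnt/tank/ddtest/random.bin bs=1M count=10240 oflag=sync

# Clean up afterwards
zfs destroy tank/ddtest
```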
Are you commenting on the iperf test for the OP? I just posted the two screenshots so the OP could get an idea of how the two different network speeds test compared to the original post.
I did run iperf; here is the result:
```
[SUM] 0.00-10.00 sec 676 MBytes 567 Mbits/sec sender
[SUM] 0.00-10.02 sec 666 MBytes 558 Mbits/sec receiver
```
It is slower than the 1G rate I am supposed to get, but I suppose it does not really explain why my SMB is so slow - I am only getting 26 MiB/s. I tried different types of files; the speed is even slower with smaller files.
It is a Linux machine connected to an ASUS router (node); that router is connected through a MoCA adapter back to the main ASUS router, and the NAS server is connected to the main ASUS router. The iperf result is slower than 1G for sure, but that still does not explain why the transfer speed is only 26 MiB/s. Thank you in advance.
Try working through Joe's Rules to Asking for Help. There are sections on Drive Speed Testing and Network Issues. You can run those and post all the text from the shell window using Preformatted Text (</>), right above where you type text replies. We also need to know which pool and disks you were testing. The other odd item is your mention of MoCA networking - if only Linux and TrueNAS are active on that segment we may be okay, but otherwise everything is sharing that coax cable.
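As a starting point, the output of something like the following from the TrueNAS shell (pasted as preformatted text) would give us the pool layout and drive models - the device name is just an example, repeat per drive:

```
zpool status -v
lsblk -o NAME,MODEL,SIZE,ROTA
smartctl -a /dev/sda | head -n 20
```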
This is not sufficient detail. I will assume that this is all hardwired Ethernet - if not, please say.
- What speed is the ASUS node router?
- What speed is the ASUS main router?
- What link negotiation speed is the TrueNAS NIC reporting?
- What link negotiation speed is the Linux NIC reporting?
- Do the ASUS switches have a UI and, if so, what link speeds are they reporting?
- Do either TrueNAS or Linux report any significant number of network retries in their NIC stats?
- Is there any other significant traffic on your LAN at the same time (e.g. torrents, downloads, uploads, media streaming of any type)?
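On the Linux side, and in the TrueNAS shell, something like this will answer the link-speed and error-counter questions - replace eno1 with your actual interface name:

```
# Negotiated link speed and duplex
ethtool eno1 | grep -E 'Speed|Duplex'

# RX/TX error and drop counters on the interface
ip -s link show eno1
```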
With regard to tests, I agree that 26MB/s (which is c. 260Mb/s) is significantly less than 1Gb, and significantly less than the iperf numbers, but diagnosing and solving this can only be done by reviewing each link in the performance chain. And networking is much easier to look at than disk I/O, for the reasons I have previously explained.