On my Linux server I mount a directory over NFS from TrueNAS SCALE.
When I use dd to write a 1G file to this directory on the Linux server, it reports a speed of around 650MB/s, but the network reporting shows a peak above 9000Mb/s, which is about 1.1GB/s (I'm using a 10G network and an SSD pool).
Can anyone tell me why the network traffic is about twice what the dd command shows? And can I make the write speed faster? This is an NVMe pool, and running dd in the TrueNAS shell easily reaches above 2GB/s.
Is the NFS mount configured to use TCP? The dd figure does not include the network overhead or any other traffic on the link. I think I read you can switch to UDP, but I'm not sure how that affects data integrity.
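For reference, this is roughly how you'd check which transport an existing NFS mount negotiated (host and paths below are placeholders, not from the original post):

```shell
# Show negotiated options for all NFS mounts --
# look for proto=tcp or proto=udp in the output
nfsstat -m

# Or grep the kernel's mount table directly
grep nfs /proc/mounts

# Forcing UDP at mount time (placeholder host/paths). Note that
# NFSv4 is TCP-only, and UDP risks silent corruption on packet loss.
mount -t nfs -o proto=udp,vers=3 truenas:/mnt/pool/share /mnt/share
```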
Perhaps jumbo frames (MTU=9000) would help, but that requires everything in the path to support them.
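A sketch of checking and testing jumbo frames, assuming an interface name of `eth0` and a reachable TrueNAS host (both placeholders):

```shell
# Check the current MTU on the interface
ip link show eth0

# Temporarily raise it to 9000 -- every NIC and switch in the
# path must support it too, or you'll see stalls and drops
ip link set eth0 mtu 9000

# Verify end-to-end with a "don't fragment" ping:
# 8972 = 9000 minus 20-byte IP header and 8-byte ICMP header
ping -M do -s 8972 truenas-host
```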
Make sure you are not doing synchronous writes on TrueNAS. I have a feeling the NFS default is synchronous writes, but these are only needed for specific types of data access and NOT for sequential file writes. You do want a synchronous write for the fsync at the end of each file, though.
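You can inspect the sync behaviour on the TrueNAS side like this (dataset name is a placeholder):

```shell
# Check the dataset's current sync setting
zfs get sync pool/dataset

# "standard" (the default) honours only explicit fsync()/O_SYNC
# requests; async bulk writes are batched in RAM first
zfs set sync=standard pool/dataset

# Caution: sync=disabled makes *all* writes async, including fsync.
# It's fast, but in-flight data can be lost on power failure.
```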
Test how fast dd can read from your local disk, because that could be the limiting factor.
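Something like this would do as a quick local read test (paths are examples, not the OP's):

```shell
# Create a 1G file of incompressible data, then time reading it back.
# iflag=direct bypasses the page cache so you measure the disk, not
# RAM (drop it if the filesystem doesn't support O_DIRECT).
dd if=/dev/urandom of=/tmp/readtest bs=1M count=1024
dd if=/tmp/readtest of=/dev/null bs=1M iflag=direct
rm /tmp/readtest
```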
ZFS magic…
In which direction are these tests? If reading, it could be compression.
Which dd command?
/dev/zero compresses extremely well, while /dev/random is limited by CPU speed. And ultimately, write speed will be limited by the drive itself once its SLC cache is full.
This is exactly why I caution people who don't understand performance testing, or ZFS in detail, against running ad-hoc tests and drawing incorrect conclusions.
ZFS does NOT actually do any writes when you dd from if=/dev/zero!!!
As I previously explained, when ZFS writes all zeros it recognises that there isn't any data and saves the disk space by simply advancing the file pointer, NOT writing anything to disk. When you read the file back, it knows the file was sparse and recreates the zeros. (See here for an explanation by an OpenZFS developer.)
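You can see this for yourself on a dataset with compression enabled (path is a placeholder; on other filesystems dd'd zeros would actually be allocated):

```shell
# Write 1G of zeros onto a ZFS dataset, then compare the
# apparent size against what actually landed on disk
dd if=/dev/zero of=/mnt/pool/dataset/zeros.bin bs=1M count=1024

ls -lh /mnt/pool/dataset/zeros.bin  # apparent size: ~1G
du -h  /mnt/pool/dataset/zeros.bin  # on-disk size: next to nothing
```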
So your statement that its speed can reach >1.5GB/s is meaningless.
fio is better (because it can use multiple threads), provided that you use the correct parameters to ensure it does synchronous or asynchronous I/O as appropriate for your test case; but you still need to understand how it will interact with ZFS to know how to interpret the results.
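As a starting point, a sequential-write fio run against the NFS mount might look like this (the directory and all the numbers here are examples to adjust, not a recommendation for your hardware):

```shell
# 4 parallel jobs doing 1M sequential async writes, bypassing the
# page cache so you measure NFS + the pool rather than client RAM
fio --name=seqwrite --directory=/mnt/share \
    --rw=write --bs=1M --size=1G --numjobs=4 \
    --ioengine=libaio --iodepth=16 --direct=1 \
    --group_reporting
```

Swap `--rw=write` for `--rw=read` (or `randwrite`/`randread`) to test the other directions, and add `--fsync=1` if you specifically want to measure synchronous write behaviour.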