Hello,
I installed my first TrueNAS server today.
In my first test I only achieve write speeds of up to 250 MB/s (which would be okay). However, the speed very often drops to 110-130 MB/s.
Transfer via SMB.
HDDs: 2x Ironwolf Pro 16TB
Pool: Mirror
CPU: i5-12500t
RAM: 2x 16GB DDR4 3200
10G NIC: HPE 560SFP+ (Client&Server)
What is the reason for these drops and how can I achieve a constant speed?
Please find attached screenshots of the transfer and iostat.
You are writing to two drives, so your speeds are normal. Try transferring a single large, compressed file. Transfer speed depends on the size of the files and on what you are transferring; a folder of tiny files is slower. It also depends on the read speed of wherever you are transferring from.
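If you want to rule out small-file overhead and compression effects, one option is to generate a single large, incompressible test file and copy that instead. A minimal Python sketch, with placeholder file name and size:

```python
# Sketch: write a few GiB of random (incompressible) data to one file so an
# SMB copy measures sequential throughput rather than per-file overhead.
# File name and size below are placeholders - adjust to taste.
import os

SIZE_GIB = 8
CHUNK = 64 * 1024 * 1024  # write in 64 MiB chunks

with open("testfile.bin", "wb") as f:
    for _ in range(SIZE_GIB * (1024**3) // CHUNK):
        f.write(os.urandom(CHUNK))  # random data cannot be compressed
```

If that copy shows the same drops as your zip, file count and compression are not the cause.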
I’m pretty sure those are the real-world speeds of the drives. I had the same issue, and as I added more HDDs to the vdev it got faster.
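For context on why more drives help: with a pool of mirrors, extra HDDs typically go in as another mirror vdev, and ZFS then stripes writes across the vdevs. A sketch only, with hypothetical pool and disk names (on TrueNAS you would normally do this through the web UI rather than the CLI):

```python
# Sketch only - "tank", /dev/sdc and /dev/sdd are hypothetical names.
# Adding a second mirror vdev lets ZFS stripe writes across both mirrors,
# which is roughly where the extra speed comes from.
import subprocess

subprocess.run(
    ["zpool", "add", "tank", "mirror", "/dev/sdc", "/dev/sdd"],
    check=True,
)
```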
It is one big zip file, but I don't know whether its contents are compressed.
How did you add more discs?
What is your pool layout? Mirror/stripe, RAIDZ?
This will give you a general idea of the performance of different VDEV setups and numbers of drives
https://calomel.org/zfs_raid_speed_capacity.html
SMB network writes from Windows start off at network speed until the maximum memory limit for async writes is reached, and then they slow to disk speed.
There are some tuneable system-wide ZFS parameters that you can tweak if you want to increase this memory limit, BUT this comes at the expense of ARC entries being trashed, which hurts read performance - so changing these is not recommended.
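For reference only, and assuming this is TrueNAS SCALE (Linux OpenZFS), the limit in question is governed by the zfs_dirty_data_max family of module parameters. This read-only Python sketch just prints their current values; it changes nothing:

```python
# Sketch (assumes TrueNAS SCALE, i.e. Linux OpenZFS): print the dirty-data
# limits that cap how much async write data is buffered in RAM before writes
# throttle down to disk speed. Read-only.
from pathlib import Path

for name in ("zfs_dirty_data_max",
             "zfs_dirty_data_max_percent",
             "zfs_dirty_data_max_max"):
    path = Path("/sys/module/zfs/parameters") / name
    if path.exists():
        print(name, "=", path.read_text().strip())
```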
If you want faster bulk SMB writes then switch from HDDs to large-scale SSDs.
The max speed IIRC for physical disks is around 150 MB/s for a 3.5" 7200 rpm drive. If this is RAID1/mirrored, that is the normal speed range you'll see. It doesn't matter that you are using a 10GbE connection either, as you are limited by the physical speed of the disks.
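To put rough, illustrative numbers on that (estimates, not measurements):

```python
# Back-of-the-envelope sketch: compare how long a large copy takes when
# limited by the 10GbE link vs. by a 2-way HDD mirror. Numbers are rough.
file_gb = 100                    # hypothetical transfer size
link_mb_s = 10_000 / 8 * 0.95    # ~1187 MB/s usable on 10GbE
hdd_mb_s = 150                   # rough sustained write of one 7200 rpm HDD
mirror_mb_s = hdd_mb_s           # a 2-way mirror writes at single-disk speed

print(f"network-limited: {file_gb * 1000 / link_mb_s:.0f} s")
print(f"mirror-limited : {file_gb * 1000 / mirror_mb_s:.0f} s")
```

Either way, the mirror rather than the 10GbE link sets the pace for a sustained write.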
If this is purely moving large files back and forth, you would be better off using NFS to get more consistent performance.