I am a beginner and have set up a new TrueNAS Scale system with 2x 8TB NAS HDDs in a mirror configuration and a 10 Gbit Ethernet connection.
My mirror reaches the expected and rated 1x device write speed of ~250 MB/s, but read speed does not reach 2x single-device speed.
I am stuck around 300 MB/s while expecting 450-500 MB/s.
Reading both disks at the same time using dd gives full speed for each device, so the mainboard should not be the limiting factor here. This seems to be some ZFS configuration magic?
I guess this is just some simple misconfiguration… I have read about several ideas, e.g. atime=off and recordsize=128K, but these are already the default values in the GUI.
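For reference, a quick way to double-check those properties from the shell ("tank/data" is a placeholder for the actual pool/dataset name):
zfs get atime,recordsize,compression tank/data   # the SOURCE column shows whether each value is a default or locally set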
Any ideas?
More details below:
TrueNAS: Dragonfish-24.04.2.3
PC: modified Fujitsu Esprimo P756
Board: D4301 A
CPU: i5-6400
RAM: 64GB DDR4
boot: 256GB NVMe
Eth: 10Gbit (Inspur dual-port “Intel” X540-T2 PCIe card, with external fan, second port disabled, MTU 9000)
Single mirror vdev created using the GUI with the standard config:
2x WD8005FFBX WD Red Pro 8TB (new, rated at around 250 MB/s)
I did a few tests to narrow down where to look, but I don’t have many ideas left on how to solve it.
I disabled compression on the dataset because I thought the CPU might be the bottleneck, but the problem does not seem to be CPU-related.
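For anyone retracing this, toggling compression from the shell looks like this (dataset name is a placeholder; note it only affects newly written data, existing files keep their old state):
sudo zfs set compression=off tank/data   # revert with: sudo zfs set compression=lz4 tank/data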
The results of the various tests are attached as an image; here is the explanation:
1)
Using the NFS file share, read an uncached 12 GB file to my computer: rsync -r --progress --write-devices "/mnt/nas/bigfile.big" /dev/null
→ ~300 MB/s (each disk around 150)
2)
Check local speed on the TrueNAS system, using dd in the web shell to read another 12 GB file: sudo dd if="/mnt/nas/otherbigfile.big" of=/dev/null bs=1M
→ ~300 MB/s (each disk around 150)
3)
Check local speed on TrueNAS, using dd to read only /dev/sda: sudo dd if=/dev/sda of=/dev/null bs=1M
→ ~250 MB/s
4)
Check local speed on TrueNAS, using dd to read only /dev/sdb: sudo dd if=/dev/sdb of=/dev/null bs=1M
→ ~250 MB/s
5)
Check local speed on TrueNAS, using dd in two web shells to read /dev/sda and /dev/sdb at the same time: sudo dd if=/dev/sda of=/dev/null bs=1M and sudo dd if=/dev/sdb of=/dev/null bs=1M
→ ~250 MB/s each, i.e. ~500 MB/s in total
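To see how ZFS actually spreads reads across the mirror while one of these tests runs, the per-disk stats can be watched live ("tank" is a placeholder for the pool name):
sudo zpool iostat -v tank 1   # per-vdev and per-disk read/write bandwidth, refreshed every second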
I still could not figure it out and it is driving me nuts - please help!
I tried several things, like an SSD for metadata and various settings, but nothing changed anything.
I even got an LSI 9217/9207 HBA in IT mode, but still the same speed.
I found out that the scrub job will use both disks simultaneously at full speed. What’s different there?
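For anyone wanting to reproduce that observation, starting a scrub and checking its scan rate looks like this ("tank" again being a placeholder for the pool name):
sudo zpool scrub tank     # start the scrub
sudo zpool status tank    # shows scrub progress and scan speed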
I guess I’m wondering why you would assume the theoretical maximum would be the real-world maximum.
While you may be able to get somewhat faster read transfers, think of it this way…
When you ask for the 12GB file (or any size file), with a mirror you have two drives spinning. They both attempt to pull the data off the platters, but the maximum speed of each is 250MB/s with no fragmentation. And ZFS does fragment; files end up placed all over the disk. Just the way it is.
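If you are curious, you can check the pool’s fragmentation yourself ("tank" is a placeholder for your pool name):
zpool list -o name,fragmentation tank   # note: FRAG measures free-space fragmentation, not per-file fragmentation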
Now drive ‘A’ starts to pull data and drive ‘B’ starts to pull data. But that does not mean the system is smart enough to alternate between the drives, with drive ‘A’ reading block 1 and drive ‘B’ reading block 2, which would be great. Instead you have latency caused by the rotating platter: the data must be in the right physical location under the heads at exactly the right time. Think about that for a while from a physical perspective.
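Some rough numbers to make that concrete (my back-of-the-envelope math, assuming 7200 RPM drives like the Red Pro): a platter takes ~8.3 ms per revolution, so a missed rotation costs up to ~8 ms of waiting, while reading a 1 MB chunk at 250 MB/s takes only ~4 ms. One miss per chunk is already enough to eat the entire theoretical gain from the second drive.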
Each drive can provide up to 250MB/s of continuous read speed, but building a mirror does not double that. It will increase throughput above single-drive speed, but it does not double it.
How much did you search for how ZFS works in a Mirror, Stripe, RAIDZ?
Do a Google search for “ZFS double speed mirror” and you will find several links that explain it. If you need faster read speeds, you will need to add more mirror drives, i.e. create a 3-way mirror.
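If you go that route, extending an existing mirror is a single attach from the shell (pool and device names are placeholders; on TrueNAS you would normally do this through the GUI):
sudo zpool attach tank sda sdc   # adds sdc as a third side of the mirror that already contains sda
ZFS can then serve reads from all three disks.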
You are not the first and will not be the last to ask for help on this topic. Use the internet; it is a great tool for answering most questions.
Well, I was googling before building this setup. That’s what made me believe it should be a lot faster. For example, these benchmarks made me think it would be possible: https://calomel.org/zfs_raid_speed_capacity.html
As a beginner it is really hard to find actual performance measurements of different setups. I was also thinking about just getting a Synology NAS, where mirror read speed will clearly be almost double (yes, while not having ZFS). But I wanted to be more flexible and play around a little.
Right now I see maybe 110-120% compared to single-device speed being 100%.
Of course I do not expect 200% speed, as there must be some overhead. But on a brand-new and almost empty pool with just a few very large files, I was clearly expecting read speeds of maybe 150-190% to be possible.
The disks are able to read at 250 MB/s and they already have 256 MB of hardware cache each. If TrueNAS/ZFS were not able to use this, prefetching and reassembling the chunks for a serious performance increase, then I would call that a huge design flaw. But I still think (hope) it is some misconfiguration on my side.
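If it helps anyone dig further, the prefetch state and its hit counters are visible from the shell on ZFS on Linux:
cat /sys/module/zfs/parameters/zfs_prefetch_disable   # 0 means prefetch is enabled
grep prefetch /proc/spl/kstat/zfs/arcstats            # prefetch hit/miss counters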
Well, thanks for the “just google it” referral. Apparently my googling was wrong! Why not talk about actual numbers here? How much would your suggested 3-way mirror actually increase speed? Just another 15%?
Best regards,
Alex
There is a resource, I think on the old forum, that discussed these topics of speed vs. the ZFS pool type. This isn’t the link I was thinking of, but page 2 has a ton of links to explore.