I ran the same tests @NickF1227 posted above, performed on a RAIDZ1 of four Gen4 NVMe drives in a non-compressed dataset, and these are my results.
root@truenas[/mnt/farm/nocompress]# time dd if=/dev/zero of=./zeros bs=1M count=50000
50000+0 records in
50000+0 records out
52428800000 bytes (52 GB, 49 GiB) copied, 11.8932 s, 4.4 GB/s
dd if=/dev/zero of=./zeros bs=1M count=50000 0.02s user 6.49s system 54% cpu 11.895 total
root@truenas[/mnt/farm/nocompress]# time dd if=/dev/zero of=./zeros bs=1M count=50000
50000+0 records in
50000+0 records out
52428800000 bytes (52 GB, 49 GiB) copied, 11.9149 s, 4.4 GB/s
dd if=/dev/zero of=./zeros bs=1M count=50000 0.01s user 6.68s system 55% cpu 11.984 total
root@truenas[/mnt/farm/nocompress]# time dd if=./zeros of=/dev/null bs=1M count=50000
50000+0 records in
50000+0 records out
52428800000 bytes (52 GB, 49 GiB) copied, 11.5992 s, 4.5 GB/s
dd if=./zeros of=/dev/null bs=1M count=50000 0.02s user 11.46s system 99% cpu 11.600 total
root@truenas[/mnt/farm/nocompress]# time dd if=./zeros of=/dev/null bs=1M count=50000
50000+0 records in
50000+0 records out
52428800000 bytes (52 GB, 49 GiB) copied, 10.8099 s, 4.9 GB/s
dd if=./zeros of=/dev/null bs=1M count=50000 0.02s user 10.60s system 98% cpu 10.811 total
root@truenas[/mnt/farm/nocompress]#
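For anyone reproducing this, the non-compressed dataset can be set up like so (a quick sketch; the pool and dataset names just match the path in my prompt above):

# Create the test dataset with compression disabled, then verify
zfs create -o compression=off farm/nocompress
zfs get compression farm/nocompress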
And to add a bit to these tests, some random data to better simulate real-world content.
root@truenas[/mnt/farm/nocompress]# time dd if=/dev/random of=./zeros bs=1M count=50000
50000+0 records in
50000+0 records out
52428800000 bytes (52 GB, 49 GiB) copied, 88.5059 s, 592 MB/s
dd if=/dev/random of=./zeros bs=1M count=50000 0.02s user 85.89s system 97% cpu 1:28.55 total
root@truenas[/mnt/farm/nocompress]# time dd if=./zeros of=/dev/null bs=1M count=50000
50000+0 records in
50000+0 records out
52428800000 bytes (52 GB, 49 GiB) copied, 10.1702 s, 5.2 GB/s
dd if=./zeros of=/dev/null bs=1M count=50000 0.02s user 10.08s system 99% cpu 10.171 total
root@truenas[/mnt/farm/nocompress]# time dd if=/dev/random of=./zeros bs=1M count=50000
50000+0 records in
50000+0 records out
52428800000 bytes (52 GB, 49 GiB) copied, 89.7006 s, 584 MB/s
dd if=/dev/random of=./zeros bs=1M count=50000 0.00s user 85.53s system 95% cpu 1:29.78 total
root@truenas[/mnt/farm/nocompress]# time dd if=./zeros of=/dev/null bs=1M count=50000
50000+0 records in
50000+0 records out
52428800000 bytes (52 GB, 49 GiB) copied, 10.8433 s, 4.8 GB/s
dd if=./zeros of=/dev/null bs=1M count=50000 0.01s user 10.62s system 98% cpu 10.844 total
root@truenas[/mnt/farm/nocompress]# time dd if=./zeros of=/dev/null bs=1M count=50000
50000+0 records in
50000+0 records out
52428800000 bytes (52 GB, 49 GiB) copied, 10.7881 s, 4.9 GB/s
dd if=./zeros of=/dev/null bs=1M count=50000 0.02s user 10.55s system 97% cpu 10.789 total
root@truenas[/mnt/farm/nocompress]#
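One caveat on the random-write numbers: /dev/random itself costs CPU to generate (note the ~97% CPU in system time above), so it may be the bottleneck rather than the pool. A way to separate the two (a sketch I haven't run, and it does mix a pool read with the write) is to generate the random data once and then copy it:

# Pre-generate 50 GB of random data once (slow, CPU-bound)
dd if=/dev/random of=./random.src bs=1M count=50000
# Now time only the copy; the source data is already random
time dd if=./random.src of=./random.dst bs=1M count=50000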
The point of using if=/dev/random is that the data isn't long runs of the same byte, which compresses trivially; it doesn't demonstrate the fastest possible throughput, just something closer to real world. I'm not talking about the data being compressed on its way to the drive, but rather that internally the system can tell the drive "write a zero, ten bazillion times" in one instruction, instead of sending each value individually. I'm being simplistic and taking liberties here just to make it relatable.
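You can see that effect directly on a dataset that does compress (a sketch; "farm/compressed" is a hypothetical dataset with compression left on, e.g. the usual lz4 default):

# Zeros on a compressed dataset allocate almost no space on disk
dd if=/dev/zero of=/mnt/farm/compressed/zeros bs=1M count=1000
du -h /mnt/farm/compressed/zeros   # allocated size will be near zero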
Summary: My RAIDZ1 (4 NVMe drives) using zeros gets 4.4-4.5 GB/s write and 4.5-4.9 GB/s read. Using random data it drops to roughly 585-590 MB/s write, while reads stay in the 4.8-5.2 GB/s range. So the data itself makes a difference. I would love to test a MIRROR configuration, but I'm not ready to destroy my pool for that.
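For reference, if I ever do rebuild, the striped-mirror layout I'd compare against would be created along these lines (pool name and device names are placeholders only, and this wipes whatever is on the drives):

# Two 2-way mirrors striped together; substitute your own devices
zpool create farm2 \
  mirror /dev/nvme0n1 /dev/nvme1n1 \
  mirror /dev/nvme2n1 /dev/nvme3n1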
4 lanes. The speed is just slower.
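If anyone wants to double-check the negotiated link speed and width on their own drives, lspci shows it (the device address here is just an example; find yours with lspci | grep -i nvme):

# LnkCap = what the drive supports, LnkSta = what was actually negotiated
sudo lspci -s 01:00.0 -vv | grep -iE 'LnkCap:|LnkSta:'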