@Theo Thank you for the feedback. I’ve made a few changes based on your testing as well as some of my own over the past couple of days.
I’ve updated the dd command in the pool benchmark to use 20 GiB files instead of 10 GiB files, and to run only twice instead of four times. This seems to have increased consistency in my testing so far.
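For reference, the write pass per thread now has roughly this shape (a sketch only; the source device, dataset path, and file naming here are placeholders for illustration, not the script’s exact values):

import subprocess

# Sketch of the pool write pass: one 20 GiB sequential file per thread,
# written twice and averaged.
def write_pass(dataset_path: str, thread_id: int) -> None:
    subprocess.run(
        [
            "dd",
            "if=/dev/zero",                                 # illustrative source only
            f"of={dataset_path}/tn-bench-{thread_id}.dat",  # per-thread 20 GiB file
            "bs=1M",                                        # 1M blocks, matching the "1M Seq" labels
            "count=20480",                                  # 20480 x 1 MiB = 20 GiB
        ],
        check=True,
        capture_output=True,
    )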
I’ve updated the dd command for the disk benchmark to read as much data as there is RAM, or, if the size of RAM exceeds the size of the disk, to just read the whole disk. This seems to have drastically changed the behavior and produced much better data.
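Concretely, the per-disk read size is clamped like this (a minimal sketch with hypothetical names; both sizes in bytes):

# Sketch: read min(total RAM, disk size) so ARC cannot hold the entire read,
# without ever reading past the end of a disk smaller than RAM.
def read_size_bytes(total_ram_bytes: int, disk_size_bytes: int) -> int:
    return min(total_ram_bytes, disk_size_bytes)

# The 4K sequential read itself is then roughly:
#   dd if=/dev/<disk> of=/dev/null bs=4K count=<read_size_bytes // 4096>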
I both expect and want ARC to play a role by default, since it will be in play in real-world use. However, I want to minimize its impact for the sake of more actionable numbers, and I think these changes do that. I did, however, make several changes to the opening message that I think you will appreciate.
root@prod[~]# git clone https://github.com/nickf1227/TN-Bench.git && cd TN-Bench && python3 truenas-bench.py
Cloning into 'TN-Bench'...
remote: Enumerating objects: 85, done.
remote: Counting objects: 100% (85/85), done.
remote: Compressing objects: 100% (85/85), done.
remote: Total 85 (delta 35), reused 0 (delta 0), pack-reused 0 (from 0)
Receiving objects: 100% (85/85), 49.41 KiB | 4.94 MiB/s, done.
Resolving deltas: 100% (35/35), done.
###################################
# #
# TN-Bench v1.05 #
# MONOLITHIC. #
# #
###################################
TN-Bench is an OpenSource Software Script that uses standard tools to Benchmark your System and collect various statistical information via the TrueNAS API.
TN-Bench will make a Dataset in each of your pools for the purposes of this testing that will consume 20 GiB of space for every thread in your system during its run.
After which time we will prompt you to delete the dataset which was created.
###################################
WARNING: This test will make your system EXTREMELY slow during its run. It is recommended to run this test when no other workloads are running.
NOTE: ZFS ARC will also be used and will impact your results. This may be undesirable in some circumstances, and the zfs_arc_max can be set to 1 (which means 1 byte) to prevent ARC from caching.
NOTE: Setting it back to 0 will restore the default behavior, but the system will need to be restarted!
###################################
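For anyone who wants to experiment with the ARC note above: on Linux OpenZFS the cap is exposed as a module parameter, so a run with caching effectively disabled could be set up roughly like this (standard OpenZFS path and behavior; verify against your SCALE build before relying on it):

# Assumption: stock Linux OpenZFS sysfs location for the tunable.
ARC_MAX_PARAM = "/sys/module/zfs/parameters/zfs_arc_max"

def set_arc_max(value_bytes: int) -> None:
    # 1 (byte) effectively prevents ARC from caching; 0 restores the default
    # sizing, though, as the note says, a restart may be needed to fully
    # return to normal behavior.
    with open(ARC_MAX_PARAM, "w") as f:
        f.write(str(value_bytes))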
This in particular is expected behavior based on the output of midclt call pool.query. The boot disk is, however, tested in the individual disk benchmark, which should be sufficient unless you disagree. I’ve added a note about this in the script itself.
print("\n### Disk Information ###")
print("###################################")
print("\nNOTE: The TrueNAS API will return N/A for the Pool for the boot device(s) as well as any disk is not a member of a pool.")
print("###################################")
It doesn’t have logic to check for this. How did this present for you? Did it run anyway for the pools that are not read-only?
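If skipping such pools turns out to be worth doing, a check along these lines could run before the test dataset is created (a sketch only, shelling out to the stock zfs tooling; it is not in the script today):

import subprocess

def pool_is_readonly(pool: str) -> bool:
    # `zfs get -H -o value readonly <pool>` prints "on" or "off" for the
    # pool's root dataset; "on" means the write benchmark would fail.
    out = subprocess.run(
        ["zfs", "get", "-H", "-o", "value", "readonly", pool],
        check=True,
        capture_output=True,
        text=True,
    )
    return out.stdout.strip() == "on"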
I am running this on one more, larger, system now to sanity-check the efficacy of the changes made.
###################################
# #
# TN-Bench v1.05 #
# MONOLITHIC. #
# #
###################################
TN-Bench is an OpenSource Software Script that uses standard tools to Benchmark your System and collect various statistical information via the TrueNAS API.
TN-Bench will make a Dataset in each of your pools for the purposes of this testing that will consume 20 GiB of space for every thread in your system during its run.
After which time we will prompt you to delete the dataset which was created.
###################################
WARNING: This test will make your system EXTREMELY slow during its run. It is recommended to run this test when no other workloads are running.
NOTE: ZFS ARC will also be used and will impact your results. This may be undesirable in some circumstances, and the zfs_arc_max can be set to 1 (which means 1 byte) to prevent ARC from caching.
NOTE: Setting it back to 0 will restore the default behavior, but the system will need to be restarted!
###################################
Would you like to continue? (yes/no): yes
### System Information ###
Field | Value
----------------------+---------------------------------------------
Version | TrueNAS-SCALE-25.04.0-MASTER-20250110-005622
Load Average (1m) | 0.06689453125
Load Average (5m) | 0.142578125
Load Average (15m) | 0.15283203125
Model | AMD Ryzen 5 5600G with Radeon Graphics
Cores | 12
Physical Cores | 6
System Product | X570 AORUS ELITE
Physical Memory (GiB) | 30.75
### Pool Information ###
Field | Value
-----------+-------------
Name | inferno
Path | /mnt/inferno
Status | ONLINE
VDEV Count | 1
Disk Count | 5
VDEV Name | Type | Disk Count
-----------+----------------+---------------
raidz1-0 | RAIDZ1 | 5
### Disk Information ###
NOTE: The TrueNAS API will return N/A for the Pool for the boot device(s) as well as the disk name if the disk is not a member of a pool.
Field | Value
-----------+---------------------------
Name | nvme0n1
Model | INTEL SSDPE21D960GA
Serial | PHM29081000X960CGN
ZFS GUID | 212601209224793468
Pool | inferno
Size (GiB) | 894.25
-----------+---------------------------
-----------+---------------------------
Name | nvme3n1
Model | INTEL SSDPE21D960GA
Serial | PHM2913000QM960CGN
ZFS GUID | 16221756077833732578
Pool | inferno
Size (GiB) | 894.25
-----------+---------------------------
-----------+---------------------------
Name | nvme5n1
Model | INTEL SSDPE21D960GA
Serial | PHM2913000YF960CGN
ZFS GUID | 8625327235819249102
Pool | inferno
Size (GiB) | 894.25
-----------+---------------------------
-----------+---------------------------
Name | nvme2n1
Model | INTEL SSDPE21D960GA
Serial | PHM2913000DC960CGN
ZFS GUID | 11750420763846093416
Pool | inferno
Size (GiB) | 894.25
-----------+---------------------------
-----------+---------------------------
Name | nvme4n1
Model | SAMSUNG MZVL2512HCJQ-00BL7
Serial | S64KNX2T216015
ZFS GUID | None
Pool | N/A
Size (GiB) | 476.94
-----------+---------------------------
-----------+---------------------------
Name | nvme1n1
Model | INTEL SSDPE21D960GA
Serial | PHM2908101QG960CGN
ZFS GUID | 10743034860780890768
Pool | inferno
Size (GiB) | 894.25
-----------+---------------------------
-----------+---------------------------
###################################
# #
# DD Benchmark Starting #
# #
###################################
Using 12 threads for the benchmark.
Creating test dataset for pool: inferno
Running benchmarks for pool: inferno
Running DD write benchmark with 1 threads...
Run 1 write speed: 411.17 MB/s
Run 2 write speed: 412.88 MB/s
Average write speed: 412.03 MB/s
Running DD read benchmark with 1 threads...
Run 1 read speed: 6762.11 MB/s
Run 2 read speed: 5073.43 MB/s
Average read speed: 5917.77 MB/s
Running DD write benchmark with 3 threads...
Run 1 write speed: 1195.91 MB/s
Run 2 write speed: 1193.22 MB/s
Average write speed: 1194.56 MB/s
Running DD read benchmark with 3 threads...
Run 1 read speed: 4146.25 MB/s
Run 2 read speed: 4161.19 MB/s
Average read speed: 4153.72 MB/s
Running DD write benchmark with 6 threads...
Run 1 write speed: 2060.54 MB/s
Run 2 write speed: 2058.62 MB/s
Average write speed: 2059.58 MB/s
Running DD read benchmark with 6 threads...
Run 1 read speed: 4209.25 MB/s
Run 2 read speed: 4212.84 MB/s
Average read speed: 4211.05 MB/s
Running DD write benchmark with 12 threads...
Run 1 write speed: 2353.74 MB/s
Run 2 write speed: 2184.07 MB/s
Average write speed: 2268.91 MB/s
Running DD read benchmark with 12 threads...
Run 1 read speed: 4191.27 MB/s
Run 2 read speed: 4199.91 MB/s
Average read speed: 4195.59 MB/s
###################################
# DD Benchmark Results for Pool: inferno #
###################################
# Threads: 1 #
# 1M Seq Write Run 1: 411.17 MB/s #
# 1M Seq Write Run 2: 412.88 MB/s #
# 1M Seq Write Avg: 412.03 MB/s #
# 1M Seq Read Run 1: 6762.11 MB/s #
# 1M Seq Read Run 2: 5073.43 MB/s #
# 1M Seq Read Avg: 5917.77 MB/s #
###################################
# Threads: 3 #
# 1M Seq Write Run 1: 1195.91 MB/s #
# 1M Seq Write Run 2: 1193.22 MB/s #
# 1M Seq Write Avg: 1194.56 MB/s #
# 1M Seq Read Run 1: 4146.25 MB/s #
# 1M Seq Read Run 2: 4161.19 MB/s #
# 1M Seq Read Avg: 4153.72 MB/s #
###################################
# Threads: 6 #
# 1M Seq Write Run 1: 2060.54 MB/s #
# 1M Seq Write Run 2: 2058.62 MB/s #
# 1M Seq Write Avg: 2059.58 MB/s #
# 1M Seq Read Run 1: 4209.25 MB/s #
# 1M Seq Read Run 2: 4212.84 MB/s #
# 1M Seq Read Avg: 4211.05 MB/s #
###################################
# Threads: 12 #
# 1M Seq Write Run 1: 2353.74 MB/s #
# 1M Seq Write Run 2: 2184.07 MB/s #
# 1M Seq Write Avg: 2268.91 MB/s #
# 1M Seq Read Run 1: 4191.27 MB/s #
# 1M Seq Read Run 2: 4199.91 MB/s #
# 1M Seq Read Avg: 4195.59 MB/s #
###################################
Cleaning up test files...
Running disk read benchmark...
This benchmark tests the 4K sequential read performance of each disk in the system using dd. It is run 2 times for each disk and averaged.
In order to work around ARC caching in systems with it still enabled, this benchmark reads data in the amount of total system RAM or the total size of the disk, whichever is smaller.
Testing disk: nvme0n1
Testing disk: nvme0n1
Testing disk: nvme3n1
Testing disk: nvme3n1
Testing disk: nvme5n1
Testing disk: nvme5n1
Testing disk: nvme2n1
Testing disk: nvme2n1
Testing disk: nvme4n1
Testing disk: nvme4n1
Testing disk: nvme1n1
Testing disk: nvme1n1
###################################
# Disk Read Benchmark Results #
###################################
# Disk: nvme0n1 #
# Run 1: 2032.08 MB/s #
# Run 2: 1825.83 MB/s #
# Average: 1928.95 MB/s #
# Disk: nvme3n1 #
# Run 1: 1964.28 MB/s #
# Run 2: 1939.57 MB/s #
# Average: 1951.93 MB/s #
# Disk: nvme5n1 #
# Run 1: 1908.79 MB/s #
# Run 2: 1948.96 MB/s #
# Average: 1928.88 MB/s #
# Disk: nvme2n1 #
# Run 1: 1947.48 MB/s #
# Run 2: 1762.31 MB/s #
# Average: 1854.90 MB/s #
# Disk: nvme4n1 #
# Run 1: 1829.80 MB/s #
# Run 2: 1787.41 MB/s #
# Average: 1808.60 MB/s #
# Disk: nvme1n1 #
# Run 1: 1836.51 MB/s #
# Run 2: 1879.80 MB/s #
# Average: 1858.16 MB/s #
###################################
Total benchmark time: 15.88 minutes
If you can, I’d love to see how it runs on your system now.