TN-Bench -- An OpenSource Community TrueNAS Benchmarking and Testing Utility

Over the years I’ve spent a lot of time playing with benchmarking and min-maxing systems. I cut my teeth on systems engineering the same way I suspect many other Millennials did…overclocking!

In that realm, there has always been a plethora of tools driven by the community. From small projects using X264 to test the stability of Intel CPUs to large ones designed to stress test your RAM.

In the TrueNAS community too, we’ve had various tools for testing drives, like the solnet-array-test and the infamous SLOG Benchmarking thread @HoneyBadger has referenced at least once on the T3 Podcast. However, interest in these older threads has waned as software development on the TrueNAS side has marched along from FreeBSD → Linux. With diskinfo being unavailable in Linux and @jgreco no longer a member of the community, I feel like a hole needs to be plugged.

I’ve begun development on what I’m calling TN-Bench, which in its current form is a monolithic Python script that runs a series of pool- and disk-based benchmarks to help give users a better understanding of their system’s performance.

For now, the script is monolithic and not configurable. It’s designed to give an idea of the maximum performance possible from your TrueNAS pools in the system they are in. This is done by creating a temporary dataset via the TrueNAS API with a 1M record size, compression=none, and sync=disabled.
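For the curious, here is a rough Python sketch of the kind of dataset-creation call involved. The real script shells out to midclt (pool.dataset.create, described further down); the field names and option values below are illustrative assumptions and may differ slightly from what TN-Bench actually sends.

    import json
    import subprocess

    # Sketch only: create a <pool>/tn-bench dataset tuned the way TN-Bench does
    # (1M records, no compression, sync disabled). Exact payload may vary.
    payload = {
        "name": "inferno/tn-bench",   # your pool name + "/tn-bench"
        "recordsize": "1M",
        "compression": "OFF",
        "sync": "DISABLED",
    }
    subprocess.run(
        ["midclt", "call", "pool.dataset.create", json.dumps(payload)],
        check=True,
    )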

In the future, I plan to make this script more modular so that I can add additional tests. The TrueNAS API is the real magic here. In the near future I plan to use it to find a pool with a SLOG device, take the SLOG out of the pool, benchmark it, and put it back into the pool. However, I wanted to get this out there for other users to test and compare, and hopefully get some of you to help contribute back.

Please share your results here so we may all benefit from the data. If there’s enough interest I hope to one day create a community repository where users can explore this data inside of a database, but for now just post your results here or use this script as a quick burn-in before lighting up your system.


TN-Bench v1.06

TN-Bench is an OpenSource software script that benchmarks your system and collects various statistical information via the TrueNAS API. It creates a dataset in each of your pools during testing, consuming 20 GiB of space for each thread in your system.

Features

  • Collects system information using the TrueNAS API.
  • Benchmarks system performance using the dd command.
  • Provides detailed information about system, pools, and disks.
  • Supports multiple pools.

Running the Script with 1M block size

git clone https://github.com/nickf1227/TN-Bench.git && cd TN-Bench && python3 truenas-bench.py

NEW for v1.06 Running the Script with 1M Block size and zstd Compression

git clone https://github.com/nickf1227/TN-Bench.git && cd TN-Bench && python3 truenas-bench-zstd.py

NEW for v1.06 Running the Script with 1M Block size and lz4 Compression

git clone https://github.com/nickf1227/TN-Bench.git && cd TN-Bench && python3 truenas-bench-zstd.py

NOTE: /dev/urandom generates inherently incompressible data, so the value of the compression options above is minimal in their current form.
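If you want to confirm that on your own system, the dataset's compressratio property should sit at roughly 1.00x while the benchmark writes. A quick hedged sketch (the dataset name is just an example):

    import subprocess

    # Sketch only: print the compression ratio of the temporary dataset.
    ratio = subprocess.check_output(
        ["zfs", "get", "-H", "-o", "value", "compressratio", "inferno/tn-bench"],
        text=True,
    ).strip()
    print(ratio)  # expect ~1.00x for /dev/urandom data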

The script will display system and pool information, then prompt you to continue with the benchmarks. Follow the prompts to complete the benchmarking process.

Benchmarking Process

  • Dataset Creation: The script creates a temporary dataset in each pool. The dataset is created with a 1M record size, no compression, and sync=disabled, using midclt call pool.dataset.create.
  • Pool Write Benchmark: The script performs four runs of the write benchmark using dd with varying thread counts; a sketch of the per-thread dd pattern follows this list. We are using /dev/urandom as our input file, so CPU performance may be relevant. This is by design, as /dev/zero is flawed for this purpose, and CPU stress is expected in real-world use anyway. The data is written in 1M chunks to a dataset with a 1M record size. For each thread, 20G of data is written. This scales with the number of threads, so a system with 16 threads would write 320G of data.
  • Pool Read Benchmark: The script performs four runs of the read benchmark using dd with varying thread counts. We are using /dev/null as our output file, so RAM speed may be relevant. The data is read in 1M chunks from a dataset with a 1M record size. For each thread, the previously written 20G of data is read. This scales with the number of threads, so a system with 16 threads would read 320G of data.
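The sketch below shows roughly what the per-thread write pass looks like; it is an illustration under assumptions, not the exact TN-Bench code. The dataset mountpoint and thread count are placeholders, and the file naming follows the file_N.dat pattern visible later in this thread. The read pass is the mirror image: dd reads each file_N.dat back with of=/dev/null.

    import subprocess
    from concurrent.futures import ThreadPoolExecutor

    DATASET_PATH = "/mnt/inferno/tn-bench"  # assumed mountpoint of the test dataset
    THREADS = 12                            # the script uses the system's thread count

    def write_file(i):
        # 20480 x 1M blocks = 20 GiB of incompressible data per worker
        subprocess.run(
            ["dd", "if=/dev/urandom", f"of={DATASET_PATH}/file_{i}.dat",
             "bs=1M", "count=20480"],
            check=True,
        )

    with ThreadPoolExecutor(max_workers=THREADS) as pool:
        list(pool.map(write_file, range(THREADS)))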

NOTE: ZFS ARC will also be used and will impact your results. This may be undesirable in some circumstances, and zfs_arc_max can be set to 1 (which means 1 byte) to prevent ARC from caching. Setting it back to 0 will restore the default behavior, but the system will need to be restarted!
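If you do want to take ARC out of the picture, a hedged sketch of the standard OpenZFS module parameter involved is below; whether you set it this way at runtime or through a TrueNAS tunable is up to you, and as noted above a restart is needed to fully return to the default sizing.

    # Sketch only: clamp the ARC by shrinking zfs_arc_max to 1 byte,
    # and write "0" later to request the default again (restart required).
    with open("/sys/module/zfs/parameters/zfs_arc_max", "w") as f:
        f.write("1")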

I have tested several permutations of file sizes on a dozen systems with varying amounts of storage types, space, and RAM, and eventually settled on the current behavior for several reasons. Primarily, I wanted to reduce the impact of, but not REMOVE, the ZFS ARC, since in a real-world scenario you would be leveraging the benefits of ARC caching. However, in order to avoid insanely unrealistic results, I needed to use file sizes that saturate the ARC completely. I believe this gives us the best data possible.

Example of arcstat -f time,hit%,dh%,ph%,mh% 10 output while the benchmark is running.

  • Disk Benchmark: The script performs four runs of a dd read benchmark against each disk (a sketch follows these items). The amount read is calculated from the size of your RAM and of each disk: data already on each disk is read in 4K chunks to /dev/null, making this a 4K sequential read test. 4K was chosen because ashift=12 for all recent ZFS pools created in TrueNAS. The reads are deliberately large to try to avoid ARC caching. Run-to-run variance is still expected, particularly on SSDs, as the data ends up in internal caches. For this reason, it is run 4 times and averaged.

  • Results: The script displays the results for each run and the average speed. This should give you an idea of the impacts of various thread-counts (as a synthetic representation of client-counts) and the ZFS ARC caching mechanism.
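Here is a rough sketch of the per-disk read pass described in the Disk Benchmark item above; it is an illustration under stated assumptions (the device name, RAM size, and disk size are placeholders), not the exact TN-Bench code.

    import subprocess

    # Sketch only: read min(total RAM, disk size) from the raw device in 4K
    # blocks, discarding to /dev/null; the large read size is what works
    # around caching, per the script's own note.
    ram_bytes = 32 * 1024**3     # in the real script these come from the API
    disk_bytes = 894 * 1024**3
    count = min(ram_bytes, disk_bytes) // 4096

    subprocess.run(
        ["dd", "if=/dev/nvme0n1", "of=/dev/null", "bs=4K", f"count={count}"],
        check=True,
    )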

NOTE: The script’s run duration depends on the number of threads in your system as well as the number of disks. Small all-flash systems may complete this benchmark in 25 minutes, while larger systems with spinning hard drives may take several hours. The script will not stop other I/O activity on a production system, but it will severely limit performance. This benchmark is best run on a system with no other workload. This will give you the best outcome in terms of the accuracy of the data, in addition to not creating angry users.

Cleanup

After the benchmarking is complete, the script prompts you to delete the datasets created during the process.

Example Output


###################################
#                                 #
#          TN-Bench v1.06         #
#          MONOLITHIC.            #
#                                 #
###################################
TN-Bench is an OpenSource Software Script that uses standard tools to Benchmark your System and collect various statistical information via the TrueNAS API.

TN-Bench will make a Dataset in each of your pools for the purposes of this testing that will consume 20 GiB of space for every thread in your system during its run.

After which time we will prompt you to delete the dataset which was created.
###################################

WARNING: This test will make your system EXTREMELY slow during its run. It is recommended to run this test when no other workloads are running.

NOTE: ZFS ARC will also be used and will impact your results. This may be undesirable in some circumstances, and the zfs_arc_max can be set to 1 (which means 1 byte) to prevent ARC from caching.

NOTE: Setting it back to 0 will restore the default behavior, but the system will need to be restarted!
###################################

Would you like to continue? (yes/no): yes

### System Information ###
Field                 | Value                                       
----------------------+---------------------------------------------
Version               | TrueNAS-SCALE-25.04.0-MASTER-20250118-155243
Load Average (1m)     | 0.26123046875                               
Load Average (5m)     | 0.22216796875                               
Load Average (15m)    | 0.185546875                                 
Model                 | AMD Ryzen 5 5600G with Radeon Graphics      
Cores                 | 12                                          
Physical Cores        | 6                                           
System Product        | X570 AORUS ELITE                            
Physical Memory (GiB) | 30.75                                       

### Pool Information ###
Field      | Value       
-----------+-------------
Name       | inferno     
Path       | /mnt/inferno
Status     | ONLINE      
VDEV Count | 1           
Disk Count | 5           

VDEV Name  | Type           | Disk Count
-----------+----------------+---------------
raidz1-0    | RAIDZ1         | 5

### Disk Information ###
###################################

NOTE: The TrueNAS API will return N/A for the Pool for the boot device(s) as well as any disk is not a member of a pool.
###################################
Field      | Value                     
-----------+---------------------------
Name       | nvme0n1                   
Model      | INTEL SSDPE21D960GA       
Serial     | PHM29081000X960CGN        
ZFS GUID   | 212601209224793468        
Pool       | inferno                   
Size (GiB) | 894.25                    
-----------+---------------------------
-----------+---------------------------
Name       | nvme3n1                   
Model      | INTEL SSDPE21D960GA       
Serial     | PHM2913000QM960CGN        
ZFS GUID   | 16221756077833732578      
Pool       | inferno                   
Size (GiB) | 894.25                    
-----------+---------------------------
-----------+---------------------------
Name       | nvme5n1                   
Model      | INTEL SSDPE21D960GA       
Serial     | PHM2913000YF960CGN        
ZFS GUID   | 8625327235819249102       
Pool       | inferno                   
Size (GiB) | 894.25                    
-----------+---------------------------
-----------+---------------------------
Name       | nvme2n1                   
Model      | INTEL SSDPE21D960GA       
Serial     | PHM2913000DC960CGN        
ZFS GUID   | 11750420763846093416      
Pool       | inferno                   
Size (GiB) | 894.25                    
-----------+---------------------------
-----------+---------------------------
Name       | nvme4n1                   
Model      | SAMSUNG MZVL2512HCJQ-00BL7
Serial     | S64KNX2T216015            
ZFS GUID   | None                      
Pool       | N/A                       
Size (GiB) | 476.94                    
-----------+---------------------------
-----------+---------------------------
Name       | nvme1n1                   
Model      | INTEL SSDPE21D960GA       
Serial     | PHM2908101QG960CGN        
ZFS GUID   | 10743034860780890768      
Pool       | inferno                   
Size (GiB) | 894.25                    
-----------+---------------------------
-----------+---------------------------

###################################
#                                 #
#       DD Benchmark Starting     #
#                                 #
###################################
Using 12 threads for the benchmark.


Creating test dataset for pool: inferno
Dataset inferno/tn-bench created successfully.

Running benchmarks for pool: inferno
Running DD write benchmark with 1 threads...
Run 1 write speed: 410.96 MB/s
Run 2 write speed: 410.95 MB/s
Average write speed: 410.96 MB/s
Running DD read benchmark with 1 threads...
Run 1 read speed: 4204.60 MB/s
Run 2 read speed: 5508.72 MB/s
Average read speed: 4856.66 MB/s
Running DD write benchmark with 3 threads...
Run 1 write speed: 1179.53 MB/s
Run 2 write speed: 1165.82 MB/s
Average write speed: 1172.67 MB/s
Running DD read benchmark with 3 threads...
Run 1 read speed: 4260.03 MB/s
Run 2 read speed: 4275.41 MB/s
Average read speed: 4267.72 MB/s
Running DD write benchmark with 6 threads...
Run 1 write speed: 1971.18 MB/s
Run 2 write speed: 1936.90 MB/s
Average write speed: 1954.04 MB/s
Running DD read benchmark with 6 threads...
Run 1 read speed: 4237.76 MB/s
Run 2 read speed: 4240.26 MB/s
Average read speed: 4239.01 MB/s
Running DD write benchmark with 12 threads...
Run 1 write speed: 2049.01 MB/s
Run 2 write speed: 1940.13 MB/s
Average write speed: 1994.57 MB/s
Running DD read benchmark with 12 threads...
Run 1 read speed: 4087.74 MB/s
Run 2 read speed: 4092.10 MB/s
Average read speed: 4089.92 MB/s

###################################
#         DD Benchmark Results for Pool: inferno    #
###################################
#    Threads: 1    #
#    1M Seq Write Run 1: 410.96 MB/s     #
#    1M Seq Write Run 2: 410.95 MB/s     #
#    1M Seq Write Avg: 410.96 MB/s #
#    1M Seq Read Run 1: 4204.60 MB/s      #
#    1M Seq Read Run 2: 5508.72 MB/s      #
#    1M Seq Read Avg: 4856.66 MB/s  #
###################################
#    Threads: 3    #
#    1M Seq Write Run 1: 1179.53 MB/s     #
#    1M Seq Write Run 2: 1165.82 MB/s     #
#    1M Seq Write Avg: 1172.67 MB/s #
#    1M Seq Read Run 1: 4260.03 MB/s      #
#    1M Seq Read Run 2: 4275.41 MB/s      #
#    1M Seq Read Avg: 4267.72 MB/s  #
###################################
#    Threads: 6    #
#    1M Seq Write Run 1: 1971.18 MB/s     #
#    1M Seq Write Run 2: 1936.90 MB/s     #
#    1M Seq Write Avg: 1954.04 MB/s #
#    1M Seq Read Run 1: 4237.76 MB/s      #
#    1M Seq Read Run 2: 4240.26 MB/s      #
#    1M Seq Read Avg: 4239.01 MB/s  #
###################################
#    Threads: 12    #
#    1M Seq Write Run 1: 2049.01 MB/s     #
#    1M Seq Write Run 2: 1940.13 MB/s     #
#    1M Seq Write Avg: 1994.57 MB/s #
#    1M Seq Read Run 1: 4087.74 MB/s      #
#    1M Seq Read Run 2: 4092.10 MB/s      #
#    1M Seq Read Avg: 4089.92 MB/s  #
###################################
Cleaning up test files...
Running disk read benchmark...
###################################
This benchmark tests the 4K sequential read performance of each disk in the system using dd. It is run 2 times for each disk and averaged.
In order to work around ARC caching in systems with it still enabled, This benchmark reads data in the amount of total system RAM or the total size of the disk, whichever is smaller.
###################################
Testing disk: nvme0n1
Testing disk: nvme0n1
Testing disk: nvme3n1
Testing disk: nvme3n1
Testing disk: nvme5n1
Testing disk: nvme5n1
Testing disk: nvme2n1
Testing disk: nvme2n1
Testing disk: nvme4n1
Testing disk: nvme4n1
Testing disk: nvme1n1
Testing disk: nvme1n1

###################################
#         Disk Read Benchmark Results   #
###################################
#    Disk: nvme0n1    #
#    Run 1: 2017.87 MB/s     #
#    Run 2: 1924.79 MB/s     #
#    Average: 1971.33 MB/s     #
#    Disk: nvme3n1    #
#    Run 1: 2008.92 MB/s     #
#    Run 2: 1911.36 MB/s     #
#    Average: 1960.14 MB/s     #
#    Disk: nvme5n1    #
#    Run 1: 2044.10 MB/s     #
#    Run 2: 1944.96 MB/s     #
#    Average: 1994.53 MB/s     #
#    Disk: nvme2n1    #
#    Run 1: 2039.12 MB/s     #
#    Run 2: 1943.66 MB/s     #
#    Average: 1991.39 MB/s     #
#    Disk: nvme4n1    #
#    Run 1: 1927.49 MB/s     #
#    Run 2: 1828.96 MB/s     #
#    Average: 1878.23 MB/s     #
#    Disk: nvme1n1    #
#    Run 1: 2031.78 MB/s     #
#    Run 2: 1933.92 MB/s     #
#    Average: 1982.85 MB/s     #
###################################

Total benchmark time: 16.45 minutes
Do you want to delete the testing dataset inferno/tn-bench? (yes/no): yes
Deleting dataset: inferno/tn-bench
Dataset inferno/tn-bench deleted.
 

Contributing

Contributions are welcome! Please open an issue or submit a pull request for any improvements or fixes.

License

This project is licensed under the GPLv3 License - see the LICENSE file for details.


Wow this looks great. Many thanks for creating and sharing. I’m looking forward to trying this.


Just a quick update with an example of a larger system. A 35-disk system with 32 threads took a little over 6 hours. Interestingly, even with a 50-gigabyte read, SATA hard drive results still seem skewed by caching. I plan on making some alterations soon.


###################################
#                                 #
#          TN-Bench v1.01         #
#          MONOLITHIC.            #
#                                 #
###################################
TN-Bench is an OpenSource Software Script that uses standard tools to Benchmark your System and collect various statistical information via the TrueNAS API.

TN-Bench will make a Dataset in each of your pools for the purposes of this testing that will consume 10 GiB of space for every thread in your system during its run.

After which time we will prompt you to delete the dataset which was created.

Would you like to continue? (yes/no): yes

### System Information ###
Field                 | Value                          
----------------------+--------------------------------
Version               | TrueNAS-SCALE-24.10.0          
Load Average (1m)     | 4.220703125                    
Load Average (5m)     | 3.66455078125                  
Load Average (15m)    | 6.19482421875                  
Model                 | AMD EPYC 7F52 16-Core Processor
Cores                 | 32                             
Physical Cores        | 16                             
System Product        | Super Server                   
Physical Memory (GiB) | 220.07                         

### Pool Information ###
Field      | Value   
-----------+---------
Name       | ice     
Path       | /mnt/ice
Status     | ONLINE  
VDEV Count | 5       
Disk Count | 35      

VDEV Name  | Type           | Disk Count
-----------+----------------+---------------
raidz2-0    | RAIDZ2         | 7
raidz2-1    | RAIDZ2         | 7
raidz2-2    | RAIDZ2         | 7
raidz2-3    | RAIDZ2         | 7
raidz2-4    | RAIDZ2         | 7

### Disk Information ###
Field    | Value              
---------+--------------------
Name     | nvme0n1            
Model    | INTEL SSDPEK1A058GA
Serial   | BTOC14120Y1T058A   
ZFS GUID | None               
Pool     | N/A                
---------+--------------------
---------+--------------------
Name     | sdn                
Model    | HUS728T8TAL4204    
Serial   | VAHE4AJL           
ZFS GUID | 11464489017973229028
Pool     | ice                
---------+--------------------
---------+--------------------
Name     | sdo                
Model    | HUS728T8TAL4204    
Serial   | VAH751XL           
ZFS GUID | 12194731234089258709
Pool     | ice                
---------+--------------------
---------+--------------------
Name     | sdp                
Model    | HUS728T8TAL4204    
Serial   | VAHDEEEL           
ZFS GUID | 4070674839367337299
Pool     | ice                
---------+--------------------
---------+--------------------
Name     | sds                
Model    | HUS728T8TAL4204    
Serial   | VAHD99LL           
ZFS GUID | 663480060468884393 
Pool     | ice                
---------+--------------------
---------+--------------------
Name     | sdq                
Model    | HUS728T8TAL4204    
Serial   | VAHD4V0L           
ZFS GUID | 1890505091264157917
Pool     | ice                
---------+--------------------
---------+--------------------
Name     | sdr                
Model    | HUS728T8TAL4204    
Serial   | VAHDHLVL           
ZFS GUID | 2813416134184314367
Pool     | ice                
---------+--------------------
---------+--------------------
Name     | sdu                
Model    | HUS728T8TAL4204    
Serial   | VAH7T9BL           
ZFS GUID | 241834966907461809 
Pool     | ice                
---------+--------------------
---------+--------------------
Name     | sdx                
Model    | HUS728T8TAL4204    
Serial   | VAHD4ZUL           
ZFS GUID | 2629839678881986450
Pool     | ice                
---------+--------------------
---------+--------------------
Name     | sdt                
Model    | HUS728T8TAL4204    
Serial   | VAHDXDVL           
ZFS GUID | 12468174715504800729
Pool     | ice                
---------+--------------------
---------+--------------------
Name     | sdv                
Model    | HUS728T8TAL4204    
Serial   | VAGU6KLL           
ZFS GUID | 8435778198864465328
Pool     | ice                
---------+--------------------
---------+--------------------
Name     | sdw                
Model    | HUS728T8TAL4204    
Serial   | VAHAHSEL           
ZFS GUID | 6248787858642409255
Pool     | ice                
---------+--------------------
---------+--------------------
Name     | sdm                
Model    | HUS728T8TAL4204    
Serial   | VAHD4XTL           
ZFS GUID | 6447577595542961760
Pool     | ice                
---------+--------------------
---------+--------------------
Name     | sda                
Model    | HUS728T8TAL4204    
Serial   | VAHD406L           
ZFS GUID | 17233219398105449109
Pool     | ice                
---------+--------------------
---------+--------------------
Name     | sdb                
Model    | HUS728T8TAL4204    
Serial   | VAHEE12L           
ZFS GUID | 14718135334986108667
Pool     | ice                
---------+--------------------
---------+--------------------
Name     | sdc                
Model    | HUS728T8TAL4204    
Serial   | VAHDPGUL           
ZFS GUID | 6453720879157404243
Pool     | ice                
---------+--------------------
---------+--------------------
Name     | sdd                
Model    | HUS728T8TAL4204    
Serial   | VAH7XX5L           
ZFS GUID | 2415210037473635969
Pool     | ice                
---------+--------------------
---------+--------------------
Name     | sde                
Model    | HUS728T8TAL4204    
Serial   | VAHD06XL           
ZFS GUID | 7980293907302437342
Pool     | ice                
---------+--------------------
---------+--------------------
Name     | sdf                
Model    | HUS728T8TAL4204    
Serial   | VAH5W6PL           
ZFS GUID | 2650944322410844617
Pool     | ice                
---------+--------------------
---------+--------------------
Name     | sdh                
Model    | HUS728T8TAL4204    
Serial   | VAHDPS6L           
ZFS GUID | 5227492984876952151
Pool     | ice                
---------+--------------------
---------+--------------------
Name     | sdg                
Model    | HUS728T8TAL4204    
Serial   | VAHDRZEL           
ZFS GUID | 8709587202117841210
Pool     | ice                
---------+--------------------
---------+--------------------
Name     | sdi                
Model    | HUS728T8TAL4204    
Serial   | VAHDX95L           
ZFS GUID | 13388807557241155624
Pool     | ice                
---------+--------------------
---------+--------------------
Name     | sdj                
Model    | HUS728T8TAL4204    
Serial   | VAGEAVDL           
ZFS GUID | 4320819603845537000
Pool     | ice                
---------+--------------------
---------+--------------------
Name     | sdk                
Model    | HUS728T8TAL4204    
Serial   | VAHE1J1L           
ZFS GUID | 16530722200458359384
Pool     | ice                
---------+--------------------
---------+--------------------
Name     | sdl                
Model    | HUS728T8TAL4204    
Serial   | VAHDRYYL           
ZFS GUID | 9383799614074970413
Pool     | ice                
---------+--------------------
---------+--------------------
Name     | sdy                
Model    | HUH721010AL42C0    
Serial   | 2TGU89UD           
ZFS GUID | 2360678580120608870
Pool     | N/A                
---------+--------------------
---------+--------------------
Name     | sdz                
Model    | HUS728T8TAL4204    
Serial   | VAHE4BDL           
ZFS GUID | 12575810268036164475
Pool     | ice                
---------+--------------------
---------+--------------------
Name     | sdaa               
Model    | HUS728T8TAL4204    
Serial   | VAH7B0EL           
ZFS GUID | 3357271669658868424
Pool     | ice                
---------+--------------------
---------+--------------------
Name     | sdab               
Model    | HUS728T8TAL4204    
Serial   | VAHD4UXL           
ZFS GUID | 12084474217870916236
Pool     | ice                
---------+--------------------
---------+--------------------
Name     | sdac               
Model    | HUS728T8TAL4204    
Serial   | VAHE4AEL           
ZFS GUID | 12420098536708636925
Pool     | ice                
---------+--------------------
---------+--------------------
Name     | sdad               
Model    | HUS728T8TAL4204    
Serial   | VAHE35SL           
ZFS GUID | 15641419920947187991
Pool     | ice                
---------+--------------------
---------+--------------------
Name     | sdae               
Model    | HUS728T8TAL4204    
Serial   | VAH73TVL           
ZFS GUID | 2321010819975352589
Pool     | ice                
---------+--------------------
---------+--------------------
Name     | sdaf               
Model    | HUS728T8TAL4204    
Serial   | VAH0LL4L           
ZFS GUID | 7064277241025105086
Pool     | ice                
---------+--------------------
---------+--------------------
Name     | sdag               
Model    | HUS728T8TAL4204    
Serial   | VAHBHYGL           
ZFS GUID | 9631990446359566766
Pool     | ice                
---------+--------------------
---------+--------------------
Name     | sdah               
Model    | HUS728T8TAL4204    
Serial   | VAHE7BGL           
ZFS GUID | 10666041267281724571
Pool     | ice                
---------+--------------------
---------+--------------------
Name     | sdai               
Model    | HUS728T8TAL4204    
Serial   | VAH4T4TL           
ZFS GUID | 15395414914633738779
Pool     | ice                
---------+--------------------
---------+--------------------
Name     | sdaj               
Model    | HUS728T8TAL4204    
Serial   | VAHDBDXL           
ZFS GUID | 480631239828802416 
Pool     | ice                
---------+--------------------
---------+--------------------

###################################
#                                 #
#       DD Benchmark Starting     #
#                                 #
###################################
Using 32 threads for the benchmark.


Creating test dataset for pool: ice

Running benchmarks for pool: ice
Running DD write benchmark with 1 threads...
Run 1 write speed: 326.08 MB/s
Run 2 write speed: 323.38 MB/s
Run 3 write speed: 324.94 MB/s
Run 4 write speed: 322.05 MB/s
Average write speed: 324.11 MB/s
Running DD read benchmark with 1 threads...
Run 1 read speed: 6698.41 MB/s
Run 2 read speed: 6654.68 MB/s
Run 3 read speed: 6444.83 MB/s
Run 4 read speed: 6570.46 MB/s
Average read speed: 6592.09 MB/s
Running DD write benchmark with 8 threads...
Run 1 write speed: 1919.81 MB/s
Run 2 write speed: 1920.72 MB/s
Run 3 write speed: 1937.88 MB/s
Run 4 write speed: 1983.57 MB/s
Average write speed: 1940.50 MB/s
Running DD read benchmark with 8 threads...
Run 1 read speed: 25551.28 MB/s
Run 2 read speed: 27919.16 MB/s
Run 3 read speed: 28371.19 MB/s
Run 4 read speed: 28435.76 MB/s
Average read speed: 27569.35 MB/s
Running DD write benchmark with 16 threads...
Run 1 write speed: 1980.08 MB/s
Run 2 write speed: 1850.96 MB/s
Run 3 write speed: 1889.51 MB/s
Run 4 write speed: 1864.92 MB/s
Average write speed: 1896.37 MB/s
Running DD read benchmark with 16 threads...
Run 1 read speed: 2493.91 MB/s
Run 2 read speed: 2541.70 MB/s
Run 3 read speed: 2613.01 MB/s
Run 4 read speed: 4549.56 MB/s
Average read speed: 3049.55 MB/s
Running DD write benchmark with 32 threads...
Run 1 write speed: 1805.01 MB/s
Run 2 write speed: 1657.31 MB/s
Run 3 write speed: 1644.15 MB/s
Run 4 write speed: 1650.92 MB/s
Average write speed: 1689.35 MB/s
Running DD read benchmark with 32 threads...
Run 1 read speed: 2148.35 MB/s
Run 2 read speed: 2136.61 MB/s
Run 3 read speed: 2197.37 MB/s
Run 4 read speed: 2197.61 MB/s
Average read speed: 2169.98 MB/s

###################################
#         DD Benchmark Results for Pool: ice    #
###################################
#    Threads: 1    #
#    1M Seq Write Run 1: 326.08 MB/s     #
#    1M Seq Write Run 2: 323.38 MB/s     #
#    1M Seq Write Run 3: 324.94 MB/s     #
#    1M Seq Write Run 4: 322.05 MB/s     #
#    1M Seq Write Avg: 324.11 MB/s #
#    1M Seq Read Run 1: 6698.41 MB/s      #
#    1M Seq Read Run 2: 6654.68 MB/s      #
#    1M Seq Read Run 3: 6444.83 MB/s      #
#    1M Seq Read Run 4: 6570.46 MB/s      #
#    1M Seq Read Avg: 6592.09 MB/s  #
###################################
#    Threads: 8    #
#    1M Seq Write Run 1: 1919.81 MB/s     #
#    1M Seq Write Run 2: 1920.72 MB/s     #
#    1M Seq Write Run 3: 1937.88 MB/s     #
#    1M Seq Write Run 4: 1983.57 MB/s     #
#    1M Seq Write Avg: 1940.50 MB/s #
#    1M Seq Read Run 1: 25551.28 MB/s      #
#    1M Seq Read Run 2: 27919.16 MB/s      #
#    1M Seq Read Run 3: 28371.19 MB/s      #
#    1M Seq Read Run 4: 28435.76 MB/s      #
#    1M Seq Read Avg: 27569.35 MB/s  #
###################################
#    Threads: 16    #
#    1M Seq Write Run 1: 1980.08 MB/s     #
#    1M Seq Write Run 2: 1850.96 MB/s     #
#    1M Seq Write Run 3: 1889.51 MB/s     #
#    1M Seq Write Run 4: 1864.92 MB/s     #
#    1M Seq Write Avg: 1896.37 MB/s #
#    1M Seq Read Run 1: 2493.91 MB/s      #
#    1M Seq Read Run 2: 2541.70 MB/s      #
#    1M Seq Read Run 3: 2613.01 MB/s      #
#    1M Seq Read Run 4: 4549.56 MB/s      #
#    1M Seq Read Avg: 3049.55 MB/s  #
###################################
#    Threads: 32    #
#    1M Seq Write Run 1: 1805.01 MB/s     #
#    1M Seq Write Run 2: 1657.31 MB/s     #
#    1M Seq Write Run 3: 1644.15 MB/s     #
#    1M Seq Write Run 4: 1650.92 MB/s     #
#    1M Seq Write Avg: 1689.35 MB/s #
#    1M Seq Read Run 1: 2148.35 MB/s      #
#    1M Seq Read Run 2: 2136.61 MB/s      #
#    1M Seq Read Run 3: 2197.37 MB/s      #
#    1M Seq Read Run 4: 2197.61 MB/s      #
#    1M Seq Read Avg: 2169.98 MB/s  #
###################################
Cleaning up test files...
Running disk read benchmark...
This benchmark tests the 4K sequential read performance of each disk in the system using dd. It is run 4 times for each disk and averaged.
This benchmark is useful for comparing disks within the same pool, to identify potential issues and bottlenecks.
Testing disk: nvme0n1
Testing disk: nvme0n1
Testing disk: nvme0n1
Testing disk: nvme0n1
Testing disk: sdn
Testing disk: sdn
Testing disk: sdn
Testing disk: sdn
Testing disk: sdo
Testing disk: sdo
Testing disk: sdo
Testing disk: sdo
Testing disk: sdp
Testing disk: sdp
Testing disk: sdp
Testing disk: sdp
Testing disk: sds
Testing disk: sds
Testing disk: sds
Testing disk: sds
Testing disk: sdq
Testing disk: sdq
Testing disk: sdq
Testing disk: sdq
Testing disk: sdr
Testing disk: sdr
Testing disk: sdr
Testing disk: sdr
Testing disk: sdu
Testing disk: sdu
Testing disk: sdu
Testing disk: sdu
Testing disk: sdx
Testing disk: sdx
Testing disk: sdx
Testing disk: sdx
Testing disk: sdt
Testing disk: sdt
Testing disk: sdt
Testing disk: sdt
Testing disk: sdv
Testing disk: sdv
Testing disk: sdv
Testing disk: sdv
Testing disk: sdw
Testing disk: sdw
Testing disk: sdw
Testing disk: sdw
Testing disk: sdm
Testing disk: sdm
Testing disk: sdm
Testing disk: sdm
Testing disk: sda
Testing disk: sda
Testing disk: sda
Testing disk: sda
Testing disk: sdb
Testing disk: sdb
Testing disk: sdb
Testing disk: sdb
Testing disk: sdc
Testing disk: sdc
Testing disk: sdc
Testing disk: sdc
Testing disk: sdd
Testing disk: sdd
Testing disk: sdd
Testing disk: sdd
Testing disk: sde
Testing disk: sde
Testing disk: sde
Testing disk: sde
Testing disk: sdf
Testing disk: sdf
Testing disk: sdf
Testing disk: sdf
Testing disk: sdh
Testing disk: sdh
Testing disk: sdh
Testing disk: sdh
Testing disk: sdg
Testing disk: sdg
Testing disk: sdg
Testing disk: sdg
Testing disk: sdi
Testing disk: sdi
Testing disk: sdi
Testing disk: sdi
Testing disk: sdj
Testing disk: sdj
Testing disk: sdj
Testing disk: sdj
Testing disk: sdk
Testing disk: sdk
Testing disk: sdk
Testing disk: sdk
Testing disk: sdl
Testing disk: sdl
Testing disk: sdl
Testing disk: sdl
Testing disk: sdy
Testing disk: sdy
Testing disk: sdy
Testing disk: sdy
Testing disk: sdz
Testing disk: sdz
Testing disk: sdz
Testing disk: sdz
Testing disk: sdaa
Testing disk: sdaa
Testing disk: sdaa
Testing disk: sdaa
Testing disk: sdab
Testing disk: sdab
Testing disk: sdab
Testing disk: sdab
Testing disk: sdac
Testing disk: sdac
Testing disk: sdac
Testing disk: sdac
Testing disk: sdad
Testing disk: sdad
Testing disk: sdad
Testing disk: sdad
Testing disk: sdae
Testing disk: sdae
Testing disk: sdae
Testing disk: sdae
Testing disk: sdaf
Testing disk: sdaf
Testing disk: sdaf
Testing disk: sdaf
Testing disk: sdag
Testing disk: sdag
Testing disk: sdag
Testing disk: sdag
Testing disk: sdah
Testing disk: sdah
Testing disk: sdah
Testing disk: sdah
Testing disk: sdai
Testing disk: sdai
Testing disk: sdai
Testing disk: sdai
Testing disk: sdaj
Testing disk: sdaj
Testing disk: sdaj
Testing disk: sdaj

###################################
#         Disk Read Benchmark Results   #
###################################
#    Disk: nvme0n1    #
#    Run 1: 1515.81 MB/s     #
#    Run 2: 1340.11 MB/s     #
#    Run 3: 1351.25 MB/s     #
#    Run 4: 1439.81 MB/s     #
#    Average: 1411.74 MB/s     #
#    Disk: sdn    #
#    Run 1: 234.90 MB/s     #
#    Run 2: 232.80 MB/s     #
#    Run 3: 2916.88 MB/s     #
#    Run 4: 3096.13 MB/s     #
#    Average: 1620.18 MB/s     #
#    Disk: sdo    #
#    Run 1: 229.76 MB/s     #
#    Run 2: 225.23 MB/s     #
#    Run 3: 3071.81 MB/s     #
#    Run 4: 3096.38 MB/s     #
#    Average: 1655.80 MB/s     #
#    Disk: sdp    #
#    Run 1: 222.71 MB/s     #
#    Run 2: 429.55 MB/s     #
#    Run 3: 3106.89 MB/s     #
#    Run 4: 3083.21 MB/s     #
#    Average: 1710.59 MB/s     #
#    Disk: sds    #
#    Run 1: 230.78 MB/s     #
#    Run 2: 224.99 MB/s     #
#    Run 3: 3090.27 MB/s     #
#    Run 4: 3076.32 MB/s     #
#    Average: 1655.59 MB/s     #
#    Disk: sdq    #
#    Run 1: 221.95 MB/s     #
#    Run 2: 2616.89 MB/s     #
#    Run 3: 3090.24 MB/s     #
#    Run 4: 3080.20 MB/s     #
#    Average: 2252.32 MB/s     #
#    Disk: sdr    #
#    Run 1: 230.52 MB/s     #
#    Run 2: 230.57 MB/s     #
#    Run 3: 3077.81 MB/s     #
#    Run 4: 3069.08 MB/s     #
#    Average: 1652.00 MB/s     #
#    Disk: sdu    #
#    Run 1: 225.57 MB/s     #
#    Run 2: 2589.45 MB/s     #
#    Run 3: 3069.44 MB/s     #
#    Run 4: 3081.54 MB/s     #
#    Average: 2241.50 MB/s     #
#    Disk: sdx    #
#    Run 1: 231.11 MB/s     #
#    Run 2: 235.51 MB/s     #
#    Run 3: 3040.96 MB/s     #
#    Run 4: 3075.13 MB/s     #
#    Average: 1645.68 MB/s     #
#    Disk: sdt    #
#    Run 1: 236.26 MB/s     #
#    Run 2: 2602.36 MB/s     #
#    Run 3: 3066.71 MB/s     #
#    Run 4: 3089.58 MB/s     #
#    Average: 2248.73 MB/s     #
#    Disk: sdv    #
#    Run 1: 244.73 MB/s     #
#    Run 2: 247.89 MB/s     #
#    Run 3: 2818.79 MB/s     #
#    Run 4: 3044.16 MB/s     #
#    Average: 1588.89 MB/s     #
#    Disk: sdw    #
#    Run 1: 225.35 MB/s     #
#    Run 2: 220.10 MB/s     #
#    Run 3: 3097.11 MB/s     #
#    Run 4: 3083.58 MB/s     #
#    Average: 1656.54 MB/s     #
#    Disk: sdm    #
#    Run 1: 235.51 MB/s     #
#    Run 2: 2600.26 MB/s     #
#    Run 3: 3077.70 MB/s     #
#    Run 4: 3096.77 MB/s     #
#    Average: 2252.56 MB/s     #
#    Disk: sda    #
#    Run 1: 223.70 MB/s     #
#    Run 2: 225.74 MB/s     #
#    Run 3: 2843.69 MB/s     #
#    Run 4: 3044.02 MB/s     #
#    Average: 1584.29 MB/s     #
#    Disk: sdb    #
#    Run 1: 226.62 MB/s     #
#    Run 2: 225.88 MB/s     #
#    Run 3: 3049.45 MB/s     #
#    Run 4: 3064.97 MB/s     #
#    Average: 1641.73 MB/s     #
#    Disk: sdc    #
#    Run 1: 232.36 MB/s     #
#    Run 2: 232.86 MB/s     #
#    Run 3: 3021.33 MB/s     #
#    Run 4: 3064.97 MB/s     #
#    Average: 1637.88 MB/s     #
#    Disk: sdd    #
#    Run 1: 235.27 MB/s     #
#    Run 2: 236.96 MB/s     #
#    Run 3: 3030.66 MB/s     #
#    Run 4: 3056.96 MB/s     #
#    Average: 1639.96 MB/s     #
#    Disk: sde    #
#    Run 1: 232.43 MB/s     #
#    Run 2: 235.92 MB/s     #
#    Run 3: 2993.68 MB/s     #
#    Run 4: 3040.23 MB/s     #
#    Average: 1625.56 MB/s     #
#    Disk: sdf    #
#    Run 1: 236.74 MB/s     #
#    Run 2: 239.68 MB/s     #
#    Run 3: 2961.91 MB/s     #
#    Run 4: 3038.94 MB/s     #
#    Average: 1619.32 MB/s     #
#    Disk: sdh    #
#    Run 1: 228.78 MB/s     #
#    Run 2: 229.14 MB/s     #
#    Run 3: 2913.63 MB/s     #
#    Run 4: 3014.87 MB/s     #
#    Average: 1596.61 MB/s     #
#    Disk: sdg    #
#    Run 1: 214.93 MB/s     #
#    Run 2: 188.62 MB/s     #
#    Run 3: 2281.16 MB/s     #
#    Run 4: 3028.12 MB/s     #
#    Average: 1428.21 MB/s     #
#    Disk: sdi    #
#    Run 1: 183.42 MB/s     #
#    Run 2: 187.41 MB/s     #
#    Run 3: 495.49 MB/s     #
#    Run 4: 3029.64 MB/s     #
#    Average: 973.99 MB/s     #
#    Disk: sdj    #
#    Run 1: 187.01 MB/s     #
#    Run 2: 248.63 MB/s     #
#    Run 3: 529.65 MB/s     #
#    Run 4: 3024.64 MB/s     #
#    Average: 997.48 MB/s     #
#    Disk: sdk    #
#    Run 1: 238.47 MB/s     #
#    Run 2: 240.28 MB/s     #
#    Run 3: 302.07 MB/s     #
#    Run 4: 2849.10 MB/s     #
#    Average: 907.48 MB/s     #
#    Disk: sdl    #
#    Run 1: 238.28 MB/s     #
#    Run 2: 238.67 MB/s     #
#    Run 3: 276.98 MB/s     #
#    Run 4: 2833.97 MB/s     #
#    Average: 896.97 MB/s     #
#    Disk: sdy    #
#    Run 1: 237.97 MB/s     #
#    Run 2: 237.64 MB/s     #
#    Run 3: 238.21 MB/s     #
#    Run 4: 238.31 MB/s     #
#    Average: 238.03 MB/s     #
#    Disk: sdz    #
#    Run 1: 236.30 MB/s     #
#    Run 2: 237.11 MB/s     #
#    Run 3: 259.58 MB/s     #
#    Run 4: 1494.90 MB/s     #
#    Average: 556.97 MB/s     #
#    Disk: sdaa    #
#    Run 1: 233.77 MB/s     #
#    Run 2: 236.46 MB/s     #
#    Run 3: 247.06 MB/s     #
#    Run 4: 250.19 MB/s     #
#    Average: 241.87 MB/s     #
#    Disk: sdab    #
#    Run 1: 234.54 MB/s     #
#    Run 2: 233.06 MB/s     #
#    Run 3: 371.31 MB/s     #
#    Run 4: 2891.81 MB/s     #
#    Average: 932.68 MB/s     #
#    Disk: sdac    #
#    Run 1: 234.53 MB/s     #
#    Run 2: 234.97 MB/s     #
#    Run 3: 234.95 MB/s     #
#    Run 4: 235.07 MB/s     #
#    Average: 234.88 MB/s     #
#    Disk: sdad    #
#    Run 1: 236.75 MB/s     #
#    Run 2: 237.15 MB/s     #
#    Run 3: 2774.73 MB/s     #
#    Run 4: 3006.34 MB/s     #
#    Average: 1563.74 MB/s     #
#    Disk: sdae    #
#    Run 1: 238.56 MB/s     #
#    Run 2: 238.23 MB/s     #
#    Run 3: 239.97 MB/s     #
#    Run 4: 240.13 MB/s     #
#    Average: 239.22 MB/s     #
#    Disk: sdaf    #
#    Run 1: 251.04 MB/s     #
#    Run 2: 254.38 MB/s     #
#    Run 3: 1693.78 MB/s     #
#    Run 4: 3005.52 MB/s     #
#    Average: 1301.18 MB/s     #
#    Disk: sdag    #
#    Run 1: 242.64 MB/s     #
#    Run 2: 243.03 MB/s     #
#    Run 3: 243.35 MB/s     #
#    Run 4: 234.01 MB/s     #
#    Average: 240.76 MB/s     #
#    Disk: sdah    #
#    Run 1: 241.66 MB/s     #
#    Run 2: 243.91 MB/s     #
#    Run 3: 729.58 MB/s     #
#    Run 4: 2211.85 MB/s     #
#    Average: 856.75 MB/s     #
#    Disk: sdai    #
#    Run 1: 236.64 MB/s     #
#    Run 2: 235.42 MB/s     #
#    Run 3: 234.79 MB/s     #
#    Run 4: 232.29 MB/s     #
#    Average: 234.79 MB/s     #
#    Disk: sdaj    #
#    Run 1: 230.71 MB/s     #
#    Run 2: 231.02 MB/s     #
#    Run 3: 232.86 MB/s     #
#    Run 4: 234.65 MB/s     #
#    Average: 232.31 MB/s     #
###################################

Total benchmark time: 372.75 minutes

Thanks Nick, this is a great idea and works out of the box. A couple of items for me:

  1. This does not recognize that a pool is read-only (my secondary NAS is holding my primary’s replicated datasets) and attempts to create the tn-bench dataset and run tests against it

  2. The read speed test most definitely can be inaccurate, especially with 1 and 4 threads, since if you have enough memory, all of the data written is cached already.

###################################
#         DD Benchmark Results for Pool: wtrpool    #
###################################
#    Threads: 1    #
#    1M Seq Write Run 1: 552.61 MB/s     #
#    1M Seq Write Run 2: 546.81 MB/s     #
#    1M Seq Write Run 3: 551.42 MB/s     #
#    1M Seq Write Run 4: 549.24 MB/s     #
#    1M Seq Write Avg: 550.02 MB/s #
#    1M Seq Read Run 1: 9577.44 MB/s      #
#    1M Seq Read Run 2: 10809.15 MB/s      #
#    1M Seq Read Run 3: 10979.96 MB/s      #
#    1M Seq Read Run 4: 9556.07 MB/s      #
#    1M Seq Read Avg: 10230.65 MB/s  #
###################################
#    Threads: 8    #
#    1M Seq Write Run 1: 755.58 MB/s     #
#    1M Seq Write Run 2: 693.80 MB/s     #
#    1M Seq Write Run 3: 666.78 MB/s     #
#    1M Seq Write Run 4: 763.02 MB/s     #
#    1M Seq Write Avg: 719.80 MB/s #
#    1M Seq Read Run 1: 650.08 MB/s      #
#    1M Seq Read Run 2: 622.37 MB/s      #
#    1M Seq Read Run 3: 660.73 MB/s      #
#    1M Seq Read Run 4: 666.42 MB/s      #
#    1M Seq Read Avg: 649.90 MB/s  #
###################################

The only other item is maybe put a prompt/warning that the user has to confirm “This will significantly impact performance of your system while running”. I was so excited to run this I made my ChannelsDVR recordings unwatchable when this was running. :smiley:


Additionally, TN-Bench does not correctly identify the boot-pool (this might be as designed) as it shows both drives as N/A for pool:

---------+---------------------------
Name     | sdi
Model    | CT1000BX500SSD1
Serial   | 2418E8AC3C51
ZFS GUID | None
Pool     | N/A
---------+---------------------------

sudo zpool status -v
  pool: boot-pool
 state: ONLINE
config:

        NAME           STATE     READ WRITE CKSUM
        boot-pool      ONLINE       0     0     0
          mirror-0     ONLINE       0     0     0
            sdi3       ONLINE       0     0     0
            nvme0n1p3  ONLINE       0     0     0

@Theo Thank you for the feedback. I’ve made a few changes based on your testing as well as some of my own over the past couple of days.

I’ve updated the dd command in the pool benchmark to use 20 GiB files instead of 10 GiB files, and to only run it twice instead of 4 times. This seems to have increased consistency in my testing so far.

I’ve updated the dd command for the disk benchmark to read as much data as there is RAM, or, if the size of RAM exceeds the size of the disk, to just read the whole disk. This seems to have drastically changed the behavior and produced much better data.

I both expect and want ARC to play a role by default, since in real-world use it will actually be used. However, I want to minimize its impact for the sake of more actionable numbers, and I think these changes do that. I did, however, make several changes to the opening message that I think you will appreciate.

root@prod[~]# git clone https://github.com/nickf1227/TN-Bench.git && cd TN-Bench && python3 truenas-bench.py
Cloning into 'TN-Bench'...
remote: Enumerating objects: 85, done.
remote: Counting objects: 100% (85/85), done.
remote: Compressing objects: 100% (85/85), done.
remote: Total 85 (delta 35), reused 0 (delta 0), pack-reused 0 (from 0)
Receiving objects: 100% (85/85), 49.41 KiB | 4.94 MiB/s, done.
Resolving deltas: 100% (35/35), done.

###################################
#                                 #
#          TN-Bench v1.05         #
#          MONOLITHIC.            #
#                                 #
###################################
TN-Bench is an OpenSource Software Script that uses standard tools to Benchmark your System and collect various statistical information via the TrueNAS API.

TN-Bench will make a Dataset in each of your pools for the purposes of this testing that will consume 20 GiB of space for every thread in your system during its run.

After which time we will prompt you to delete the dataset which was created.
###################################

WARNING: This test will make your system EXTREMELY slow during its run. It is recommended to run this test when no other workloads are running.

NOTE: ZFS ARC will also be used and will impact your results. This may be undesirable in some circumstances, and the zfs_arc_max can be set to 1 (which means 1 byte) to prevent ARC from caching.

NOTE: Setting it back to 0 will restore the default behavior, but the system will need to be restarted!
###################################

This in particular is expected behavior based on the output of midclt call pool.query. The Boot disk is, however, tested in the individual disk benchmark which should be sufficient unless you disagree. I’ve added a note about this in the script itself.

    print("\n### Disk Information ###")
    print("###################################")
    print("\nNOTE: The TrueNAS API will return N/A for the Pool for the boot device(s) as well as any disk is not a member of a pool.")
    print("###################################")

It doesn’t have logic to check for this. How did this present for you? Did it run anyway for the pools that are not read-only?

I am running one more, larger, system now to sanity-check the efficacy of the changes made.

###################################
#                                 #
#          TN-Bench v1.05         #
#          MONOLITHIC.            #
#                                 #
###################################
TN-Bench is an OpenSource Software Script that uses standard tools to Benchmark your System and collect various statistical information via the TrueNAS API.

TN-Bench will make a Dataset in each of your pools for the purposes of this testing that will consume 20 GiB of space for every thread in your system during its run.

After which time we will prompt you to delete the dataset which was created.
###################################

WARNING: This test will make your system EXTREMELY slow during its run. It is recommended to run this test when no other workloads are running.

NOTE: ZFS ARC will also be used and will impact your results. This may be undesirable in some circumstances, and the zfs_arc_max can be set to 1 (which means 1 byte) to prevent ARC from caching.

NOTE: Setting it back to 0 will restore the default behavior, but the system will need to be restarted!
###################################

Would you like to continue? (yes/no): yes

### System Information ###
Field                 | Value                                       
----------------------+---------------------------------------------
Version               | TrueNAS-SCALE-25.04.0-MASTER-20250110-005622
Load Average (1m)     | 0.06689453125                               
Load Average (5m)     | 0.142578125                                 
Load Average (15m)    | 0.15283203125                               
Model                 | AMD Ryzen 5 5600G with Radeon Graphics      
Cores                 | 12                                          
Physical Cores        | 6                                           
System Product        | X570 AORUS ELITE                            
Physical Memory (GiB) | 30.75                                       

### Pool Information ###
Field      | Value       
-----------+-------------
Name       | inferno     
Path       | /mnt/inferno
Status     | ONLINE      
VDEV Count | 1           
Disk Count | 5           

VDEV Name  | Type           | Disk Count
-----------+----------------+---------------
raidz1-0    | RAIDZ1         | 5

### Disk Information ###

NOTE: The TrueNAS API will return N/A for the Pool for the boot device(s) as well as the disk name if the disk is not a member of a pool.
Field      | Value                     
-----------+---------------------------
Name       | nvme0n1                   
Model      | INTEL SSDPE21D960GA       
Serial     | PHM29081000X960CGN        
ZFS GUID   | 212601209224793468        
Pool       | inferno                   
Size (GiB) | 894.25                    
-----------+---------------------------
-----------+---------------------------
Name       | nvme3n1                   
Model      | INTEL SSDPE21D960GA       
Serial     | PHM2913000QM960CGN        
ZFS GUID   | 16221756077833732578      
Pool       | inferno                   
Size (GiB) | 894.25                    
-----------+---------------------------
-----------+---------------------------
Name       | nvme5n1                   
Model      | INTEL SSDPE21D960GA       
Serial     | PHM2913000YF960CGN        
ZFS GUID   | 8625327235819249102       
Pool       | inferno                   
Size (GiB) | 894.25                    
-----------+---------------------------
-----------+---------------------------
Name       | nvme2n1                   
Model      | INTEL SSDPE21D960GA       
Serial     | PHM2913000DC960CGN        
ZFS GUID   | 11750420763846093416      
Pool       | inferno                   
Size (GiB) | 894.25                    
-----------+---------------------------
-----------+---------------------------
Name       | nvme4n1                   
Model      | SAMSUNG MZVL2512HCJQ-00BL7
Serial     | S64KNX2T216015            
ZFS GUID   | None                      
Pool       | N/A                       
Size (GiB) | 476.94                    
-----------+---------------------------
-----------+---------------------------
Name       | nvme1n1                   
Model      | INTEL SSDPE21D960GA       
Serial     | PHM2908101QG960CGN        
ZFS GUID   | 10743034860780890768      
Pool       | inferno                   
Size (GiB) | 894.25                    
-----------+---------------------------
-----------+---------------------------

###################################
#                                 #
#       DD Benchmark Starting     #
#                                 #
###################################
Using 12 threads for the benchmark.


Creating test dataset for pool: inferno

Running benchmarks for pool: inferno
Running DD write benchmark with 1 threads...
Run 1 write speed: 411.17 MB/s
Run 2 write speed: 412.88 MB/s
Average write speed: 412.03 MB/s
Running DD read benchmark with 1 threads...
Run 1 read speed: 6762.11 MB/s
Run 2 read speed: 5073.43 MB/s
Average read speed: 5917.77 MB/s
Running DD write benchmark with 3 threads...
Run 1 write speed: 1195.91 MB/s
Run 2 write speed: 1193.22 MB/s
Average write speed: 1194.56 MB/s
Running DD read benchmark with 3 threads...
Run 1 read speed: 4146.25 MB/s
Run 2 read speed: 4161.19 MB/s
Average read speed: 4153.72 MB/s
Running DD write benchmark with 6 threads...
Run 1 write speed: 2060.54 MB/s
Run 2 write speed: 2058.62 MB/s
Average write speed: 2059.58 MB/s
Running DD read benchmark with 6 threads...
Run 1 read speed: 4209.25 MB/s
Run 2 read speed: 4212.84 MB/s
Average read speed: 4211.05 MB/s
Running DD write benchmark with 12 threads...
Run 1 write speed: 2353.74 MB/s
Run 2 write speed: 2184.07 MB/s
Average write speed: 2268.91 MB/s
Running DD read benchmark with 12 threads...
Run 1 read speed: 4191.27 MB/s
Run 2 read speed: 4199.91 MB/s
Average read speed: 4195.59 MB/s

###################################
#         DD Benchmark Results for Pool: inferno    #
###################################
#    Threads: 1    #
#    1M Seq Write Run 1: 411.17 MB/s     #
#    1M Seq Write Run 2: 412.88 MB/s     #
#    1M Seq Write Avg: 412.03 MB/s #
#    1M Seq Read Run 1: 6762.11 MB/s      #
#    1M Seq Read Run 2: 5073.43 MB/s      #
#    1M Seq Read Avg: 5917.77 MB/s  #
###################################
#    Threads: 3    #
#    1M Seq Write Run 1: 1195.91 MB/s     #
#    1M Seq Write Run 2: 1193.22 MB/s     #
#    1M Seq Write Avg: 1194.56 MB/s #
#    1M Seq Read Run 1: 4146.25 MB/s      #
#    1M Seq Read Run 2: 4161.19 MB/s      #
#    1M Seq Read Avg: 4153.72 MB/s  #
###################################
#    Threads: 6    #
#    1M Seq Write Run 1: 2060.54 MB/s     #
#    1M Seq Write Run 2: 2058.62 MB/s     #
#    1M Seq Write Avg: 2059.58 MB/s #
#    1M Seq Read Run 1: 4209.25 MB/s      #
#    1M Seq Read Run 2: 4212.84 MB/s      #
#    1M Seq Read Avg: 4211.05 MB/s  #
###################################
#    Threads: 12    #
#    1M Seq Write Run 1: 2353.74 MB/s     #
#    1M Seq Write Run 2: 2184.07 MB/s     #
#    1M Seq Write Avg: 2268.91 MB/s #
#    1M Seq Read Run 1: 4191.27 MB/s      #
#    1M Seq Read Run 2: 4199.91 MB/s      #
#    1M Seq Read Avg: 4195.59 MB/s  #
###################################
Cleaning up test files...
Running disk read benchmark...
This benchmark tests the 4K sequential read performance of each disk in the system using dd. It is run 2 times for each disk and averaged.
In order to work around ARC caching in systems with it still enabled, This benchmark reads data in the amount of total system RAM or the total size of the disk, whichever is smaller.
Testing disk: nvme0n1
Testing disk: nvme0n1
Testing disk: nvme3n1
Testing disk: nvme3n1
Testing disk: nvme5n1
Testing disk: nvme5n1
Testing disk: nvme2n1
Testing disk: nvme2n1
Testing disk: nvme4n1
Testing disk: nvme4n1
Testing disk: nvme1n1
Testing disk: nvme1n1

###################################
#         Disk Read Benchmark Results   #
###################################
#    Disk: nvme0n1    #
#    Run 1: 2032.08 MB/s     #
#    Run 2: 1825.83 MB/s     #
#    Average: 1928.95 MB/s     #
#    Disk: nvme3n1    #
#    Run 1: 1964.28 MB/s     #
#    Run 2: 1939.57 MB/s     #
#    Average: 1951.93 MB/s     #
#    Disk: nvme5n1    #
#    Run 1: 1908.79 MB/s     #
#    Run 2: 1948.96 MB/s     #
#    Average: 1928.88 MB/s     #
#    Disk: nvme2n1    #
#    Run 1: 1947.48 MB/s     #
#    Run 2: 1762.31 MB/s     #
#    Average: 1854.90 MB/s     #
#    Disk: nvme4n1    #
#    Run 1: 1829.80 MB/s     #
#    Run 2: 1787.41 MB/s     #
#    Average: 1808.60 MB/s     #
#    Disk: nvme1n1    #
#    Run 1: 1836.51 MB/s     #
#    Run 2: 1879.80 MB/s     #
#    Average: 1858.16 MB/s     #
###################################

Total benchmark time: 15.88 minutes

If you can, I’d love to see how it runs on your system now.

It seemed undeterred and acted like it was running tests on the read only pool. I checked to see if the tn-bench dataset was there and it was not. At that point I canned the process and proceeded to run it on my production NAS (and got in trouble with my wife :slight_smile: )

Nice changes in the 1.05 version (testing it now). For a future version, I think making max threads a configuration or command-line parameter would be smart. My backup system takes 3 1/2 years to do the 8-thread read/write test. (Okay, several hours.)

While I do plan on making things more configurable at some point, the default threading behavior was intentional.

As an example, I have an all-NVMe system with a large number of threads and a lot of RAM bandwidth (top example: 2x Intel(R) Xeon(R) Silver 4114 CPU @ 2.20GHz) and another all-NVMe system (bottom example: 1x AMD Ryzen 5 5600G with Radeon Graphics) with a lot less RAM and RAM bandwidth, but much faster cores.

###################################
#         DD Benchmark Results for Pool: fire    #
###################################
#    Threads: 1    #
#    1M Seq Write Run 1: 237.04 MB/s     #
#    1M Seq Write Run 2: 223.83 MB/s     #
#    1M Seq Write Avg: 230.43 MB/s #
#    1M Seq Read Run 1: 2756.02 MB/s      #
#    1M Seq Read Run 2: 2732.92 MB/s      #
#    1M Seq Read Avg: 2744.47 MB/s  #
###################################
#    Threads: 10    #
#    1M Seq Write Run 1: 2076.10 MB/s     #
#    1M Seq Write Run 2: 2092.43 MB/s     #
#    1M Seq Write Avg: 2084.26 MB/s #
#    1M Seq Read Run 1: 6059.59 MB/s      #
#    1M Seq Read Run 2: 6060.71 MB/s      #
#    1M Seq Read Avg: 6060.15 MB/s  #
###################################
#    Threads: 20    #
#    1M Seq Write Run 1: 2925.10 MB/s     #
#    1M Seq Write Run 2: 2871.85 MB/s     #
#    1M Seq Write Avg: 2898.48 MB/s #
#    1M Seq Read Run 1: 6406.70 MB/s      #
#    1M Seq Read Run 2: 6442.41 MB/s      #
#    1M Seq Read Avg: 6424.56 MB/s  #
###################################
#    Threads: 40    #
#    1M Seq Write Run 1: 2923.48 MB/s     #
#    1M Seq Write Run 2: 2969.82 MB/s     #
#    1M Seq Write Avg: 2946.65 MB/s #
#    1M Seq Read Run 1: 6514.30 MB/s      #
#    1M Seq Read Run 2: 6571.73 MB/s      #
#    1M Seq Read Avg: 6543.02 MB/s  #
###################################

###################################
#         DD Benchmark Results for Pool: inferno    #
###################################
#    Threads: 1    #
#    1M Seq Write Run 1: 411.17 MB/s     #
#    1M Seq Write Run 2: 412.88 MB/s     #
#    1M Seq Write Avg: 412.03 MB/s #
#    1M Seq Read Run 1: 6762.11 MB/s      #
#    1M Seq Read Run 2: 5073.43 MB/s      #
#    1M Seq Read Avg: 5917.77 MB/s  #
###################################
#    Threads: 3    #
#    1M Seq Write Run 1: 1195.91 MB/s     #
#    1M Seq Write Run 2: 1193.22 MB/s     #
#    1M Seq Write Avg: 1194.56 MB/s #
#    1M Seq Read Run 1: 4146.25 MB/s      #
#    1M Seq Read Run 2: 4161.19 MB/s      #
#    1M Seq Read Avg: 4153.72 MB/s  #
###################################
#    Threads: 6    #
#    1M Seq Write Run 1: 2060.54 MB/s     #
#    1M Seq Write Run 2: 2058.62 MB/s     #
#    1M Seq Write Avg: 2059.58 MB/s #
#    1M Seq Read Run 1: 4209.25 MB/s      #
#    1M Seq Read Run 2: 4212.84 MB/s      #
#    1M Seq Read Avg: 4211.05 MB/s  #
###################################
#    Threads: 12    #
#    1M Seq Write Run 1: 2353.74 MB/s     #
#    1M Seq Write Run 2: 2184.07 MB/s     #
#    1M Seq Write Avg: 2268.91 MB/s #
#    1M Seq Read Run 1: 4191.27 MB/s      #
#    1M Seq Read Run 2: 4199.91 MB/s      #
#    1M Seq Read Avg: 4195.59 MB/s  #
###################################

The pools are not apples-to-apples: different drives, different vdev topology. But what I found interesting is that testing at the various thread counts (1 thread, 1/4 of the threads in your system, 1/2 of the threads, and all of the threads) will help find where your bottleneck is.
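For reference, a small sketch of how that thread-count ladder appears to be derived; this is my reading of the outputs above (1/3/6/12, 1/8/16/32, 1/10/20/40), not a quote from the script.

    import multiprocessing

    # 1 thread, 1/4 of threads, 1/2 of threads, all threads
    n = multiprocessing.cpu_count()
    thread_counts = [1, max(1, n // 4), max(1, n // 2), n]
    print(thread_counts)  # e.g. [1, 3, 6, 12] on a 12-thread system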

Looking at Threads = 1 between those two all-NVMe systems, we can see that there is likely a pretty substantial impact from having higher CPU frequencies and higher IPC.
With only 120% as many disks (both RAIDZ1, 4 vs 5 disks), I am seeing a 140% increase in write performance and a 170% increase in read performance on the bottom system vs the top system.

Then, when the thread count kicks up, the results are flipped on their head because of the additional RAM capacity and RAM bandwidth. Comparing 40t vs 32t in those results shows the system with 120% as many disks (both RAIDZ1, 4 vs 5 disks) falling way behind: it is 61% as fast at writes and only 36% as fast for reads.

You can also see different performance characteristics for each run. Particularly interesting is that one of these systems doesn’t scale beyond 1/4 of its threads, whereas the other system does, which probably speaks pretty loudly to the lack of memory capacity and bandwidth…

Small update to take into consideration spaces in pool names, as well as a handler for read-only pools.

I’ve tried running this and it seems to fail. It’s giving me results on the order of terabytes per second. Is there anything special I need to do to run this?

Can you provide the output of the script?

Looks like a bunch of permissions errors. I probably don’t have ACLs set correctly.

It also does not delete the datasets at the end of the run either.

tn-bench output.txt (189.0 KB)

Try to run as root (sudo su) instead of just sudo, but from a location you have write access to. Like sudo su then mkdir /mnt/ssd_pool/scripts && cd /mnt/ssd_pool/scripts and then run git clone https://github.com/nickf1227/TN-Bench.git && cd TN-Bench && python3 truenas-bench.py

I’ll add a handler for this in the next version to make it more obvious that we need elevated privileges.

dd: failed to open '/mnt/ssd_pool/tn-bench/file_0.dat': Permission denied
dd: failed to open '/mnt/ssd_pool/tn-bench/file_3.dat': Permission denied
dd: failed to open '/mnt/ssd_pool/tn-bench/file_2.dat': Permission denied
dd: failed to open '/mnt/ssd_pool/tn-bench/file_1.dat': Permission denied

The same goes for the dataset cleanup and removal. If it’s busy or you don’t have permission, it won’t delete the dataset. I’ll see if I can make this more obvious.

Yup, looks like that fixes it. I need to learn permissions better.


Nice, best of luck for this :slight_smile:

I made a similar attempt years ago as a winter holiday project, but did not keep it up, so it’s outdated now.

One thing I’d recommend is moving to fio instead of dd; that’s a much more professional tool and has a lot more options.

(If you want me to take out the link let me know, not trying to steal your thunder here, just provided for you to have a look at if you care)

Question: it’s writing 20 GiB of data for each thread, correct? I’m running into space limitations on my machine. I have an SSD pool with around 1 TB of free space, and 56 threads. Is there anything built in to make sure it doesn’t absolutely fill up the pool?

There is currently no space allocation check in place. Please stop the script from running, and I can work on this in the next version.
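For context, here is a minimal sketch of what such a pre-flight check might look like; the mountpoint and thread count are placeholders, and none of this is in the current script. With 56 threads at 20 GiB each, the requirement would be 1120 GiB, which is why a ~1 TB pool runs out of room.

    import os

    THREADS = 56                              # hypothetical system thread count
    required = THREADS * 20 * 1024**3         # 20 GiB written per thread

    st = os.statvfs("/mnt/ssd_pool")          # hypothetical pool mountpoint
    free = st.f_bavail * st.f_frsize
    if free < required:
        raise SystemExit(
            f"Need {required / 2**30:.0f} GiB free, "
            f"but only {free / 2**30:.0f} GiB is available."
        )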
