TN-Bench -- An Open-Source Community TrueNAS Benchmarking and Testing Utility

This is a community resource that is not officially endorsed or supported by iXsystems. It is a personal passion project, and all opinions here are my own.

Over the years I’ve spent a lot of time playing with benchmarking and min-maxing systems. I cut my teeth on systems engineering the same way I suspect many other Millennials did: overclocking!

In that realm there has always been a plethora of community-driven tools, from small projects using X264 to test the stability of Intel CPUs to large ones designed to stress-test your RAM.

The TrueNAS community, too, has had various tools for testing drives, like solnet-array-test and the infamous SLOG benchmarking thread @HoneyBadger has referenced at least once on the T3 Podcast. However, interest in these older threads has waned as TrueNAS development has marched along from FreeBSD → Linux. With diskinfo unavailable on Linux and @jgreco no longer a member of the community, I feel there is a hole that needs to be plugged.

I’ve begun development on what I’m calling TN-Bench, which in its current form is a monolithic Python script that runs a series of pool- and disk-based benchmarks to help give users a better understanding of their system’s performance.

For now, the script is monolithic and not configurable. It’s designed to get an idea of the maximum performance possible from your TrueNAS pools in the system they live in. This is done by creating a temporary dataset using the TrueNAS API, with a 1M record size, compression=none, and sync=disabled.
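
As a rough illustration of that setup step, the middleware call boils down to something like the following. This is a sketch, not the script’s exact code: the payload field names and values follow my reading of the pool.dataset.create API and may not be exactly what the script sends, and the pool name is just an example.

import json
import subprocess

def create_bench_dataset(pool: str) -> None:
    # Temporary dataset tuned for the benchmark: 1 MiB records,
    # no compression, and asynchronous writes only.
    payload = {
        "name": f"{pool}/tn-bench",
        "recordsize": "1M",
        "compression": "OFF",
        "sync": "DISABLED",
    }
    subprocess.run(
        ["midclt", "call", "pool.dataset.create", json.dumps(payload)],
        check=True,
    )

create_bench_dataset("tank")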

In the future, I plan to make this script more modular so I can add additional tests. The TrueNAS API is the real magic here. In the near future I plan to use it to find a pool with a SLOG device, take the SLOG out of the pool, benchmark it, and put it back. For now, though, I wanted to get this out there for other users to test and compare, and hopefully to get some of you to contribute back.

Please share your results here so we may all benefit from the data. If there’s enough interest, I hope to one day create a community repository where users can explore this data in a database, but for now just post your results here, or use this script as a quick burn-in before lighting up your system.


tn-bench v1.11

Open Source TrueNAS Benchmarking Tool

tn-bench is an open source software script that benchmarks your TrueNAS system and collects comprehensive statistical information via the TrueNAS API. During testing, it creates temporary datasets in your pools, consuming 20 GiB of space per system thread.

Key Features

  • System Information Collection:

    • OS version, CPU model, core counts
    • Memory capacity and load averages
    • System product information
  • Storage Benchmarking:

    • Pool performance testing with variable thread counts
    • Individual disk 4K sequential read benchmarks
    • Multiple iteration support for accuracy
  • Structured Output:

    • Versioned JSON schema (v1.0) for forward/backward compatibility
    • Comprehensive metadata including timestamps and duration
    • Hierarchical data organization
  • Validation & Safety:

    • Space availability verification
    • User confirmation prompts
    • Cleanup options for test datasets

What’s Changed

Full Changelog: Comparing 1.07...1.11 · nickf1227/tn-bench · GitHub

Running the Script

git clone -b monolithic-version-1.07 https://github.com/nickf1227/TN-Bench.git && cd TN-Bench && python3 truenas-bench.py

NOTE: /dev/urandom generates inherently incompressible data, so the value of the compression setting above is minimal in the current form.

The script will display system and pool information, then prompt you to continue with the benchmarks. Follow the prompts to complete the benchmarking process.

Benchmarking Process

  • Dataset Creation: The script creates a temporary dataset in each pool. The dataset is created with a 1M record size, no compression, and sync=disabled using midclt call pool.dataset.create.
  • Pool Write Benchmark: The script performs multiple runs (two by default) of the write benchmark using dd with varying thread counts. We use /dev/urandom as the input, so CPU performance may be relevant. This is by design, since /dev/zero is flawed for this purpose, and CPU stress is expected in real-world use anyway. The data is written in 1M chunks to a dataset with a 1M record size. Each thread writes 20G of data, so a system with 16 threads would write 320G in total.
  • Pool Read Benchmark: The script performs multiple runs (two by default) of the read benchmark using dd with varying thread counts. We use /dev/null as the output, so RAM speed may be relevant. The data is read in 1M chunks from a dataset with a 1M record size. Each thread reads back the 20G of data it previously wrote, so a system with 16 threads would read 320G in total. A sketch of the threaded dd pass follows this list.
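
For illustration, here is a minimal Python sketch of what one write/read pass looks like, assuming an example mountpoint and hypothetical helper names; the real script’s code differs, but the dd invocations follow the description above.

import subprocess
from concurrent.futures import ThreadPoolExecutor

DATASET = "/mnt/tank/tn-bench"   # example path; the script uses <pool>/tn-bench
BLOCKS = 20 * 1024               # 20 GiB per thread, written in 1 MiB blocks

def write_one(i: int) -> None:
    # Incompressible data from /dev/urandom, written in 1M chunks.
    subprocess.run(
        ["dd", "if=/dev/urandom", f"of={DATASET}/file_{i}",
         "bs=1M", f"count={BLOCKS}"],
        check=True, capture_output=True, text=True)

def read_one(i: int) -> None:
    # Read the same file back in 1M chunks, discarding to /dev/null.
    subprocess.run(
        ["dd", f"if={DATASET}/file_{i}", "of=/dev/null", "bs=1M"],
        check=True, capture_output=True, text=True)

def run_pass(threads: int) -> None:
    # One pass: all writer threads complete, then all reader threads.
    with ThreadPoolExecutor(max_workers=threads) as pool:
        list(pool.map(write_one, range(threads)))
    with ThreadPoolExecutor(max_workers=threads) as pool:
        list(pool.map(read_one, range(threads)))

# The thread counts stepped through are 1, N/4, N/2 and N logical CPUs
# (e.g. 1, 10, 20 and 40 in the example output further down).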

NOTE: ZFS ARC will also be used and will impact your results. This may be undesirable in some circumstances, and the zfs_arc_max can be set to 1 (which means 1 byte) to prevent ARC from caching. Setting it back to 0 will restore the default behavior, but the system will need to be restarted!
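
On TrueNAS SCALE (Linux), zfs_arc_max is exposed as an OpenZFS module parameter. As a rough illustration only, and assuming the standard sysfs path and root privileges, the toggle described above can be done like this (tn-bench itself does not change this setting for you):

# Assumed standard OpenZFS module parameter path on Linux.
ARC_MAX = "/sys/module/zfs/parameters/zfs_arc_max"

# Limit ARC to effectively nothing (1 byte) for the duration of a test run.
with open(ARC_MAX, "w") as f:
    f.write("1")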

I have tested several permutations of file sizes on a dozen systems with varying amounts and types of storage, space, and RAM, and eventually settled on the current behavior for several reasons. Primarily, I wanted to reduce, but not REMOVE, the impact of the ZFS ARC, since in a real-world scenario you would be leveraging the benefits of ARC caching. However, to avoid wildly unrealistic results, I needed to use file sizes that saturate the ARC completely. I believe this gives us the best data possible.

Example of arcstat -f time,hit%,dh%,ph%,mh% 10 running while the benchmark is running.

  • Disk Benchmark: The script runs a sequential read benchmark against each individual disk using dd. The amount of data read is calculated from the size of your RAM and of each disk: data already on the disk is read in 4K chunks to /dev/null, making this a 4K sequential read test (a sketch follows below this list). 4K was chosen because ashift=12 for all recent ZFS pools created in TrueNAS. The amount of data read is deliberately large to try to avoid ARC caching. Run-to-run variance is still expected, particularly on SSDs, as the data ends up in internal caches; for this reason the test is run multiple times and averaged.

  • Results: The script displays the results for each run and the average speed. This should give you an idea of the impact of various thread counts (as a synthetic representation of client counts) and of the ZFS ARC caching mechanism.
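
A minimal sketch of the per-disk timing, assuming an example device name and a fixed read size (the real script sizes the read from your RAM and the disk’s capacity, as described above):

import subprocess
import time

def read_speed_mbps(dev: str, blocks: int) -> float:
    # Time a raw 4K sequential read of the device and convert to MB/s.
    start = time.monotonic()
    subprocess.run(
        ["dd", f"if=/dev/{dev}", "of=/dev/null", "bs=4K", f"count={blocks}"],
        check=True, capture_output=True)
    elapsed = time.monotonic() - start
    return (blocks * 4096) / elapsed / 1e6

# Four runs of roughly 10 GiB each against an example device, then the average.
runs = [read_speed_mbps("sda", 2_621_440) for _ in range(4)]
print(f"Average: {sum(runs) / len(runs):.2f} MB/s")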

NOTE: The script’s run duration is dependent on the number of threads in your system as well as the number of disks. Small all-flash systems may complete this benchmark in about 25 minutes, while larger systems with spinning hard drives may take several hours. The script will not stop other I/O activity on a production system, but it will severely limit performance. This benchmark is best run on a system with no other workload; that gives the most accurate data, in addition to not creating angry users.

Performance Considerations

ARC Behavior

  • ARC hit rate decreases as the working set exceeds the cache size, which TN-Bench intentionally causes.
  • Results reflect mixed cache hit/miss scenarios, not necessarily indicative of a real-world workload.

Resource Requirements

Resource Type   | Requirement
----------------+-------------------
Pool Test Space | 20 GiB per thread
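
For example, a 40-thread system needs 40 × 20 GiB = 800 GiB of free space in each selected pool, which matches the Space Verification step in the example output below.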

Execution Time

  • Small all-flash systems: ~10-30 minutes
  • Large HDD arrays: Several hours or more
  • Progress indicators: Provided at each stage
  • Status updates: For each benchmark operation

Cleanup Options

The script provides interactive prompts to delete test datasets after benchmarking. All temporary files are automatically removed.

Delete testing dataset fire/tn-bench? (yes/no): yes
✓ Dataset fire/tn-bench deleted.

UI Enhancement

The script is now colorized and more human readable.

Output file

python3 truenas-bench.py [--output /root/my_results.json]

A shareable JSON file can be generated. There is an initial version 1.0 schema, with the intention of eventually adding new fields without breaking the existing structure.

{
  "schema_version": "1.0",
  "metadata": {
    "start_timestamp": "2025-03-15T14:30:00",
    "end_timestamp": "2025-03-15T15:15:00",
    "duration_minutes": 45.0,
    "benchmark_config": {
      "selected_pools": ["tank", "backups"],
      "disk_benchmark_run": true,
      "zfs_iterations": 2,
      "disk_iterations": 1
    }
  },
  "system": {
    "os_version": "25.04.1",
    "load_average_1m": 0.85,
    "load_average_5m": 1.2,
    "load_average_15m": 1.1,
    "cpu_model": "Intel Xeon Silver 4210",
    "logical_cores": 40,
    "physical_cores": 20,
    "system_product": "TRUENAS-M50",
    "memory_gib": 251.56
  },
  "pools": [
    {
      "name": "tank",
      "path": "/mnt/tank",
      "status": "ONLINE",
      "vdevs": [
        {"name": "raidz2-0", "type": "RAIDZ2", "disk_count": 8}
      ],
      "benchmark": [
        {
          "threads": 1,
          "write_speeds": [205.57, 209.95],
          "average_write_speed": 207.76,
          "read_speeds": [4775.63, 5029.35],
          "average_read_speed": 4902.49,
          "iterations": 2
        }
      ]
    }
  ],
  "disks": [
    {
      "name": "ada0",
      "model": "ST12000VN0008",
      "serial": "ABC123",
      "zfs_guid": "1234567890",
      "pool": "tank",
      "size_gib": 10999.99,
      "benchmark": {
        "speeds": [210.45],
        "average_speed": 210.45,
        "iterations": 1
      }
    }
  ]
}
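
Because the output is versioned, downstream tooling can check schema_version before consuming a results file. A minimal sketch of reading the file from the --output path used above (field names taken from the sample JSON):

import json

with open("/root/my_results.json") as f:
    results = json.load(f)

if results["schema_version"] != "1.0":
    print(f"Note: schema {results['schema_version']}; new fields may be present.")

for pool in results["pools"]:
    for run in pool["benchmark"]:
        print(f'{pool["name"]}: {run["threads"]} thread(s) -> '
              f'{run["average_write_speed"]} MB/s write, '
              f'{run["average_read_speed"]} MB/s read')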

Example Output (this example test was performed on a busy system; don’t do that)


############################################################
#                      TN-Bench v1.11                      #
############################################################

TN-Bench is an OpenSource Software Script that uses standard tools to
Benchmark your System and collect various statistical information via
the TrueNAS API.

* TN-Bench will create a Dataset in each of your pools for testing purposes
* that will consume 20 GiB of space for every thread in your system.

! WARNING: This test will make your system EXTREMELY slow during its run.
! WARNING: It is recommended to run this test when no other workloads are running.

* ZFS ARC will impact your results. You can set zfs_arc_max to 1 to prevent ARC caching.
* Setting it back to 0 restores default behavior but requires a system restart.

============================================================
 Confirmation
============================================================

Would you like to continue? (yes/no): yes

------------------------------------------------------------
|                    System Information                    |
------------------------------------------------------------

Field                 | Value
----------------------+-------------------------------------------
Version               | 25.04.1
Load Average (1m)     | 25.52880859375
Load Average (5m)     | 27.32177734375
Load Average (15m)    | 30.61474609375
Model                 | Intel(R) Xeon(R) Silver 4114 CPU @ 2.20GHz
Cores                 | 40
Physical Cores        | 20
System Product        | TRUENAS-M50-S
Physical Memory (GiB) | 251.56

------------------------------------------------------------
|                     Pool Information                     |
------------------------------------------------------------

Field      | Value
-----------+----------
Name       | fire
Path       | /mnt/fire
Status     | ONLINE
VDEV Count | 1
Disk Count | 4

VDEV Name  | Type           | Disk Count
-----------+----------------+---------------
raidz1-0    | RAIDZ1         | 4

------------------------------------------------------------
|                     Pool Information                     |
------------------------------------------------------------

Field      | Value
-----------+---------
Name       | ice
Path       | /mnt/ice
Status     | ONLINE
VDEV Count | 5
Disk Count | 35

VDEV Name  | Type           | Disk Count
-----------+----------------+---------------
raidz2-0    | RAIDZ2         | 7
raidz2-1    | RAIDZ2         | 7
raidz2-2    | RAIDZ2         | 7
raidz2-3    | RAIDZ2         | 7
raidz2-4    | RAIDZ2         | 7

------------------------------------------------------------
|                     Disk Information                     |
------------------------------------------------------------

* The TrueNAS API returns N/A for the Pool for boot devices and disks not in a pool.
Field      | Value
-----------+---------------------------
Name       | sdam
Model      | KINGSTON_SA400S37120G
Serial     | 50026B7784064E49
ZFS GUID   | None
Pool       | N/A
Size (GiB) | 111.79
-----------+---------------------------
Name       | nvme0n1
Model      | INTEL SSDPE2KE016T8
Serial     | PHLN013100MD1P6AGN
ZFS GUID   | 17475493647287877073
Pool       | fire
Size (GiB) | 1400.00
-----------+---------------------------
Name       | nvme1n1
Model      | INTEL SSDPE2KE016T8
Serial     | PHLN931600FE1P6AGN
ZFS GUID   | 11275382002255862348
Pool       | fire
Size (GiB) | 1400.00
-----------+---------------------------
Name       | nvme2n1
Model      | SAMSUNG MZWLL1T6HEHP-00003
Serial     | S3HDNX0KB01220
ZFS GUID   | 4368323531340162613
Pool       | fire
Size (GiB) | 1399.22
-----------+---------------------------
Name       | nvme3n1
Model      | SAMSUNG MZWLL1T6HEHP-00003
Serial     | S3HDNX0KB01248
ZFS GUID   | 3818548647571812337
Pool       | fire
Size (GiB) | 1399.22
-----------+---------------------------
Name       | sdh
Model      | HUSMH842_CLAR200
Serial     | 0LX1V8ZA
ZFS GUID   | 1629581284555035932
Pool       | N/A
Size (GiB) | 186.31
-----------+---------------------------
Name       | sda
Model      | HUSMH842_CLAR200
Serial     | 0LX1V4NA
ZFS GUID   | 8800999671142185461
Pool       | N/A
Size (GiB) | 186.31
-----------+---------------------------
Name       | sdv
Model      | HUS728T8TAL4204
Serial     | VAHD4XTL
ZFS GUID   | 6447577595542961760
Pool       | ice
Size (GiB) | 7452.04
-----------+---------------------------
Name       | sdab
Model      | HUS728T8TAL4204
Serial     | VAHE4AJL
ZFS GUID   | 11464489017973229028
Pool       | ice
Size (GiB) | 7452.04
-----------+---------------------------
Name       | sdx
Model      | HUS728T8TAL4204
Serial     | VAHD4ZUL
ZFS GUID   | 2629839678881986450
Pool       | ice
Size (GiB) | 7452.04
-----------+---------------------------
Name       | sdaf
Model      | HUS728T8TAL4204
Serial     | VAHAHSEL
ZFS GUID   | 6248787858642409255
Pool       | ice
Size (GiB) | 7452.04
-----------+---------------------------
Name       | sdt
Model      | HUS728T8TAL4204
Serial     | VAH751XL
ZFS GUID   | 12194731234089258709
Pool       | ice
Size (GiB) | 7452.04
-----------+---------------------------
Name       | sdn
Model      | HUS728T8TAL4204
Serial     | VAHDEEEL
ZFS GUID   | 4070674839367337299
Pool       | ice
Size (GiB) | 7452.04
-----------+---------------------------
Name       | sdl
Model      | HUS728T8TAL4204
Serial     | VAHD4V0L
ZFS GUID   | 1890505091264157917
Pool       | ice
Size (GiB) | 7452.04
-----------+---------------------------
Name       | sdp
Model      | HUS728T8TAL4204
Serial     | VAHDHLVL
ZFS GUID   | 2813416134184314367
Pool       | ice
Size (GiB) | 7452.04
-----------+---------------------------
Name       | sdr
Model      | HUS728T8TAL4204
Serial     | VAHD99LL
ZFS GUID   | 663480060468884393
Pool       | ice
Size (GiB) | 7452.04
-----------+---------------------------
Name       | sds
Model      | HUS728T8TAL4204
Serial     | VAHDXDVL
ZFS GUID   | 12468174715504800729
Pool       | ice
Size (GiB) | 7452.04
-----------+---------------------------
Name       | sdw
Model      | HUS728T8TAL4204
Serial     | VAH7T9BL
ZFS GUID   | 241834966907461809
Pool       | ice
Size (GiB) | 7452.04
-----------+---------------------------
Name       | sdu
Model      | HUS728T8TAL4204
Serial     | VAGU6KLL
ZFS GUID   | 8435778198864465328
Pool       | ice
Size (GiB) | 7452.04
-----------+---------------------------
Name       | sdy
Model      | HUH721010AL42C0
Serial     | 2TGU89UD
ZFS GUID   | 10368835707209052527
Pool       | ice
Size (GiB) | 9314.00
-----------+---------------------------
Name       | sdz
Model      | HUS728T8TAL4204
Serial     | VAHE4BDL
ZFS GUID   | 12575810268036164475
Pool       | ice
Size (GiB) | 7452.04
-----------+---------------------------
Name       | sdak
Model      | HUS728T8TAL4204
Serial     | VAH4T4TL
ZFS GUID   | 15395414914633738779
Pool       | ice
Size (GiB) | 7452.04
-----------+---------------------------
Name       | sdal
Model      | HUS728T8TAL4204
Serial     | VAHDBDXL
ZFS GUID   | 480631239828802416
Pool       | ice
Size (GiB) | 7452.04
-----------+---------------------------
Name       | sdaa
Model      | HUS728T8TAL4204
Serial     | VAH7B0EL
ZFS GUID   | 3357271669658868424
Pool       | ice
Size (GiB) | 7452.04
-----------+---------------------------
Name       | sdae
Model      | HUS728T8TAL4204
Serial     | VAHD4UXL
ZFS GUID   | 12084474217870916236
Pool       | ice
Size (GiB) | 7452.04
-----------+---------------------------
Name       | sdag
Model      | HUS728T8TAL4204
Serial     | VAHE4AEL
ZFS GUID   | 12420098536708636925
Pool       | ice
Size (GiB) | 7452.04
-----------+---------------------------
Name       | sdac
Model      | HUS728T8TAL4204
Serial     | VAHE35SL
ZFS GUID   | 15641419920947187991
Pool       | ice
Size (GiB) | 7452.04
-----------+---------------------------
Name       | sdad
Model      | HUS728T8TAL4204
Serial     | VAH73TVL
ZFS GUID   | 2321010819975352589
Pool       | ice
Size (GiB) | 7452.04
-----------+---------------------------
Name       | sdah
Model      | HUS728T8TAL4204
Serial     | VAH0LL4L
ZFS GUID   | 7064277241025105086
Pool       | ice
Size (GiB) | 7452.04
-----------+---------------------------
Name       | sdai
Model      | HUS728T8TAL4204
Serial     | VAHBHYGL
ZFS GUID   | 9631990446359566766
Pool       | ice
Size (GiB) | 7452.04
-----------+---------------------------
Name       | sdaj
Model      | HUS728T8TAL4204
Serial     | VAHE7BGL
ZFS GUID   | 10666041267281724571
Pool       | ice
Size (GiB) | 7452.04
-----------+---------------------------
Name       | sdb
Model      | HUS728T8TAL4204
Serial     | VAHD406L
ZFS GUID   | 17233219398105449109
Pool       | N/A
Size (GiB) | 7452.04
-----------+---------------------------
Name       | sdc
Model      | HUS728T8TAL4204
Serial     | VAHEE12L
ZFS GUID   | 14718135334986108667
Pool       | ice
Size (GiB) | 7452.04
-----------+---------------------------
Name       | sdj
Model      | HUS728T8TAL4204
Serial     | VAHE1J1L
ZFS GUID   | 16530722200458359384
Pool       | ice
Size (GiB) | 7452.04
-----------+---------------------------
Name       | sdo
Model      | HUS728T8TAL4204
Serial     | VAHDRYYL
ZFS GUID   | 9383799614074970413
Pool       | ice
Size (GiB) | 7452.04
-----------+---------------------------
Name       | sde
Model      | HUS728T8TAL4204
Serial     | VAHDPGUL
ZFS GUID   | 6453720879157404243
Pool       | ice
Size (GiB) | 7452.04
-----------+---------------------------
Name       | sdd
Model      | HUS728T8TAL4204
Serial     | VAH7XX5L
ZFS GUID   | 2415210037473635969
Pool       | ice
Size (GiB) | 7452.04
-----------+---------------------------
Name       | sdf
Model      | HUS728T8TAL4204
Serial     | VAHD06XL
ZFS GUID   | 7980293907302437342
Pool       | ice
Size (GiB) | 7452.04
-----------+---------------------------
Name       | sdg
Model      | HUS728T8TAL4204
Serial     | VAH5W6PL
ZFS GUID   | 2650944322410844617
Pool       | ice
Size (GiB) | 7452.04
-----------+---------------------------
Name       | sdi
Model      | HUS728T8TAL4204
Serial     | VAHDRZEL
ZFS GUID   | 8709587202117841210
Pool       | ice
Size (GiB) | 7452.04
-----------+---------------------------
Name       | sdm
Model      | HUS728T8TAL4204
Serial     | VAHDPS6L
ZFS GUID   | 5227492984876952151
Pool       | ice
Size (GiB) | 7452.04
-----------+---------------------------
Name       | sdk
Model      | HUS728T8TAL4204
Serial     | VAHDX95L
ZFS GUID   | 13388807557241155624
Pool       | ice
Size (GiB) | 7452.04
-----------+---------------------------
Name       | sdq
Model      | HUS728T8TAL4204
Serial     | VAGEAVDL
ZFS GUID   | 4320819603845537000
Pool       | ice
Size (GiB) | 7452.04
-----------+---------------------------

############################################################
#                      Pool Selection                      #
############################################################

* Available pools:
• 1. fire
• 2. ice
* Options:
• 1. Enter specific pool numbers (comma separated)
• 2. Type 'all' to test all pools
• 3. Type 'none' to skip pool testing

Enter your choice [all]: 1

############################################################
#              ZFS Pool Benchmark Iterations               #
############################################################

* How many times should we run each test?
• 1. Run each test once (faster)
• 2. Run each test twice (default, more accurate)

Enter iteration count (1 or 2) [2]: 2

############################################################
#                Individual Disk Benchmark                 #
############################################################

Run individual disk read benchmark? (yes/no) [yes]: no
* Skipping individual disk benchmark.

############################################################
#                  DD Benchmark Starting                   #
############################################################

* Using 40 threads for the benchmark.
* ZFS tests will run 2 time(s) per configuration

############################################################
#                    Testing Pool: fire                    #
############################################################

* Creating test dataset for pool: fire
✓ Dataset fire/tn-bench created successfully.

============================================================
 Space Verification
============================================================

* Available space: 2837.35 GiB
* Space required:  800.00 GiB (20 GiB/thread × 40 threads)
✓ Sufficient space available - proceeding with benchmarks

============================================================
 Testing Pool: fire - Threads: 1
============================================================

* Running DD write benchmark with 1 threads...
* Run 1 write speed: 204.96 MB/s
* Run 2 write speed: 202.36 MB/s
✓ Average write speed: 203.66 MB/s
* Running DD read benchmark with 1 threads...
* Run 1 read speed: 4863.65 MB/s
* Run 2 read speed: 5009.58 MB/s
✓ Average read speed: 4936.62 MB/s

============================================================
 Testing Pool: fire - Threads: 10
============================================================

* Running DD write benchmark with 10 threads...
* Run 1 write speed: 1678.29 MB/s
* Run 2 write speed: 1644.88 MB/s
✓ Average write speed: 1661.58 MB/s
* Running DD read benchmark with 10 threads...
* Run 1 read speed: 15826.33 MB/s
* Run 2 read speed: 15528.85 MB/s
✓ Average read speed: 15677.59 MB/s

============================================================
 Testing Pool: fire - Threads: 20
============================================================

* Running DD write benchmark with 20 threads...
* Run 1 write speed: 2185.88 MB/s
* Run 2 write speed: 2278.53 MB/s
✓ Average write speed: 2232.20 MB/s
* Running DD read benchmark with 20 threads...
* Run 1 read speed: 12733.72 MB/s
* Run 2 read speed: 12943.13 MB/s
✓ Average read speed: 12838.42 MB/s

============================================================
 Testing Pool: fire - Threads: 40
============================================================

* Running DD write benchmark with 40 threads...
* Run 1 write speed: 2669.99 MB/s
* Run 2 write speed: 2813.70 MB/s
✓ Average write speed: 2741.84 MB/s
* Running DD read benchmark with 40 threads...
* Run 1 read speed: 12787.97 MB/s
* Run 2 read speed: 12562.84 MB/s
✓ Average read speed: 12675.40 MB/s

############################################################
#           DD Benchmark Results for Pool: fire            #
############################################################


------------------------------------------------------------
|                        Threads: 1                        |
------------------------------------------------------------

• 1M Seq Write Run 1: 204.96 MB/s
• 1M Seq Write Run 2: 202.36 MB/s
• 1M Seq Write Avg: 203.66 MB/s
• 1M Seq Read Run 1: 4863.65 MB/s
• 1M Seq Read Run 2: 5009.58 MB/s
• 1M Seq Read Avg: 4936.62 MB/s

------------------------------------------------------------
|                       Threads: 10                        |
------------------------------------------------------------

• 1M Seq Write Run 1: 1678.29 MB/s
• 1M Seq Write Run 2: 1644.88 MB/s
• 1M Seq Write Avg: 1661.58 MB/s
• 1M Seq Read Run 1: 15826.33 MB/s
• 1M Seq Read Run 2: 15528.85 MB/s
• 1M Seq Read Avg: 15677.59 MB/s

------------------------------------------------------------
|                       Threads: 20                        |
------------------------------------------------------------

• 1M Seq Write Run 1: 2185.88 MB/s
• 1M Seq Write Run 2: 2278.53 MB/s
• 1M Seq Write Avg: 2232.20 MB/s
• 1M Seq Read Run 1: 12733.72 MB/s
• 1M Seq Read Run 2: 12943.13 MB/s
• 1M Seq Read Avg: 12838.42 MB/s

------------------------------------------------------------
|                       Threads: 40                        |
------------------------------------------------------------

• 1M Seq Write Run 1: 2669.99 MB/s
• 1M Seq Write Run 2: 2813.70 MB/s
• 1M Seq Write Avg: 2741.84 MB/s
• 1M Seq Read Run 1: 12787.97 MB/s
• 1M Seq Read Run 2: 12562.84 MB/s
• 1M Seq Read Avg: 12675.40 MB/s
* Cleaning up test files...

############################################################
#                    Benchmark Complete                    #
############################################################

✓ Total benchmark time: 16.01 minutes
 

Contributing

Contributions are welcome! Please open an issue or submit a pull request for any improvements or fixes.

License

This project is licensed under the GPLv3 License - see the LICENSE file for details.


Wow this looks great. Many thanks for creating and sharing. I’m looking forward to trying this.


Just a quick update with an example of a larger system: a 35-disk system with 32 threads took a little over 6 hours. Interestingly, even with a 50-gigabyte read, SATA hard drive results still seem skewed by caching. I plan on making some alterations soon.


###################################
#                                 #
#          TN-Bench v1.01         #
#          MONOLITHIC.            #
#                                 #
###################################
TN-Bench is an OpenSource Software Script that uses standard tools to Benchmark your System and collect various statistical information via the TrueNAS API.

TN-Bench will make a Dataset in each of your pools for the purposes of this testing that will consume 10 GiB of space for every thread in your system during its run.

After which time we will prompt you to delete the dataset which was created.

Would you like to continue? (yes/no): yes

### System Information ###
Field                 | Value                          
----------------------+--------------------------------
Version               | TrueNAS-SCALE-24.10.0          
Load Average (1m)     | 4.220703125                    
Load Average (5m)     | 3.66455078125                  
Load Average (15m)    | 6.19482421875                  
Model                 | AMD EPYC 7F52 16-Core Processor
Cores                 | 32                             
Physical Cores        | 16                             
System Product        | Super Server                   
Physical Memory (GiB) | 220.07                         

### Pool Information ###
Field      | Value   
-----------+---------
Name       | ice     
Path       | /mnt/ice
Status     | ONLINE  
VDEV Count | 5       
Disk Count | 35      

VDEV Name  | Type           | Disk Count
-----------+----------------+---------------
raidz2-0    | RAIDZ2         | 7
raidz2-1    | RAIDZ2         | 7
raidz2-2    | RAIDZ2         | 7
raidz2-3    | RAIDZ2         | 7
raidz2-4    | RAIDZ2         | 7

### Disk Information ###
Field    | Value              
---------+--------------------
Name     | nvme0n1            
Model    | INTEL SSDPEK1A058GA
Serial   | BTOC14120Y1T058A   
ZFS GUID | None               
Pool     | N/A                
---------+--------------------
---------+--------------------
Name     | sdn                
Model    | HUS728T8TAL4204    
Serial   | VAHE4AJL           
ZFS GUID | 11464489017973229028
Pool     | ice                
---------+--------------------
---------+--------------------
Name     | sdo                
Model    | HUS728T8TAL4204    
Serial   | VAH751XL           
ZFS GUID | 12194731234089258709
Pool     | ice                
---------+--------------------
---------+--------------------
Name     | sdp                
Model    | HUS728T8TAL4204    
Serial   | VAHDEEEL           
ZFS GUID | 4070674839367337299
Pool     | ice                
---------+--------------------
---------+--------------------
Name     | sds                
Model    | HUS728T8TAL4204    
Serial   | VAHD99LL           
ZFS GUID | 663480060468884393 
Pool     | ice                
---------+--------------------
---------+--------------------
Name     | sdq                
Model    | HUS728T8TAL4204    
Serial   | VAHD4V0L           
ZFS GUID | 1890505091264157917
Pool     | ice                
---------+--------------------
---------+--------------------
Name     | sdr                
Model    | HUS728T8TAL4204    
Serial   | VAHDHLVL           
ZFS GUID | 2813416134184314367
Pool     | ice                
---------+--------------------
---------+--------------------
Name     | sdu                
Model    | HUS728T8TAL4204    
Serial   | VAH7T9BL           
ZFS GUID | 241834966907461809 
Pool     | ice                
---------+--------------------
---------+--------------------
Name     | sdx                
Model    | HUS728T8TAL4204    
Serial   | VAHD4ZUL           
ZFS GUID | 2629839678881986450
Pool     | ice                
---------+--------------------
---------+--------------------
Name     | sdt                
Model    | HUS728T8TAL4204    
Serial   | VAHDXDVL           
ZFS GUID | 12468174715504800729
Pool     | ice                
---------+--------------------
---------+--------------------
Name     | sdv                
Model    | HUS728T8TAL4204    
Serial   | VAGU6KLL           
ZFS GUID | 8435778198864465328
Pool     | ice                
---------+--------------------
---------+--------------------
Name     | sdw                
Model    | HUS728T8TAL4204    
Serial   | VAHAHSEL           
ZFS GUID | 6248787858642409255
Pool     | ice                
---------+--------------------
---------+--------------------
Name     | sdm                
Model    | HUS728T8TAL4204    
Serial   | VAHD4XTL           
ZFS GUID | 6447577595542961760
Pool     | ice                
---------+--------------------
---------+--------------------
Name     | sda                
Model    | HUS728T8TAL4204    
Serial   | VAHD406L           
ZFS GUID | 17233219398105449109
Pool     | ice                
---------+--------------------
---------+--------------------
Name     | sdb                
Model    | HUS728T8TAL4204    
Serial   | VAHEE12L           
ZFS GUID | 14718135334986108667
Pool     | ice                
---------+--------------------
---------+--------------------
Name     | sdc                
Model    | HUS728T8TAL4204    
Serial   | VAHDPGUL           
ZFS GUID | 6453720879157404243
Pool     | ice                
---------+--------------------
---------+--------------------
Name     | sdd                
Model    | HUS728T8TAL4204    
Serial   | VAH7XX5L           
ZFS GUID | 2415210037473635969
Pool     | ice                
---------+--------------------
---------+--------------------
Name     | sde                
Model    | HUS728T8TAL4204    
Serial   | VAHD06XL           
ZFS GUID | 7980293907302437342
Pool     | ice                
---------+--------------------
---------+--------------------
Name     | sdf                
Model    | HUS728T8TAL4204    
Serial   | VAH5W6PL           
ZFS GUID | 2650944322410844617
Pool     | ice                
---------+--------------------
---------+--------------------
Name     | sdh                
Model    | HUS728T8TAL4204    
Serial   | VAHDPS6L           
ZFS GUID | 5227492984876952151
Pool     | ice                
---------+--------------------
---------+--------------------
Name     | sdg                
Model    | HUS728T8TAL4204    
Serial   | VAHDRZEL           
ZFS GUID | 8709587202117841210
Pool     | ice                
---------+--------------------
---------+--------------------
Name     | sdi                
Model    | HUS728T8TAL4204    
Serial   | VAHDX95L           
ZFS GUID | 13388807557241155624
Pool     | ice                
---------+--------------------
---------+--------------------
Name     | sdj                
Model    | HUS728T8TAL4204    
Serial   | VAGEAVDL           
ZFS GUID | 4320819603845537000
Pool     | ice                
---------+--------------------
---------+--------------------
Name     | sdk                
Model    | HUS728T8TAL4204    
Serial   | VAHE1J1L           
ZFS GUID | 16530722200458359384
Pool     | ice                
---------+--------------------
---------+--------------------
Name     | sdl                
Model    | HUS728T8TAL4204    
Serial   | VAHDRYYL           
ZFS GUID | 9383799614074970413
Pool     | ice                
---------+--------------------
---------+--------------------
Name     | sdy                
Model    | HUH721010AL42C0    
Serial   | 2TGU89UD           
ZFS GUID | 2360678580120608870
Pool     | N/A                
---------+--------------------
---------+--------------------
Name     | sdz                
Model    | HUS728T8TAL4204    
Serial   | VAHE4BDL           
ZFS GUID | 12575810268036164475
Pool     | ice                
---------+--------------------
---------+--------------------
Name     | sdaa               
Model    | HUS728T8TAL4204    
Serial   | VAH7B0EL           
ZFS GUID | 3357271669658868424
Pool     | ice                
---------+--------------------
---------+--------------------
Name     | sdab               
Model    | HUS728T8TAL4204    
Serial   | VAHD4UXL           
ZFS GUID | 12084474217870916236
Pool     | ice                
---------+--------------------
---------+--------------------
Name     | sdac               
Model    | HUS728T8TAL4204    
Serial   | VAHE4AEL           
ZFS GUID | 12420098536708636925
Pool     | ice                
---------+--------------------
---------+--------------------
Name     | sdad               
Model    | HUS728T8TAL4204    
Serial   | VAHE35SL           
ZFS GUID | 15641419920947187991
Pool     | ice                
---------+--------------------
---------+--------------------
Name     | sdae               
Model    | HUS728T8TAL4204    
Serial   | VAH73TVL           
ZFS GUID | 2321010819975352589
Pool     | ice                
---------+--------------------
---------+--------------------
Name     | sdaf               
Model    | HUS728T8TAL4204    
Serial   | VAH0LL4L           
ZFS GUID | 7064277241025105086
Pool     | ice                
---------+--------------------
---------+--------------------
Name     | sdag               
Model    | HUS728T8TAL4204    
Serial   | VAHBHYGL           
ZFS GUID | 9631990446359566766
Pool     | ice                
---------+--------------------
---------+--------------------
Name     | sdah               
Model    | HUS728T8TAL4204    
Serial   | VAHE7BGL           
ZFS GUID | 10666041267281724571
Pool     | ice                
---------+--------------------
---------+--------------------
Name     | sdai               
Model    | HUS728T8TAL4204    
Serial   | VAH4T4TL           
ZFS GUID | 15395414914633738779
Pool     | ice                
---------+--------------------
---------+--------------------
Name     | sdaj               
Model    | HUS728T8TAL4204    
Serial   | VAHDBDXL           
ZFS GUID | 480631239828802416 
Pool     | ice                
---------+--------------------
---------+--------------------

###################################
#                                 #
#       DD Benchmark Starting     #
#                                 #
###################################
Using 32 threads for the benchmark.


Creating test dataset for pool: ice

Running benchmarks for pool: ice
Running DD write benchmark with 1 threads...
Run 1 write speed: 326.08 MB/s
Run 2 write speed: 323.38 MB/s
Run 3 write speed: 324.94 MB/s
Run 4 write speed: 322.05 MB/s
Average write speed: 324.11 MB/s
Running DD read benchmark with 1 threads...
Run 1 read speed: 6698.41 MB/s
Run 2 read speed: 6654.68 MB/s
Run 3 read speed: 6444.83 MB/s
Run 4 read speed: 6570.46 MB/s
Average read speed: 6592.09 MB/s
Running DD write benchmark with 8 threads...
Run 1 write speed: 1919.81 MB/s
Run 2 write speed: 1920.72 MB/s
Run 3 write speed: 1937.88 MB/s
Run 4 write speed: 1983.57 MB/s
Average write speed: 1940.50 MB/s
Running DD read benchmark with 8 threads...
Run 1 read speed: 25551.28 MB/s
Run 2 read speed: 27919.16 MB/s
Run 3 read speed: 28371.19 MB/s
Run 4 read speed: 28435.76 MB/s
Average read speed: 27569.35 MB/s
Running DD write benchmark with 16 threads...
Run 1 write speed: 1980.08 MB/s
Run 2 write speed: 1850.96 MB/s
Run 3 write speed: 1889.51 MB/s
Run 4 write speed: 1864.92 MB/s
Average write speed: 1896.37 MB/s
Running DD read benchmark with 16 threads...
Run 1 read speed: 2493.91 MB/s
Run 2 read speed: 2541.70 MB/s
Run 3 read speed: 2613.01 MB/s
Run 4 read speed: 4549.56 MB/s
Average read speed: 3049.55 MB/s
Running DD write benchmark with 32 threads...
Run 1 write speed: 1805.01 MB/s
Run 2 write speed: 1657.31 MB/s
Run 3 write speed: 1644.15 MB/s
Run 4 write speed: 1650.92 MB/s
Average write speed: 1689.35 MB/s
Running DD read benchmark with 32 threads...
Run 1 read speed: 2148.35 MB/s
Run 2 read speed: 2136.61 MB/s
Run 3 read speed: 2197.37 MB/s
Run 4 read speed: 2197.61 MB/s
Average read speed: 2169.98 MB/s

###################################
#         DD Benchmark Results for Pool: ice    #
###################################
#    Threads: 1    #
#    1M Seq Write Run 1: 326.08 MB/s     #
#    1M Seq Write Run 2: 323.38 MB/s     #
#    1M Seq Write Run 3: 324.94 MB/s     #
#    1M Seq Write Run 4: 322.05 MB/s     #
#    1M Seq Write Avg: 324.11 MB/s #
#    1M Seq Read Run 1: 6698.41 MB/s      #
#    1M Seq Read Run 2: 6654.68 MB/s      #
#    1M Seq Read Run 3: 6444.83 MB/s      #
#    1M Seq Read Run 4: 6570.46 MB/s      #
#    1M Seq Read Avg: 6592.09 MB/s  #
###################################
#    Threads: 8    #
#    1M Seq Write Run 1: 1919.81 MB/s     #
#    1M Seq Write Run 2: 1920.72 MB/s     #
#    1M Seq Write Run 3: 1937.88 MB/s     #
#    1M Seq Write Run 4: 1983.57 MB/s     #
#    1M Seq Write Avg: 1940.50 MB/s #
#    1M Seq Read Run 1: 25551.28 MB/s      #
#    1M Seq Read Run 2: 27919.16 MB/s      #
#    1M Seq Read Run 3: 28371.19 MB/s      #
#    1M Seq Read Run 4: 28435.76 MB/s      #
#    1M Seq Read Avg: 27569.35 MB/s  #
###################################
#    Threads: 16    #
#    1M Seq Write Run 1: 1980.08 MB/s     #
#    1M Seq Write Run 2: 1850.96 MB/s     #
#    1M Seq Write Run 3: 1889.51 MB/s     #
#    1M Seq Write Run 4: 1864.92 MB/s     #
#    1M Seq Write Avg: 1896.37 MB/s #
#    1M Seq Read Run 1: 2493.91 MB/s      #
#    1M Seq Read Run 2: 2541.70 MB/s      #
#    1M Seq Read Run 3: 2613.01 MB/s      #
#    1M Seq Read Run 4: 4549.56 MB/s      #
#    1M Seq Read Avg: 3049.55 MB/s  #
###################################
#    Threads: 32    #
#    1M Seq Write Run 1: 1805.01 MB/s     #
#    1M Seq Write Run 2: 1657.31 MB/s     #
#    1M Seq Write Run 3: 1644.15 MB/s     #
#    1M Seq Write Run 4: 1650.92 MB/s     #
#    1M Seq Write Avg: 1689.35 MB/s #
#    1M Seq Read Run 1: 2148.35 MB/s      #
#    1M Seq Read Run 2: 2136.61 MB/s      #
#    1M Seq Read Run 3: 2197.37 MB/s      #
#    1M Seq Read Run 4: 2197.61 MB/s      #
#    1M Seq Read Avg: 2169.98 MB/s  #
###################################
Cleaning up test files...
Running disk read benchmark...
This benchmark tests the 4K sequential read performance of each disk in the system using dd. It is run 4 times for each disk and averaged.
This benchmark is useful for comparing disks within the same pool, to identify potential issues and bottlenecks.
Testing disk: nvme0n1
Testing disk: nvme0n1
Testing disk: nvme0n1
Testing disk: nvme0n1
Testing disk: sdn
Testing disk: sdn
Testing disk: sdn
Testing disk: sdn
Testing disk: sdo
Testing disk: sdo
Testing disk: sdo
Testing disk: sdo
Testing disk: sdp
Testing disk: sdp
Testing disk: sdp
Testing disk: sdp
Testing disk: sds
Testing disk: sds
Testing disk: sds
Testing disk: sds
Testing disk: sdq
Testing disk: sdq
Testing disk: sdq
Testing disk: sdq
Testing disk: sdr
Testing disk: sdr
Testing disk: sdr
Testing disk: sdr
Testing disk: sdu
Testing disk: sdu
Testing disk: sdu
Testing disk: sdu
Testing disk: sdx
Testing disk: sdx
Testing disk: sdx
Testing disk: sdx
Testing disk: sdt
Testing disk: sdt
Testing disk: sdt
Testing disk: sdt
Testing disk: sdv
Testing disk: sdv
Testing disk: sdv
Testing disk: sdv
Testing disk: sdw
Testing disk: sdw
Testing disk: sdw
Testing disk: sdw
Testing disk: sdm
Testing disk: sdm
Testing disk: sdm
Testing disk: sdm
Testing disk: sda
Testing disk: sda
Testing disk: sda
Testing disk: sda
Testing disk: sdb
Testing disk: sdb
Testing disk: sdb
Testing disk: sdb
Testing disk: sdc
Testing disk: sdc
Testing disk: sdc
Testing disk: sdc
Testing disk: sdd
Testing disk: sdd
Testing disk: sdd
Testing disk: sdd
Testing disk: sde
Testing disk: sde
Testing disk: sde
Testing disk: sde
Testing disk: sdf
Testing disk: sdf
Testing disk: sdf
Testing disk: sdf
Testing disk: sdh
Testing disk: sdh
Testing disk: sdh
Testing disk: sdh
Testing disk: sdg
Testing disk: sdg
Testing disk: sdg
Testing disk: sdg
Testing disk: sdi
Testing disk: sdi
Testing disk: sdi
Testing disk: sdi
Testing disk: sdj
Testing disk: sdj
Testing disk: sdj
Testing disk: sdj
Testing disk: sdk
Testing disk: sdk
Testing disk: sdk
Testing disk: sdk
Testing disk: sdl
Testing disk: sdl
Testing disk: sdl
Testing disk: sdl
Testing disk: sdy
Testing disk: sdy
Testing disk: sdy
Testing disk: sdy
Testing disk: sdz
Testing disk: sdz
Testing disk: sdz
Testing disk: sdz
Testing disk: sdaa
Testing disk: sdaa
Testing disk: sdaa
Testing disk: sdaa
Testing disk: sdab
Testing disk: sdab
Testing disk: sdab
Testing disk: sdab
Testing disk: sdac
Testing disk: sdac
Testing disk: sdac
Testing disk: sdac
Testing disk: sdad
Testing disk: sdad
Testing disk: sdad
Testing disk: sdad
Testing disk: sdae
Testing disk: sdae
Testing disk: sdae
Testing disk: sdae
Testing disk: sdaf
Testing disk: sdaf
Testing disk: sdaf
Testing disk: sdaf
Testing disk: sdag
Testing disk: sdag
Testing disk: sdag
Testing disk: sdag
Testing disk: sdah
Testing disk: sdah
Testing disk: sdah
Testing disk: sdah
Testing disk: sdai
Testing disk: sdai
Testing disk: sdai
Testing disk: sdai
Testing disk: sdaj
Testing disk: sdaj
Testing disk: sdaj
Testing disk: sdaj

###################################
#         Disk Read Benchmark Results   #
###################################
#    Disk: nvme0n1    #
#    Run 1: 1515.81 MB/s     #
#    Run 2: 1340.11 MB/s     #
#    Run 3: 1351.25 MB/s     #
#    Run 4: 1439.81 MB/s     #
#    Average: 1411.74 MB/s     #
#    Disk: sdn    #
#    Run 1: 234.90 MB/s     #
#    Run 2: 232.80 MB/s     #
#    Run 3: 2916.88 MB/s     #
#    Run 4: 3096.13 MB/s     #
#    Average: 1620.18 MB/s     #
#    Disk: sdo    #
#    Run 1: 229.76 MB/s     #
#    Run 2: 225.23 MB/s     #
#    Run 3: 3071.81 MB/s     #
#    Run 4: 3096.38 MB/s     #
#    Average: 1655.80 MB/s     #
#    Disk: sdp    #
#    Run 1: 222.71 MB/s     #
#    Run 2: 429.55 MB/s     #
#    Run 3: 3106.89 MB/s     #
#    Run 4: 3083.21 MB/s     #
#    Average: 1710.59 MB/s     #
#    Disk: sds    #
#    Run 1: 230.78 MB/s     #
#    Run 2: 224.99 MB/s     #
#    Run 3: 3090.27 MB/s     #
#    Run 4: 3076.32 MB/s     #
#    Average: 1655.59 MB/s     #
#    Disk: sdq    #
#    Run 1: 221.95 MB/s     #
#    Run 2: 2616.89 MB/s     #
#    Run 3: 3090.24 MB/s     #
#    Run 4: 3080.20 MB/s     #
#    Average: 2252.32 MB/s     #
#    Disk: sdr    #
#    Run 1: 230.52 MB/s     #
#    Run 2: 230.57 MB/s     #
#    Run 3: 3077.81 MB/s     #
#    Run 4: 3069.08 MB/s     #
#    Average: 1652.00 MB/s     #
#    Disk: sdu    #
#    Run 1: 225.57 MB/s     #
#    Run 2: 2589.45 MB/s     #
#    Run 3: 3069.44 MB/s     #
#    Run 4: 3081.54 MB/s     #
#    Average: 2241.50 MB/s     #
#    Disk: sdx    #
#    Run 1: 231.11 MB/s     #
#    Run 2: 235.51 MB/s     #
#    Run 3: 3040.96 MB/s     #
#    Run 4: 3075.13 MB/s     #
#    Average: 1645.68 MB/s     #
#    Disk: sdt    #
#    Run 1: 236.26 MB/s     #
#    Run 2: 2602.36 MB/s     #
#    Run 3: 3066.71 MB/s     #
#    Run 4: 3089.58 MB/s     #
#    Average: 2248.73 MB/s     #
#    Disk: sdv    #
#    Run 1: 244.73 MB/s     #
#    Run 2: 247.89 MB/s     #
#    Run 3: 2818.79 MB/s     #
#    Run 4: 3044.16 MB/s     #
#    Average: 1588.89 MB/s     #
#    Disk: sdw    #
#    Run 1: 225.35 MB/s     #
#    Run 2: 220.10 MB/s     #
#    Run 3: 3097.11 MB/s     #
#    Run 4: 3083.58 MB/s     #
#    Average: 1656.54 MB/s     #
#    Disk: sdm    #
#    Run 1: 235.51 MB/s     #
#    Run 2: 2600.26 MB/s     #
#    Run 3: 3077.70 MB/s     #
#    Run 4: 3096.77 MB/s     #
#    Average: 2252.56 MB/s     #
#    Disk: sda    #
#    Run 1: 223.70 MB/s     #
#    Run 2: 225.74 MB/s     #
#    Run 3: 2843.69 MB/s     #
#    Run 4: 3044.02 MB/s     #
#    Average: 1584.29 MB/s     #
#    Disk: sdb    #
#    Run 1: 226.62 MB/s     #
#    Run 2: 225.88 MB/s     #
#    Run 3: 3049.45 MB/s     #
#    Run 4: 3064.97 MB/s     #
#    Average: 1641.73 MB/s     #
#    Disk: sdc    #
#    Run 1: 232.36 MB/s     #
#    Run 2: 232.86 MB/s     #
#    Run 3: 3021.33 MB/s     #
#    Run 4: 3064.97 MB/s     #
#    Average: 1637.88 MB/s     #
#    Disk: sdd    #
#    Run 1: 235.27 MB/s     #
#    Run 2: 236.96 MB/s     #
#    Run 3: 3030.66 MB/s     #
#    Run 4: 3056.96 MB/s     #
#    Average: 1639.96 MB/s     #
#    Disk: sde    #
#    Run 1: 232.43 MB/s     #
#    Run 2: 235.92 MB/s     #
#    Run 3: 2993.68 MB/s     #
#    Run 4: 3040.23 MB/s     #
#    Average: 1625.56 MB/s     #
#    Disk: sdf    #
#    Run 1: 236.74 MB/s     #
#    Run 2: 239.68 MB/s     #
#    Run 3: 2961.91 MB/s     #
#    Run 4: 3038.94 MB/s     #
#    Average: 1619.32 MB/s     #
#    Disk: sdh    #
#    Run 1: 228.78 MB/s     #
#    Run 2: 229.14 MB/s     #
#    Run 3: 2913.63 MB/s     #
#    Run 4: 3014.87 MB/s     #
#    Average: 1596.61 MB/s     #
#    Disk: sdg    #
#    Run 1: 214.93 MB/s     #
#    Run 2: 188.62 MB/s     #
#    Run 3: 2281.16 MB/s     #
#    Run 4: 3028.12 MB/s     #
#    Average: 1428.21 MB/s     #
#    Disk: sdi    #
#    Run 1: 183.42 MB/s     #
#    Run 2: 187.41 MB/s     #
#    Run 3: 495.49 MB/s     #
#    Run 4: 3029.64 MB/s     #
#    Average: 973.99 MB/s     #
#    Disk: sdj    #
#    Run 1: 187.01 MB/s     #
#    Run 2: 248.63 MB/s     #
#    Run 3: 529.65 MB/s     #
#    Run 4: 3024.64 MB/s     #
#    Average: 997.48 MB/s     #
#    Disk: sdk    #
#    Run 1: 238.47 MB/s     #
#    Run 2: 240.28 MB/s     #
#    Run 3: 302.07 MB/s     #
#    Run 4: 2849.10 MB/s     #
#    Average: 907.48 MB/s     #
#    Disk: sdl    #
#    Run 1: 238.28 MB/s     #
#    Run 2: 238.67 MB/s     #
#    Run 3: 276.98 MB/s     #
#    Run 4: 2833.97 MB/s     #
#    Average: 896.97 MB/s     #
#    Disk: sdy    #
#    Run 1: 237.97 MB/s     #
#    Run 2: 237.64 MB/s     #
#    Run 3: 238.21 MB/s     #
#    Run 4: 238.31 MB/s     #
#    Average: 238.03 MB/s     #
#    Disk: sdz    #
#    Run 1: 236.30 MB/s     #
#    Run 2: 237.11 MB/s     #
#    Run 3: 259.58 MB/s     #
#    Run 4: 1494.90 MB/s     #
#    Average: 556.97 MB/s     #
#    Disk: sdaa    #
#    Run 1: 233.77 MB/s     #
#    Run 2: 236.46 MB/s     #
#    Run 3: 247.06 MB/s     #
#    Run 4: 250.19 MB/s     #
#    Average: 241.87 MB/s     #
#    Disk: sdab    #
#    Run 1: 234.54 MB/s     #
#    Run 2: 233.06 MB/s     #
#    Run 3: 371.31 MB/s     #
#    Run 4: 2891.81 MB/s     #
#    Average: 932.68 MB/s     #
#    Disk: sdac    #
#    Run 1: 234.53 MB/s     #
#    Run 2: 234.97 MB/s     #
#    Run 3: 234.95 MB/s     #
#    Run 4: 235.07 MB/s     #
#    Average: 234.88 MB/s     #
#    Disk: sdad    #
#    Run 1: 236.75 MB/s     #
#    Run 2: 237.15 MB/s     #
#    Run 3: 2774.73 MB/s     #
#    Run 4: 3006.34 MB/s     #
#    Average: 1563.74 MB/s     #
#    Disk: sdae    #
#    Run 1: 238.56 MB/s     #
#    Run 2: 238.23 MB/s     #
#    Run 3: 239.97 MB/s     #
#    Run 4: 240.13 MB/s     #
#    Average: 239.22 MB/s     #
#    Disk: sdaf    #
#    Run 1: 251.04 MB/s     #
#    Run 2: 254.38 MB/s     #
#    Run 3: 1693.78 MB/s     #
#    Run 4: 3005.52 MB/s     #
#    Average: 1301.18 MB/s     #
#    Disk: sdag    #
#    Run 1: 242.64 MB/s     #
#    Run 2: 243.03 MB/s     #
#    Run 3: 243.35 MB/s     #
#    Run 4: 234.01 MB/s     #
#    Average: 240.76 MB/s     #
#    Disk: sdah    #
#    Run 1: 241.66 MB/s     #
#    Run 2: 243.91 MB/s     #
#    Run 3: 729.58 MB/s     #
#    Run 4: 2211.85 MB/s     #
#    Average: 856.75 MB/s     #
#    Disk: sdai    #
#    Run 1: 236.64 MB/s     #
#    Run 2: 235.42 MB/s     #
#    Run 3: 234.79 MB/s     #
#    Run 4: 232.29 MB/s     #
#    Average: 234.79 MB/s     #
#    Disk: sdaj    #
#    Run 1: 230.71 MB/s     #
#    Run 2: 231.02 MB/s     #
#    Run 3: 232.86 MB/s     #
#    Run 4: 234.65 MB/s     #
#    Average: 232.31 MB/s     #
###################################

Total benchmark time: 372.75 minutes

Thanks Nick, this is a great idea and works out of the box. A couple of items for me:

  1. This does not recognize that a pool is read-only (my secondary NAS holds my primary’s replicated datasets) and attempts to create the tn-bench dataset and run tests against it.

  2. The read speed test most definitely can be inaccurate, especially with 1 and 4 threads, since if you have enough memory, all of the data written is already cached.

###################################
#         DD Benchmark Results for Pool: wtrpool    #
###################################
#    Threads: 1    #
#    1M Seq Write Run 1: 552.61 MB/s     #
#    1M Seq Write Run 2: 546.81 MB/s     #
#    1M Seq Write Run 3: 551.42 MB/s     #
#    1M Seq Write Run 4: 549.24 MB/s     #
#    1M Seq Write Avg: 550.02 MB/s #
#    1M Seq Read Run 1: 9577.44 MB/s      #
#    1M Seq Read Run 2: 10809.15 MB/s      #
#    1M Seq Read Run 3: 10979.96 MB/s      #
#    1M Seq Read Run 4: 9556.07 MB/s      #
#    1M Seq Read Avg: 10230.65 MB/s  #
###################################
#    Threads: 8    #
#    1M Seq Write Run 1: 755.58 MB/s     #
#    1M Seq Write Run 2: 693.80 MB/s     #
#    1M Seq Write Run 3: 666.78 MB/s     #
#    1M Seq Write Run 4: 763.02 MB/s     #
#    1M Seq Write Avg: 719.80 MB/s #
#    1M Seq Read Run 1: 650.08 MB/s      #
#    1M Seq Read Run 2: 622.37 MB/s      #
#    1M Seq Read Run 3: 660.73 MB/s      #
#    1M Seq Read Run 4: 666.42 MB/s      #
#    1M Seq Read Avg: 649.90 MB/s  #
###################################

The only other item: maybe add a prompt/warning the user has to confirm, along the lines of “This will significantly impact the performance of your system while running”. I was so excited to run this that I made my ChannelsDVR recordings unwatchable while it was running. :smiley:


Additionally, TN-Bench does not correctly identify the boot-pool (this might be as designed) as it shows both drives as N/A for pool:

---------+---------------------------
Name     | sdi
Model    | CT1000BX500SSD1
Serial   | 2418E8AC3C51
ZFS GUID | None
Pool     | N/A
---------+---------------------------

sudo zpool status -v
  pool: boot-pool
 state: ONLINE
config:

        NAME           STATE     READ WRITE CKSUM
        boot-pool      ONLINE       0     0     0
          mirror-0     ONLINE       0     0     0
            sdi3       ONLINE       0     0     0
            nvme0n1p3  ONLINE       0     0     0

@Theo Thank you for the feedback. I’ve made a few changes based on your testing as well as some of my own over the past couple of days.

I’ve updated the dd command in the pool benchmark to use 20 GiB files instead of 10 GiB files, and to run it only twice instead of 4 times. This seems to have increased consistency in my testing so far.

I’ve updated the dd command for the disk benchmark to read as much data as there is RAM, or, if the amount of RAM exceeds the size of the disk, to just read the whole disk. This seems to have drastically changed the behavior and produced much better data.
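
In other words, a trivial sketch of the sizing rule (not the script’s exact code):

def disk_read_bytes(ram_bytes: int, disk_bytes: int) -> int:
    # Read an amount of data equal to system RAM, capped at the disk's own size.
    # e.g. 220 GiB RAM vs. an 8 TB drive -> read 220 GiB;
    #      220 GiB RAM vs. a 120 GB boot SSD -> read the whole disk.
    return min(ram_bytes, disk_bytes)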

I both expect and want ARC to play a role by default, since in real-world use it will actually be in play. However, I want to minimize its impact for the sake of more actionable numbers, and I think these changes do that. I did, however, make several changes to the opening message that I think you will appreciate.

root@prod[~]# git clone https://github.com/nickf1227/TN-Bench.git && cd TN-Bench && python3 truenas-bench.py
Cloning into 'TN-Bench'...
remote: Enumerating objects: 85, done.
remote: Counting objects: 100% (85/85), done.
remote: Compressing objects: 100% (85/85), done.
remote: Total 85 (delta 35), reused 0 (delta 0), pack-reused 0 (from 0)
Receiving objects: 100% (85/85), 49.41 KiB | 4.94 MiB/s, done.
Resolving deltas: 100% (35/35), done.

###################################
#                                 #
#          TN-Bench v1.05         #
#          MONOLITHIC.            #
#                                 #
###################################
TN-Bench is an OpenSource Software Script that uses standard tools to Benchmark your System and collect various statistical information via the TrueNAS API.

TN-Bench will make a Dataset in each of your pools for the purposes of this testing that will consume 20 GiB of space for every thread in your system during its run.

After which time we will prompt you to delete the dataset which was created.
###################################

WARNING: This test will make your system EXTREMELY slow during its run. It is recommended to run this test when no other workloads are running.

NOTE: ZFS ARC will also be used and will impact your results. This may be undesirable in some circumstances, and the zfs_arc_max can be set to 1 (which means 1 byte) to prevent ARC from caching.

NOTE: Setting it back to 0 will restore the default behavior, but the system will need to be restarted!
###################################

This in particular is expected behavior based on the output of midclt call pool.query. The boot disk is, however, tested in the individual disk benchmark, which should be sufficient unless you disagree. I’ve added a note about this in the script itself.

    print("\n### Disk Information ###")
    print("###################################")
    print("\nNOTE: The TrueNAS API will return N/A for the Pool for the boot device(s) as well as any disk is not a member of a pool.")
    print("###################################")

The script doesn’t currently have logic to check for read-only pools. How did this present for you? Did it run anyway for the pools that are not read-only?

I am running one more, larger system now to sanity-check the efficacy of the changes made.

###################################
#                                 #
#          TN-Bench v1.05         #
#          MONOLITHIC.            #
#                                 #
###################################
TN-Bench is an OpenSource Software Script that uses standard tools to Benchmark your System and collect various statistical information via the TrueNAS API.

TN-Bench will make a Dataset in each of your pools for the purposes of this testing that will consume 20 GiB of space for every thread in your system during its run.

After which time we will prompt you to delete the dataset which was created.
###################################

WARNING: This test will make your system EXTREMELY slow during its run. It is recommended to run this test when no other workloads are running.

NOTE: ZFS ARC will also be used and will impact your results. This may be undesirable in some circumstances, and the zfs_arc_max can be set to 1 (which means 1 byte) to prevent ARC from caching.

NOTE: Setting it back to 0 will restore the default behavior, but the system will need to be restarted!
###################################

Would you like to continue? (yes/no): yes

### System Information ###
Field                 | Value                                       
----------------------+---------------------------------------------
Version               | TrueNAS-SCALE-25.04.0-MASTER-20250110-005622
Load Average (1m)     | 0.06689453125                               
Load Average (5m)     | 0.142578125                                 
Load Average (15m)    | 0.15283203125                               
Model                 | AMD Ryzen 5 5600G with Radeon Graphics      
Cores                 | 12                                          
Physical Cores        | 6                                           
System Product        | X570 AORUS ELITE                            
Physical Memory (GiB) | 30.75                                       

### Pool Information ###
Field      | Value       
-----------+-------------
Name       | inferno     
Path       | /mnt/inferno
Status     | ONLINE      
VDEV Count | 1           
Disk Count | 5           

VDEV Name  | Type           | Disk Count
-----------+----------------+---------------
raidz1-0    | RAIDZ1         | 5

### Disk Information ###

NOTE: The TrueNAS API will return N/A for the Pool for the boot device(s) as well as the disk name if the disk is not a member of a pool.
Field      | Value                     
-----------+---------------------------
Name       | nvme0n1                   
Model      | INTEL SSDPE21D960GA       
Serial     | PHM29081000X960CGN        
ZFS GUID   | 212601209224793468        
Pool       | inferno                   
Size (GiB) | 894.25                    
-----------+---------------------------
-----------+---------------------------
Name       | nvme3n1                   
Model      | INTEL SSDPE21D960GA       
Serial     | PHM2913000QM960CGN        
ZFS GUID   | 16221756077833732578      
Pool       | inferno                   
Size (GiB) | 894.25                    
-----------+---------------------------
-----------+---------------------------
Name       | nvme5n1                   
Model      | INTEL SSDPE21D960GA       
Serial     | PHM2913000YF960CGN        
ZFS GUID   | 8625327235819249102       
Pool       | inferno                   
Size (GiB) | 894.25                    
-----------+---------------------------
-----------+---------------------------
Name       | nvme2n1                   
Model      | INTEL SSDPE21D960GA       
Serial     | PHM2913000DC960CGN        
ZFS GUID   | 11750420763846093416      
Pool       | inferno                   
Size (GiB) | 894.25                    
-----------+---------------------------
-----------+---------------------------
Name       | nvme4n1                   
Model      | SAMSUNG MZVL2512HCJQ-00BL7
Serial     | S64KNX2T216015            
ZFS GUID   | None                      
Pool       | N/A                       
Size (GiB) | 476.94                    
-----------+---------------------------
-----------+---------------------------
Name       | nvme1n1                   
Model      | INTEL SSDPE21D960GA       
Serial     | PHM2908101QG960CGN        
ZFS GUID   | 10743034860780890768      
Pool       | inferno                   
Size (GiB) | 894.25                    
-----------+---------------------------
-----------+---------------------------

###################################
#                                 #
#       DD Benchmark Starting     #
#                                 #
###################################
Using 12 threads for the benchmark.


Creating test dataset for pool: inferno

Running benchmarks for pool: inferno
Running DD write benchmark with 1 threads...
Run 1 write speed: 411.17 MB/s
Run 2 write speed: 412.88 MB/s
Average write speed: 412.03 MB/s
Running DD read benchmark with 1 threads...
Run 1 read speed: 6762.11 MB/s
Run 2 read speed: 5073.43 MB/s
Average read speed: 5917.77 MB/s
Running DD write benchmark with 3 threads...
Run 1 write speed: 1195.91 MB/s
Run 2 write speed: 1193.22 MB/s
Average write speed: 1194.56 MB/s
Running DD read benchmark with 3 threads...
Run 1 read speed: 4146.25 MB/s
Run 2 read speed: 4161.19 MB/s
Average read speed: 4153.72 MB/s
Running DD write benchmark with 6 threads...
Run 1 write speed: 2060.54 MB/s
Run 2 write speed: 2058.62 MB/s
Average write speed: 2059.58 MB/s
Running DD read benchmark with 6 threads...
Run 1 read speed: 4209.25 MB/s
Run 2 read speed: 4212.84 MB/s
Average read speed: 4211.05 MB/s
Running DD write benchmark with 12 threads...
Run 1 write speed: 2353.74 MB/s
Run 2 write speed: 2184.07 MB/s
Average write speed: 2268.91 MB/s
Running DD read benchmark with 12 threads...
Run 1 read speed: 4191.27 MB/s
Run 2 read speed: 4199.91 MB/s
Average read speed: 4195.59 MB/s

###################################
#         DD Benchmark Results for Pool: inferno    #
###################################
#    Threads: 1    #
#    1M Seq Write Run 1: 411.17 MB/s     #
#    1M Seq Write Run 2: 412.88 MB/s     #
#    1M Seq Write Avg: 412.03 MB/s #
#    1M Seq Read Run 1: 6762.11 MB/s      #
#    1M Seq Read Run 2: 5073.43 MB/s      #
#    1M Seq Read Avg: 5917.77 MB/s  #
###################################
#    Threads: 3    #
#    1M Seq Write Run 1: 1195.91 MB/s     #
#    1M Seq Write Run 2: 1193.22 MB/s     #
#    1M Seq Write Avg: 1194.56 MB/s #
#    1M Seq Read Run 1: 4146.25 MB/s      #
#    1M Seq Read Run 2: 4161.19 MB/s      #
#    1M Seq Read Avg: 4153.72 MB/s  #
###################################
#    Threads: 6    #
#    1M Seq Write Run 1: 2060.54 MB/s     #
#    1M Seq Write Run 2: 2058.62 MB/s     #
#    1M Seq Write Avg: 2059.58 MB/s #
#    1M Seq Read Run 1: 4209.25 MB/s      #
#    1M Seq Read Run 2: 4212.84 MB/s      #
#    1M Seq Read Avg: 4211.05 MB/s  #
###################################
#    Threads: 12    #
#    1M Seq Write Run 1: 2353.74 MB/s     #
#    1M Seq Write Run 2: 2184.07 MB/s     #
#    1M Seq Write Avg: 2268.91 MB/s #
#    1M Seq Read Run 1: 4191.27 MB/s      #
#    1M Seq Read Run 2: 4199.91 MB/s      #
#    1M Seq Read Avg: 4195.59 MB/s  #
###################################
Cleaning up test files...
Running disk read benchmark...
This benchmark tests the 4K sequential read performance of each disk in the system using dd. It is run 2 times for each disk and averaged.
In order to work around ARC caching in systems with it still enabled, This benchmark reads data in the amount of total system RAM or the total size of the disk, whichever is smaller.
Testing disk: nvme0n1
Testing disk: nvme0n1
Testing disk: nvme3n1
Testing disk: nvme3n1
Testing disk: nvme5n1
Testing disk: nvme5n1
Testing disk: nvme2n1
Testing disk: nvme2n1
Testing disk: nvme4n1
Testing disk: nvme4n1
Testing disk: nvme1n1
Testing disk: nvme1n1

###################################
#         Disk Read Benchmark Results   #
###################################
#    Disk: nvme0n1    #
#    Run 1: 2032.08 MB/s     #
#    Run 2: 1825.83 MB/s     #
#    Average: 1928.95 MB/s     #
#    Disk: nvme3n1    #
#    Run 1: 1964.28 MB/s     #
#    Run 2: 1939.57 MB/s     #
#    Average: 1951.93 MB/s     #
#    Disk: nvme5n1    #
#    Run 1: 1908.79 MB/s     #
#    Run 2: 1948.96 MB/s     #
#    Average: 1928.88 MB/s     #
#    Disk: nvme2n1    #
#    Run 1: 1947.48 MB/s     #
#    Run 2: 1762.31 MB/s     #
#    Average: 1854.90 MB/s     #
#    Disk: nvme4n1    #
#    Run 1: 1829.80 MB/s     #
#    Run 2: 1787.41 MB/s     #
#    Average: 1808.60 MB/s     #
#    Disk: nvme1n1    #
#    Run 1: 1836.51 MB/s     #
#    Run 2: 1879.80 MB/s     #
#    Average: 1858.16 MB/s     #
###################################

Total benchmark time: 15.88 minutes

If you can, I’d love to see how it runs on your system now.
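
As an aside on the disk pass above: the sizing logic the output describes (read min(total RAM, disk size) in 4K blocks via dd) boils down to something like this sketch. It is an illustration of the described behavior, not the script’s exact code, and the dd flags are my assumptions.

    # Sketch of the per-disk read sizing described in the benchmark output above.
    # Assumption: dd is invoked with a 4K block size and the total read is capped
    # at min(system RAM, disk size) to defeat caching.
    import subprocess

    def disk_read_bench(device: str, disk_size_bytes: int, ram_bytes: int) -> None:
        to_read = min(ram_bytes, disk_size_bytes)
        count = to_read // 4096                      # number of 4K blocks
        subprocess.run(
            ["dd", f"if=/dev/{device}", "of=/dev/null", "bs=4K", f"count={count}"],
            check=True,
        )

    # e.g. disk_read_bench("nvme0n1", 894 * 1024**3, 32 * 1024**3)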

It seemed undeterred and acted like it was running tests on the read-only pool. I checked to see if the tn-bench dataset was there, and it was not. At that point I canned the process and proceeded to run it on my production NAS (and got in trouble with my wife :slight_smile: )

Nice changes in the 1.05 version (testing it now). For a future version, I think making max threads a configuration or command-line parameter would be smart. My backup system takes 3 1/2 years to do the 8-thread read/write test. (Okay, several hours.)

While I do plan on making things more configurable at some point, the default threading behavior was intentional.

As an example, I have an all-NVMe system with a large number of threads and a lot of RAM bandwidth (top example, 2x Intel(R) Xeon(R) Silver 4114 CPU @ 2.20GHz) and another all-NVMe system (bottom example, 1x AMD Ryzen 5 5600G with Radeon Graphics) with a lot less RAM and RAM bandwidth, but much faster cores.

###################################
#         DD Benchmark Results for Pool: fire    #
###################################
#    Threads: 1    #
#    1M Seq Write Run 1: 237.04 MB/s     #
#    1M Seq Write Run 2: 223.83 MB/s     #
#    1M Seq Write Avg: 230.43 MB/s #
#    1M Seq Read Run 1: 2756.02 MB/s      #
#    1M Seq Read Run 2: 2732.92 MB/s      #
#    1M Seq Read Avg: 2744.47 MB/s  #
###################################
#    Threads: 10    #
#    1M Seq Write Run 1: 2076.10 MB/s     #
#    1M Seq Write Run 2: 2092.43 MB/s     #
#    1M Seq Write Avg: 2084.26 MB/s #
#    1M Seq Read Run 1: 6059.59 MB/s      #
#    1M Seq Read Run 2: 6060.71 MB/s      #
#    1M Seq Read Avg: 6060.15 MB/s  #
###################################
#    Threads: 20    #
#    1M Seq Write Run 1: 2925.10 MB/s     #
#    1M Seq Write Run 2: 2871.85 MB/s     #
#    1M Seq Write Avg: 2898.48 MB/s #
#    1M Seq Read Run 1: 6406.70 MB/s      #
#    1M Seq Read Run 2: 6442.41 MB/s      #
#    1M Seq Read Avg: 6424.56 MB/s  #
###################################
#    Threads: 40    #
#    1M Seq Write Run 1: 2923.48 MB/s     #
#    1M Seq Write Run 2: 2969.82 MB/s     #
#    1M Seq Write Avg: 2946.65 MB/s #
#    1M Seq Read Run 1: 6514.30 MB/s      #
#    1M Seq Read Run 2: 6571.73 MB/s      #
#    1M Seq Read Avg: 6543.02 MB/s  #
###################################

###################################
#         DD Benchmark Results for Pool: inferno    #
###################################
#    Threads: 1    #
#    1M Seq Write Run 1: 411.17 MB/s     #
#    1M Seq Write Run 2: 412.88 MB/s     #
#    1M Seq Write Avg: 412.03 MB/s #
#    1M Seq Read Run 1: 6762.11 MB/s      #
#    1M Seq Read Run 2: 5073.43 MB/s      #
#    1M Seq Read Avg: 5917.77 MB/s  #
###################################
#    Threads: 3    #
#    1M Seq Write Run 1: 1195.91 MB/s     #
#    1M Seq Write Run 2: 1193.22 MB/s     #
#    1M Seq Write Avg: 1194.56 MB/s #
#    1M Seq Read Run 1: 4146.25 MB/s      #
#    1M Seq Read Run 2: 4161.19 MB/s      #
#    1M Seq Read Avg: 4153.72 MB/s  #
###################################
#    Threads: 6    #
#    1M Seq Write Run 1: 2060.54 MB/s     #
#    1M Seq Write Run 2: 2058.62 MB/s     #
#    1M Seq Write Avg: 2059.58 MB/s #
#    1M Seq Read Run 1: 4209.25 MB/s      #
#    1M Seq Read Run 2: 4212.84 MB/s      #
#    1M Seq Read Avg: 4211.05 MB/s  #
###################################
#    Threads: 12    #
#    1M Seq Write Run 1: 2353.74 MB/s     #
#    1M Seq Write Run 2: 2184.07 MB/s     #
#    1M Seq Write Avg: 2268.91 MB/s #
#    1M Seq Read Run 1: 4191.27 MB/s      #
#    1M Seq Read Run 2: 4199.91 MB/s      #
#    1M Seq Read Avg: 4195.59 MB/s  #
###################################

The pools are not apples-to-apples: different drives, different vdev topology. But what may be interesting is that running the series of thread counts (1 thread, 1/4 of the threads in your system, 1/2 of them, and all of them) helps you find where your bottleneck is.
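
For clarity, that ladder is derived from the machine itself; a minimal sketch of the idea (my illustration, not necessarily the script’s exact code):

    # The default thread ladder described above: 1 thread, 1/4 of the system's
    # threads, 1/2 of them, and all of them.
    import os

    def thread_series() -> list[int]:
        total = os.cpu_count() or 1
        return sorted({1, max(1, total // 4), max(1, total // 2), total})

    print(thread_series())   # on a 12-thread system: [1, 3, 6, 12]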

Looking at Threads = 1 between those two all-NVMe systems, we can see that there is likely a pretty substantial impact from higher CPU frequencies and higher IPC.
With only 1.25x as many disks (both RAIDZ1, 5 vs. 4), the bottom system delivers roughly 1.8x the write throughput and 2.2x the read throughput of the top system, or about 1.4x and 1.7x on a per-disk basis.

Then, when the thread count ramps up, the results are flipped on their head because of the additional RAM capacity and RAM bandwidth. Comparing the 40-thread results to the 12-thread results, the system with 1.25x as many disks falls way behind: per disk, it is only about 61% as fast at writes and roughly 51% as fast at reads.

You can also see different performance characteristics for each run. Particularly interesting is that one of these systems stops scaling beyond 1/4 of its threads, whereas the other keeps scaling, which probably speaks pretty loudly to the lack of memory capacity and bandwidth…

Small update to handle spaces in pool names, as well as to add a handler for read-only pools.
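
Roughly, those two fixes look like the sketch below. It is illustrative only: the dd invocation is hypothetical, and the read-only check shown here uses the zfs CLI rather than whatever the script actually calls.

    # Illustrative sketch of the two fixes mentioned above (not the actual patch).
    import shlex
    import subprocess

    def pool_is_readonly(pool_name: str) -> bool:
        # Check the pool's root dataset readonly property via the zfs CLI.
        out = subprocess.run(
            ["zfs", "get", "-H", "-o", "value", "readonly", pool_name],
            capture_output=True, text=True, check=True,
        ).stdout.strip()
        return out == "on"

    def write_target(pool_name: str, index: int) -> str:
        # shlex.quote() keeps paths like '/mnt/my pool/tn-bench/file_0.dat'
        # intact when the command string is handed to a shell.
        return shlex.quote(f"/mnt/{pool_name}/tn-bench/file_{index}.dat")

    for pool in ["inferno", "my pool"]:     # hypothetical pool names
        if pool_is_readonly(pool):
            print(f"Skipping read-only pool: {pool}")
            continue
        # Hypothetical dd invocation, shown only to illustrate the quoting.
        print(f"dd if=/dev/urandom of={write_target(pool, 0)} bs=1M count=20480")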

I’ve tried running this and it seems to fail. It’s giving me results on the order of terabytes per second. Is there anything special I need to do to run this?

Can you provide the output of the script?

Looks like a bunch of permissions errors. I probably don’t have ACLs set correctly.

It also does not delete the datasets at the end of the run.

tn-bench output.txt (189.0 KB)


Try running as root (sudo su) instead of just sudo, and from a location you have write access to. For example: sudo su, then mkdir /mnt/ssd_pool/scripts && cd /mnt/ssd_pool/scripts, and then run git clone https://github.com/nickf1227/TN-Bench.git && cd TN-Bench && python3 truenas-bench.py

I’ll add a handler for this in the next version to make it more obvious that we need elevated privileges.

dd: failed to open '/mnt/ssd_pool/tn-bench/file_0.dat': Permission denied
dd: failed to open '/mnt/ssd_pool/tn-bench/file_3.dat': Permission denied
dd: failed to open '/mnt/ssd_pool/tn-bench/file_2.dat': Permission denied
dd: failed to open '/mnt/ssd_pool/tn-bench/file_1.dat': Permission denied

Same goes for the dataset cleanup and removal: if it’s busy or you don’t have permission, it won’t delete the dataset. I’ll see if I can make this more obvious.
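
An early check at startup would make both failure modes obvious; a minimal sketch (my suggestion, not the shipped code):

    # Minimal sketch of an early privilege check, so the permission-denied dd
    # errors above surface as one clear message up front.
    import os
    import sys

    if os.geteuid() != 0:
        sys.exit("TN-Bench needs elevated privileges; please re-run as root (e.g. 'sudo su').")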

Yup, looks like that fixes it. I need to learn permissions better.

1 Like

Nice, best of luck for this :slight_smile:

Made a similar attempt years ago as a winter holiday project, but did not keep it up, so it is outdated now.

One thing I’d recommend is moving to fio instead of dd; that’s a much more professional tool and has a lot more options.

(If you want me to take out the link let me know, not trying to steal your thunder here, just provided for you to have a look at if you care)
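
For what it’s worth, if tn-bench did move to fio, the pool write pass could be driven from Python along these lines. This is a sketch using standard fio options; the job parameters are my assumptions, not the current script.

    # Hypothetical fio-based replacement for the dd sequential-write pass,
    # sized to mirror the existing 20 GiB-per-thread workload.
    import subprocess

    def fio_seq_write(directory: str, threads: int) -> None:
        subprocess.run([
            "fio",
            "--name=tn-bench-seq-write",
            "--rw=write",                # sequential writes
            "--bs=1M",                   # match the 1M-recordsize test dataset
            "--size=20G",                # per job
            f"--numjobs={threads}",
            f"--directory={directory}",
            "--ioengine=psync",
            "--group_reporting",
        ], check=True)

    # e.g. fio_seq_write("/mnt/inferno/tn-bench", 12)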

Question: it’s writing 20 GiB of data for each thread, correct? I’m running into space limitations on my machine. I have an SSD pool with around 1 TB of free space and 56 threads. Is there anything built in to make sure it doesn’t absolutely fill up the pool?

There is currently no space allocation check in place. Please stop the script from running, and I can work on this in the next version.

1 Like

-squint- Hey wait a minute- TrueNAS staff? When did that happen @NickF1227?

Oh and cool script. I will give this a shot!

1 Like

@koop :slight_smile: Much more recently than most of my community involvement, but long enough that it was time to put on the badge.

@Surgikill The new version has a calculator for space allocation. Thank you for the feedback.
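
For reference, the check is simple arithmetic like the sketch below; how the script sources the pool’s free space (presumably via the TrueNAS API) is not shown here.

    # Illustrative space check: 20 GiB is written per thread, so refuse to run
    # if the pool's free space cannot cover it.
    GIB = 1024 ** 3

    def enough_space(pool_free_bytes: int, threads: int, per_thread_gib: int = 20) -> bool:
        return pool_free_bytes >= threads * per_thread_gib * GIB

    # The question above: ~1 TB free with 56 threads needs 56 * 20 GiB ≈ 1.09 TiB,
    # so the check declines to run rather than fill the pool.
    print(enough_space(1_000_000_000_000, 56))   # False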

1 Like