TN-Bench: An Open-Source Community TrueNAS Benchmarking and Testing Utility

Disclaimer: This is a community resource that is not officially endorsed or supported by iXsystems. It is a personal passion project, and all opinions expressed are my own.

Background

Over the years I’ve spent a lot of time playing with benchmarking and min-maxing systems. I cut my teeth on systems engineering the same way I suspect many other Millennials did… overclocking!

In that realm, there has always been a plethora of community-driven tools, from small projects using x264 to test the stability of Intel CPUs to large ones designed to stress test your RAM.

The TrueNAS Benchmarking Gap

In the TrueNAS community too, we’ve had various tools for testing drives, like solnet-array-test and the infamous SLOG benchmarking thread @wendell has referenced at least once on the T3 Podcast. However, interest in these older threads has waned as software development on the TrueNAS side has marched along from FreeBSD → Linux.

With diskinfo unavailable on Linux and @jgreco no longer a member of the community, I feel like there’s a hole that needs to be plugged.

What is tn-bench?

I’ve begun development on what I’m calling tn-bench, which in its current form is a monolithic Python script that runs a series of pool- and disk-based benchmarks to help give users a better understanding of their system’s performance.

Current State

For now, the script is monolithic and not configurable. It’s designed to give an idea of the maximum performance possible from your TrueNAS pools as they are configured in your system. This is done by creating a temporary dataset using the TrueNAS API, which has:

  • recordsize=1M
  • compression=none
  • sync=disabled
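
For reference, a roughly equivalent manual call through the TrueNAS middleware looks something like this (a sketch only; the pool name and exact payload here are illustrative, not the script’s literal invocation):

midclt call pool.dataset.create '{"name": "tank/tn-bench", "recordsize": "1M", "compression": "OFF", "sync": "DISABLED"}'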

Future Plans

In the future, I plan to make this script more modular so that I can add additional tests. The TrueNAS API is the real magic here.

Idea: In the near future I plan to use it to find a pool with a SLOG device, remove the SLOG from the pool, benchmark it, and add it back.

However, I wanted to get this out there for other users to test and compare, and hopefully convince some of you to contribute back.

Share Your Results!

Please share your results here so we may all benefit from the data. If there’s enough interest, I hope to one day create a community repository where users can explore this data in a database. For now, just post your results here, or use this script as a quick burn-in before lighting up your system.

:link: Original release: GitHub - nickf1227/tn-bench at monolithic-version-1.11


Update: TN-Bench v2.2 is now available!

It’s been quite a journey since that first monolithic script I threw together in a weekend. What started as a “let’s see how fast this goes” tool has grown into something I’m genuinely proud of — and something I couldn’t have built without the community feedback and testing from folks running it out there in the wild.

The codebase has been completely refactored into a modular architecture (v2.0/v2.1), and v2.2 brings some major new capabilities I’ve been wanting for a while.

What’s New in v2.2

ARC Statistics Telemetry — Ever wonder how much your ZFS cache is actually helping during those read benchmarks? TN-Bench now collects real-time ARC data using arcstat during READ phases. You’ll see hit rates, MRU/MFU distribution, prefetch effectiveness, and L2ARC metrics (only if you have one — it auto-detects and skips L2ARC metrics otherwise, so no clutter on systems without cache devices).

Configurable Block Sizes — The dataset isn’t hardcoded to 1M anymore. You can now test with block sizes ranging from 16k all the way up to 16M. Want to see how your pool handles small metadata-heavy workloads versus large sequential throughput? Now you can. The dataset recordsize and dd block size stay in sync automatically, and we still write exactly 20 GiB per thread regardless of block size (the math just changes — 16k = 1,310,720 blocks, 1M = 20,480 blocks, etc.).
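
If you want to sanity-check the block math yourself, it’s just 20 GiB divided by the block size:

# 20 GiB per thread, regardless of block size
echo $(( 20 * 1024**3 / (16 * 1024) ))   # 16k blocks -> 1310720
echo $(( 20 * 1024**3 / 1024**2 ))       # 1M blocks  -> 20480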

Comprehensive Telemetry Collection — Back in v1.11, all you got was the final throughput numbers. Now? We’re collecting detailed telemetry throughout the entire benchmark using both zpool iostat and arcstat.

Why? Because throughput alone doesn’t tell the whole story. The zpool iostat collector captures IOPS, bandwidth, and latency metrics at 1-second intervals during each test phase. Meanwhile, the ARC collector watches your ZFS cache behavior during READ operations — showing you exactly when your working set fits in cache versus when you’re hitting the actual disks. This is the data you need to understand why your pool performs the way it does, not just how fast it goes.

Both collectors feed into a unified analytics engine that generates per-thread statistics (mean, median, P99, CV% consistency ratings) and saves everything alongside your results. No more guessing if that performance drop was real or just a measurement blip.
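
As a rough illustration of that consistency math (tn-bench does this internally in Python; this awk one-liner is just a sketch over a file with one MB/s sample per line):

awk '{ s += $1; ss += $1 * $1; n++ }
     END { m = s / n; sd = sqrt(ss / n - m * m);
           printf "mean=%.1f MB/s  stddev=%.1f  CV=%.1f%%\n", m, sd, 100 * sd / m }' samples.txt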

Get It

# Clone the main branch (now at v2.2)
git clone https://github.com/nickf1227/tn-bench.git

# Or grab a specific release
git clone -b v2.2.0 https://github.com/nickf1227/tn-bench.git

As always, this is still a nights-and-weekends project. No official support, no warranty, just a tool I built because I wanted better data and figured others might too.

If you run it, share your results! I’m collecting data across different pool configurations to eventually build a community performance database. The more weird and wonderful setups you throw at this (striped mirrors of SMR drives, dual-actuator pools, some USB abomination, whatever you’ve got), the better the data gets.

Latest release: Release tn-bench v2.2.0 · nickf1227/tn-bench · GitHub

—Nick


tn-bench v2.2

tn-bench is an open-source script that benchmarks your system and collects various statistics via the TrueNAS API. It creates a dataset in each of your pools during testing, consuming 20 GiB of space for each thread in your system.

:new_button: What’s New in v2.2

ARC Statistics Telemetry (arcstat)

  • Real-time ZFS ARC monitoring during READ benchmark phases
  • Measures cache hit rate, ARC size, MRU/MFU distribution, and prefetch effectiveness
  • Auto-detects L2ARC presence — L2ARC metrics omitted entirely on systems without cache devices
  • Per-thread-count analysis shows how ARC performance changes with workload scale

Enhanced Zpool Latency Analytics

  • Fixed critical column mapping bug: zpool iostat -l fields are interleaved read/write pairs, not grouped by type
  • Latency unit auto-scaling: Displays μs when mean < 1ms (NVMe-class storage), ms otherwise
  • Per-thread-count latency breakdown with P99 ratings and CV% consistency metrics
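
You can see the interleaving for yourself with a manual run (standard OpenZFS flags; the pool name is illustrative):

# -H: script-friendly output, -p: exact values, -l: latency columns, 1s interval
# Fields arrive as read/write pairs (ops r/w, bandwidth r/w, total_wait r/w, ...),
# which is what the fixed column mapping accounts for
zpool iostat -Hp -l tank 1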

L2ARC Auto-Detection

  • Detects cache devices via zpool status before starting telemetry collection
  • Prevents arcstat crashes on systems without L2ARC hardware
  • Dynamic field list: 18 fields (core + zfetch) without L2ARC, 21 fields with L2ARC
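
The detection idea, sketched in shell (tn-bench does this in Python, and the exact field list below is an assumption rather than the script’s literal one):

# Only request L2ARC columns from arcstat when a cache vdev exists
if zpool status | grep -qw cache; then
    fields="time,hit%,dh%,ph%,mh%,arcsz,l2hit%,l2size"
else
    fields="time,hit%,dh%,ph%,mh%,arcsz"
fi
arcstat -f "$fields" 1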

Previous: What’s New in v2.1

Automatic Analytics

  • Post-benchmark analysis automatically identifies scaling patterns
  • Generates _analytics.json with structured performance data
  • Generates _report.md with human-readable markdown tables
  • Neutral data presentation — reports observations without judgment

Delta-Based Scaling Analysis

  • Tracks performance changes between thread count steps
  • Identifies optimal thread count for each pool
  • Shows thread efficiency (MB/s per thread at peak)
  • Highlights notable transitions (gains, losses, plateaus)

Per-Disk Pool Comparison

  • Compares individual disk performance to pool average
  • Shows variance percentage within each pool
  • Identifies outliers using % of pool max metric

Unified Telemetry Formatter

  • Single source of truth for console UI and markdown reports
  • Console output is now a “live preview” of the report content
  • Consistent formatting, CV% ratings, and table layouts
  • Future changes only need to happen in one place

Codebase Audit & Cleanup

  • Consolidated disk benchmark modules (removed disk_raw.py)
  • Removed ~250 lines of dead/stale code
  • Unified duplicate formatting logic
  • Reduced total module count from 16 to 15
  • Fixed edge-case bug in error handling

Previous: What’s New in v2.0

Modular Architecture

tn-bench v2.0 has been completely refactored into a modular architecture. While the user experience remains identical to v1.x, the underlying codebase is now organized into clean, maintainable modules:

tn-bench/
├── truenas-bench.py          # Main coordinator (thin UI layer)
├── core/                     # Core functionality
│   ├── __init__.py          # System/pool/disk API calls
│   ├── dataset.py           # Dataset lifecycle management
│   ├── results.py           # JSON output handling
│   ├── analytics.py         # Scaling analysis engine (v2.1)
│   ├── report_generator.py  # Markdown report generation (v2.1)
│   ├── telemetry_formatter.py  # Unified console/markdown formatter (v2.1)
│   └── zpool_iostat_collector.py  # ZFS pool iostat telemetry (v2.1)
├── benchmarks/              # Benchmark implementations
│   ├── __init__.py          # Exports benchmark classes
│   ├── base.py              # Abstract base class
│   ├── zfs_pool.py          # ZFS pool write/read benchmark
│   └── disk_enhanced.py     # Individual disk benchmark (v2.0)
└── utils/                   # Common utilities
    └── __init__.py          # Colors, formatting, print functions

Benefits of this design:

  • Easier Maintenance: Each component is isolated and testable
  • Simple Extensibility: New benchmarks can be added by inheriting from BenchmarkBase
  • Clear Separation: UI, core logic, and benchmarks are cleanly separated
  • Reusable Components: Core utilities can be shared across benchmarks

See ARCHITECTURE.md for detailed documentation on the modular design.

Features

  • Modular Architecture: Clean separation between UI, core logic, and benchmarks
  • Enhanced Disk Benchmarking: Multiple test modes (serial, parallel, seek-stress) and configurable block sizes
  • Collects system information using the TrueNAS API.
  • Benchmarks system performance using the dd command.
  • Provides detailed information about system, pools, and disks.
  • Supports multiple pools with interactive selection.
  • Configurable iteration counts for both pool and disk benchmarks.
  • Space validation before running benchmarks.
  • Drive Writes Per Day (DWPD) calculation for pool benchmarks.
  • Colorized output for better readability.
  • JSON output with structured schema for sharing results.
  • Extensible: Easy to add new benchmark types via the BenchmarkBase class

Running the Script

Running the script is a simple git clone away. Please note: this script must be run as root.

git clone -b tn-bench-2.2 https://github.com/nickf1227/tn-bench.git && cd tn-bench && python3 truenas-bench.py

NOTE: /dev/urandom generates inherently incompressible data, so the compression setting on the test dataset has minimal effect in the current form.

The script will display system and pool information, then prompt you to continue with the benchmarks. Follow the prompts to complete the benchmarking process.

Benchmarking Process

  • Dataset Creation: The script creates a temporary dataset in each pool. The dataset is created with a 1M record size, no compression, and sync=disabled using midclt call pool.dataset.create.
  • Space Validation: Before running benchmarks, the script checks available space in the dataset and warns if it is insufficient (requires 20 GiB × thread count). You can choose to proceed anyway or skip the pool.
  • Pool Write Benchmark: The script performs write benchmarks using dd across four thread-count configurations (1, cores÷4, cores÷2, and cores). Each configuration runs N times (configurable, default 2). We use /dev/urandom as our input, so CPU performance may be relevant. This is by design: /dev/zero is flawed for this purpose, and CPU stress is expected in real-world use anyway. The data is written in 1M chunks to a dataset with a 1M record size. Each thread writes 20 GiB, so the total scales with the number of threads: a system with 16 threads would write 320 GiB per iteration (see the dd sketch after this list).
  • Pool Read Benchmark: The script performs read benchmarks using dd across the same four thread-count configurations. We use /dev/null as our output, so RAM speed may be relevant. The data is read in 1M chunks from a dataset with a 1M record size. Each thread reads back the 20 GiB it previously wrote.
  • DWPD Calculation: After each pool’s benchmarks complete, the script calculates Drive Writes Per Day (DWPD) based on total data written, pool capacity, and test duration.
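
For reference, each worker thread’s write and read boil down to dd invocations along these lines (paths and file names here are illustrative, not the script’s literal commands):

# WRITE phase: 20 GiB of incompressible data per thread, in 1M blocks
dd if=/dev/urandom of=/mnt/tank/tn-bench/thread1.dat bs=1M count=20480
# READ phase: the same 20 GiB back, discarded to /dev/null
dd if=/mnt/tank/tn-bench/thread1.dat of=/dev/null bs=1M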

NOTE: ZFS ARC will also be used and will impact your results. This may be undesirable in some circumstances, and the zfs_arc_max can be set to 1 (which means 1 byte) to prevent ARC from caching. Setting it back to 0 will restore the default behavior, but the system will need to be restarted!
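
On TrueNAS SCALE (Linux), one way to do this is through the standard OpenZFS module parameter; mind the restart caveat above:

# Effectively disable ARC caching
echo 1 > /sys/module/zfs/parameters/zfs_arc_max
# Restore the default (per the note above, a restart is needed)
echo 0 > /sys/module/zfs/parameters/zfs_arc_max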

I have tested several permutations of file sizes on a dozen systems with varying storage types, capacities, and amounts of RAM, and eventually settled on the current behavior for several reasons. Primarily, I wanted to reduce the impact of, but not remove, the ZFS ARC, since in a real-world scenario you would be leveraging the benefits of ARC caching. However, in order to avoid wildly unrealistic results, I needed file sizes that saturate the ARC completely. I believe this gives us the best data possible.

As an example, you can watch the ARC yourself by running arcstat in a second shell while the benchmark runs:
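
arcstat -f time,hit%,dh%,ph%,mh% 10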

  • Disk Benchmark: The script performs sequential read benchmarks on individual disks using dd. The read size is calculated as min(system RAM, disk size) to work around ARC caching. Data is read in 4K chunks to /dev/null, making this a 4K sequential read test (a sketch follows below). 4K was chosen because ashift=12 for all recent ZFS pools created in TrueNAS. The number of iterations is configurable (default 2). Run-to-run variance is expected, particularly on SSDs, as data may end up in internal caches.
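
A sketch of what that per-disk read looks like (device name and count are illustrative; the script derives the count from min(system RAM, disk size) ÷ 4K):

# e.g. 64 GiB of sequential 4K reads from one disk, discarded
dd if=/dev/sda of=/dev/null bs=4K count=16777216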

Enhanced Disk Benchmark (v2.0)

tn-bench v2.0 introduces an enhanced disk benchmark with multiple test modes and configurable block sizes:

Test Modes:

  • SERIAL (default): Test disks one at a time

    • Best for baseline performance measurements
    • Minimal system impact
    • Recommended for production systems
  • PARALLEL: Test all disks simultaneously

    • Stress tests storage controllers and backplanes
    • Higher resource usage than serial mode
    • Useful for identifying controller bottlenecks
  • SEEK_STRESS: Multiple threads per disk

    • Heavy stress on disk seek mechanisms
    • Can saturate CPU cores
    • May cause system instability on busy systems
    • Not recommended for production use

Block Size Options:

  • 4K (small random I/O)

  • 32K (medium I/O)

  • 128K (large sequential)

  • 1M (very large sequential)

Results: The script displays the results for each run along with the average speed. This should give you an idea of the impact of various thread counts (as a synthetic representation of client counts) and of the ZFS ARC caching mechanism.

NOTE: The script’s run duration depends on the number of threads in your system as well as the number of disks. Small all-flash systems may complete this benchmark in 25 minutes, while larger systems with spinning hard drives may take several hours. The script will not stop other I/O activity on a production system, but it will severely limit performance. This benchmark is best run on a system with no other workload; this gives you the most accurate data, in addition to not creating angry users.

Performance Considerations

ARC Behavior

  • ARC hit rate decreases as working set exceeds cache size, which tn-bench intentionally causes.
  • Results reflect mixed cache hit/miss scenarios, not necessarily indicative of a real-world workload.

Resource Requirements

Resource Type          | Requirement                     | Notes
-----------------------+---------------------------------+---------------------------------------------
Pool Test Space        | 20 GiB per thread               | Space freed between iterations (v2.0+)
Thread Configurations  | 4 (1, cores÷4, cores÷2, cores)  | For ZFS pool benchmarks
Default Iterations     | 2 per configuration             | Configurable 1-100
Disk Serial Mode       | Low impact                      | Default, safe for production
Disk Parallel Mode     | Moderate controller load        | All disks simultaneously
Disk Seek-Stress Mode  | High CPU usage :warning:        | Multiple threads per disk, may saturate CPU

:warning: Resource Allocation Warnings

SEEK_STRESS Mode:

  • Uses multiple concurrent threads per disk (4 threads default)
  • Can saturate all CPU cores
  • May cause system instability on heavily loaded systems
  • Not recommended for production systems
  • Only use on dedicated test systems with no other workloads

PARALLEL Mode:

  • Tests all disks simultaneously
  • Heavy load on storage controllers and backplanes
  • May impact other I/O operations
  • Use with caution on production systems

SERIAL Mode (Recommended):

  • Tests one disk at a time
  • Minimal system impact
  • Safe for production use
  • Best for baseline performance measurements

Execution Time

  • Small all-flash systems: ~10-30 minutes
  • Large HDD arrays: Several hours or more
  • Progress indicators: Provided at each stage
  • Status updates: For each benchmark operation

Cleanup Options

The script provides interactive prompts to delete test datasets after benchmarking. All temporary files are automatically removed.

Delete testing dataset fire/tn-bench? (yes/no): yes
✓ Dataset fire/tn-bench deleted.

UI Enhancement

The script is now colorized and more human readable.

Output Files

python3 truenas-bench.py [--output /root/my_results.json]

tn-bench generates three files for each benchmark run:

File      | Suffix           | Description
----------+------------------+---------------------------------------------------------------------------
Results   | .json            | Raw benchmark data with system info, pool benchmarks, and disk benchmarks
Analytics | _analytics.json  | Structured analysis of scaling patterns and per-disk performance
Report    | _report.md       | Human-readable markdown report with tables and observations

Example

python3 truenas-bench.py --output results.json

Generates:

  • results.json — Raw benchmark data
  • results_analytics.json — Scaling analysis and disk comparison
  • results_report.md — Markdown report for sharing

Analytics (v2.1+)

tn-bench automatically analyzes benchmark results to identify scaling patterns and performance characteristics:

What’s Analyzed

  • Thread scaling: How performance changes as thread count increases
  • Optimization points: Thread count where peak performance occurs
  • Transition deltas: Speed changes between thread configurations
  • Per-disk variance: Individual drive performance relative to pool average

Key Metrics

Metric            | Description
------------------+----------------------------------
Peak Speed        | Maximum throughput achieved
Optimal Threads   | Thread count at peak performance
Thread Efficiency | MB/s per thread at peak
% of Pool Avg     | Disk speed relative to pool mean

Sample Analytics Output

{
  "pool_analyses": [{
    "name": "tank",
    "write_scaling": {
      "peak_speed_mbps": 4465.7,
      "optimal_threads": 16,
      "thread_efficiency": 279.1,
      "progression": [...],
      "deltas": [...]
    },
    "read_scaling": { ... },
    "observations": [
      "Speed decreases from 16 to 32 threads"
    ]
  }],
  "disk_comparison": {
    "tank": {
      "pool_average_mbps": 614.5,
      "variance_pct": 0.3,
      "disks": [...]
    }
  }
}

The analytics engine uses neutral data presentation — it reports what it observes without making performance judgments. You draw the conclusions.

Live Telemetry Output (v2.2+)

During benchmark execution, tn-bench collects zpool iostat telemetry and displays detailed per-thread performance statistics in real-time:

Example Telemetry Summary (M50 TrueNAS)

╔══════════════════════════════════════════════════════════╗
║        Zpool Iostat Telemetry Summary for Pool: ice      ║
╚══════════════════════════════════════════════════════════╝
  • Total samples: 1406  |  Steady-state samples: 1287
  • Duration: 1442.23 seconds

────────── Per-Thread-Count Steady-State Analysis ──────────
  WRITE telemetry only (READ excluded due to ZFS ARC cache interference)

  1 Threads (48 samples):
  ┌─ IOPS ────────────────────────────────────────────────────
  │ Mean: 958.4  │ Median: 0.0  │
  ├──────────────────────────────────────────────────────────┤
  │ P99: 4,940.5 [High] │
  ├──────────────────────────────────────────────────────────┤
  │ Std Dev: 1,466.3 [High Variance] │
  ├──────────────────────────────────────────────────────────┤
  │ CV%: 153.0% High Variance │
  └──────────────────────────────────────────────────────────┘
  ┌─ Bandwidth (MB/s) ────────────────────────────────────────
  │ Mean: 307.9  │ Median: 0.0  │
  ├──────────────────────────────────────────────────────────┤
  │ P99: 1,194.2 [High] │
  ├──────────────────────────────────────────────────────────┤
  │ Std Dev: 487.5 [Good] │
  ├──────────────────────────────────────────────────────────┤
  │ CV%: 158.3% High Variance │
  └──────────────────────────────────────────────────────────┘

  10 Threads (100 samples):
  ┌─ IOPS ────────────────────────────────────────────────────
  │ Mean: 6,643.8  │ Median: 6,470.0  │
  ├──────────────────────────────────────────────────────────┤
  │ P99: 11,607.0 [High] │
  ├──────────────────────────────────────────────────────────┤
  │ Std Dev: 1,974.5 [High Variance] │
  ├──────────────────────────────────────────────────────────┤
  │ CV%: 29.7% Variable │
  └──────────────────────────────────────────────────────────┘

  40 Threads (376 samples):
  ┌─ IOPS ────────────────────────────────────────────────────
  │ Mean: 8,003.7  │ Median: 7,855.0  │
  ├──────────────────────────────────────────────────────────┤
  │ P99: 13,925.0 [High] │
  ├──────────────────────────────────────────────────────────┤
  │ Std Dev: 2,907.8 [High Variance] │
  ├──────────────────────────────────────────────────────────┤
  │ CV%: 36.3% High Variance │
  └──────────────────────────────────────────────────────────┘

────────────────────────── Legend ──────────────────────────
  Statistical Measures:
    • Mean:    Average of all samples
    • Median:  Middle value (50th percentile), less affected by outliers
    • P99:     99th percentile - 99% of samples fall below this value
    • Std Dev: Standard deviation - measures spread/consistency
    • CV%:     Coefficient of Variation (std dev / mean × 100)

  CV% Rating (Consistency):
    • Excellent:    < 10%  (highly consistent)
    • Good:         10-20% (good consistency)
    • Variable:     20-30% (some variability)
    • High Variance:  > 30%  (significant inconsistency)

Understanding the Output

Per-Thread Analysis: Each thread count configuration shows:

  • IOPS: Operations per second with consistency ratings
  • Bandwidth (MB/s): Throughput with spread analysis
  • Latency (ms): Response time statistics (P99 rated against the latency thresholds in the legend)

Why READ telemetry is excluded: ZFS ARC cache artificially inflates read performance numbers, making them misleading. tn-bench reports WRITE telemetry only for accurate pool performance visibility.

ARC Statistics (v2.2+)

tn-bench v2.2 introduces comprehensive ARC (Adaptive Replacement Cache) telemetry using arcstat:

What’s Collected

Metric                 | Description
-----------------------+---------------------------------------------
ARC Hit %              | Percentage of reads served from ARC
ARC Size (GiB)         | Total ARC memory usage
Demand/Prefetch Hit %  | Breakdown of hit types
MRU/MFU Distribution   | Cache list balance
L2ARC Hit %            | Secondary cache effectiveness (if present)
L2ARC Size (GiB)       | L2ARC device capacity
ZFetch Stats           | Prefetch engine performance

L2ARC Auto-Detection

  • Automatically detects L2ARC via zpool status
  • On systems without L2ARC: L2ARC metrics omitted entirely (no clutter)
  • On systems with L2ARC: Full L2ARC telemetry collected
  • Prevents arcstat crashes on non-L2ARC systems

Example ARC Summary

╔══════════════════════════════════════════════════════════╗
║   ARC Statistics Summary (READ Phase) for Pool: inferno  ║
╚══════════════════════════════════════════════════════════╝
  • Total samples: 487  |  Read-phase samples: 132
  • Duration: 486.23 seconds
  • L2ARC: not present (L2ARC metrics omitted)

──────────── Per-Thread-Count READ ARC Analysis ────────────
  ARC cache performance during READ benchmark phases

  1 Threads (4 samples):
  ┌─ ARC Hit % ───────────────────────────────────────────────
  │ Mean: 99.5% [Excellent] │
  ├──────────────────────────────────────────────────────────┤
  │ Median: 99.9%  │
  ├──────────────────────────────────────────────────────────┤
  │ Std Dev: 0.8 [Excellent] │
  ├──────────────────────────────────────────────────────────┤
  │ CV%: 0.8% Excellent │
  └──────────────────────────────────────────────────────────┘
  32 Threads (89 samples):
  ┌─ ARC Hit % ───────────────────────────────────────────────
  │ Mean: 57.9% [Poor] │

Rating thresholds:

  • Excellent: ≥ 95% (nearly all reads from cache)
  • Good: 85-95% (majority cached)
  • Variable: 70-85% (moderate caching)
  • Poor: < 70% (frequent cache misses)

Color Coding (console output):

  • Green: Excellent ratings
  • Cyan: Good ratings
  • Yellow: Variable/Acceptable
  • Red: High/High Variance

JSON Schema

{
  "schema_version": "1.0",
  "metadata": {
    "start_timestamp": "2025-03-15T14:30:00",
    "end_timestamp": "2025-03-15T15:15:00",
    "duration_minutes": 45.0,
    "benchmark_config": {
      "selected_pools": ["tank", "backups"],
      "disk_benchmark_run": true,
      "zfs_iterations": 2,
      "disk_iterations": 1
    }
  },
  "system": {
    "os_version": "25.04.1",
    "load_average_1m": 0.85,
    "load_average_5m": 1.2,
    "load_average_15m": 1.1,
    "cpu_model": "Intel Xeon Silver 4210",
    "logical_cores": 40,
    "physical_cores": 20,
    "system_product": "TRUENAS-M50",
    "memory_gib": 251.56
  },
  "pools": [
    {
      "name": "tank",
      "path": "/mnt/tank",
      "status": "ONLINE",
      "vdevs": [
        {"name": "raidz2-0", "type": "RAIDZ2", "disk_count": 8}
      ],
      "benchmark": [
        {
          "threads": 1,
          "write_speeds": [205.57, 209.95],
          "average_write_speed": 207.76,
          "read_speeds": [4775.63, 5029.35],
          "average_read_speed": 4902.49,
          "iterations": 2
        },
        {
          "threads": 10,
          "write_speeds": [1850.32, 1823.45],
          "average_write_speed": 1836.89,
          "read_speeds": [15234.56, 14987.23],
          "average_read_speed": 15110.90,
          "iterations": 2
        }
      ],
      "dwpd": 0.15,
      "total_writes_gib": 640.0
    }
  ],
  "disks": [
    {
      "name": "ada0",
      "model": "ST12000VN0008",
      "serial": "ABC123",
      "zfs_guid": "1234567890",
      "pool": "tank",
      "size_gib": 10999.99,
      "benchmark": {
        "speeds": [210.45],
        "average_speed": 210.45,
        "iterations": 1
      }
    }
  ]
}

Example Output (M50 TrueNAS with v2.2 telemetry)


############################################################
#                 tn-bench v2.2 (Modular)                  #
############################################################

TN-Bench is an OpenSource Software Script that uses standard tools to
Benchmark your System and collect various statistical information via
the TrueNAS API.

* TN-Bench will create a Dataset in each of your pools for testing purposes
* that will consume 20 GiB of space for every thread in your system.

! WARNING: This test will make your system EXTREMELY slow during its run.
! WARNING: It is recommended to run this test when no other workloads are running.

* ZFS ARC will impact your results. You can set zfs_arc_max to 1 to prevent ARC caching.
* Setting it back to 0 restores default behavior but requires a system restart.

============================================================
 Confirmation 
============================================================

Would you like to continue? (yes/no): yes

------------------------------------------------------------
|                    System Information                    |
------------------------------------------------------------

Field                 | Value                                     
----------------------+-------------------------------------------
Version               | 25.10.1                                   
Load Average (1m)     | 8.44091796875                             
Load Average (5m)     | 8.38720703125                             
Load Average (15m)    | 9.19482421875                             
Model                 | Intel(R) Xeon(R) Silver 4114 CPU @ 2.20GHz
Cores                 | 40                                        
Physical Cores        | 20                                        
System Product        | TRUENAS-M50-S                             
Physical Memory (GiB) | 251.55                                    

------------------------------------------------------------
|                     Pool Information                     |
------------------------------------------------------------

Field      | Value    
-----------+----------
Name       | fire     
Path       | /mnt/fire
Status     | ONLINE   
VDEV Count | 1        
Disk Count | 4        

VDEV Name  | Type           | Disk Count
-----------+----------------+---------------
raidz1-0    | RAIDZ1         | 4

------------------------------------------------------------
|                     Pool Information                     |
------------------------------------------------------------

Field      | Value   
-----------+---------
Name       | ice     
Path       | /mnt/ice
Status     | ONLINE  
VDEV Count | 5       
Disk Count | 35      

VDEV Name  | Type           | Disk Count
-----------+----------------+---------------
raidz2-0    | RAIDZ2         | 7
raidz2-1    | RAIDZ2         | 7
raidz2-2    | RAIDZ2         | 7
raidz2-3    | RAIDZ2         | 7
raidz2-4    | RAIDZ2         | 7

------------------------------------------------------------
|                     Disk Information                     |
------------------------------------------------------------

* The TrueNAS API returns N/A for the Pool for boot devices and disks not in a pool.
Field      | Value                     
-----------+---------------------------
Name       | sdan                      
Model      | KINGSTON_SA400S37120G     
Serial     | 50026B7784064E49          
ZFS GUID   | None                      
Pool       | N/A                       
Size (GiB) | 111.79                    
-----------+---------------------------
Name       | nvme0n1                   
Model      | INTEL SSDPE2KE016T8       
Serial     | PHLN013100MD1P6AGN        
ZFS GUID   | 17475493647287877073      
Pool       | fire                      
Size (GiB) | 1400.00                   
-----------+---------------------------
Name       | nvme2n1                   
Model      | INTEL SSDPE2KE016T8       
Serial     | PHLN931600FE1P6AGN        
ZFS GUID   | 11275382002255862348      
Pool       | fire                      
Size (GiB) | 1400.00                   
-----------+---------------------------
Name       | nvme3n1                   
Model      | SAMSUNG MZWLL1T6HEHP-00003
Serial     | S3HDNX0KB01220            
ZFS GUID   | 4368323531340162613       
Pool       | fire                      
Size (GiB) | 1399.22                   
-----------+---------------------------
Name       | nvme1n1                   
Model      | SAMSUNG MZWLL1T6HEHP-00003
Serial     | S3HDNX0KB01248            
ZFS GUID   | 3818548647571812337       
Pool       | fire                      
Size (GiB) | 1399.22                   
-----------+---------------------------
Name       | sdo                       
Model      | HUS728T8TAL4204           
Serial     | VAHD4XTL                  
ZFS GUID   | 6447577595542961760       
Pool       | ice                       
Size (GiB) | 7452.04                   
-----------+---------------------------
Name       | sds                       
Model      | HUS728T8TAL4204           
Serial     | VAHE4AJL                  
ZFS GUID   | 11464489017973229028      
Pool       | ice                       
Size (GiB) | 7452.04                   

... (35 total disks)

############################################################
#                      Pool Selection                      #
############################################################

* Available pools:
• 1. fire
• 2. ice
* Options:
• 1. Enter specific pool numbers (comma separated)
• 2. Type 'all' to test all pools
• 3. Type 'none' to skip pool testing

Enter your choice [all]: all

############################################################
#              ZFS Pool Benchmark Iterations               #
############################################################

* How many times should we run each test?
• • Enter any positive integer (1-100, default: 2)
• • Enter 0 to skip this benchmark

Enter iteration count [2]: 1

############################################################
#           Individual Disk Benchmark Iterations           #
############################################################

* How many times should we run each test?
• • Enter any positive integer (1-100, default: 2)
• • Enter 0 to skip this benchmark

Enter iteration count [2]: 0
* Skipping Individual Disk benchmark.

############################################################
#                  DD Benchmark Starting                   #
############################################################

* Using 40 threads for the benchmark.
* ZFS tests will run 1 time(s) per configuration
* Skipping individual disk benchmark

############################################################
#                    Testing Pool: fire                    #
############################################################

* Creating test dataset for pool: fire
✓ Dataset fire/tn-bench created successfully.

============================================================
 Space Verification 
============================================================

* Available space: 2793.50 GiB
* Space required:  800.00 GiB (20 GiB/thread × 40 threads)
* Test iterations: 1 (space freed between iterations)
✓ Sufficient space available - proceeding with benchmarks
* Starting zpool iostat collection for pool 'fire' (interval: 1s)
* Warming up zpool iostat collector (3 samples)...
✓ Zpool iostat collector warmup complete

============================================================
 Testing Pool: fire - Threads: 10 
============================================================

* --- Iteration 1 of 1 ---
* Zpool iostat collector: benchmark phase started
* Zpool iostat collector: segment → 10T-write
* Iteration 1: Writing...
* Iteration 1 write: 2023.22 MB/s
* Zpool iostat collector: segment → 10T-read
* Iteration 1: Reading...
* Iteration 1 read: 6517.87 MB/s
* Space freed after iteration 1

============================================================
 Testing Pool: fire - Threads: 20 
============================================================

* --- Iteration 1 of 1 ---
* Zpool iostat collector: segment → 20T-write
* Iteration 1: Writing...
* Iteration 1 write: 2836.82 MB/s
* Zpool iostat collector: segment → 20T-read
* Iteration 1: Reading...
* Iteration 1 read: 6590.46 MB/s
* Space freed after iteration 1

============================================================
 Testing Pool: fire - Threads: 40 
============================================================

* --- Iteration 1 of 1 ---
* Zpool iostat collector: segment → 40T-write
* Iteration 1: Writing...
* Iteration 1 write: 2813.03 MB/s
* Zpool iostat collector: segment → 40T-read
* Iteration 1: Reading...
* Iteration 1 read: 6628.14 MB/s
* Space freed after iteration 1
* Zpool iostat collector: benchmark phase ended
* Cooling down zpool iostat collector (3 samples)...
✓ Zpool iostat collector cooldown complete
✓ Zpool iostat collection complete: 857 samples

============================================================
 Zpool Iostat Telemetry Summary for Pool: fire 
============================================================


╔══════════════════════════════════════════════════════════╗
║       Zpool Iostat Telemetry Summary for Pool: fire      ║
╚══════════════════════════════════════════════════════════╝
  • Total samples: 857  |  Steady-state samples: 750
  • Duration: 859.54 seconds

────────── Per-Thread-Count Steady-State Analysis ──────────
  WRITE telemetry only (READ excluded due to ZFS ARC cache interference)

  10 Threads (97 samples):
  ┌─ IOPS ────────────────────────────────────────────────────
  │ Mean: 9,851.5  │ Median: 9,880.0  │
  ├──────────────────────────────────────────────────────────┤
  │ P99: 11,424.0 [High] │
  ├──────────────────────────────────────────────────────────┤
  │ Std Dev: 889.4 [Variable] │
  ├──────────────────────────────────────────────────────────┤
  │ CV%: 9.0% Excellent │
  └──────────────────────────────────────────────────────────┘
  ┌─ Bandwidth (MB/s) ────────────────────────────────────────
  │ Mean: 2,667.2  │ Median: 2,680.0  │
  ├──────────────────────────────────────────────────────────┤
  │ P99: 3,030.8 [High] │
  ├──────────────────────────────────────────────────────────┤
  │ Std Dev: 248.0 [Good] │
  ├──────────────────────────────────────────────────────────┤
  │ CV%: 9.3% Excellent │
  └──────────────────────────────────────────────────────────┘
  ┌─ Latency (ms) ────────────────────────────────────────────
  │ Mean: 0.0  │ Median: 0.0  │
  ├──────────────────────────────────────────────────────────┤
  │ P99: 0.0 [Excellent] │
  ├──────────────────────────────────────────────────────────┤
  │ Std Dev: 0.0 [Excellent] │
  ├──────────────────────────────────────────────────────────┤
  │ CV%: 491.5% High Variance │
  └──────────────────────────────────────────────────────────┘

  20 Threads (143 samples):
  ┌─ IOPS ────────────────────────────────────────────────────
  │ Mean: 12,699.0  │ Median: 12,800.0  │
  ├──────────────────────────────────────────────────────────┤
  │ P99: 16,158.0 [High] │
  ├──────────────────────────────────────────────────────────┤
  │ Std Dev: 1,454.1 [High Variance] │
  ├──────────────────────────────────────────────────────────┤
  │ CV%: 11.5%  Good │
  └──────────────────────────────────────────────────────────┘
  ┌─ Bandwidth (MB/s) ────────────────────────────────────────
  │ Mean: 3,698.4  │ Median: 3,830.0  │
  ├──────────────────────────────────────────────────────────┤
  │ P99: 4,055.8 [High] │
  ├──────────────────────────────────────────────────────────┤
  │ Std Dev: 368.7 [Good] │
  ├──────────────────────────────────────────────────────────┤
  │ CV%: 10.0% Excellent │
  └──────────────────────────────────────────────────────────┘
  ┌─ Latency (ms) ────────────────────────────────────────────
  │ Mean: 0.0  │ Median: 0.0  │
  ├──────────────────────────────────────────────────────────┤
  │ P99: 0.0 [Excellent] │
  ├──────────────────────────────────────────────────────────┤
  │ Std Dev: 0.0 [Excellent] │
  ├──────────────────────────────────────────────────────────┤
  │ CV%: 319.0% High Variance │
  └──────────────────────────────────────────────────────────┘

  40 Threads (288 samples):
  ┌─ IOPS ────────────────────────────────────────────────────
  │ Mean: 13,254.2  │ Median: 13,400.0  │
  ├──────────────────────────────────────────────────────────┤
  │ P99: 18,178.0 [High] │
  ├──────────────────────────────────────────────────────────┤
  │ Std Dev: 1,991.4 [High Variance] │
  ├──────────────────────────────────────────────────────────┤
  │ CV%: 15.0%  Good │
  └──────────────────────────────────────────────────────────┘
  ┌─ Bandwidth (MB/s) ────────────────────────────────────────
  │ Mean: 3,680.1  │ Median: 3,860.0  │
  ├──────────────────────────────────────────────────────────┤
  │ P99: 4,050.0 [High] │
  ├──────────────────────────────────────────────────────────┤
  │ Std Dev: 410.3 [Good] │
  ├──────────────────────────────────────────────────────────┤
  │ CV%: 11.2%  Good │
  └──────────────────────────────────────────────────────────┘
  ┌─ Latency (ms) ────────────────────────────────────────────
  │ Mean: 0.0  │ Median: 0.0  │
  ├──────────────────────────────────────────────────────────┤
  │ P99: 0.3 [Excellent] │
  ├──────────────────────────────────────────────────────────┤
  │ Std Dev: 0.0 [Excellent] │
  ├──────────────────────────────────────────────────────────┤
  │ CV%: 449.0% High Variance │
  └──────────────────────────────────────────────────────────┘

────────────────────────── Legend ──────────────────────────
  Statistical Measures:
    • Mean:    Average of all samples
    • Median:  Middle value (50th percentile), less affected by outliers
    • P99:     99th percentile - 99% of samples fall below this value
    • Std Dev: Standard deviation - measures spread/consistency
    • CV%:     Coefficient of Variation (std dev / mean × 100)

  CV% Rating (Consistency):
    • Excellent:    < 10%  (highly consistent)
    • Good:         10-20% (good consistency)
    • Variable:     20-30% (some variability)
    • High Variance:  > 30%  (significant inconsistency)

  P99 Latency Rating (Lower is better):
    • Excellent:    < 10ms   (very fast)
    • Good:         < 50ms   (acceptable)
    • Acceptable:  < 100ms  (may impact workload)
    • High:          > 100ms  (significant latency)

  Std Dev Rating (Consistency - Lower is better):
    • Excellent:    Low spread    (very consistent)
    • Good:         Moderate      (acceptable spread)
    • Variable:     Noticeable    (some spread)
    • High Variance:  Wide spread   (inconsistent)

============================================================
 Pool Write Summary 
============================================================

* Total data written: 1420.00 GiB
* Pool capacity: 5584.00 GiB
* Benchmark duration: 867.60 seconds
* Drive Writes Per Day (DWPD): 25.32
* Cleaning up any remaining test files...

############################################################
#                    Benchmark Complete                    #
############################################################

✓ Total benchmark time: 16.01 minutes
 

Contributing

Contributions are welcome! Please open an issue or submit a pull request for any improvements or fixes.

License

This project is licensed under the GPLv3 License - see the LICENSE file for details.


Wow this looks great. Many thanks for creating and sharing. I’m looking forward to trying this.


Just a quick update with an example of a larger system: a 35-disk system with 32 threads took a little over 6 hours. Interestingly, even with a 50-gigabyte read, SATA hard drive results still seem skewed by caching. I plan on making some alterations soon.


###################################
#                                 #
#          TN-Bench v1.01         #
#          MONOLITHIC.            #
#                                 #
###################################
TN-Bench is an OpenSource Software Script that uses standard tools to Benchmark your System and collect various statistical information via the TrueNAS API.

TN-Bench will make a Dataset in each of your pools for the purposes of this testing that will consume 10 GiB of space for every thread in your system during its run.

After which time we will prompt you to delete the dataset which was created.

Would you like to continue? (yes/no): yes

### System Information ###
Field                 | Value                          
----------------------+--------------------------------
Version               | TrueNAS-SCALE-24.10.0          
Load Average (1m)     | 4.220703125                    
Load Average (5m)     | 3.66455078125                  
Load Average (15m)    | 6.19482421875                  
Model                 | AMD EPYC 7F52 16-Core Processor
Cores                 | 32                             
Physical Cores        | 16                             
System Product        | Super Server                   
Physical Memory (GiB) | 220.07                         

### Pool Information ###
Field      | Value   
-----------+---------
Name       | ice     
Path       | /mnt/ice
Status     | ONLINE  
VDEV Count | 5       
Disk Count | 35      

VDEV Name  | Type           | Disk Count
-----------+----------------+---------------
raidz2-0    | RAIDZ2         | 7
raidz2-1    | RAIDZ2         | 7
raidz2-2    | RAIDZ2         | 7
raidz2-3    | RAIDZ2         | 7
raidz2-4    | RAIDZ2         | 7

### Disk Information ###
Field    | Value              
---------+--------------------
Name     | nvme0n1            
Model    | INTEL SSDPEK1A058GA
Serial   | BTOC14120Y1T058A   
ZFS GUID | None               
Pool     | N/A                
---------+--------------------
---------+--------------------
Name     | sdn                
Model    | HUS728T8TAL4204    
Serial   | VAHE4AJL           
ZFS GUID | 11464489017973229028
Pool     | ice                
---------+--------------------
---------+--------------------
Name     | sdo                
Model    | HUS728T8TAL4204    
Serial   | VAH751XL           
ZFS GUID | 12194731234089258709
Pool     | ice                
---------+--------------------
---------+--------------------
Name     | sdp                
Model    | HUS728T8TAL4204    
Serial   | VAHDEEEL           
ZFS GUID | 4070674839367337299
Pool     | ice                
---------+--------------------
---------+--------------------
Name     | sds                
Model    | HUS728T8TAL4204    
Serial   | VAHD99LL           
ZFS GUID | 663480060468884393 
Pool     | ice                
---------+--------------------
---------+--------------------
Name     | sdq                
Model    | HUS728T8TAL4204    
Serial   | VAHD4V0L           
ZFS GUID | 1890505091264157917
Pool     | ice                
---------+--------------------
---------+--------------------
Name     | sdr                
Model    | HUS728T8TAL4204    
Serial   | VAHDHLVL           
ZFS GUID | 2813416134184314367
Pool     | ice                
---------+--------------------
---------+--------------------
Name     | sdu                
Model    | HUS728T8TAL4204    
Serial   | VAH7T9BL           
ZFS GUID | 241834966907461809 
Pool     | ice                
---------+--------------------
---------+--------------------
Name     | sdx                
Model    | HUS728T8TAL4204    
Serial   | VAHD4ZUL           
ZFS GUID | 2629839678881986450
Pool     | ice                
---------+--------------------
---------+--------------------
Name     | sdt                
Model    | HUS728T8TAL4204    
Serial   | VAHDXDVL           
ZFS GUID | 12468174715504800729
Pool     | ice                
---------+--------------------
---------+--------------------
Name     | sdv                
Model    | HUS728T8TAL4204    
Serial   | VAGU6KLL           
ZFS GUID | 8435778198864465328
Pool     | ice                
---------+--------------------
---------+--------------------
Name     | sdw                
Model    | HUS728T8TAL4204    
Serial   | VAHAHSEL           
ZFS GUID | 6248787858642409255
Pool     | ice                
---------+--------------------
---------+--------------------
Name     | sdm                
Model    | HUS728T8TAL4204    
Serial   | VAHD4XTL           
ZFS GUID | 6447577595542961760
Pool     | ice                
---------+--------------------
---------+--------------------
Name     | sda                
Model    | HUS728T8TAL4204    
Serial   | VAHD406L           
ZFS GUID | 17233219398105449109
Pool     | ice                
---------+--------------------
---------+--------------------
Name     | sdb                
Model    | HUS728T8TAL4204    
Serial   | VAHEE12L           
ZFS GUID | 14718135334986108667
Pool     | ice                
---------+--------------------
---------+--------------------
Name     | sdc                
Model    | HUS728T8TAL4204    
Serial   | VAHDPGUL           
ZFS GUID | 6453720879157404243
Pool     | ice                
---------+--------------------
---------+--------------------
Name     | sdd                
Model    | HUS728T8TAL4204    
Serial   | VAH7XX5L           
ZFS GUID | 2415210037473635969
Pool     | ice                
---------+--------------------
---------+--------------------
Name     | sde                
Model    | HUS728T8TAL4204    
Serial   | VAHD06XL           
ZFS GUID | 7980293907302437342
Pool     | ice                
---------+--------------------
---------+--------------------
Name     | sdf                
Model    | HUS728T8TAL4204    
Serial   | VAH5W6PL           
ZFS GUID | 2650944322410844617
Pool     | ice                
---------+--------------------
---------+--------------------
Name     | sdh                
Model    | HUS728T8TAL4204    
Serial   | VAHDPS6L           
ZFS GUID | 5227492984876952151
Pool     | ice                
---------+--------------------
---------+--------------------
Name     | sdg                
Model    | HUS728T8TAL4204    
Serial   | VAHDRZEL           
ZFS GUID | 8709587202117841210
Pool     | ice                
---------+--------------------
---------+--------------------
Name     | sdi                
Model    | HUS728T8TAL4204    
Serial   | VAHDX95L           
ZFS GUID | 13388807557241155624
Pool     | ice                
---------+--------------------
---------+--------------------
Name     | sdj                
Model    | HUS728T8TAL4204    
Serial   | VAGEAVDL           
ZFS GUID | 4320819603845537000
Pool     | ice                
---------+--------------------
---------+--------------------
Name     | sdk                
Model    | HUS728T8TAL4204    
Serial   | VAHE1J1L           
ZFS GUID | 16530722200458359384
Pool     | ice                
---------+--------------------
---------+--------------------
Name     | sdl                
Model    | HUS728T8TAL4204    
Serial   | VAHDRYYL           
ZFS GUID | 9383799614074970413
Pool     | ice                
---------+--------------------
---------+--------------------
Name     | sdy                
Model    | HUH721010AL42C0    
Serial   | 2TGU89UD           
ZFS GUID | 2360678580120608870
Pool     | N/A                
---------+--------------------
---------+--------------------
Name     | sdz                
Model    | HUS728T8TAL4204    
Serial   | VAHE4BDL           
ZFS GUID | 12575810268036164475
Pool     | ice                
---------+--------------------
---------+--------------------
Name     | sdaa               
Model    | HUS728T8TAL4204    
Serial   | VAH7B0EL           
ZFS GUID | 3357271669658868424
Pool     | ice                
---------+--------------------
---------+--------------------
Name     | sdab               
Model    | HUS728T8TAL4204    
Serial   | VAHD4UXL           
ZFS GUID | 12084474217870916236
Pool     | ice                
---------+--------------------
---------+--------------------
Name     | sdac               
Model    | HUS728T8TAL4204    
Serial   | VAHE4AEL           
ZFS GUID | 12420098536708636925
Pool     | ice                
---------+--------------------
---------+--------------------
Name     | sdad               
Model    | HUS728T8TAL4204    
Serial   | VAHE35SL           
ZFS GUID | 15641419920947187991
Pool     | ice                
---------+--------------------
---------+--------------------
Name     | sdae               
Model    | HUS728T8TAL4204    
Serial   | VAH73TVL           
ZFS GUID | 2321010819975352589
Pool     | ice                
---------+--------------------
---------+--------------------
Name     | sdaf               
Model    | HUS728T8TAL4204    
Serial   | VAH0LL4L           
ZFS GUID | 7064277241025105086
Pool     | ice                
---------+--------------------
---------+--------------------
Name     | sdag               
Model    | HUS728T8TAL4204    
Serial   | VAHBHYGL           
ZFS GUID | 9631990446359566766
Pool     | ice                
---------+--------------------
---------+--------------------
Name     | sdah               
Model    | HUS728T8TAL4204    
Serial   | VAHE7BGL           
ZFS GUID | 10666041267281724571
Pool     | ice                
---------+--------------------
---------+--------------------
Name     | sdai               
Model    | HUS728T8TAL4204    
Serial   | VAH4T4TL           
ZFS GUID | 15395414914633738779
Pool     | ice                
---------+--------------------
---------+--------------------
Name     | sdaj               
Model    | HUS728T8TAL4204    
Serial   | VAHDBDXL           
ZFS GUID | 480631239828802416 
Pool     | ice                
---------+--------------------
---------+--------------------

###################################
#                                 #
#       DD Benchmark Starting     #
#                                 #
###################################
Using 32 threads for the benchmark.


Creating test dataset for pool: ice

Running benchmarks for pool: ice
Running DD write benchmark with 1 threads...
Run 1 write speed: 326.08 MB/s
Run 2 write speed: 323.38 MB/s
Run 3 write speed: 324.94 MB/s
Run 4 write speed: 322.05 MB/s
Average write speed: 324.11 MB/s
Running DD read benchmark with 1 threads...
Run 1 read speed: 6698.41 MB/s
Run 2 read speed: 6654.68 MB/s
Run 3 read speed: 6444.83 MB/s
Run 4 read speed: 6570.46 MB/s
Average read speed: 6592.09 MB/s
Running DD write benchmark with 8 threads...
Run 1 write speed: 1919.81 MB/s
Run 2 write speed: 1920.72 MB/s
Run 3 write speed: 1937.88 MB/s
Run 4 write speed: 1983.57 MB/s
Average write speed: 1940.50 MB/s
Running DD read benchmark with 8 threads...
Run 1 read speed: 25551.28 MB/s
Run 2 read speed: 27919.16 MB/s
Run 3 read speed: 28371.19 MB/s
Run 4 read speed: 28435.76 MB/s
Average read speed: 27569.35 MB/s
Running DD write benchmark with 16 threads...
Run 1 write speed: 1980.08 MB/s
Run 2 write speed: 1850.96 MB/s
Run 3 write speed: 1889.51 MB/s
Run 4 write speed: 1864.92 MB/s
Average write speed: 1896.37 MB/s
Running DD read benchmark with 16 threads...
Run 1 read speed: 2493.91 MB/s
Run 2 read speed: 2541.70 MB/s
Run 3 read speed: 2613.01 MB/s
Run 4 read speed: 4549.56 MB/s
Average read speed: 3049.55 MB/s
Running DD write benchmark with 32 threads...
Run 1 write speed: 1805.01 MB/s
Run 2 write speed: 1657.31 MB/s
Run 3 write speed: 1644.15 MB/s
Run 4 write speed: 1650.92 MB/s
Average write speed: 1689.35 MB/s
Running DD read benchmark with 32 threads...
Run 1 read speed: 2148.35 MB/s
Run 2 read speed: 2136.61 MB/s
Run 3 read speed: 2197.37 MB/s
Run 4 read speed: 2197.61 MB/s
Average read speed: 2169.98 MB/s

###################################
#         DD Benchmark Results for Pool: ice    #
###################################
#    Threads: 1    #
#    1M Seq Write Run 1: 326.08 MB/s     #
#    1M Seq Write Run 2: 323.38 MB/s     #
#    1M Seq Write Run 3: 324.94 MB/s     #
#    1M Seq Write Run 4: 322.05 MB/s     #
#    1M Seq Write Avg: 324.11 MB/s #
#    1M Seq Read Run 1: 6698.41 MB/s      #
#    1M Seq Read Run 2: 6654.68 MB/s      #
#    1M Seq Read Run 3: 6444.83 MB/s      #
#    1M Seq Read Run 4: 6570.46 MB/s      #
#    1M Seq Read Avg: 6592.09 MB/s  #
###################################
#    Threads: 8    #
#    1M Seq Write Run 1: 1919.81 MB/s     #
#    1M Seq Write Run 2: 1920.72 MB/s     #
#    1M Seq Write Run 3: 1937.88 MB/s     #
#    1M Seq Write Run 4: 1983.57 MB/s     #
#    1M Seq Write Avg: 1940.50 MB/s #
#    1M Seq Read Run 1: 25551.28 MB/s      #
#    1M Seq Read Run 2: 27919.16 MB/s      #
#    1M Seq Read Run 3: 28371.19 MB/s      #
#    1M Seq Read Run 4: 28435.76 MB/s      #
#    1M Seq Read Avg: 27569.35 MB/s  #
###################################
#    Threads: 16    #
#    1M Seq Write Run 1: 1980.08 MB/s     #
#    1M Seq Write Run 2: 1850.96 MB/s     #
#    1M Seq Write Run 3: 1889.51 MB/s     #
#    1M Seq Write Run 4: 1864.92 MB/s     #
#    1M Seq Write Avg: 1896.37 MB/s #
#    1M Seq Read Run 1: 2493.91 MB/s      #
#    1M Seq Read Run 2: 2541.70 MB/s      #
#    1M Seq Read Run 3: 2613.01 MB/s      #
#    1M Seq Read Run 4: 4549.56 MB/s      #
#    1M Seq Read Avg: 3049.55 MB/s  #
###################################
#    Threads: 32    #
#    1M Seq Write Run 1: 1805.01 MB/s     #
#    1M Seq Write Run 2: 1657.31 MB/s     #
#    1M Seq Write Run 3: 1644.15 MB/s     #
#    1M Seq Write Run 4: 1650.92 MB/s     #
#    1M Seq Write Avg: 1689.35 MB/s #
#    1M Seq Read Run 1: 2148.35 MB/s      #
#    1M Seq Read Run 2: 2136.61 MB/s      #
#    1M Seq Read Run 3: 2197.37 MB/s      #
#    1M Seq Read Run 4: 2197.61 MB/s      #
#    1M Seq Read Avg: 2169.98 MB/s  #
###################################
Cleaning up test files...
Running disk read benchmark...
This benchmark tests the 4K sequential read performance of each disk in the system using dd. It is run 4 times for each disk and averaged.
This benchmark is useful for comparing disks within the same pool, to identify potential issues and bottlenecks.
Testing disk: nvme0n1
Testing disk: nvme0n1
Testing disk: nvme0n1
Testing disk: nvme0n1
Testing disk: sdn
Testing disk: sdn
Testing disk: sdn
Testing disk: sdn
Testing disk: sdo
Testing disk: sdo
Testing disk: sdo
Testing disk: sdo
Testing disk: sdp
Testing disk: sdp
Testing disk: sdp
Testing disk: sdp
Testing disk: sds
Testing disk: sds
Testing disk: sds
Testing disk: sds
Testing disk: sdq
Testing disk: sdq
Testing disk: sdq
Testing disk: sdq
Testing disk: sdr
Testing disk: sdr
Testing disk: sdr
Testing disk: sdr
Testing disk: sdu
Testing disk: sdu
Testing disk: sdu
Testing disk: sdu
Testing disk: sdx
Testing disk: sdx
Testing disk: sdx
Testing disk: sdx
Testing disk: sdt
Testing disk: sdt
Testing disk: sdt
Testing disk: sdt
Testing disk: sdv
Testing disk: sdv
Testing disk: sdv
Testing disk: sdv
Testing disk: sdw
Testing disk: sdw
Testing disk: sdw
Testing disk: sdw
Testing disk: sdm
Testing disk: sdm
Testing disk: sdm
Testing disk: sdm
Testing disk: sda
Testing disk: sda
Testing disk: sda
Testing disk: sda
Testing disk: sdb
Testing disk: sdb
Testing disk: sdb
Testing disk: sdb
Testing disk: sdc
Testing disk: sdc
Testing disk: sdc
Testing disk: sdc
Testing disk: sdd
Testing disk: sdd
Testing disk: sdd
Testing disk: sdd
Testing disk: sde
Testing disk: sde
Testing disk: sde
Testing disk: sde
Testing disk: sdf
Testing disk: sdf
Testing disk: sdf
Testing disk: sdf
Testing disk: sdh
Testing disk: sdh
Testing disk: sdh
Testing disk: sdh
Testing disk: sdg
Testing disk: sdg
Testing disk: sdg
Testing disk: sdg
Testing disk: sdi
Testing disk: sdi
Testing disk: sdi
Testing disk: sdi
Testing disk: sdj
Testing disk: sdj
Testing disk: sdj
Testing disk: sdj
Testing disk: sdk
Testing disk: sdk
Testing disk: sdk
Testing disk: sdk
Testing disk: sdl
Testing disk: sdl
Testing disk: sdl
Testing disk: sdl
Testing disk: sdy
Testing disk: sdy
Testing disk: sdy
Testing disk: sdy
Testing disk: sdz
Testing disk: sdz
Testing disk: sdz
Testing disk: sdz
Testing disk: sdaa
Testing disk: sdaa
Testing disk: sdaa
Testing disk: sdaa
Testing disk: sdab
Testing disk: sdab
Testing disk: sdab
Testing disk: sdab
Testing disk: sdac
Testing disk: sdac
Testing disk: sdac
Testing disk: sdac
Testing disk: sdad
Testing disk: sdad
Testing disk: sdad
Testing disk: sdad
Testing disk: sdae
Testing disk: sdae
Testing disk: sdae
Testing disk: sdae
Testing disk: sdaf
Testing disk: sdaf
Testing disk: sdaf
Testing disk: sdaf
Testing disk: sdag
Testing disk: sdag
Testing disk: sdag
Testing disk: sdag
Testing disk: sdah
Testing disk: sdah
Testing disk: sdah
Testing disk: sdah
Testing disk: sdai
Testing disk: sdai
Testing disk: sdai
Testing disk: sdai
Testing disk: sdaj
Testing disk: sdaj
Testing disk: sdaj
Testing disk: sdaj

###################################
#         Disk Read Benchmark Results   #
###################################
#    Disk: nvme0n1    #
#    Run 1: 1515.81 MB/s     #
#    Run 2: 1340.11 MB/s     #
#    Run 3: 1351.25 MB/s     #
#    Run 4: 1439.81 MB/s     #
#    Average: 1411.74 MB/s     #
#    Disk: sdn    #
#    Run 1: 234.90 MB/s     #
#    Run 2: 232.80 MB/s     #
#    Run 3: 2916.88 MB/s     #
#    Run 4: 3096.13 MB/s     #
#    Average: 1620.18 MB/s     #
#    Disk: sdo    #
#    Run 1: 229.76 MB/s     #
#    Run 2: 225.23 MB/s     #
#    Run 3: 3071.81 MB/s     #
#    Run 4: 3096.38 MB/s     #
#    Average: 1655.80 MB/s     #
#    Disk: sdp    #
#    Run 1: 222.71 MB/s     #
#    Run 2: 429.55 MB/s     #
#    Run 3: 3106.89 MB/s     #
#    Run 4: 3083.21 MB/s     #
#    Average: 1710.59 MB/s     #
#    Disk: sds    #
#    Run 1: 230.78 MB/s     #
#    Run 2: 224.99 MB/s     #
#    Run 3: 3090.27 MB/s     #
#    Run 4: 3076.32 MB/s     #
#    Average: 1655.59 MB/s     #
#    Disk: sdq    #
#    Run 1: 221.95 MB/s     #
#    Run 2: 2616.89 MB/s     #
#    Run 3: 3090.24 MB/s     #
#    Run 4: 3080.20 MB/s     #
#    Average: 2252.32 MB/s     #
#    Disk: sdr    #
#    Run 1: 230.52 MB/s     #
#    Run 2: 230.57 MB/s     #
#    Run 3: 3077.81 MB/s     #
#    Run 4: 3069.08 MB/s     #
#    Average: 1652.00 MB/s     #
#    Disk: sdu    #
#    Run 1: 225.57 MB/s     #
#    Run 2: 2589.45 MB/s     #
#    Run 3: 3069.44 MB/s     #
#    Run 4: 3081.54 MB/s     #
#    Average: 2241.50 MB/s     #
#    Disk: sdx    #
#    Run 1: 231.11 MB/s     #
#    Run 2: 235.51 MB/s     #
#    Run 3: 3040.96 MB/s     #
#    Run 4: 3075.13 MB/s     #
#    Average: 1645.68 MB/s     #
#    Disk: sdt    #
#    Run 1: 236.26 MB/s     #
#    Run 2: 2602.36 MB/s     #
#    Run 3: 3066.71 MB/s     #
#    Run 4: 3089.58 MB/s     #
#    Average: 2248.73 MB/s     #
#    Disk: sdv    #
#    Run 1: 244.73 MB/s     #
#    Run 2: 247.89 MB/s     #
#    Run 3: 2818.79 MB/s     #
#    Run 4: 3044.16 MB/s     #
#    Average: 1588.89 MB/s     #
#    Disk: sdw    #
#    Run 1: 225.35 MB/s     #
#    Run 2: 220.10 MB/s     #
#    Run 3: 3097.11 MB/s     #
#    Run 4: 3083.58 MB/s     #
#    Average: 1656.54 MB/s     #
#    Disk: sdm    #
#    Run 1: 235.51 MB/s     #
#    Run 2: 2600.26 MB/s     #
#    Run 3: 3077.70 MB/s     #
#    Run 4: 3096.77 MB/s     #
#    Average: 2252.56 MB/s     #
#    Disk: sda    #
#    Run 1: 223.70 MB/s     #
#    Run 2: 225.74 MB/s     #
#    Run 3: 2843.69 MB/s     #
#    Run 4: 3044.02 MB/s     #
#    Average: 1584.29 MB/s     #
#    Disk: sdb    #
#    Run 1: 226.62 MB/s     #
#    Run 2: 225.88 MB/s     #
#    Run 3: 3049.45 MB/s     #
#    Run 4: 3064.97 MB/s     #
#    Average: 1641.73 MB/s     #
#    Disk: sdc    #
#    Run 1: 232.36 MB/s     #
#    Run 2: 232.86 MB/s     #
#    Run 3: 3021.33 MB/s     #
#    Run 4: 3064.97 MB/s     #
#    Average: 1637.88 MB/s     #
#    Disk: sdd    #
#    Run 1: 235.27 MB/s     #
#    Run 2: 236.96 MB/s     #
#    Run 3: 3030.66 MB/s     #
#    Run 4: 3056.96 MB/s     #
#    Average: 1639.96 MB/s     #
#    Disk: sde    #
#    Run 1: 232.43 MB/s     #
#    Run 2: 235.92 MB/s     #
#    Run 3: 2993.68 MB/s     #
#    Run 4: 3040.23 MB/s     #
#    Average: 1625.56 MB/s     #
#    Disk: sdf    #
#    Run 1: 236.74 MB/s     #
#    Run 2: 239.68 MB/s     #
#    Run 3: 2961.91 MB/s     #
#    Run 4: 3038.94 MB/s     #
#    Average: 1619.32 MB/s     #
#    Disk: sdh    #
#    Run 1: 228.78 MB/s     #
#    Run 2: 229.14 MB/s     #
#    Run 3: 2913.63 MB/s     #
#    Run 4: 3014.87 MB/s     #
#    Average: 1596.61 MB/s     #
#    Disk: sdg    #
#    Run 1: 214.93 MB/s     #
#    Run 2: 188.62 MB/s     #
#    Run 3: 2281.16 MB/s     #
#    Run 4: 3028.12 MB/s     #
#    Average: 1428.21 MB/s     #
#    Disk: sdi    #
#    Run 1: 183.42 MB/s     #
#    Run 2: 187.41 MB/s     #
#    Run 3: 495.49 MB/s     #
#    Run 4: 3029.64 MB/s     #
#    Average: 973.99 MB/s     #
#    Disk: sdj    #
#    Run 1: 187.01 MB/s     #
#    Run 2: 248.63 MB/s     #
#    Run 3: 529.65 MB/s     #
#    Run 4: 3024.64 MB/s     #
#    Average: 997.48 MB/s     #
#    Disk: sdk    #
#    Run 1: 238.47 MB/s     #
#    Run 2: 240.28 MB/s     #
#    Run 3: 302.07 MB/s     #
#    Run 4: 2849.10 MB/s     #
#    Average: 907.48 MB/s     #
#    Disk: sdl    #
#    Run 1: 238.28 MB/s     #
#    Run 2: 238.67 MB/s     #
#    Run 3: 276.98 MB/s     #
#    Run 4: 2833.97 MB/s     #
#    Average: 896.97 MB/s     #
#    Disk: sdy    #
#    Run 1: 237.97 MB/s     #
#    Run 2: 237.64 MB/s     #
#    Run 3: 238.21 MB/s     #
#    Run 4: 238.31 MB/s     #
#    Average: 238.03 MB/s     #
#    Disk: sdz    #
#    Run 1: 236.30 MB/s     #
#    Run 2: 237.11 MB/s     #
#    Run 3: 259.58 MB/s     #
#    Run 4: 1494.90 MB/s     #
#    Average: 556.97 MB/s     #
#    Disk: sdaa    #
#    Run 1: 233.77 MB/s     #
#    Run 2: 236.46 MB/s     #
#    Run 3: 247.06 MB/s     #
#    Run 4: 250.19 MB/s     #
#    Average: 241.87 MB/s     #
#    Disk: sdab    #
#    Run 1: 234.54 MB/s     #
#    Run 2: 233.06 MB/s     #
#    Run 3: 371.31 MB/s     #
#    Run 4: 2891.81 MB/s     #
#    Average: 932.68 MB/s     #
#    Disk: sdac    #
#    Run 1: 234.53 MB/s     #
#    Run 2: 234.97 MB/s     #
#    Run 3: 234.95 MB/s     #
#    Run 4: 235.07 MB/s     #
#    Average: 234.88 MB/s     #
#    Disk: sdad    #
#    Run 1: 236.75 MB/s     #
#    Run 2: 237.15 MB/s     #
#    Run 3: 2774.73 MB/s     #
#    Run 4: 3006.34 MB/s     #
#    Average: 1563.74 MB/s     #
#    Disk: sdae    #
#    Run 1: 238.56 MB/s     #
#    Run 2: 238.23 MB/s     #
#    Run 3: 239.97 MB/s     #
#    Run 4: 240.13 MB/s     #
#    Average: 239.22 MB/s     #
#    Disk: sdaf    #
#    Run 1: 251.04 MB/s     #
#    Run 2: 254.38 MB/s     #
#    Run 3: 1693.78 MB/s     #
#    Run 4: 3005.52 MB/s     #
#    Average: 1301.18 MB/s     #
#    Disk: sdag    #
#    Run 1: 242.64 MB/s     #
#    Run 2: 243.03 MB/s     #
#    Run 3: 243.35 MB/s     #
#    Run 4: 234.01 MB/s     #
#    Average: 240.76 MB/s     #
#    Disk: sdah    #
#    Run 1: 241.66 MB/s     #
#    Run 2: 243.91 MB/s     #
#    Run 3: 729.58 MB/s     #
#    Run 4: 2211.85 MB/s     #
#    Average: 856.75 MB/s     #
#    Disk: sdai    #
#    Run 1: 236.64 MB/s     #
#    Run 2: 235.42 MB/s     #
#    Run 3: 234.79 MB/s     #
#    Run 4: 232.29 MB/s     #
#    Average: 234.79 MB/s     #
#    Disk: sdaj    #
#    Run 1: 230.71 MB/s     #
#    Run 2: 231.02 MB/s     #
#    Run 3: 232.86 MB/s     #
#    Run 4: 234.65 MB/s     #
#    Average: 232.31 MB/s     #
###################################

Total benchmark time: 372.75 minutes

Thanks Nick, this is a great idea and works out of the box. A couple of items for me:

  1. This does not recognize that a pool is read-only (my secondary NAS holds my primary's replicated datasets) and attempts to create the tn-bench dataset and run tests against it

  2. The read speed test can definitely be inaccurate, especially at 1 and 4 threads: if you have enough memory, all of the data that was just written is still sitting in cache.

###################################
#         DD Benchmark Results for Pool: wtrpool    #
###################################
#    Threads: 1    #
#    1M Seq Write Run 1: 552.61 MB/s     #
#    1M Seq Write Run 2: 546.81 MB/s     #
#    1M Seq Write Run 3: 551.42 MB/s     #
#    1M Seq Write Run 4: 549.24 MB/s     #
#    1M Seq Write Avg: 550.02 MB/s #
#    1M Seq Read Run 1: 9577.44 MB/s      #
#    1M Seq Read Run 2: 10809.15 MB/s      #
#    1M Seq Read Run 3: 10979.96 MB/s      #
#    1M Seq Read Run 4: 9556.07 MB/s      #
#    1M Seq Read Avg: 10230.65 MB/s  #
###################################
#    Threads: 8    #
#    1M Seq Write Run 1: 755.58 MB/s     #
#    1M Seq Write Run 2: 693.80 MB/s     #
#    1M Seq Write Run 3: 666.78 MB/s     #
#    1M Seq Write Run 4: 763.02 MB/s     #
#    1M Seq Write Avg: 719.80 MB/s #
#    1M Seq Read Run 1: 650.08 MB/s      #
#    1M Seq Read Run 2: 622.37 MB/s      #
#    1M Seq Read Run 3: 660.73 MB/s      #
#    1M Seq Read Run 4: 666.42 MB/s      #
#    1M Seq Read Avg: 649.90 MB/s  #
###################################

The only other item is maybe put a prompt/warning that the user has to confirm “This will significantly impact performance of your system while running”. I was so excited to run this I made my ChannelsDVR recordings unwatchable when this was running. :smiley:

Additionally, TN-Bench does not correctly identify the boot-pool (this might be as designed) as it shows both drives as N/A for pool:

---------+---------------------------
Name     | sdi
Model    | CT1000BX500SSD1
Serial   | 2418E8AC3C51
ZFS GUID | None
Pool     | N/A
---------+---------------------------

sudo zpool status -v
  pool: boot-pool
 state: ONLINE
config:

        NAME           STATE     READ WRITE CKSUM
        boot-pool      ONLINE       0     0     0
          mirror-0     ONLINE       0     0     0
            sdi3       ONLINE       0     0     0
            nvme0n1p3  ONLINE       0     0     0

@Theo Thank you for the feedback. I’ve made a few changes based on your testing as well as some of my own over the past couple of days.

I’ve updated the dd command in the pool benchmark to use 20 GiB files instead of 10 GiB files, and to only run it twice instead of four times. This seems to have increased consistency in my testing so far.

I’ve updated the dd command for the disk benchmark to read as much data as there is RAM, or, if the amount of RAM exceeds the size of the disk, to just read the whole disk. This seems to have drastically changed the behavior and produced much better data.

I both expect and want ARC to play a role by default, since it will actually be used in real-world workloads. However, I want to minimize its impact for the sake of more actionable numbers, and I think these changes do that. I did, however, make several changes to the opening message that I think you will appreciate.

root@prod[~]# git clone https://github.com/nickf1227/TN-Bench.git && cd TN-Bench && python3 truenas-bench.py
Cloning into 'TN-Bench'...
remote: Enumerating objects: 85, done.
remote: Counting objects: 100% (85/85), done.
remote: Compressing objects: 100% (85/85), done.
remote: Total 85 (delta 35), reused 0 (delta 0), pack-reused 0 (from 0)
Receiving objects: 100% (85/85), 49.41 KiB | 4.94 MiB/s, done.
Resolving deltas: 100% (35/35), done.

###################################
#                                 #
#          TN-Bench v1.05         #
#          MONOLITHIC.            #
#                                 #
###################################
TN-Bench is an OpenSource Software Script that uses standard tools to Benchmark your System and collect various statistical information via the TrueNAS API.

TN-Bench will make a Dataset in each of your pools for the purposes of this testing that will consume 20 GiB of space for every thread in your system during its run.

After which time we will prompt you to delete the dataset which was created.
###################################

WARNING: This test will make your system EXTREMELY slow during its run. It is recommended to run this test when no other workloads are running.

NOTE: ZFS ARC will also be used and will impact your results. This may be undesirable in some circumstances, and the zfs_arc_max can be set to 1 (which means 1 byte) to prevent ARC from caching.

NOTE: Setting it back to 0 will restore the default behavior, but the system will need to be restarted!
###################################

This in particular is expected behavior based on the output of midclt call pool.query. The boot disk is, however, tested in the individual disk benchmark, which should be sufficient unless you disagree. I’ve added a note about this in the script itself:

    print("\n### Disk Information ###")
    print("###################################")
    print("\nNOTE: The TrueNAS API will return N/A for the Pool for the boot device(s) as well as any disk is not a member of a pool.")
    print("###################################")

It doesn’t have logic to check for this. How did this present for you? Did it run anyway for the pools that are not read-only?

I am running one more, larger, system now to sanity-check the efficacy of the changes made.

###################################
#                                 #
#          TN-Bench v1.05         #
#          MONOLITHIC.            #
#                                 #
###################################
TN-Bench is an OpenSource Software Script that uses standard tools to Benchmark your System and collect various statistical information via the TrueNAS API.

TN-Bench will make a Dataset in each of your pools for the purposes of this testing that will consume 20 GiB of space for every thread in your system during its run.

After which time we will prompt you to delete the dataset which was created.
###################################

WARNING: This test will make your system EXTREMELY slow during its run. It is recommended to run this test when no other workloads are running.

NOTE: ZFS ARC will also be used and will impact your results. This may be undesirable in some circumstances, and the zfs_arc_max can be set to 1 (which means 1 byte) to prevent ARC from caching.

NOTE: Setting it back to 0 will restore the default behavior, but the system will need to be restarted!
###################################

Would you like to continue? (yes/no): yes

### System Information ###
Field                 | Value                                       
----------------------+---------------------------------------------
Version               | TrueNAS-SCALE-25.04.0-MASTER-20250110-005622
Load Average (1m)     | 0.06689453125                               
Load Average (5m)     | 0.142578125                                 
Load Average (15m)    | 0.15283203125                               
Model                 | AMD Ryzen 5 5600G with Radeon Graphics      
Cores                 | 12                                          
Physical Cores        | 6                                           
System Product        | X570 AORUS ELITE                            
Physical Memory (GiB) | 30.75                                       

### Pool Information ###
Field      | Value       
-----------+-------------
Name       | inferno     
Path       | /mnt/inferno
Status     | ONLINE      
VDEV Count | 1           
Disk Count | 5           

VDEV Name  | Type           | Disk Count
-----------+----------------+---------------
raidz1-0    | RAIDZ1         | 5

### Disk Information ###

NOTE: The TrueNAS API will return N/A for the Pool for the boot device(s) as well as the disk name if the disk is not a member of a pool.
Field      | Value                     
-----------+---------------------------
Name       | nvme0n1                   
Model      | INTEL SSDPE21D960GA       
Serial     | PHM29081000X960CGN        
ZFS GUID   | 212601209224793468        
Pool       | inferno                   
Size (GiB) | 894.25                    
-----------+---------------------------
-----------+---------------------------
Name       | nvme3n1                   
Model      | INTEL SSDPE21D960GA       
Serial     | PHM2913000QM960CGN        
ZFS GUID   | 16221756077833732578      
Pool       | inferno                   
Size (GiB) | 894.25                    
-----------+---------------------------
-----------+---------------------------
Name       | nvme5n1                   
Model      | INTEL SSDPE21D960GA       
Serial     | PHM2913000YF960CGN        
ZFS GUID   | 8625327235819249102       
Pool       | inferno                   
Size (GiB) | 894.25                    
-----------+---------------------------
-----------+---------------------------
Name       | nvme2n1                   
Model      | INTEL SSDPE21D960GA       
Serial     | PHM2913000DC960CGN        
ZFS GUID   | 11750420763846093416      
Pool       | inferno                   
Size (GiB) | 894.25                    
-----------+---------------------------
-----------+---------------------------
Name       | nvme4n1                   
Model      | SAMSUNG MZVL2512HCJQ-00BL7
Serial     | S64KNX2T216015            
ZFS GUID   | None                      
Pool       | N/A                       
Size (GiB) | 476.94                    
-----------+---------------------------
-----------+---------------------------
Name       | nvme1n1                   
Model      | INTEL SSDPE21D960GA       
Serial     | PHM2908101QG960CGN        
ZFS GUID   | 10743034860780890768      
Pool       | inferno                   
Size (GiB) | 894.25                    
-----------+---------------------------
-----------+---------------------------

###################################
#                                 #
#       DD Benchmark Starting     #
#                                 #
###################################
Using 12 threads for the benchmark.


Creating test dataset for pool: inferno

Running benchmarks for pool: inferno
Running DD write benchmark with 1 threads...
Run 1 write speed: 411.17 MB/s
Run 2 write speed: 412.88 MB/s
Average write speed: 412.03 MB/s
Running DD read benchmark with 1 threads...
Run 1 read speed: 6762.11 MB/s
Run 2 read speed: 5073.43 MB/s
Average read speed: 5917.77 MB/s
Running DD write benchmark with 3 threads...
Run 1 write speed: 1195.91 MB/s
Run 2 write speed: 1193.22 MB/s
Average write speed: 1194.56 MB/s
Running DD read benchmark with 3 threads...
Run 1 read speed: 4146.25 MB/s
Run 2 read speed: 4161.19 MB/s
Average read speed: 4153.72 MB/s
Running DD write benchmark with 6 threads...
Run 1 write speed: 2060.54 MB/s
Run 2 write speed: 2058.62 MB/s
Average write speed: 2059.58 MB/s
Running DD read benchmark with 6 threads...
Run 1 read speed: 4209.25 MB/s
Run 2 read speed: 4212.84 MB/s
Average read speed: 4211.05 MB/s
Running DD write benchmark with 12 threads...
Run 1 write speed: 2353.74 MB/s
Run 2 write speed: 2184.07 MB/s
Average write speed: 2268.91 MB/s
Running DD read benchmark with 12 threads...
Run 1 read speed: 4191.27 MB/s
Run 2 read speed: 4199.91 MB/s
Average read speed: 4195.59 MB/s

###################################
#         DD Benchmark Results for Pool: inferno    #
###################################
#    Threads: 1    #
#    1M Seq Write Run 1: 411.17 MB/s     #
#    1M Seq Write Run 2: 412.88 MB/s     #
#    1M Seq Write Avg: 412.03 MB/s #
#    1M Seq Read Run 1: 6762.11 MB/s      #
#    1M Seq Read Run 2: 5073.43 MB/s      #
#    1M Seq Read Avg: 5917.77 MB/s  #
###################################
#    Threads: 3    #
#    1M Seq Write Run 1: 1195.91 MB/s     #
#    1M Seq Write Run 2: 1193.22 MB/s     #
#    1M Seq Write Avg: 1194.56 MB/s #
#    1M Seq Read Run 1: 4146.25 MB/s      #
#    1M Seq Read Run 2: 4161.19 MB/s      #
#    1M Seq Read Avg: 4153.72 MB/s  #
###################################
#    Threads: 6    #
#    1M Seq Write Run 1: 2060.54 MB/s     #
#    1M Seq Write Run 2: 2058.62 MB/s     #
#    1M Seq Write Avg: 2059.58 MB/s #
#    1M Seq Read Run 1: 4209.25 MB/s      #
#    1M Seq Read Run 2: 4212.84 MB/s      #
#    1M Seq Read Avg: 4211.05 MB/s  #
###################################
#    Threads: 12    #
#    1M Seq Write Run 1: 2353.74 MB/s     #
#    1M Seq Write Run 2: 2184.07 MB/s     #
#    1M Seq Write Avg: 2268.91 MB/s #
#    1M Seq Read Run 1: 4191.27 MB/s      #
#    1M Seq Read Run 2: 4199.91 MB/s      #
#    1M Seq Read Avg: 4195.59 MB/s  #
###################################
Cleaning up test files...
Running disk read benchmark...
This benchmark tests the 4K sequential read performance of each disk in the system using dd. It is run 2 times for each disk and averaged.
In order to work around ARC caching on systems where it is still enabled, this benchmark reads data in the amount of total system RAM or the total size of the disk, whichever is smaller.
Testing disk: nvme0n1
Testing disk: nvme0n1
Testing disk: nvme3n1
Testing disk: nvme3n1
Testing disk: nvme5n1
Testing disk: nvme5n1
Testing disk: nvme2n1
Testing disk: nvme2n1
Testing disk: nvme4n1
Testing disk: nvme4n1
Testing disk: nvme1n1
Testing disk: nvme1n1

###################################
#         Disk Read Benchmark Results   #
###################################
#    Disk: nvme0n1    #
#    Run 1: 2032.08 MB/s     #
#    Run 2: 1825.83 MB/s     #
#    Average: 1928.95 MB/s     #
#    Disk: nvme3n1    #
#    Run 1: 1964.28 MB/s     #
#    Run 2: 1939.57 MB/s     #
#    Average: 1951.93 MB/s     #
#    Disk: nvme5n1    #
#    Run 1: 1908.79 MB/s     #
#    Run 2: 1948.96 MB/s     #
#    Average: 1928.88 MB/s     #
#    Disk: nvme2n1    #
#    Run 1: 1947.48 MB/s     #
#    Run 2: 1762.31 MB/s     #
#    Average: 1854.90 MB/s     #
#    Disk: nvme4n1    #
#    Run 1: 1829.80 MB/s     #
#    Run 2: 1787.41 MB/s     #
#    Average: 1808.60 MB/s     #
#    Disk: nvme1n1    #
#    Run 1: 1836.51 MB/s     #
#    Run 2: 1879.80 MB/s     #
#    Average: 1858.16 MB/s     #
###################################

Total benchmark time: 15.88 minutes

If you can, I’d love to see how it runs on your system now.

It seemed undeterred and acted like it was running tests on the read-only pool. I checked to see if the tn-bench dataset was there, and it was not. At that point I canned the process and proceeded to run it on my production NAS (and got in trouble with my wife :slight_smile: )

Nice changes in the 1.05 version (testing it now). For a future version, I think making max threads settable via a configuration file or command-line parameter would be smart. My backup system takes 3 1/2 years to do the 8-thread read/write test. (Okay, several hours.)

While I do plan on making things more configurable at some point, the default threading behavior was intentional.

As an example, I have an all-NVMe system with a large number of threads and a lot of RAM bandwidth (top example: 2x Intel(R) Xeon(R) Silver 4114 CPU @ 2.20GHz) and another all-NVMe system (bottom example: 1x AMD Ryzen 5 5600G with Radeon Graphics) with a lot less RAM and RAM bandwidth, but much faster cores.

###################################
#         DD Benchmark Results for Pool: fire    #
###################################
#    Threads: 1    #
#    1M Seq Write Run 1: 237.04 MB/s     #
#    1M Seq Write Run 2: 223.83 MB/s     #
#    1M Seq Write Avg: 230.43 MB/s #
#    1M Seq Read Run 1: 2756.02 MB/s      #
#    1M Seq Read Run 2: 2732.92 MB/s      #
#    1M Seq Read Avg: 2744.47 MB/s  #
###################################
#    Threads: 10    #
#    1M Seq Write Run 1: 2076.10 MB/s     #
#    1M Seq Write Run 2: 2092.43 MB/s     #
#    1M Seq Write Avg: 2084.26 MB/s #
#    1M Seq Read Run 1: 6059.59 MB/s      #
#    1M Seq Read Run 2: 6060.71 MB/s      #
#    1M Seq Read Avg: 6060.15 MB/s  #
###################################
#    Threads: 20    #
#    1M Seq Write Run 1: 2925.10 MB/s     #
#    1M Seq Write Run 2: 2871.85 MB/s     #
#    1M Seq Write Avg: 2898.48 MB/s #
#    1M Seq Read Run 1: 6406.70 MB/s      #
#    1M Seq Read Run 2: 6442.41 MB/s      #
#    1M Seq Read Avg: 6424.56 MB/s  #
###################################
#    Threads: 40    #
#    1M Seq Write Run 1: 2923.48 MB/s     #
#    1M Seq Write Run 2: 2969.82 MB/s     #
#    1M Seq Write Avg: 2946.65 MB/s #
#    1M Seq Read Run 1: 6514.30 MB/s      #
#    1M Seq Read Run 2: 6571.73 MB/s      #
#    1M Seq Read Avg: 6543.02 MB/s  #
###################################

###################################
#         DD Benchmark Results for Pool: inferno    #
###################################
#    Threads: 1    #
#    1M Seq Write Run 1: 411.17 MB/s     #
#    1M Seq Write Run 2: 412.88 MB/s     #
#    1M Seq Write Avg: 412.03 MB/s #
#    1M Seq Read Run 1: 6762.11 MB/s      #
#    1M Seq Read Run 2: 5073.43 MB/s      #
#    1M Seq Read Avg: 5917.77 MB/s  #
###################################
#    Threads: 3    #
#    1M Seq Write Run 1: 1195.91 MB/s     #
#    1M Seq Write Run 2: 1193.22 MB/s     #
#    1M Seq Write Avg: 1194.56 MB/s #
#    1M Seq Read Run 1: 4146.25 MB/s      #
#    1M Seq Read Run 2: 4161.19 MB/s      #
#    1M Seq Read Avg: 4153.72 MB/s  #
###################################
#    Threads: 6    #
#    1M Seq Write Run 1: 2060.54 MB/s     #
#    1M Seq Write Run 2: 2058.62 MB/s     #
#    1M Seq Write Avg: 2059.58 MB/s #
#    1M Seq Read Run 1: 4209.25 MB/s      #
#    1M Seq Read Run 2: 4212.84 MB/s      #
#    1M Seq Read Avg: 4211.05 MB/s  #
###################################
#    Threads: 12    #
#    1M Seq Write Run 1: 2353.74 MB/s     #
#    1M Seq Write Run 2: 2184.07 MB/s     #
#    1M Seq Write Avg: 2268.91 MB/s #
#    1M Seq Read Run 1: 4191.27 MB/s      #
#    1M Seq Read Run 2: 4199.91 MB/s      #
#    1M Seq Read Avg: 4195.59 MB/s  #
###################################

The pools are not apples-to-apples: different drives, different vdev topology. But what may be interesting is that testing at the various thread counts (1 thread, 1/4 of the threads in your system, 1/2 of the threads, and all of the threads) helps you find where your bottleneck is.

Looking at Threads = 1 between those two all-NVMe systems, we can see that there is likely a pretty substantial impact from having higher CPU frequencies and higher IPC.
With only 120% as many disks (both RAIDZ1, 4 vs 5 disks), I am seeing a 140% increase in write performance and a 170% increase in read performance on the bottom system vs the top system.

Then, when the thread count kicks up, the results are flipped on their head because of the additional RAM capacity and RAM bandwidth. Comparing 40t vs 32t in those results shows the system with 120% as many disks (both RAIDZ1, 4 vs 5 disks) falling way behind: it is 61% as fast at writes and only 36% as fast at reads.

You can also see different performance characteristics for each run. Particularly interesting: one of these systems doesn’t scale past 1/4 of its threads, whereas the other system does, which probably speaks pretty loudly to the lack of memory capacity and bandwidth…

Small update to take into consideration spaces in pool names, as well as a handler for a read-only pool.

I’ve tried running this and it seems to fail. It’s giving me results on the order of terabytes per second. Is there anything special I need to do to run this?

Can you provide the output of the script?

Looks like a bunch of permission errors. I probably don’t have ACLs set correctly.

It also does not delete the datasets at the end of the run.

tn-bench output.txt (189.0 KB)

Try running as root (sudo su) instead of just sudo, and from a location you have write access to. For example: sudo su, then mkdir /mnt/ssd_pool/scripts && cd /mnt/ssd_pool/scripts, and then run git clone https://github.com/nickf1227/TN-Bench.git && cd TN-Bench && python3 truenas-bench.py

I’ll add a handler for this in the next version to make it more obvious that we need elevated privileges.

dd: failed to open '/mnt/ssd_pool/tn-bench/file_0.dat': Permission denied
dd: failed to open '/mnt/ssd_pool/tn-bench/file_3.dat': Permission denied
dd: failed to open '/mnt/ssd_pool/tn-bench/file_2.dat': Permission denied
dd: failed to open '/mnt/ssd_pool/tn-bench/file_1.dat': Permission denied

Same goes for the dataset cleanup and removal: if it’s busy or you don’t have permission, it won’t delete the dataset. I’ll see if I can make this more obvious.

Yup, looks like that fixes it. I need to learn permissions better.

Nice, best of luck for this :slight_smile:

I made a similar attempt years ago as a winter holiday project, but did not keep it up, so it’s outdated now.

One thing I’d recommend is moving to fio instead of dd; that’s a much more professional tool and has a lot more options.

(If you want me to take out the link, let me know; I’m not trying to steal your thunder here, just providing it for you to have a look at if you care.)

Question: it’s writing 20 GiB of data for each thread, correct? I’m running into space limitations on my machine. I have an SSD pool with around 1 TB of free space and 56 threads. Is there anything built in to make sure it doesn’t absolutely fill up the pool?

There is currently no space-allocation check in place. Please stop the script from running, and I can work on this in the next version.

-squint- Hey wait a minute- TrueNAS staff? When did that happen @NickF1227?

Oh and cool script. I will give this a shot!

@koop :slight_smile: Much more recently than most of my community involvement, but long enough that it was time to put on the badge.

@Surgikill The new version has a calculator for space allocation. Thank you for the feedback.
