This is a community resource that is not officially endorsed or supported by iXsystems. It is a personal passion project, and all opinions expressed are my own.
Backstory (You can skip ahead)
About 18 months ago my furnace smogged up my whole basement, which is incidentally where all of my computers live. All of the computers from that time (2 EPYC servers, and now my gaming desktop) slowly died off from carbon buildup, despite my best efforts.
Essentially, the TL;DR is: I built a new gaming PC shortly after that happened…and repurposed my old one as a TrueNAS box. I needed to improvise for a while after that, so I connected a LUN to my new gaming desktop via the magic of TrueNAS. It was working great…well, now that motherboard has crapped out too.
I daily-drive (“DD”, not dd) a 2018 Mac Mini, so I’m stuck with 512GB of local disk…
I have an obnoxiously large “Home Directory” synced between it and my Windows PC… and keeping all of that in sync meant I needed to store stuff “somewhere else” in Windows, and I wanted it fast.
Today I finally resurrected one of the EPYC servers I lost. I got a new case and a replacement motherboard. The old RAM and CPU all check out after a day of memtest86. Basically I just did a forklift swap of the motherboard and case…and reused all of the other parts I already had.
Some parts I cleaned with IPA…others I soaked in IPA…and everything dried inside a Corsi-Rosenthal box.
System Migration, Same Pool
How this all relates to TN-Bench (now tn-bench) is…benchmarking. I have a 3-week-old result from that previous system:
Specs
Ryzen 5 5600G 6C-12T 3.9GHz Base
Gigabyte X570 AORUS Elite
32GB 3200MHz G.Skill
RAIDZ1 - Single VDEV - 5x Intel Optane 905P 960GB
So now I have what I had before:
AMD EPYC 7F52 16C-32T
REPLACEMENT Supermicro H12-SSL-I
256 GB (8X32G) DDR4 3200MHz RDIMM
RAIDZ1 - Single VDEV - 5x Intel Optane 905P 960GB
In the old forum I had done some testing with different PHY/bus layers and generational improvements…like doing bifurcation, using a PLX switch, or upgrading from one hardware generation to the next.
I hope to pick this back up in a more meaningful way now, measured with tn-bench. Lots more PCIe lanes to play with.
Old Threads for Reference:
But for now…I just want to see how these two wildly different systems compare in tn-bench.
See other benchmarks here https://cpu-benchmark.org/compare/amd-epyc-7f52/amd-ryzen-5-5600g/
Starting things off, these are the results I got from the Ryzen 5 5600G rig about 3 weeks ago (when I last updated this program). It died about 2 weeks after this run.
Ryzen 5 5600G Full Result
###################################
# #
# TN-Bench v1.07 #
# MONOLITHIC. #
# #
###################################
TN-Bench is an OpenSource Software Script that uses standard tools to Benchmark your System and collect various statistical information via the TrueNAS API.
TN-Bench will make a Dataset in each of your pools for the purposes of this testing that will consume 20 GiB of space for every thread in your system during its run.
After which time we will prompt you to delete the dataset which was created.
###################################
WARNING: This test will make your system EXTREMELY slow during its run. It is recommended to run this test when no other workloads are running.
NOTE: ZFS ARC will also be used and will impact your results. This may be undesirable in some circumstances, and the zfs_arc_max can be set to 1 (which means 1 byte) to prevent ARC from caching.
NOTE: Setting it back to 0 will restore the default behavior, but the system will need to be restarted!
###################################
Would you like to continue? (yes/no): yes
### System Information ###
Field | Value
----------------------+---------------------------------------
Version | 25.04.0
Load Average (1m) | 0.1220703125
Load Average (5m) | 0.2275390625
Load Average (15m) | 0.25244140625
Model | AMD Ryzen 5 5600G with Radeon Graphics
Cores | 12
Physical Cores | 6
System Product | X570 AORUS ELITE
Physical Memory (GiB) | 30.75
### Pool Information ###
Field | Value
-----------+-------------
Name | inferno
Path | /mnt/inferno
Status | ONLINE
VDEV Count | 1
Disk Count | 5
VDEV Name | Type | Disk Count
-----------+----------------+---------------
raidz1-0 | RAIDZ1 | 5
### Disk Information ###
###################################
NOTE: The TrueNAS API will return N/A for the Pool for the boot device(s) as well as any disk is not a member of a pool.
###################################
Field | Value
-----------+---------------------------
Name | nvme0n1
Model | INTEL SSDPE21D960GA
Serial | PHM29081000X960CGN
ZFS GUID | 212601209224793468
Pool | inferno
Size (GiB) | 894.25
-----------+---------------------------
-----------+---------------------------
Name | nvme2n1
Model | INTEL SSDPE21D960GA
Serial | PHM2913000QM960CGN
ZFS GUID | 16221756077833732578
Pool | inferno
Size (GiB) | 894.25
-----------+---------------------------
-----------+---------------------------
Name | nvme4n1
Model | INTEL SSDPE21D960GA
Serial | PHM2913000YF960CGN
ZFS GUID | 8625327235819249102
Pool | inferno
Size (GiB) | 894.25
-----------+---------------------------
-----------+---------------------------
Name | nvme5n1
Model | INTEL SSDPE21D960GA
Serial | PHM2913000DC960CGN
ZFS GUID | 11750420763846093416
Pool | inferno
Size (GiB) | 894.25
-----------+---------------------------
-----------+---------------------------
Name | nvme3n1
Model | SAMSUNG MZVL2512HCJQ-00BL7
Serial | S64KNX2T216015
ZFS GUID | None
Pool | N/A
Size (GiB) | 476.94
-----------+---------------------------
-----------+---------------------------
Name | nvme1n1
Model | INTEL SSDPE21D960GA
Serial | PHM2908101QG960CGN
ZFS GUID | 10743034860780890768
Pool | inferno
Size (GiB) | 894.25
-----------+---------------------------
-----------+---------------------------
###################################
# DD Benchmark Starting #
###################################
Using 12 threads for the benchmark.
Creating test dataset for pool: inferno
Created temporary dataset: inferno/tn-bench
Dataset inferno/tn-bench created successfully.
=== Space Verification ===
Available space: 765.37 GiB
Space required: 240.00 GiB (20 GiB/thread × 12 threads)
Sufficient space available - proceeding with benchmarks...
###################################
# #
# DD Benchmark Starting #
# #
###################################
Using 12 threads for the benchmark.
Creating test dataset for pool: inferno
Dataset inferno/tn-bench created successfully.
Running benchmarks for pool: inferno
Running DD write benchmark with 1 threads...
Run 1 write speed: 408.21 MB/s
Run 2 write speed: 404.35 MB/s
Average write speed: 406.28 MB/s
Running DD read benchmark with 1 threads...
Run 1 read speed: 10529.35 MB/s
Run 2 read speed: 14742.91 MB/s
Average read speed: 12636.13 MB/s
Running DD write benchmark with 3 threads...
Run 1 write speed: 1145.73 MB/s
Run 2 write speed: 1141.83 MB/s
Average write speed: 1143.78 MB/s
Running DD read benchmark with 3 threads...
Run 1 read speed: 8261.30 MB/s
Run 2 read speed: 8395.17 MB/s
Average read speed: 8328.24 MB/s
Running DD write benchmark with 6 threads...
Run 1 write speed: 1838.74 MB/s
Run 2 write speed: 1846.15 MB/s
Average write speed: 1842.44 MB/s
Running DD read benchmark with 6 threads...
Run 1 read speed: 8424.02 MB/s
Run 2 read speed: 8464.73 MB/s
Average read speed: 8444.38 MB/s
Running DD write benchmark with 12 threads...
Run 1 write speed: 2217.72 MB/s
Run 2 write speed: 2247.58 MB/s
Average write speed: 2232.65 MB/s
Running DD read benchmark with 12 threads...
Run 1 read speed: 8469.45 MB/s
Run 2 read speed: 8508.80 MB/s
Average read speed: 8489.13 MB/s
###################################
# DD Benchmark Results for Pool: inferno #
###################################
# Threads: 1 #
# 1M Seq Write Run 1: 408.21 MB/s #
# 1M Seq Write Run 2: 404.35 MB/s #
# 1M Seq Write Avg: 406.28 MB/s #
# 1M Seq Read Run 1: 10529.35 MB/s #
# 1M Seq Read Run 2: 14742.91 MB/s #
# 1M Seq Read Avg: 12636.13 MB/s #
###################################
# Threads: 3 #
# 1M Seq Write Run 1: 1145.73 MB/s #
# 1M Seq Write Run 2: 1141.83 MB/s #
# 1M Seq Write Avg: 1143.78 MB/s #
# 1M Seq Read Run 1: 8261.30 MB/s #
# 1M Seq Read Run 2: 8395.17 MB/s #
# 1M Seq Read Avg: 8328.24 MB/s #
###################################
# Threads: 6 #
# 1M Seq Write Run 1: 1838.74 MB/s #
# 1M Seq Write Run 2: 1846.15 MB/s #
# 1M Seq Write Avg: 1842.44 MB/s #
# 1M Seq Read Run 1: 8424.02 MB/s #
# 1M Seq Read Run 2: 8464.73 MB/s #
# 1M Seq Read Avg: 8444.38 MB/s #
###################################
# Threads: 12 #
# 1M Seq Write Run 1: 2217.72 MB/s #
# 1M Seq Write Run 2: 2247.58 MB/s #
# 1M Seq Write Avg: 2232.65 MB/s #
# 1M Seq Read Run 1: 8469.45 MB/s #
# 1M Seq Read Run 2: 8508.80 MB/s #
# 1M Seq Read Avg: 8489.13 MB/s #
###################################
Cleaning up test files...
Running disk read benchmark...
###################################
This benchmark tests the 4K sequential read performance of each disk in the system using dd. It is run 2 times for each disk and averaged.
In order to work around ARC caching in systems with it still enabled, This benchmark reads data in the amount of total system RAM or the total size of the disk, whichever is smaller.
###################################
Testing disk: nvme0n1
Testing disk: nvme0n1
Testing disk: nvme2n1
Testing disk: nvme2n1
Testing disk: nvme4n1
Testing disk: nvme4n1
Testing disk: nvme5n1
Testing disk: nvme5n1
Testing disk: nvme3n1
Testing disk: nvme3n1
Testing disk: nvme1n1
Testing disk: nvme1n1
###################################
# Disk Read Benchmark Results #
###################################
# Disk: nvme0n1 #
# Run 1: 1735.62 MB/s #
# Run 2: 1543.09 MB/s #
# Average: 1639.36 MB/s #
# Disk: nvme2n1 #
# Run 1: 1526.69 MB/s #
# Run 2: 1432.16 MB/s #
# Average: 1479.42 MB/s #
# Disk: nvme4n1 #
# Run 1: 1523.02 MB/s #
# Run 2: 1412.82 MB/s #
# Average: 1467.92 MB/s #
# Disk: nvme5n1 #
# Run 1: 1523.86 MB/s #
# Run 2: 1463.96 MB/s #
# Average: 1493.91 MB/s #
# Disk: nvme3n1 #
# Run 1: 1533.71 MB/s #
# Run 2: 1482.33 MB/s #
# Average: 1508.02 MB/s #
# Disk: nvme1n1 #
# Run 1: 1624.40 MB/s #
# Run 2: 1547.07 MB/s #
# Average: 1585.73 MB/s #
###################################
Total benchmark time: 10.62 minutes
EPYC 7F52 Full Result
root@prod:/mnt/inferno/tn_scripts/TN-Bench# ls
LICENSE README.md 'et.query ' truenas-bench.py
root@prod:/mnt/inferno/tn_scripts/TN-Bench# python3 truenas-bench.py
###################################
# #
# TN-Bench v1.07 #
# MONOLITHIC. #
# #
###################################
TN-Bench is an OpenSource Software Script that uses standard tools to Benchmark your System and collect various statistical information via the TrueNAS API.
TN-Bench will make a Dataset in each of your pools for the purposes of this testing that will consume 20 GiB of space for every thread in your system during its run.
After which time we will prompt you to delete the dataset which was created.
###################################
WARNING: This test will make your system EXTREMELY slow during its run. It is recommended to run this test when no other workloads are running.
NOTE: ZFS ARC will also be used and will impact your results. This may be undesirable in some circumstances, and the zfs_arc_max can be set to 1 (which means 1 byte) to prevent ARC from caching.
NOTE: Setting it back to 0 will restore the default behavior, but the system will need to be restarted!
###################################
Would you like to continue? (yes/no): yes
### System Information ###
Field | Value
----------------------+--------------------------------
Version | 25.10.0-MASTER-20250520-015451
Load Average (1m) | 0.025390625
Load Average (5m) | 0.07666015625
Load Average (15m) | 0.0556640625
Model | AMD EPYC 7F52 16-Core Processor
Cores | 32
Physical Cores | 16
System Product | Super Server
Physical Memory (GiB) | 251.55
### Pool Information ###
Field | Value
-----------+-------------
Name | inferno
Path | /mnt/inferno
Status | ONLINE
VDEV Count | 1
Disk Count | 5
VDEV Name | Type | Disk Count
-----------+----------------+---------------
raidz1-0 | RAIDZ1 | 5
### Disk Information ###
###################################
NOTE: The TrueNAS API will return N/A for the Pool for the boot device(s) as well as any disk is not a member of a pool.
###################################
Field | Value
-----------+---------------------------
Name | nvme0n1
Model | INTEL SSDPE21D960GA
Serial | PHM29081000X960CGN
ZFS GUID | 212601209224793468
Pool | inferno
Size (GiB) | 894.25
-----------+---------------------------
-----------+---------------------------
Name | nvme3n1
Model | INTEL SSDPE21D960GA
Serial | PHM2913000QM960CGN
ZFS GUID | 16221756077833732578
Pool | inferno
Size (GiB) | 894.25
-----------+---------------------------
-----------+---------------------------
Name | nvme2n1
Model | INTEL SSDPE21D960GA
Serial | PHM2913000DC960CGN
ZFS GUID | 11750420763846093416
Pool | inferno
Size (GiB) | 894.25
-----------+---------------------------
-----------+---------------------------
Name | nvme4n1
Model | SAMSUNG MZVL2512HCJQ-00BL7
Serial | S64KNX2T216015
ZFS GUID | None
Pool | N/A
Size (GiB) | 476.94
-----------+---------------------------
-----------+---------------------------
Name | nvme5n1
Model | INTEL SSDPE21D960GA
Serial | PHM2908101QG960CGN
ZFS GUID | 10743034860780890768
Pool | inferno
Size (GiB) | 894.25
-----------+---------------------------
-----------+---------------------------
Name | nvme1n1
Model | INTEL SSDPE21D960GA
Serial | PHM2913000YF960CGN
ZFS GUID | 8625327235819249102
Pool | inferno
Size (GiB) | 894.25
-----------+---------------------------
-----------+---------------------------
###################################
# DD Benchmark Starting #
###################################
Using 32 threads for the benchmark.
Creating test dataset for pool: inferno
Dataset inferno/tn-bench created successfully.
=== Space Verification ===
Available space: 882.66 GiB
Space required: 640.00 GiB (20 GiB/thread × 32 threads)
Sufficient space available - proceeding with benchmarks...
###################################
# #
# DD Benchmark Starting #
# #
###################################
Using 32 threads for the benchmark.
Creating test dataset for pool: inferno
Dataset inferno/tn-bench created successfully.
Running benchmarks for pool: inferno
Running DD write benchmark with 1 threads...
Run 1 write speed: 330.56 MB/s
Run 2 write speed: 333.91 MB/s
Average write speed: 332.23 MB/s
Running DD read benchmark with 1 threads...
Run 1 read speed: 13507.10 MB/s
Run 2 read speed: 13834.20 MB/s
Average read speed: 13670.65 MB/s
Running DD write benchmark with 8 threads...
Run 1 write speed: 2537.39 MB/s
Run 2 write speed: 2539.67 MB/s
Average write speed: 2538.53 MB/s
Running DD read benchmark with 8 threads...
Run 1 read speed: 74423.04 MB/s
Run 2 read speed: 75516.14 MB/s
Average read speed: 74969.59 MB/s
Running DD write benchmark with 16 threads...
Run 1 write speed: 4524.23 MB/s
Run 2 write speed: 4527.36 MB/s
Average write speed: 4525.80 MB/s
Running DD read benchmark with 16 threads...
Run 1 read speed: 90504.07 MB/s
Run 2 read speed: 90349.46 MB/s
Average read speed: 90426.77 MB/s
Running DD write benchmark with 32 threads...
Run 1 write speed: 6293.79 MB/s
Run 2 write speed: 6193.00 MB/s
Average write speed: 6243.39 MB/s
Running DD read benchmark with 32 threads...
Run 1 read speed: 25320.35 MB/s
Run 2 read speed: 25346.17 MB/s
Average read speed: 25333.26 MB/s
###################################
# DD Benchmark Results for Pool: inferno #
###################################
# Threads: 1 #
# 1M Seq Write Run 1: 330.56 MB/s #
# 1M Seq Write Run 2: 333.91 MB/s #
# 1M Seq Write Avg: 332.23 MB/s #
# 1M Seq Read Run 1: 13507.10 MB/s #
# 1M Seq Read Run 2: 13834.20 MB/s #
# 1M Seq Read Avg: 13670.65 MB/s #
###################################
# Threads: 8 #
# 1M Seq Write Run 1: 2537.39 MB/s #
# 1M Seq Write Run 2: 2539.67 MB/s #
# 1M Seq Write Avg: 2538.53 MB/s #
# 1M Seq Read Run 1: 74423.04 MB/s #
# 1M Seq Read Run 2: 75516.14 MB/s #
# 1M Seq Read Avg: 74969.59 MB/s #
###################################
# Threads: 16 #
# 1M Seq Write Run 1: 4524.23 MB/s #
# 1M Seq Write Run 2: 4527.36 MB/s #
# 1M Seq Write Avg: 4525.80 MB/s #
# 1M Seq Read Run 1: 90504.07 MB/s #
# 1M Seq Read Run 2: 90349.46 MB/s #
# 1M Seq Read Avg: 90426.77 MB/s #
###################################
# Threads: 32 #
# 1M Seq Write Run 1: 6293.79 MB/s #
# 1M Seq Write Run 2: 6193.00 MB/s #
# 1M Seq Write Avg: 6243.39 MB/s #
# 1M Seq Read Run 1: 25320.35 MB/s #
# 1M Seq Read Run 2: 25346.17 MB/s #
# 1M Seq Read Avg: 25333.26 MB/s #
###################################
Cleaning up test files...
Running disk read benchmark...
###################################
This benchmark tests the 4K sequential read performance of each disk in the system using dd. It is run 2 times for each disk and averaged.
In order to work around ARC caching in systems with it still enabled, This benchmark reads data in the amount of total system RAM or the total size of the disk, whichever is smaller.
###################################
Testing disk: nvme0n1
Testing disk: nvme0n1
Testing disk: nvme3n1
Testing disk: nvme3n1
Testing disk: nvme2n1
Testing disk: nvme2n1
Testing disk: nvme4n1
Testing disk: nvme4n1
Testing disk: nvme5n1
Testing disk: nvme5n1
Testing disk: nvme1n1
Testing disk: nvme1n1
###################################
# Disk Read Benchmark Results #
###################################
# Disk: nvme0n1 #
# Run 1: 1532.82 MB/s #
# Run 2: 1343.92 MB/s #
# Average: 1438.37 MB/s #
# Disk: nvme3n1 #
# Run 1: 1517.45 MB/s #
# Run 2: 1341.46 MB/s #
# Average: 1429.45 MB/s #
# Disk: nvme2n1 #
# Run 1: 1510.00 MB/s #
# Run 2: 1331.97 MB/s #
# Average: 1420.99 MB/s #
# Disk: nvme4n1 #
# Run 1: 1429.24 MB/s #
# Run 2: 1286.49 MB/s #
# Average: 1357.86 MB/s #
# Disk: nvme5n1 #
# Run 1: 1512.97 MB/s #
# Run 2: 1324.44 MB/s #
# Average: 1418.71 MB/s #
# Disk: nvme1n1 #
# Run 1: 1504.12 MB/s #
# Run 2: 1324.81 MB/s #
# Average: 1414.47 MB/s #
###################################
Total benchmark time: 42.86 minutes
I’ll need to start exporting this in JSON because this won’t be maintainable…but for now…
Using some AI Magic to pull out some numbers and make some tables…
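In the meantime, here is a minimal sketch of what that JSON export could look like: a hypothetical standalone parser (not part of tn-bench) that scrapes the averaged pool speeds out of a saved run log, assuming the v1.07 output format shown above.

```python
import json
import re
import sys

# Hypothetical helper: parse a saved tn-bench v1.07 log (like the runs pasted
# above) and emit the per-thread-count averages as JSON. It keys off lines like:
#   Running DD write benchmark with 12 threads...
#   Average write speed: 2232.65 MB/s
THREADS_RE = re.compile(r"Running DD (read|write) benchmark with (\d+) threads")
AVG_RE = re.compile(r"Average (read|write) speed: ([\d.]+) MB/s")

def parse_log(path):
    results = []
    threads = None
    with open(path) as log:
        for line in log:
            m = THREADS_RE.search(line)
            if m:
                threads = int(m.group(2))
                continue
            m = AVG_RE.search(line)
            if m and threads is not None:
                results.append({
                    "threads": threads,
                    "op": m.group(1),
                    "avg_mb_s": float(m.group(2)),
                })
    return results

if __name__ == "__main__":
    print(json.dumps(parse_log(sys.argv[1]), indent=2))
```

Point it at a saved copy of the console output and redirect to a .json file; proper structured output really belongs in tn-bench itself eventually.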
Ryzen vs EPYC – DD Benchmark Comparison for ZFS Pool inferno
Single Thread Performance
Metric | Ryzen | EPYC |
---|---|---|
Seq Write Avg | 406 MB/s | 332 MB/s |
Seq Read Avg | 12.6 GB/s | 13.7 GB/s |
Observation:
- We see here that the Ryzen, with newer cores and higher clocks, beats the EPYC in single-threaded writes.
- But the EPYC, having a lot more RAM and extra memory channels, wins even in single-threaded reads.
Thread Scaling Behavior
Write Performance
Comparison (Ryzen v EPYC threads) | Ryzen 32G (MB/s) | EPYC 256G (MB/s) | EPYC/Ryzen Ratio |
---|---|---|---|
1 Thread (1 v 1) | 406 | 332 | 0.82× |
1/4 Threads (3 v 8) | 1,144 | 2,539 | 2.22× |
Ryzen 1/2 v EPYC 1/4 (6 v 8) | 1,842 | 2,539 | 1.38× |
1/2 Threads (6 v 16) | 1,842 | 4,526 | 2.46× |
Ryzen All v EPYC 1/2 (12 v 16) | 2,233 | 4,526 | 2.03× |
All Threads (12 v 32) | 2,233 | 6,243 | 2.80× |
Observation:
- We just see domination here across all comparison points…thanks to massive caches on the CPU, a decent amount of RAM, and those extra juicy memory channels.
- I would love to test this on a “top of the line” DDR5 platform…but that’s unobtainium…at least until they show up in droves on eBay after a 5-year lease.
Key Takeaway: NVMe write performance here tracks memory bandwidth, and that is independent of ARC (which mainly benefits reads).
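For transparency, the ratio column in these comparison tables is nothing fancier than the EPYC average divided by the Ryzen average at each comparison point; a trivial sketch with the write numbers:

```python
# EPYC/Ryzen ratio column, derived from the averaged write speeds above.
comparisons = {
    "1 Thread (1 v 1)": (406, 332),
    "1/4 Threads (3 v 8)": (1144, 2539),
    "1/2 Threads (6 v 16)": (1842, 4526),
    "All Threads (12 v 32)": (2233, 6243),
}
for label, (ryzen_mb_s, epyc_mb_s) in comparisons.items():
    print(f"{label}: {epyc_mb_s / ryzen_mb_s:.2f}x")
```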
Read Performance
Comparison (Ryzen v EPYC threads) | Ryzen 32G (MB/s) | EPYC 256G (MB/s) | EPYC/Ryzen Ratio |
---|---|---|---|
1 Thread (1 v 1) | 12,636 | 13,670 | 1.08× |
1/4 Threads (3 v 8) | 8,328 | 74,970 | 9.02× |
1/2 Threads (6 v 16) | 8,444 | 90,427 | 10.8× |
All Threads (12 v 32) | 8,489 | 25,333 | 2.97× |
Observation:
- Here, I was expecting to see the biggest difference…and we do, if for no other reason than the QUANTITY of RAM (this is NOT Direct IO). It’s especially obvious in the sharp drop-off in the “All Threads” test; at that point we’ve finally exhausted the 256G on the EPYC system.
- The highly cached results in the middle (1/4 to 1/2 load) show just how helpful ARC can be, even in an NVMe world.
Key Takeaway: In terms of “what do I upgrade first”, RAM still reigns supreme, for quantity…and bandwidth. The “newer”, albeit smaller, Ryzen 5600G doesn’t stand a chance with only dual-channel memory.
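If you want to sanity-check how much of a read run was actually served out of ARC rather than the Optanes, the kernel exposes counters in /proc/spl/kstat/zfs/arcstats on SCALE (the standard OpenZFS-on-Linux kstat file). A rough sketch, assuming the usual field names (size, hits, misses), sampled once before and once after a run:

```python
# Rough sketch: snapshot ZFS ARC counters around a benchmark run to estimate
# how much of the read workload came from RAM instead of the disks.
ARCSTATS = "/proc/spl/kstat/zfs/arcstats"

def read_arcstats():
    stats = {}
    with open(ARCSTATS) as f:
        for line in f.readlines()[2:]:  # skip the two kstat header lines
            name, _type, value = line.split()
            stats[name] = int(value)
    return stats

def summarize(before, after):
    hits = after["hits"] - before["hits"]
    misses = after["misses"] - before["misses"]
    total = hits + misses
    print(f"ARC size now: {after['size'] / 2**30:.1f} GiB")
    if total:
        print(f"ARC hit rate during the run: {100 * hits / total:.1f}%")

# Usage: before = read_arcstats(); run tn-bench; after = read_arcstats();
# then summarize(before, after).
```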
Single Disk Single Thread Performance
System | Avg. Across All Disks (MB/s) |
---|---|
Ryzen | 1529.73 |
EPYC | 1413.31 |
Key Takeaway:
Here we again see the extra single-core performance of the Ryzen make it edge out its server-CPU uncle.
Final Thoughts
There’s a lot of recent interest in “mini” NASes with 4 NVMe drives in them. My results here with a gaming board show that all-NVMe pools on top of Celerons and Atoms are bottlenecked by those systems. This is probably obvious to many, but well worth mentioning in terms of using benchmarking to better design your next upgrade.
To be clear, those users’ use cases may be fine with that level of performance. I suspect this class of device would actually work very well for some folks. In fact, for the workload of just serving my “D” drive in Windows, my previous system was working well.
But those systems are not going to give you BRRRR-fast disk access in a memory-bandwidth-starved world. People have high expectations for NVMe drives, but you must not forget that RAID and parity, let alone the rest of ZFS, have a performance cost. When you’re moving data to NVMe drives at speeds approaching system memory…every bit of bandwidth helps. To really see the performance, you need to scale out your workload.
- More threads more better
- AI needs GPU memory…ZFS needs system memory.