Hi all,
I was curious about how much of a performance hit I was taking by using encrypted datasets on a spinning HDD. So I made two new datasets, one encrypted and one not, and ran this script to compare read/write speeds. I am certain there are many limitations and caveats to this analysis, and I did this mostly out of curiosity / for fun, but the results are still quite interesting!
You can find the script here:
Overview
The BenchmarkDrive script is a bash utility for benchmarking the performance of HDDs in a TrueNAS SCALE storage pool. It measures sequential write and read speeds by writing random data from `/dev/urandom` to test files on the specified dataset and then reading them back. The script runs for a user-defined number of iterations and outputs two files (a sketch of this setup follows the list):
- A CSV file containing the read and write speed for each iteration.
- A TXT log file that stores details about the run, such as the dataset directory, file size parameter, iteration count, and timestamp.
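To make the output format concrete, here is a minimal sketch of how the setup phase of such a script might look. This is illustrative only: the file names, log fields, and variable names are my assumptions, not necessarily what the actual script uses.

```bash
#!/bin/bash
# Sketch of the setup phase (illustrative; the actual script may differ).

if [ "$#" -ne 3 ]; then
    echo "Usage: $0 <dataset_directory> <count> <iterations>" >&2
    exit 1
fi

DATASET_DIR="$1"   # mount point of the dataset under test
COUNT="$2"         # test file size in megabytes
ITERATIONS="$3"    # number of write/read cycles

TIMESTAMP=$(date +%Y%m%d_%H%M%S)
CSV_FILE="benchmark_${TIMESTAMP}.csv"   # hypothetical output file names
LOG_FILE="benchmark_${TIMESTAMP}.txt"

# CSV header; one row per iteration is appended by the main loop.
echo "iteration,write_time_s,read_time_s" > "$CSV_FILE"

# TXT log with the run details described above.
{
    echo "Dataset directory: $DATASET_DIR"
    echo "File size (MB):    $COUNT"
    echo "Iterations:        $ITERATIONS"
    echo "Timestamp:         $TIMESTAMP"
} > "$LOG_FILE"
```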
Usage
Save the script as `benchmarkdrive.sh` (or another preferred name). Ensure the script is stored on a pool other than the boot pool (since `/home` on the boot pool has `noexec` set).
Run the script with the following syntax:
```
sudo ./benchmarkdrive.sh <dataset_directory> <count> <iterations>
```
Where:
- `<dataset_directory>`: The mount point of your dataset (e.g., `/mnt/pool/dataset`).
- `<count>`: The size of the test file in megabytes (e.g., `1000` for 1 GB).
- `<iterations>`: The number of times to run the test (e.g., `30`).
Example:
```
sudo ./benchmarkdrive.sh /mnt/pool/dataset 2000 100
```

This command benchmarks `/mnt/pool/dataset` by writing and reading a 2 GB file for 100 iterations.
- Permission Issues: If you encounter "permission denied" errors, ensure the script is stored on a filesystem without the `noexec` restriction (e.g., not in `/home`) and that you are running it with `sudo`.
Method
For each iteration, the following steps are executed (a sketch of the full loop follows the list):

- Write Test:
  - A test file named `testfile_X` (where X is the iteration number) is created in the specified dataset directory.
  - The script uses `dd` with `/dev/urandom` to write random data to the file. The options used (`bs=1M`, `count=<count>`, and `oflag=direct`) set a 1 MiB block size, the file size, and direct I/O, respectively.
  - The output from `dd` is captured, and the write time (in seconds) is extracted using a `sed` regular expression.
- Read Test:
  - The script reads the test file back with `dd`, sending the output to `/dev/null` and using `iflag=direct`.
  - The read time is similarly extracted from the `dd` output.
- Cleanup:
  - The test file is removed after both tests.
  - A new row with the iteration number, write time, and read time is appended to the CSV file.
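Putting those steps together, the core loop might look roughly like the sketch below, continuing the variable names from the setup sketch in the Overview. The `sed` expression assumes GNU `dd`'s English-locale summary line (e.g. `... copied, 5.1034 s, 392 MB/s`); the actual script's extraction may differ.

```bash
# Sketch of one benchmark iteration (illustrative; the actual script may differ).
for i in $(seq 1 "$ITERATIONS"); do
    TESTFILE="${DATASET_DIR}/testfile_${i}"

    # Write test: random data, direct I/O; dd reports timing on stderr.
    WRITE_OUT=$(dd if=/dev/urandom of="$TESTFILE" bs=1M count="$COUNT" oflag=direct 2>&1)
    WRITE_TIME=$(echo "$WRITE_OUT" | sed -n 's/.* copied, \([0-9.]*\) s.*/\1/p')

    # Read test: read the file back to /dev/null with direct I/O.
    READ_OUT=$(dd if="$TESTFILE" of=/dev/null bs=1M iflag=direct 2>&1)
    READ_TIME=$(echo "$READ_OUT" | sed -n 's/.* copied, \([0-9.]*\) s.*/\1/p')

    # Cleanup, then record the result as one CSV row.
    rm -f "$TESTFILE"
    echo "${i},${WRITE_TIME},${READ_TIME}" >> "$CSV_FILE"
done
```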
Results
Test Parameters
- File Size: 2GB
- Iterations: 100
- Pool: Mirror configuration on 2x Toshiba N300 6TB drives
- Datasets: Both the encrypted and unencrypted datasets were created on the same pool with LZ4 compression and no dedup.
- Encryption method: AES-256-GCM (see the dataset-creation sketch after this list)
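For reference, datasets with these properties could be created from the shell roughly as follows. The pool and dataset names are placeholders, and on TrueNAS SCALE this would normally be done through the web UI rather than with `zfs create` directly:

```bash
# Placeholder dataset names; dedup is off by default.
zfs create -o compression=lz4 pool/plain

# Encrypted dataset using AES-256-GCM with a passphrase key.
zfs create -o compression=lz4 \
    -o encryption=aes-256-gcm -o keyformat=passphrase \
    pool/encrypted
```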
Sample Statistics
| Operation | Dataset | Average Time (s) | Standard Deviation (s) |
|---|---|---|---|
| Write | Encrypted | 5.10 | 0.77 |
| Write | Unencrypted | 4.69 | 0.77 |
| Read | Encrypted | 0.513 | 0.33 |
| Read | Unencrypted | 0.263 | 0.062 |
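For intuition, converting the average times to approximate throughput (2000 MB per pass): writes come out to roughly 2000 / 5.10 ≈ 392 MB/s (encrypted) and 2000 / 4.69 ≈ 426 MB/s (unencrypted), while reads come out to roughly 2000 / 0.513 ≈ 3.9 GB/s (encrypted) and 2000 / 0.263 ≈ 7.6 GB/s (unencrypted). The read figures far exceed what two HDDs can deliver, which suggests at least part of each read was served from cache despite `iflag=direct`.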
Analysis
Two-sample t-intervals were calculated for the differences in mean write and read times between the encrypted and unencrypted datasets (a worked example follows the list):
- Write Speed Difference (Encrypted - Unencrypted):
  - Point Estimate: 0.411 seconds
  - 95% Confidence Interval: (0.198, 0.625) seconds
  - Interpretation: Write operations on the encrypted dataset are between 4.2% and 13.3% slower than on the unencrypted dataset.
- Read Speed Difference (Encrypted - Unencrypted):
  - Point Estimate: 0.250 seconds
  - 95% Confidence Interval: (0.184, 0.317) seconds
  - Interpretation: Read operations on the encrypted dataset are between 69.7% and 120.4% slower than on the unencrypted dataset.
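As a sanity check, these intervals are consistent with Welch's two-sample t-interval (assuming that is how they were computed; n = 100 per group):

(mean_E - mean_U) ± t* × sqrt(s_E²/n + s_U²/n)

Plugging in the write-test figures with t* ≈ 1.97: 0.411 ± 1.97 × sqrt(0.77²/100 + 0.77²/100) ≈ 0.411 ± 0.215, i.e. roughly (0.196, 0.626) seconds, which matches the reported interval up to rounding of the summary statistics.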
Observations
- Write Performance: The encrypted dataset exhibits a moderate slowdown in write speeds, with an increase of approximately 0.41 seconds on average. This corresponds to a 4.2%–13.3% reduction in performance, a noticeable but not extreme degradation.
- Read Performance: The penalty for read operations is far more dramatic. The encrypted dataset's average read time is nearly double that of the unencrypted dataset, with an overhead ranging from 69.7% to 120.4%. Additionally, the standard deviation of the encrypted dataset's read times (0.33 seconds) is significantly higher than that of the unencrypted dataset (0.062 seconds), suggesting not only slower performance but also greater variability and potential inconsistency in read speeds.
Conclusion
This benchmark suggests that encryption introduces statistically significant performance penalties:
- Write operations on the encrypted dataset are modestly slower.
- Read operations, however, suffer a substantial performance hit, both in average speed and in consistency.
However, it is unclear what is causing this performance difference. There are many factors at play here, and a deeper analysis would be required to understand exactly what’s happening.