After giving the topic some thought, I concluded that a zvol is unlikely to take advantage of the special VDEV (which the PBS docs recommend for the HDD scenario).
So I considered running PBS as a container as well; that way I could mount a plain dataset as storage. To find out which approach performs better, I ran some tests.
Disclaimer 1: All tests were performed by an amateur hobbyist, not a trained professional. They are not even remotely scientific. Perform your own tests before making a decision.
Disclaimer 2: There is no official Docker container for PBS. Use third-party containers at your own risk.
There is a GitHub repo with a PBS storage benchmarking script. For testing purposes I set up a Docker container based on the python:3.14.0b4-bookworm image (which has Debian 12 under the hood) and a Debian 12.8 VM.
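The script itself was run from a clone of that repo (the repository owner below is a placeholder, I'm only sketching the setup):
git clone https://github.com/<user>/pbs-storage-perf-test
python3 -u pbs-storage-perf-test/create_random_chunks.py /zfs-local/test0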
Results:
Container
## primarycache=none
target dir: /zfs-local/test0/dummy-chunks
filesystem detected by stat(1): zfs
files to write: 500000
files to read/stat: 50000
buckets: 65536
sha256_name_generation: 0.29s
create_buckets: 0.94s
create_random_files: 486.84s
create_random_files_no_buckets: 30.22s
read_file_content_by_id: 0.59s
read_file_content_by_id_no_buckets: 0.49s
stat_file_by_id: 0.22s
stat_file_by_id_no_buckets: 0.19s
find_all_files: 337.33s
find_all_files_no_buckets: 1.68s
VM
## primarycache=none
target dir: /mnt/zvol16k/test4/dummy-chunks
filesystem detected by stat(1): ext2/ext3
files to write: 500000
files to read/stat: 50000
buckets: 65536
sha256_name_generation: 0.29s
create_buckets: 1.85s
create_random_files: 235.43s
create_random_files_no_buckets: 19.31s
read_file_content_by_id: 36.25s
read_file_content_by_id_no_buckets: 11.50s
stat_file_by_id: 16.76s
stat_file_by_id_no_buckets: 4.41s
find_all_files: 34.71s
find_all_files_no_buckets: 3.61s
Various technical details:
Both the container and the VM were limited to 8G of memory. All tests were performed on my main pool with 3x 2-way mirrored HC550 VDEVs (aaand a mirrored D5-P5530 special VDEV). Pool occupancy was ~30%.
Container compose
services:
  debian12:
    extra_hosts:
      - host.docker.internal:host-gateway
    image: python:3.14.0b4-bookworm
    mem_limit: 8G
    restart: unless-stopped
    tty: true
    user: '3000:3000'
    volumes:
      - /mnt/<pool>/pbs-perf-test:/zfs-local
      - /mnt/<pool>/pbs-perf-test-nvme:/zfs-nvme
      - /mnt/smb-loopback:/smb-local
      - /mnt/nfs-loopback:/nfs-local
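For completeness, the container can be brought up and entered like this (service name as in the compose file above):
docker compose up -d
docker compose exec debian12 bash   ## then run the benchmark script from inside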
My comments on the tests start with ##.
The benchmarking script drops the RAM cache after each iteration. I'm not a Linux guy, so I decided not to let the container drop the host's cache: I commented that part out for the container tests, set primarycache=none instead, and prayed that it would be enough. I forgot to set it for every case, though – see the comments.
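For reference, dropping the Linux page cache is normally done with something like the commands below; I assume the script does roughly this, which is why I didn't want it running from inside the container:
sync
echo 3 | sudo tee /proc/sys/vm/drop_caches   ## drop page cache, dentries and inodes
sudo zfs set primarycache=none <pool>/pbs-perf-test   ## what I did instead: no ARC caching for the test dataset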
I didn’t stop my shares, but I tried not to use the NAS during the tests. Some tests were long enough to overlap with my backup job; then again, they were also long enough that I wouldn’t have considered those options anyway.
The datasets for the container used LZ4 compression with a 4M recordsize. The zvol for the VM used LZ4 with a 16k volblocksize (there were other tests with 4k and even 4M). No encryption. The VM disk used the VirtIO driver, a 4096-byte sector size, and thick provisioning.
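For reference, a dataset like the one mounted into the container can be created along these lines (dataset name is a placeholder matching the compose volumes):
sudo zfs create -o recordsize=4M -o compression=lz4 <pool>/pbs-perf-test
## the NVMe-test dataset was the same, but with special_small_blocks set equal to the
## recordsize so that all of its blocks land on the special VDEV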
I tried to mount the SMB share inside the container with no success, and ended up creating an smb-loopback mount on TrueNAS itself. The same goes for NFS.
Commands for SMB and NFS loopbacks
sudo mount -v -t cifs -o user=pbs-test //localhost/<smb-share> /mnt/smb-loopback
sudo mount -v -t nfs localhost:/mnt/<pool>/<nfs-dataset> /mnt/nfs-loopback
Almost all tests were started with the following command:
nohup python3 -u <some-path>/pbs-storage-perf-test/create_random_chunks.py /<mounted-storage>/testX &
Commands used for zvol
## host
sudo zfs create -b 4M -V 10G -o compression=lz4 <pool>/<zvol-name>
sudo zfs set primarycache=none <pool>/<zvol-name>
## guest
sudo fdisk -l
sudo fdisk /dev/vda
## create a partition spanning the whole disk (n, accept the defaults, then w)
sudo mkfs.ext4 /dev/vda1
sudo mkdir /mnt/zvol4m
sudo mount -t ext4 -o rw /dev/vda1 /mnt/zvol4m
sudo chmod 777 /mnt/zvol4m
mkdir /mnt/zvol4m/test1
cd /mnt/zvol4m/test1
su
nohup python3 -u /<some-path>/pbs-storage-perf-test/create_random_chunks.py /mnt/zvol4m/test1 &
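To double-check what a given zvol actually ended up with on the host, something like this can be used:
sudo zfs get volsize,volblocksize,compression,refreservation,primarycache <pool>/<zvol-name>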
Other test results:
Container
## Duplicate test to check reproducibility of results
## primarycache=none
target dir: /zfs-local/test1/dummy-chunks
filesystem detected by stat(1): zfs
files to write: 500000
files to read/stat: 50000
buckets: 65536
sha256_name_generation: 0.30s
create_buckets: 0.95s
create_random_files: 489.14s
create_random_files_no_buckets: 14.61s
read_file_content_by_id: 0.52s
read_file_content_by_id_no_buckets: 0.49s
stat_file_by_id: 0.22s
stat_file_by_id_no_buckets: 0.19s
find_all_files: 335.54s
find_all_files_no_buckets: 1.37s
--------------------
## Enabled cache.
## primarycache=all
target dir: /zfs-local/test4/dummy-chunks
filesystem detected by stat(1): zfs
files to write: 500000
files to read/stat: 50000
buckets: 65536
sha256_name_generation: 0.30s
create_buckets: 0.99s
create_random_files: 19.96s
create_random_files_no_buckets: 11.48s
read_file_content_by_id: 0.53s
read_file_content_by_id_no_buckets: 0.50s
stat_file_by_id: 0.23s
stat_file_by_id_no_buckets: 0.19s
find_all_files: 9.63s
find_all_files_no_buckets: 0.20s
--------------------
## NVMe mirror test (via a dataset with special_small_blocks=recordsize)
## primarycache=none
target dir: /zfs-nvme/test3/dummy-chunks
filesystem detected by stat(1): zfs
files to write: 500000
files to read/stat: 50000
buckets: 65536
sha256_name_generation: 0.29s
create_buckets: 1.00s
create_random_files: 479.72s
create_random_files_no_buckets: 13.76s
read_file_content_by_id: 0.54s
read_file_content_by_id_no_buckets: 0.50s
stat_file_by_id: 0.23s
stat_file_by_id_no_buckets: 0.19s
find_all_files: 345.11s
find_all_files_no_buckets: 1.34s
--------------------
## NVMe with cache enabled
## primarycache=all
target dir: /zfs-nvme/test2/dummy-chunks
filesystem detected by stat(1): zfs
files to write: 500000
files to read/stat: 50000
buckets: 65536
sha256_name_generation: 0.30s
create_buckets: 0.97s
create_random_files: 24.61s
create_random_files_no_buckets: 11.61s
read_file_content_by_id: 0.53s
read_file_content_by_id_no_buckets: 0.50s
stat_file_by_id: 0.23s
stat_file_by_id_no_buckets: 0.19s
find_all_files: 9.99s
find_all_files_no_buckets: 0.20s
--------------------
## SMB loopback test. Left overnight with no progress after create_random_files --> canceled and not considered.
## primarycache=none
target dir: /smb-local/test5/dummy-chunks
filesystem detected by stat(1): smb2
files to write: 500000
files to read/stat: 50000
buckets: 65536
sha256_name_generation: 0.30s
create_buckets: 904.77s
create_random_files: 5945.92s
--------------------
## NFS loopback. Same issues.
## primarycache=all
target dir: /nfs-local/test6/dummy-chunks
filesystem detected by stat(1): nfs
files to write: 500000
files to read/stat: 50000
buckets: 65536
sha256_name_generation: 0.31s
create_buckets: 583.49s
create_random_files: 8975.66s
VM
## After some writing, the 10G zvol had no space left... I first thought this was caused by an unreasonable blocksize, but the other tests hit the same issue.
## Eventually "resolved" it by using a 100G zvol.
target dir: /mnt/zvol4m/test1/dummy-chunks
filesystem detected by stat(1): ext2/ext3
files to write: 500000
files to read/stat: 50000
buckets: 65536
sha256_name_generation: 0.29s
create_buckets: 5.15s
create_random_files: 107.10s
Traceback (most recent call last):
File "/home/deb-user/pbs-test/pbs-storage-perf-test/create_random_chunks.py", line 222, in <module>
main()
File "/home/deb-user/pbs-test/pbs-storage-perf-test/create_random_chunks.py", line 188, in main
create_random_files_no_buckets()
File "/home/deb-user/pbs-test/pbs-storage-perf-test/create_random_chunks.py", line 16, in wrapper
func(*args, **kwargs)
File "/home/deb-user/pbs-test/pbs-storage-perf-test/create_random_chunks.py", line 90, in create_random_files_no_buckets
f = open(filename, "w")
^^^^^^^^^^^^^^^^^^^
OSError: [Errno 28] No space left on device: '/mnt/zvol4m/test1/dummy-chunks/no_buckets/3f51cd869a50b770b0507b036c27c195f41fd2bef88b7aa07c412992135d2
## In VM
# du -hs /mnt/zvol4m/
2.6G /mnt/zvol4m/
(TrueNAS UI) data written: 101.55 MiB (1%)
## TrueNAS shell
$ sudo zfs list -o space <pool>/pbs-test-storage
NAME AVAIL USED USEDSNAP USEDDS USEDREFRESERV USEDCHILD
<pool>/pbs-test-storage 29.9T 10.0G 0B 102M 9.90G 0B
--------------------
target dir: /mnt/zvol16k/test2/dummy-chunks
filesystem detected by stat(1): ext2/ext3
files to write: 500000
files to read/stat: 50000
buckets: 65536
sha256_name_generation: 0.28s
create_buckets: 6.01s
create_random_files: 262.30s
Traceback (most recent call last):
File "/home/deb-user/pbs-test/pbs-storage-perf-test/create_random_chunks.py", line 222, in <module>
main()
File "/home/deb-user/pbs-test/pbs-storage-perf-test/create_random_chunks.py", line 188, in main
create_random_files_no_buckets()
File "/home/deb-user/pbs-test/pbs-storage-perf-test/create_random_chunks.py", line 16, in wrapper
func(*args, **kwargs)
File "/home/deb-user/pbs-test/pbs-storage-perf-test/create_random_chunks.py", line 90, in create_random_files_no_buckets
f = open(filename, "w")
^^^^^^^^^^^^^^^^^^^
OSError: [Errno 28] No space left on device: '/mnt/zvol16k/test2/dummy-chunks/no_buckets/f6f08b1c0f1b41d58756959363aa4e69c72e88e3866be74f06e5e27e9b05dd72'
--------------------
target dir: /mnt/zvol4k/test3/dummy-chunks
filesystem detected by stat(1): ext2/ext3
files to write: 500000
files to read/stat: 50000
buckets: 65536
sha256_name_generation: 0.29s
create_buckets: 5.48s
create_random_files: 332.57s
Traceback (most recent call last):
File "/home/deb-user/pbs-test/pbs-storage-perf-test/create_random_chunks.py", line 222, in <module>
main()
File "/home/deb-user/pbs-test/pbs-storage-perf-test/create_random_chunks.py", line 188, in main
create_random_files_no_buckets()
File "/home/deb-user/pbs-test/pbs-storage-perf-test/create_random_chunks.py", line 16, in wrapper
func(*args, **kwargs)
File "/home/deb-user/pbs-test/pbs-storage-perf-test/create_random_chunks.py", line 90, in create_random_files_no_buckets
f = open(filename, "w")
^^^^^^^^^^^^^^^^^^^
OSError: [Errno 28] No space left on device: '/mnt/zvol4k/test3/dummy-chunks/no_buckets/3f51cd869a50b770b0507b036c27c195f41fd2bef88b7aa07c412992135d2>
## In VM
# du -hs /mnt/zvol4k/
2.6G /mnt/zvol4k/
(TrueNAS UI) data written: 519.99 MiB (5%)
## TrueNAS shell
$ sudo zfs list -o space <pool>/pbs-test-storage4k
NAME AVAIL USED USEDSNAP USEDDS USEDREFRESERV USEDCHILD
<pool>/pbs-test-storage4k 29.9T 10.6G 0B 520M 10.1G 0B
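The symptoms above (ENOSPC while du reports only ~2.6G used, on a 10G filesystem holding hundreds of thousands of tiny files) look to me like ext4 running out of inodes rather than a zvol problem, but that's a guess I haven't verified. It could be checked inside the VM, and worked around at mkfs time, roughly like this:
## inside the VM: if IUse% shows 100%, the filesystem is out of inodes, not blocks
df -i /mnt/zvol16k
## a denser inode table can be requested when creating the filesystem,
## e.g. one inode per 4 KiB of space instead of the ext4 default
sudo mkfs.ext4 -i 4096 /dev/vda1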
Conclusion
NFS and SMB shouldn’t be considered. This was already stated in the GitHub repo.
A zvol inside the VM and a dataset in the container have comparable performance. The zvol can be considered better for random operations (it looks like it can utilise the special VDEV after all).
However, I've encountered strange issues with zvols. I'm a newbie with ZFS block storage, and perhaps these issues can be resolved. Right now I'm leaning towards the PBS-in-a-container approach, though.