Hello everyone,
I’m new to TrueNAS and would appreciate some insight from the community. I’ve recently set up a TrueNAS SCALE (version 24.10.0) installation and integrated it with my Proxmox cluster over NFS 4.1. Performance seems lower than expected, and I’d love to hear your thoughts.
Setup Details:
Pools:
Pool 1: 7 × MIRROR (2-wide) — Usable: 24.2 TB
Pool 2: 4 × MIRROR (2-wide) — Usable: 27.8 TB
Drives: SAMSUNG MZ7LH3T8HMLT-00003 enterprise SSDs
Networking:
TrueNAS box: LACP bond using 2 × 10Gbps SFP+, MTU 9000
Proxmox nodes: Each has a similar bond (2 × 10Gbps SFP+), MTU 9000
Protocol: NFS 4.1 used to share both pools to Proxmox
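In case it matters, this is how I'd check the effective mount options on the Proxmox side (things like rsize/wsize, proto, and nconnect all affect throughput; output obviously depends on the host):

```shell
# On a Proxmox node: list all NFS mounts with their negotiated options
nfsstat -m
# Alternative if nfs-common tooling isn't available:
grep nfs /proc/mounts
```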
Performance Testing:
Test run on the TrueNAS box (local fio):
fio --name=randrw-test --rw=randrw --rwmixread=50 --bs=4k --iodepth=64 --numjobs=4 --time_based --runtime=60 --direct=1 --ioengine=libaio --group_reporting --size=1G
Result:
Read: 71.2k IOPS, ~278 MiB/s
Write: 71.2k IOPS, ~278 MiB/s
Latency: ~1.7–1.8 ms
Test run inside a Proxmox VM (same fio parameters, disk on the TrueNAS NFS share):
Result:
Read: 15k IOPS, ~88 MiB/s
Write: 18k IOPS, ~72 MiB/s
Latency: ~2.1–4.8 ms
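To help rule out the network itself, I could run a raw throughput test between a Proxmox node and the TrueNAS box. Something like this (iperf3 and the hostname are placeholders on my part):

```shell
# On the TrueNAS box: start an iperf3 server
iperf3 -s

# On a Proxmox node: test with multiple parallel streams, since a single
# TCP flow will only ever use one member link of an LACP bond
iperf3 -c truenas.local -P 4 -t 30
```

If a single stream tops out near 10 Gbps and parallel streams don't scale, that would at least tell me the bond hashing isn't spreading flows.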
Considering I’m using enterprise SSDs across multiple mirror vdevs with bonded 2 × 10 Gbps links, I was expecting much higher IOPS and throughput.
Is this the expected performance?
Is my NFS config possibly limiting throughput?
Would switching to iSCSI improve performance for VM workloads?
Any sysctl tweaks or ZFS tunables I should look at?
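For context, here's what I'd check first on the dataset backing the share (dataset name below is just a placeholder for mine). My understanding is that sync writes are the usual suspect for NFS-backed VM storage, since NFS clients commit writes synchronously and each one waits on the ZIL:

```shell
# Show the properties most often mentioned for VM-on-NFS workloads
zfs get sync,recordsize,atime tank/vmstore

# atime=off avoids a metadata write on every read; recordsize is often
# tuned smaller (e.g. 16K-64K) for VM disk images, though I'd want
# advice before changing it on an existing dataset
```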
Any suggestions or ideas to help identify or improve the bottleneck would be greatly appreciated!
Thanks in advance