I've just set up a new TrueNAS SCALE server and am having some performance issues.
Hardware:
CPU: Intel Core Ultra 5 245K
RAM: PNY Performance 32GB (2 x 16GB) DDR5 5600MHz
NIC: Intel X520-DA1, connected to
- PCI_E4 slot (from chipset): PCIe 4.0, up to x4
Motherboard: MSI PRO B860M-A
OS: ElectricEel-24.10.2
Storage: 3x Lexar NM790 4TB (RAIDZ1)
- M.2_1 (from CPU): supports up to PCIe 5.0 x4
- M.2_2 (from chipset): supports up to PCIe 4.0 x4
- M.2_3 (from chipset): supports up to PCIe 4.0 x2 (negotiated link check below)
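Since M.2_3 only gets x2, I wanted to confirm the negotiated link on each drive. Something like this should show it (the 01:00.0 address is just an example; substitute the addresses the first command prints):
lspci | grep -i "non-volatile"              # find each NVMe controller's PCI address
sudo lspci -s 01:00.0 -vv | grep -i lnksta  # repeat per drive; shows negotiated speed/width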
System Configuration:
The 3x NM790 drives are set up in a RAIDZ1 pool, and an SMB share was created on it.
Dataset options are all default.
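To rule out an odd property, the dataset settings that usually matter for SMB write speed can be confirmed with (dataset name as in my pool):
zfs get sync,recordsize,compression,atime nvme_pool/storage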
Symptoms:
Read speeds from the array using Windows file copy are perfect: stable at approx. 1 GB/s.
Write speeds to the array, using both Windows file copy and SCP, are degraded, ranging from 150-300 MB/s.
Testing:
iperf3 results (TrueNAS as the server, receiving from the client) are below:
Accepted connection from 10.0.0.100, port 51108
[ 5] local 10.0.0.61 port 5201 connected to 10.0.0.100 port 51109
[ ID] Interval Transfer Bitrate
[ 5] 0.00-1.00 sec 1.09 GBytes 9.39 Gbits/sec
[ 5] 1.00-2.00 sec 1.10 GBytes 9.44 Gbits/sec
[ 5] 2.00-3.00 sec 1.10 GBytes 9.48 Gbits/sec
[ 5] 3.00-4.00 sec 1.07 GBytes 9.17 Gbits/sec
[ 5] 4.00-5.00 sec 1.10 GBytes 9.41 Gbits/sec
[ 5] 5.00-6.00 sec 1.09 GBytes 9.33 Gbits/sec
[ 5] 6.00-7.00 sec 1.06 GBytes 9.07 Gbits/sec
[ 5] 7.00-8.00 sec 1.10 GBytes 9.42 Gbits/sec
[ 5] 8.00-9.00 sec 1.10 GBytes 9.44 Gbits/sec
[ 5] 9.00-10.00 sec 1.10 GBytes 9.47 Gbits/sec
[ 5] 10.00-10.01 sec 7.13 MBytes 9.46 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate
[ 5] 0.00-10.01 sec 10.9 GBytes 9.36 Gbits/sec receiver
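That covers the client-to-server direction, i.e. the write path. For completeness, the reverse direction can be checked from the client with (sketch, same server IP as above):
iperf3 -c 10.0.0.61 -R -t 30   # -R reverses the test: server sends, client receives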
fio results, run directly on the pool, are below:
truenas_admin@neptune[/mnt/nvme_pool/storage]$ fio --ramp_time=5 --gtod_reduce=1 --numjobs=1 --bs=1M --size=100G --runtime=60s --readwrite=write --name=testfile
testfile: (g=0): rw=write, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=psync, iodepth=1
fio-3.33
Starting 1 process
testfile: Laying out IO file (1 file / 102400MiB)
Jobs: 1 (f=1): [W(1)][76.9%][w=5236MiB/s][w=5236 IOPS][eta 00m:06s]
testfile: (groupid=0, jobs=1): err= 0: pid=28509: Fri Mar 14 02:59:36 2025
write: IOPS=5038, BW=5039MiB/s (5284MB/s)(72.3GiB/14691msec); 0 zone resets
bw ( MiB/s): min= 4162, max= 5928, per=99.97%, avg=5037.57, stdev=488.08, samples=29
iops : min= 4162, max= 5928, avg=5037.38, stdev=488.15, samples=29
cpu : usr=2.27%, sys=50.04%, ctx=95388, majf=0, minf=38
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=0,74027,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=1
Run status group 0 (all jobs):
WRITE: bw=5039MiB/s (5284MB/s), 5039MiB/s-5039MiB/s (5284MB/s-5284MB/s), io=72.3GiB (77.6GB), run=14691-14691msec
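One caveat on that number: it's a buffered write, so some of the 5 GiB/s is presumably being absorbed by RAM/ZFS write buffering rather than hitting the disks. A variant that forces a flush at the end should be closer to sustained on-disk speed (a sketch I'd run on the same dataset):
fio --name=flushtest --bs=1M --size=32G --rw=write --end_fsync=1   # end_fsync forces an fsync before the job completes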
I have also tried splitting the drives into individual pools and testing them one at a time, and I see the same result.
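The next thing I plan to try is watching per-drive throughput while an SMB copy runs, to see whether one drive (e.g. the one in the x2 slot) lags behind the others:
zpool iostat -v nvme_pool 1   # per-disk stats, refreshed every second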