Low Performance Help

I’ve got a new build up and running that seems like it should have a lot more oomph. When the pool was brand new, I briefly saw transfers of 110-190 MB/s, but now it’s crawling along at 35-55 MB/s.
Dell R750xs with Xeon Silver 4309Y
80GB DDR4-2666
PERC H755
(12) 8TB Dell SATA drives - 1 pool, 2 vdevs of 6 disks each
Intel X520-DA2 10Gb NIC

Network transfers seem OK:
Connecting to host, port 5201
[ 5] local port 25442 connected to port 5201
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 702 MBytes 5.89 Gbits/sec 453 319 KBytes
[ 5] 1.00-2.00 sec 903 MBytes 7.57 Gbits/sec 109 297 KBytes
[ 5] 2.00-3.00 sec 895 MBytes 7.51 Gbits/sec 171 457 KBytes
[ 5] 3.00-4.00 sec 920 MBytes 7.72 Gbits/sec 165 414 KBytes
[ 5] 4.00-5.00 sec 915 MBytes 7.67 Gbits/sec 173 384 KBytes
[ 5] 5.00-6.00 sec 896 MBytes 7.52 Gbits/sec 164 249 KBytes
[ 5] 6.00-7.00 sec 876 MBytes 7.34 Gbits/sec 297 235 KBytes
[ 5] 7.00-8.00 sec 697 MBytes 5.85 Gbits/sec 105 321 KBytes
[ 5] 8.00-9.00 sec 673 MBytes 5.64 Gbits/sec 173 286 KBytes

Disk IO I’m honestly not sure how to read, but the CPU and disks seem to be sleeping.
root@truenas[~]# zpool iostat -v
capacity operations bandwidth
pool alloc free read write read write

Pool1 11.3T 75.9T 162 382 5.16M 65.1M
raidz2-0 5.80T 37.8T 82 187 2.61M 34.7M
gptid/bba006de-0d96-11ef-be68-b49691fe6aac - - 14 23 455K 5.79M
gptid/bc1fea04-0d96-11ef-be68-b49691fe6aac - - 13 38 450K 5.79M
gptid/bc2c1cdf-0d96-11ef-be68-b49691fe6aac - - 13 38 451K 5.79M
gptid/bb8dc37b-0d96-11ef-be68-b49691fe6aac - - 14 23 470K 5.79M
gptid/bbb15d49-0d96-11ef-be68-b49691fe6aac - - 13 23 429K 5.79M
gptid/bc31ec74-0d96-11ef-be68-b49691fe6aac - - 12 38 417K 5.79M
raidz2-1 5.54T 38.1T 80 194 2.55M 30.4M
gptid/bc261a25-0d96-11ef-be68-b49691fe6aac - - 12 34 412K 5.06M
gptid/bc190058-0d96-11ef-be68-b49691fe6aac - - 12 34 417K 5.06M
gptid/bac95f89-0d96-11ef-be68-b49691fe6aac - - 13 23 438K 5.06M
gptid/bc384ad8-0d96-11ef-be68-b49691fe6aac - - 13 34 428K 5.06M
gptid/bc0e6d32-0d96-11ef-be68-b49691fe6aac - - 14 34 457K 5.06M
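One thing worth noting about the output above: a bare `zpool iostat -v` reports averages since the pool was imported, not what the disks are doing right now. Sampling with an interval while the copy is running gives a truer picture. A sketch, using the pool name from the output above:

```shell
# Sample pool activity every 5 seconds while the copy runs; Ctrl-C to stop.
# The first report is still the since-import average; trust the later ones.
zpool iostat -v Pool1 5

# If your OpenZFS version supports it, -l adds per-device latency columns,
# which helps spot a single slow disk dragging down the vdev.
zpool iostat -vl Pool1 5
```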

Any help is appreciated.

I know the PERCs are not well liked. I’m looking for a replacement HBA, but I’m also struggling to figure out which HBA would slot in directly to connect to the backplane.

Do you have deduplication enabled?
Do you get any errors, e.g. checksum errors while writing?
What do you use as a client (software & nic)?
Did you set up any VMs on your pool that permanently cause sync writes?
There could be many reasons.

Also, how full is your pool? ZFS performance degrades when there is not much free space left.
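Most of these questions can be answered quickly from a shell on the new NAS. A sketch, assuming the pool is named Pool1 as shown earlier:

```shell
# Confirm dedup is off and check sync/compression settings on the pool
zfs get dedup,sync,compression Pool1

# Any read/write/checksum errors per device, plus scrub/resilver state
zpool status -v Pool1
```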

Thanks for looking.
Dedup is not enabled on the pool or vdevs.
No errors seen in the alerts, all the disks show healthy.
The file copy is between two TrueNAS boxes, both with Dell-branded Intel X520-DA2 10Gb SFP+ NICs.
I’ve been initiating the copies from a beefy Windows desktop using Windows Explorer and TeraCopy.
No VMs on the pool, and the only write is the ~2TB copy happening now.

I’ve shut down services and servers that use the old NAS, so it has no IO to handle other than reading data for the copy.

On my source NAS, I have a 28TB pool that’s 95% full, but I’m trying to clean that up.
The second source pool is only 30% full.

New NAS pool is only 12% used at this point.


I’ve been initiating the copies from a beefy Windows desktop using Windows Explorer and TeraCopy.

How is performance if you get rid of the Windows box and just send data directly between the two TrueNAS boxes using rsync or similar? Having Windows in the middle could be causing a lot of unnecessary overhead, especially if it has less than a 10Gb NIC.
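For example, a pull over SSH run from a shell on the new NAS might look roughly like this (the hostname and dataset paths here are placeholders, not your actual layout):

```shell
# Pull directly from the old NAS over SSH; -a preserves permissions and
# timestamps, --progress shows live per-file throughput to compare rates.
rsync -a --progress root@old-nas:/mnt/oldpool/data/ /mnt/Pool1/data/
```

If the source datasets are snapshotted, a zfs send/recv pipe over SSH is usually faster still, since it streams blocks instead of walking files.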


As noted, remove Windows from the equation and test directly between your TrueNAS systems.

You could also do some iperf testing to validate network throughput and no issues there.
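Since the iperf output above shows a fair number of retransmits, it may be worth testing both directions and with parallel streams to rule out a single-stream bottleneck. A sketch (the server address is a placeholder):

```shell
# On one TrueNAS box, start the server:
iperf3 -s

# On the other box: 4 parallel streams, then again with -R for the
# reverse direction, since throughput can be asymmetric.
iperf3 -c 192.168.1.10 -P 4
iperf3 -c 192.168.1.10 -P 4 -R
```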

At 95% full, that pool is 15% past the 80% threshold where TrueNAS goes into “stuff bytes into the nooks and crannies mode”. That really hurts performance. I suggest a dramatic reduction / shift of data.
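A quick way to see how close each pool is to that threshold, run on the source NAS:

```shell
# Capacity % and fragmentation for every pool. ZFS is copy-on-write, so
# past roughly 80% full it has to hunt for scattered free space and both
# reads and writes slow down noticeably.
zpool list -o name,size,alloc,free,capacity,fragmentation
```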