Poor NFSv4 Performance

Hi there,

I’ve got a pool with sync disabled and an NFS client connected via 10Gb Ethernet. When I mount the share and run a dd test, it always maxes out at 105MB/s (840Mbps). Running the same test locally on TrueNAS gives me 3.0GB/s (which I expect), but I’d still hope to see 1000-1200MB/s over the NFSv4 mount.
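For reference, the test I’m running is along these lines (the file name and size here are just what I happened to pick):

dd if=/dev/zero of=/my-nfs-mount/testfile bs=1M count=10000 oflag=direct status=progress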

Any advice or tips?


Are you running CORE or SCALE?

What mount options are you using on the client, and what is the client?

Did you run an iperf test? 105MB/s strongly suggests the link negotiated at 1Gb rather than 10Gb.
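Something like this, substituting your server’s address:

iperf3 -s                  # on the TrueNAS server
iperf3 -c 192.168.41.1     # on the client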

SCALE Dragonfish.

Client is RHEL 9.3 with default options:
192.168.41.1:/my-nfs-mount /my-nfs-mount nfs4 rw,defaults 0 0

iperf3 confirms 10Gb/s.

@william - Any ideas? :pray:

None from my end, since we can’t reproduce such results.

Have you tried SMB, to confirm the problem is specific to NFS?

Good idea. Just tried it: 455MB/s over SMB compared to 97.1MB/s over NFS :man_shrugging:

Have you tried setting the nconnect option on the client mount side?

You may want to experiment with tuning on that side first; it can have a significant impact on NFS performance. Note that nconnect should only be used with TrueNAS SCALE NFS servers; it may not be data-safe with CORE.
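Something like this should work for a quick test (server address and path taken from your fstab line above; unmount the share first):

umount /my-nfs-mount
mount -t nfs4 -o nconnect=16 192.168.41.1:/my-nfs-mount /my-nfs-mount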

That worked! However, I wasn’t successful simply adding nconnect=16 to the options field of /etc/fstab. But when I mounted manually with mount -t nfs ..., the mount option appeared.


One more interesting observation: nconnect=16 doesn’t appear to work with NFSv4.2, only with v4.1. So in my /etc/fstab I had to pin the mount to that version by adding vers=4.1. With that in place, mounting via fstab does show the nconnect option included in the mount. For reference, I’m using kernel 5.14.0-362.24.1.el9_3.0.1.x86_64.
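So the working line in my /etc/fstab now looks like this (adapted from the original entry earlier in the thread):

192.168.41.1:/my-nfs-mount /my-nfs-mount nfs4 rw,vers=4.1,nconnect=16 0 0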
