Hi, I have recently switched from Red Hat to TrueNAS SCALE 25.04.2.3. My server and client are directly connected by 10 Gbps fibre, but the speed is slower than expected at ~6.2 Gbps instead of the ~9 Gbps I was getting previously on Red Hat.
Server Specs
AMD EPYC 7282
H11SSL-i
256GB RAM
500GB m.2 NVME SSD
RAIDz2 16x 3.84TB NVME SSD
10gb Emulex OCe11102(R)-N
Client
Windows 11 23H2 (I seem to be one of the people not getting 24H2 yet)
AMD Ryzen 9 7950X3D
64GB RAM
2TB m.2 NVME SSD
10gb Emulex OCe11102(R)-N
I am connecting to the server pool via SMB and getting exactly the same speed as the iperf results, which tells me it's not an SMB issue. Performance of the pool is ~2 GB/s read and write, so I'm clearly leaving some performance on the table. I was going to look into teaming the two fibre ports together in the future if I can fix this performance issue.
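For reference, converting the link's Gbit/s figures into the GB/s units the pool is measured in (divide bits by 8) confirms the pool isn't the bottleneck: even full line rate is ~1.25 GB/s, well under the pool's ~2 GB/s.

```shell
# Convert line rates from Gbit/s to GB/s (divide by 8) to compare against pool throughput.
awk 'BEGIN { printf "10 Gbit/s link ceiling: %.2f GB/s\n", 10 / 8 }'
awk 'BEGIN { printf "observed 6.2 Gbit/s:    %.3f GB/s\n", 6.2 / 8 }'
```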
iperf -c IP returns the following:
iperf -P 8 --bidir -c IP returns the following:
Client to server and server to client results are the same.
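For anyone reproducing the tests, this is a sketch of the commands I'd run, assuming iperf3 on both ends (`--bidir` needs iperf3 >= 3.7); the IP is a placeholder for the server's address.

```shell
# On the TrueNAS box:
iperf3 -s

# On the client, single stream for 30 seconds:
iperf3 -c 192.168.1.10 -t 30

# Eight parallel streams (matches the -P 8 run above):
iperf3 -c 192.168.1.10 -P 8 -t 30

# Bidirectional test:
iperf3 -c 192.168.1.10 -P 8 --bidir

# Reverse direction (server sends) without swapping roles:
iperf3 -c 192.168.1.10 -R
```

Comparing the single-stream and `-P 8` numbers is useful on its own: if one stream is slow but eight streams saturate the link, the problem is per-flow (window size, coalescing) rather than the link itself.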
I have checked the PCIe link speed for both cards, which is showing as PCIe 2.0 x8 as it should. The PCIe slots on the server don't go through the chipset, so that isn't limiting it. The card in the client hasn't moved since it was getting ~9 Gbps, so I haven't re-checked that one.
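On the TrueNAS side, the negotiated link can be double-checked with lspci (the `41:00.0` device address is a placeholder; find the real one first):

```shell
# Locate the Emulex card's PCI address:
lspci | grep -i emulex

# Compare LnkCap (what the card supports) with LnkSta (what was negotiated):
lspci -vv -s 41:00.0 | grep -E 'LnkCap|LnkSta'
# For a PCIe 2.0 x8 card like the OCe11102, LnkSta should show "Speed 5GT/s, Width x8".
```

A LnkSta showing 2.5 GT/s or a narrower width than LnkCap would point at a slot or riser problem.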
Jumbo frames are enabled on both client and server, but they weren't necessary on the Red Hat setup to achieve higher speeds.
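It's worth confirming jumbo frames actually pass end to end rather than just being enabled on each side; a ping with the don't-fragment bit set does this (IPs are placeholders):

```shell
# 8972 = 9000 MTU - 20 (IP header) - 8 (ICMP header).
# From the TrueNAS/Linux side:
ping -M do -s 8972 -c 4 192.168.1.20

# From the Windows client (Command Prompt equivalent):
# ping -f -l 8972 192.168.1.10
```

If this reports "message too long" / "needs to be fragmented", one end is silently still at MTU 1500 despite the setting.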
I disabled Defender on client to test that but it made no difference. CPU load has plenty to spare on both client and server. Running iperf tops the server at 1% load.
Struggling to think of anything else to check. I'm planning on booting to an Ubuntu live CD to test the speed from there as a sanity check, if I get time.
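Since SCALE is Debian-based, before the live CD test it may be worth comparing the NIC's offload and coalescing settings against what the old Red Hat install used, as driver defaults can differ between kernels. The interface name below is a placeholder:

```shell
# Find the interface name first:
ip -br link
IF=enp65s0

# Offloads (TSO/GSO/GRO/checksum) - disabled offloads can cost several Gbit/s:
ethtool -k "$IF"

# Interrupt coalescing - overly aggressive settings can cap per-flow throughput:
ethtool -c "$IF"

# Driver and firmware in use (the Emulex OneConnect cards use be2net on Linux):
ethtool -i "$IF"

# Per-NIC counters - rerun iperf, then look for drops or errors:
ethtool -S "$IF" | grep -Ei 'drop|err'
```

Rising drop or error counters during an iperf run would point at the NIC/driver rather than SMB or the pool.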
Anything else to try would be greatly appreciated.

