I’m running the latest version of TrueNAS CORE on a 2U server with fairly beefy specs: 64c/128t EPYC, 256 GB of RAM, and 24 SAS SSDs. The server is connected to the switch with an Intel XXV710-DA2 2x 25 GbE card.
When testing with a file transfer, or with a network-mount benchmark like AJA, write speeds from our 10 GbE clients cap out at around 5 Gb/s, while reads run at the full 10 Gb/s line rate. I don’t believe this is an issue with the actual disks, but rather a network one.
iperf3 tests between a client and the TrueNAS box show similar results: with a single stream, only one direction of the transfer reaches line speed, but if I run iperf3 with the -P 8 flag the link gets fully saturated in both directions. The clients themselves can send and receive to/from each other at full line speed with iperf3.
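For reference, this is roughly how I’m testing (hostnames are placeholders; “truenas” is the box running `iperf3 -s`):

```
# Single stream, client -> server (the direction that seems to match the slow writes)
iperf3 -c truenas -t 30

# Single stream, server -> client, using reverse mode from the same client
iperf3 -c truenas -t 30 -R

# Eight parallel streams saturate the link in both directions
iperf3 -c truenas -t 30 -P 8
iperf3 -c truenas -t 30 -P 8 -R
```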
I’ve followed the 10+ GbE primer resource on this forum but am stumped as to why there is such a large difference between the server’s send and receive speeds…
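In case it’s relevant, these are the kinds of TCP buffer settings from the primer I’ve been double-checking on the CORE side (just reading the current values here, not suggesting any particular numbers are correct):

```
# FreeBSD TCP socket buffer limits and autotuning (read-only check)
sysctl kern.ipc.maxsockbuf
sysctl net.inet.tcp.sendbuf_max net.inet.tcp.recvbuf_max
sysctl net.inet.tcp.sendbuf_auto net.inet.tcp.recvbuf_auto
```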