I’ve run into an interesting problem after migrating my working Core 13.0-U6.7 system to Scale 24.04 and then upgrading to 24.10. With both Intel X520-DA2 and XXV710-DA2 NICs (with Intel SFP+ modules), the primary port, to which everything except iSCSI is bound, will only run at 1 Gbps throughput. The second port will run at full 10 Gbps or 25 Gbps speeds. There is a switch between the server and the client I’m using to test. The primary ports on both NICs (both client and server) do not have jumbo frames enabled, but the secondary ports do. The primary and secondary ports are on two separate VLANs as well, and the secondary port is only used for iSCSI storage traffic.
Prior to my migration to Scale this setup was working flawlessly and was able to achieve 10 Gbps throughput with the X520-DA2 on both ports and 25 Gbps with the XXV710-DA2 on both ports. I had tunables set per the “High Speed Networking Tuning to maximize your 10G, 25G, 40G networks” article from the old forum while on Core, but of course none of those apply on Scale, so the only tunable I have set on Scale is net.ipv4.tcp_congestion_control=cubic.
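For reference, that tunable can be verified and set from a shell; a minimal sketch using standard Linux sysctl names (a sysctl -w change on its own won’t persist across reboots):

```
# Show the congestion control algorithm currently in use
sysctl net.ipv4.tcp_congestion_control

# List the algorithms the running kernel makes available
sysctl net.ipv4.tcp_available_congestion_control

# Set cubic for the running system (add it as a sysctl tunable
# in the Scale UI if it should survive a reboot)
sysctl -w net.ipv4.tcp_congestion_control=cubic
```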
I also have the Plex docker container from the repository installed and running and I have 2 VMs that existed prior to the migration configured (they do not have vNICs associated with them at the moment) but not running.
Has anyone run into a similar problem and (hopefully) a solution?
Are you checking the link speed? Are you at 10G or 1G? Check both cards’ reported link speed. Have you tried swapping transceivers? Are you using fiber or DACs? If you boot back into Scale 24.04, do you have the same problem, or does it only occur on 24.10?
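On the Scale side, ethtool will show what each port actually negotiated; a quick sketch (the interface name enp3s0f0 is just a placeholder, substitute your own):

```
# Negotiated speed and duplex for one port (replace the name with yours)
ethtool enp3s0f0 | grep -E 'Speed|Duplex'

# Driver and firmware in use: expect ixgbe for the X520 and i40e for the XXV710
ethtool -i enp3s0f0
```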
Browse some other threads and do the tutorial by TrueNAS-Bot to get your forum trust level up and post images, if necessary. If you haven’t done it already, type this in a new reply and send it to bring up the tutorial:
@TrueNAS-Bot start tutorial
Yep, both ports link at 10G full duplex on the X520 and 25G full duplex on the XXV710.
Both NICs use multimode fiber modules, and both the XXV710 and X520 performed as expected while I was on Core. This problem only appeared after the migration to Scale 24.04. I’ll go ahead and boot back into 24.04 and make sure that it exhibits there as well.
So, to summarise your setup:
You have a single server with two NICs (1x X520-DA2 and 1x XXV710-DA2) and no other NICs in the server (?) connected to a single switch which is then connected to a single client.
Is that correct?
If yes, is it correct to say that neither NIC achieves more than 1 Gbps transfers? What are you testing with, iperf3?
Again, can you verify that there is no connected 1 Gbps NIC in the server at all, no “management” NIC or anything similar?
It’s not clear from your previous answer, but have you verified that the switch sees the corresponding 10 Gbps/25 Gbps link speed on the four ports going to the server?
Client side, what hardware does it use to connect?
What link speed does it negotiate? Please verify both on the client and on the switch.
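On the Windows side, a quick check in an elevated PowerShell; a sketch (the adapter name “Ethernet 2” and the “Jumbo Packet” display name depend on the driver, so treat both as assumptions):

```
# Name, negotiated link speed, and status for every adapter
Get-NetAdapter | Format-Table Name, InterfaceDescription, LinkSpeed, Status

# Jumbo frame setting on the iSCSI-facing port ("Ethernet 2" is a placeholder)
Get-NetAdapterAdvancedProperty -Name "Ethernet 2" -DisplayName "Jumbo Packet"
```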
I’m doing a bit more testing, but I believe the culprit was actually routing performance between the two VLANs; I’ll know for sure once that testing is done. Even though I think I’ve solved the problem, the setup is as follows:
Server
- OS: TrueNAS Scale 24.10
- Tunables
- net.ipv4.tcp_congestion_control=cubic
- NIC: XXV710-DA2 (tested with X520-DA2 as well)
- Optics: Intel 25Gbps SFP28 (tested with Intel 10Gbps SFP+)
- Port 1
- IP: 192.168.3.1/24
- MTU: 1500
- Port 2
- IP: 192.168.254.1/24
- MTU: 9014
- Default gateway: 192.168.3.254
Client
- OS: Windows 11
- NIC: XXV710-DA2 (tested with X520-DA2 as well)
- Optics: Intel 25Gbps SFP28 (tested with Intel 10Gbps SFP+)
- Port 1
- IP: 192.168.2.1/24
- MTU: 1500
- Gateway: 192.168.2.254
- Port 2
- IP: 192.168.254.2/24
- MTU: 9014
- Gateway: none
The client’s Windows route table shows the proper routes that send all traffic not destined for the 192.168.254.0/24 network out port 1 and all traffic destined for 192.168.254.0/24 out port 2.
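For anyone hitting the same symptom, that split is easy to verify on both ends; a sketch using the addresses above:

```
# Windows client: confirm 192.168.254.0/24 egresses port 2
# and everything else uses port 1's gateway
route print -4

# Scale server: 192.168.254.0/24 should appear as an on-link
# (directly connected) route that bypasses the default gateway
ip -4 route show
```

Note that the port-1 test path (192.168.2.0/24 to 192.168.3.0/24) is always routed, so its throughput is bounded by whatever the inter-VLAN router can forward, which fits the ~1 Gbps result.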
Both client and server negotiate 10 Gbps or 25 Gbps (depending on which NIC and which fiber modules are used) and the switch also properly detects a 10 Gbps or 25 Gbps link speed.
Testing was performed with iperf3.
- Ran `iperf3 -s` on the Scale server
- Ran `iperf3 -c 192.168.3.1` on the client and could only achieve 1 Gbps max speeds regardless of whether I used the XXV710-DA2 or the X520-DA2.
- Ran `iperf3 -c 192.168.254.1` on the client and achieved 25 Gbps with the XXV710-DA2 and 10 Gbps with the X520-DA2.
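A few iperf3 variations that help separate a NIC or host limit from a routing limit; a sketch with the same addresses (all flags are standard iperf3 options):

```
# Multiple parallel streams, in case a single TCP stream is the bottleneck
iperf3 -c 192.168.3.1 -P 8

# Reverse mode: the server transmits, to expose any asymmetric path
iperf3 -c 192.168.3.1 -R

# Bind the client to the port-1 address so Windows can't pick
# the wrong source interface for the routed test
iperf3 -c 192.168.3.1 -B 192.168.2.1
```

If the routed path (192.168.3.1) stays near 1 Gbps under all of these while the flat path (192.168.254.1) runs at line rate, that points at the inter-VLAN router rather than the NICs, consistent with the conclusion above.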