The X710 is an advanced card with some rather fiddly features, such as an in-firmware LLDP agent, and the ability to provide full VF support to a TrueNAS VM when properly configured in ESXi. As with the X520, this high-performance driver is authored by Intel, but unlike the X520, the SFP vendor lock is enforced in the card firmware, so Intel-compatible SFPs are required.
Here is what I found so far to disable LLDP in the firmware:
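The core of it is the i40e driver's `disable-fw-lldp` private flag. A sketch, with `enp6s0f0np0` as a placeholder interface name (substitute your own):

```shell
# Disable the in-firmware LLDP agent on an X710 port (i40e driver).
# The flag is positively named, so "on" means firmware LLDP is switched OFF.
ethtool --set-priv-flags enp6s0f0np0 disable-fw-lldp on
```

Note that this change is not persistent, so it has to be reapplied after each boot.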
Hint: After disabling the vendor lock a reboot is required.
Questions:
Is there anything else to consider with the Intel X710?
Does someone know which version of the X710 driver is included in the latest TrueNAS SCALE?
Even though this thread is about the Intel X710: is there any other 10 Gig SFP+ NIC that supports higher C-states, supports ASPM, and works with TrueNAS SCALE? At least the X710 supports higher C-states and ASPM, according to https://z8.re/blog/aspm.html. The Mellanox cards seem to have no proper support for these power-saving measures.
I've got the XXV710-DA2, currently running at 10Gb. I haven't tuned anything in particular other than some TCP buffers etc. (not specific to this particular card). I bought the card used and flashed the firmware to the latest version from Intel's web site.
Having said that, I use LACP but haven't looked into LLDP as you mention above; should maybe look at that…
A final note about Mellanox. I know they are not so popular here, probably due to poor support in FreeBSD? But Linux support is rock solid, with Mellanox themselves a fairly active contributor; there's even a Mellanox firmware update tool as part of standard Debian. They are also one of the most commonly seen brands in data centres.
I've read, like yourself, that they prevent power saving in higher C-states, at least the older cards. I haven't assessed that, but I do have a couple of their cards in my home lab (on Linux/Proxmox) and have only had good experiences. Given SCALE is based on a modern Debian kernel, I absolutely expect Mellanox would be well supported there too, but I have no personal experience.
What Mellanox models are you using, and could you check which C-states your system reaches and whether ASPM is supported (a lot of info can be found at https://z8.re/blog/aspm.html)?
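For anyone who wants to check this on their own box, a couple of read-only commands; `06:00.0` is a placeholder for the NIC's PCI address, which `lspci` will show:

```shell
# C-states the CPU exposes (one name per idle state):
cat /sys/devices/system/cpu/cpu0/cpuidle/state*/name
# Whether ASPM is advertised and enabled on the NIC's PCIe link:
lspci -vv -s 06:00.0 | grep -i aspm
# Kernel-wide ASPM policy currently in effect:
cat /sys/module/pcie_aspm/parameters/policy
```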
ConnectX-4 Lx, and they seem to have ASPM enabled. According to powertop I'm typically in C2 most of the time, but there are also no higher states listed. I've never looked at this before; maybe there are also BIOS settings related to this.
All NICs are on latest available firmware. CPU is AMD.
Maybe Intel is the way to go then if you care about power efficiency and the server idles a lot.
This seems like a bit of a jungle though… hardware, firmware, BIOS settings, OS and CPU governors, software… all of which combine in different permutations, leading to different behaviour and results.
I've never looked into it much. Probably should.
I want to bring up the topic again.
I have been running an X710 since TrueNAS Core. Under Core I never had an issue with this card. Now on Fangtooth I am seeing weird behavior, and I think it has something to do with this topic: after some time, the system becomes unreachable and reboots. To be on the safe side, I replaced the network card with an old ConnectX-3. With this NIC the system runs for several days without any problems.
As already described, the change via ethtool is not persistent across reboots. Is there any other way to run a command during startup to switch off LLDP?
Would an Init Command within the Advanced Settings work?
I just went through this whole endeavor, though on 24.04, because I had abysmal upload speed on a 2.5G DAC from fs.com and had to do some testing to find the correct tunings. Here is what I did. Also, make sure your card sits in a PCIe x8 slot.
Sysctl tunables:
| Variable | Value | Description |
|---|---|---|
| net.ipv4.tcp_congestion_control | bbr | Use BBR congestion control for better internet performance |
| net.core.rmem_max | 134217728 | Maximum receive socket buffer size (128 MB) |
| net.core.wmem_max | 134217728 | Maximum send socket buffer size (128 MB) |
| net.ipv4.tcp_rmem | 4096 262144 268435456 | TCP receive buffer: min default max (4 KB 256 KB 256 MB) |
| net.ipv4.tcp_wmem | 4096 65536 134217728 | TCP send buffer: min default max (4 KB 64 KB 128 MB) |
| net.core.netdev_max_backlog | 30000 | Maximum packets in the network device input queue |
| net.ipv4.tcp_slow_start_after_idle | 0 | Disable slow start after idle (better for sustained transfers) |
| net.ipv4.tcp_moderate_rcvbuf | 0 | Disable automatic receive buffer tuning |
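As a quick sanity check on the values in the table, the buffer sizes are exact binary megabytes:

```shell
# Verify the byte counts quoted in the table above.
echo "$((128 * 1024 * 1024))"  # 128 MB -> 134217728 (rmem_max / wmem_max / tcp_wmem max)
echo "$((256 * 1024 * 1024))"  # 256 MB -> 268435456 (tcp_rmem max)
echo "$((256 * 1024))"         # 256 KB -> 262144    (tcp_rmem default)
```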
I then use a little post-init shell script to apply these settings at startup:
ethtool -G enp1s0f1 rx 4096 tx 4096                                          # larger RX/TX ring buffers
ethtool -C enp1s0f1 adaptive-rx off adaptive-tx off rx-usecs 16 tx-usecs 72  # fixed interrupt coalescing
ip link set enp1s0f1 txqueuelen 10000                                        # longer transmit queue
If you use a bridge interface, turn off STP on the bridge if you don't need it: echo 0 > /sys/class/net/br1/bridge/stp_state
To test the command, I set it to Enabled and restarted the system. Unfortunately, it did not work. After the restart, I checked the flags with ethtool:
root@TrueNAS[~]# ethtool --show-priv-flags enp6s0f0np0
Private flags for enp6s0f0np0:
MFP : off
total-port-shutdown : off
LinkPolling : off
flow-director-atr : on
veb-stats : off
hw-atr-eviction : off
link-down-on-close : off
legacy-rx : off
disable-source-pruning : off
disable-fw-lldp : off
rs-fec : off
base-r-fec : off
vf-vlan-pruning : off
vf-true-promisc-support: off
How can I get it to work? Should I use the ethtool command instead?
This is what the script looks like. It's located in /root/startstuff.
root@TrueNAS[~/startstuff]# cd /root/startstuff
root@TrueNAS[~/startstuff]# ls -l
total 5
-rwxr-xr-x 1 root root 216 Aug 13 12:28 poststart.sh
#!/bin/bash
# disable LLDP on interface enp6s0f0np0
ethtool --set-priv-flags enp6s0f0np0 disable-fw-lldp off
# disable LLDP on interface enp6s0f1np1
ethtool --set-priv-flags enp6s0f1np1 disable-fw-lldp off
exit 0
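For what it's worth, the i40e `disable-fw-lldp` private flag is positively named: setting it to `on` is what turns the firmware LLDP agent off. The script above sets it to `off`, which leaves the agent running and matches the `disable-fw-lldp : off` output shown earlier. A corrected sketch, using the interface names from the script:

```shell
#!/bin/bash
# Disable the in-firmware LLDP agent on both X710 ports.
# Note: "disable-fw-lldp on" sets the *disable* flag, i.e. firmware LLDP goes OFF.
ethtool --set-priv-flags enp6s0f0np0 disable-fw-lldp on
ethtool --set-priv-flags enp6s0f1np1 disable-fw-lldp on
exit 0
```

Afterwards, `ethtool --show-priv-flags enp6s0f0np0` should report `disable-fw-lldp : on`.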