10GbE SFP+ with DAC Cable NIC Selection: Intel x520-DA2 vs Mellanox ConnectX-4 Lx

Hello,

Let me start by mentioning that @dan pointed me to @jgreco’s guide to 10 Gb connectivity, here: 10 Gig Networking Primer | TrueNAS Community. I’m reviewing it now, but I’ve already run into a bit of a conundrum that I need some advice on.

tl;dr I’d appreciate any advice on whether I should go with the Intel X520-DA2 or the Mellanox ConnectX-4 Lx (latest NVIDIA stock firmware) to use with DAC cables (and maybe RJ-45 transceivers later) on a machine with a single open PCIe 4.0 x4 slot.

I have a PCIe 4.0 x4 slot available. It’s my only slot.
I’ve got an Intel X520-DA2 and a Mellanox ConnectX-4 Lx available. The Mellanox was a Dell OEM card, but I cross-flashed it to the latest NVIDIA/Mellanox stock firmware.

I know that the Intel card is considered the more robust option for driver support compared to the older ConnectX-2 and ConnectX-3 models, but what about the ConnectX-4?

The Intel X520-DA2 is out of support and very picky about SFP+ RJ-45 modules, while the Mellanox was still getting firmware updates as late as September 2024.

I’ve also never tried to put the Intel X520-DA2 in an x4 slot. It’s a PCIe 2.0 x8 card, so I’m not sure how it will like that.

I’m also not sure why a 2x 10 GbE card like the X520-DA2 needed a PCIe 2.0 x8 connection (about 4 GB/s in each direction), but I suspect that’s just me blundering around not quite knowing exactly what I’m doing, as usual.
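
For anyone else puzzling over the same question, the x8 link makes sense once you run the numbers: two saturated 10 GbE ports need roughly 2.5 GB/s in each direction, and a PCIe 2.0 x4 link only carries about 2 GB/s per direction. Here is a rough sketch of that arithmetic (the per-lane rates and encoding factors are the published PCIe figures; real links lose a bit more to protocol overhead):

```python
# Back-of-the-envelope check: per-direction PCIe bandwidth vs. what 2x 10 GbE needs.
# Lane rates and encoding factors are the published PCIe figures; this ignores
# TLP/protocol overhead, so real usable bandwidth is a bit lower.

LANE_GTS = {"2.0": 5.0, "3.0": 8.0, "4.0": 16.0}                 # GT/s per lane
ENCODING = {"2.0": 8 / 10, "3.0": 128 / 130, "4.0": 128 / 130}   # line-code efficiency

def pcie_gbs(gen: str, lanes: int) -> float:
    """Raw per-direction PCIe bandwidth in GB/s."""
    return LANE_GTS[gen] * ENCODING[gen] * lanes / 8   # Gb/s -> GB/s

need_gbs = 2 * 10 / 8   # two saturated 10 GbE ports, one direction: 2.5 GB/s

for gen, lanes in [("2.0", 8), ("2.0", 4), ("3.0", 4), ("4.0", 4)]:
    bw = pcie_gbs(gen, lanes)
    verdict = "enough" if bw > need_gbs else "NOT enough"
    print(f"PCIe {gen} x{lanes}: {bw:.2f} GB/s per direction -> "
          f"{verdict} for 2x 10 GbE ({need_gbs:.2f} GB/s)")
```

So a 2.0 x4 link would fall just short of two saturated 10 GbE ports, which is presumably why the X520 was specified as x8, while 3.0 x4 or 4.0 x4 has headroom to spare.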

I have three Mellanox ConnectX-4 Lx cards and they all work great at 25 Gbps. Not a single driver issue under Linux/OPNsense.

I do have an Intel E810 card that I bought for a 25 Gbps router project, mainly because I read that the Intel driver has better support for multiple queues (ice_ddp). But I ended up not using it, as the Mellanox gives me close to 25 Gbps throughput and cost $50.

Are you still on CORE?

I ask because I think that the @jgreco 10G primer is based on FreeBSD and probably does not apply to Linux (SCALE).

Then this is a strong incentive to get a PCIe 3.0 or 4.0 NIC over an old PCIe 2.0 one.
Is it an open slot, or x4 electrical in x8/x16 mechanical?

Thanks! That clears things up a lot.

I should have specified in my OP and not just in the tags: I’m on the latest version of SCALE/CE. So, Linux. The PCIe 4.0 x4 slot is a physical x4 open slot.

I agree that a PCIe 3.0 NIC is the better option; I was just a bit concerned after picking up hints that Mellanox cards can be disfavored in a TrueNAS environment.

The Mellanox card I have is the same one @bugacha uses. The documentation actually specifies that it’s designed to run in PCIe 3.0 x4 mode as well as 3.0 x8, so I wouldn’t be doing anything unexpected with it.

I’ve tested it in a PCIe 3.0 x4 slot on an Arch Linux system and got two separate, full-speed 10 GbE interfaces.

I think I should be fine. :slight_smile:
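
If anyone wants to reproduce that kind of check, the simplest route is an iperf3 run against a server on the far side of the link. A minimal sketch (the address is a placeholder, and it assumes iperf3 is installed on both ends, with `iperf3 -s` running on the far side):

```python
#!/usr/bin/env python3
# Rough throughput check against an iperf3 server (the address is a placeholder).
# -P 4 uses four parallel TCP streams; -J asks iperf3 for JSON output.
import json
import subprocess

SERVER = "192.168.1.10"   # example address, substitute your own iperf3 server

result = subprocess.run(
    ["iperf3", "-c", SERVER, "-P", "4", "-t", "10", "-J"],
    capture_output=True, text=True, check=True,
)
report = json.loads(result.stdout)
gbps = report["end"]["sum_received"]["bits_per_second"] / 1e9
print(f"Received ~{gbps:.2f} Gb/s from {SERVER}")
```

With a few parallel streams, a healthy 10 GbE link typically reports somewhere around 9.4 Gb/s of TCP goodput.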


Still rebuilding my system, so I haven’t actually got 10 GbE set up yet, but the card (with two interfaces, according to lspci) came up just fine using the mlx5 driver.

Will report back once I have everything working, just to close the loop for anyone who finds this thread later.
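
In the meantime, for anyone who wants the same quick look at driver binding and link state before the network is fully configured, it can all be read straight out of sysfs. Here’s a minimal sketch using the standard Linux /sys/class/net layout (interface names will differ on your system):

```python
#!/usr/bin/env python3
# Quick sysfs peek at which driver each NIC port is bound to and its link state.
from pathlib import Path

for iface in sorted(Path("/sys/class/net").iterdir()):
    if iface.name == "lo":
        continue
    driver_link = iface / "device" / "driver"
    driver = driver_link.resolve().name if driver_link.exists() else "n/a (virtual)"
    state = (iface / "operstate").read_text().strip()
    try:
        speed = (iface / "speed").read_text().strip() + " Mb/s"
    except OSError:
        speed = "unknown"   # reading speed can fail while the link is down
    print(f"{iface.name:12s} driver={driver:12s} state={state:6s} speed={speed}")
```

On a ConnectX-4 Lx you’d expect to see two ports bound to mlx5_core.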

The PCIe 2.0 aspect of the X520 would be a dealbreaker for me. As for the CX-4 in an x4 slot, it works fine; I have one here. Real-world throughput maxes out around 3.2 GB/s thanks to the combination of PCIe 3.0 and the x4 link width. I’ve twisted every knob under the sun and spent far too much time trying to exceed that number, with zero success across multiple OSes.

No issue with 2x 10 Gb, but 2x 25 Gb is bottlenecked.
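
That lines up with the raw numbers: a PCIe 3.0 x4 link tops out just under 4 GB/s per direction before protocol overhead, so ~3.2 GB/s of real payload is in the expected range, while two 25 GbE ports want about 6.25 GB/s. A quick sanity check, using the same published figures as the sketch earlier in the thread:

```python
# Per-direction ceiling of a PCIe 3.0 x4 link vs. dual-port Ethernet line rates.
link_gbs = 8.0 * (128 / 130) * 4 / 8   # 8 GT/s/lane, 128b/130b encoding, 4 lanes -> ~3.94 GB/s
for label, line_rate_gbps in [("2x 10 GbE", 20), ("2x 25 GbE", 50)]:
    need_gbs = line_rate_gbps / 8
    verdict = "fits" if need_gbs < link_gbs else "bottlenecked"
    print(f"{label}: needs {need_gbs:.2f} GB/s vs ~{link_gbs:.2f} GB/s raw -> {verdict}")
```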


Thanks for your reply. So far, it’s stable and working well at 2x 10 GbE. I don’t have any 25 GbE equipment, so I can’t be tempted into trying to make it work. :wink:

Though, I will say: having 10 Gbps+ networking sure turned out to be a great way to find all the bottlenecks in my network that have nothing to do with the NICs in my servers or my switches. :stuck_out_tongue:
