Mellanox Hell - ConnectX-3 and InfiniBand Support

I have a new installation of TrueNAS Core with SMB shares to Windows, and it's all working on a 1 Gb NIC. Love the setup, however…

Adding ConnectX-3 Pro cards… two cards added so far…

A CX314X cross-flashed to MCX354A-FCBT; it's in InfiniBand mode…
Also an IBM 40 Gb card, CB194A… and yes, also in InfiniBand mode.

TrueNAS sees the card in the shell, but it's not active. I tried using command-line tools to force Ethernet mode.

I know going from 1 Gb to 40 Gb will make a difference in transfer speeds, but everything I have tried so far to change the mode on these cards has failed.
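
Roughly what I mean by "seen in the shell but not active" (this is from the Core shell; the exact device names are from memory):

```sh
# The card shows up on the PCI bus...
pciconf -lv | grep -B1 -A2 -i mellanox

# ...check whether the mlx4 driver modules are loaded...
kldstat | grep mlx

# ...but no mlxen Ethernet interface ever appears while the card sits in InfiniBand mode
ifconfig -a
```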

At this point I'm hoping to either solve the issue with what I have, or avoid ConnectX cards entirely and get some 40 Gb QSFP cards that will work out of the box…

Can anyone please offer a little help? It would be appreciated, thanks.


Yeah, Mellanox NICs are dodgy in FreeBSD/Core. Experiences are decidedly mixed.


Is there a straight-up 40 Gb QSFP card that is NOT Mellanox and works in Ethernet mode out of the box?

Sure, the Intel XL710 is probably the most basic option in the 40 GbE field. Newer 100 GbE stuff should also support 40 GbE.

Thank you for that. Are there any other models available in the 40 Gb speed range? I looked that one up on eBay… it's a little pricey for my budget right now…

Not as cheap as InfiniBand ConnectX-3s… They're cheap for good reasons.

That’s some awesome network speed you’re looking for. I hope it goes well! All flash pool?

Chelsio T580 should work as well.
But QSFP+ is dead and 25 GbE cards are already hitting the second-hand market. Same recommendation: Intel XXV710 or Chelsio T6225 (or T62100 if you dare…).


I have a 40 Gb switch where each port is QSFP+ 40-56 Gb, but after seeing how much TrueNAS hates InfiniBand, I would rather skip it and go with Ethernet mode, and I'm finding it difficult to get the network cards out of InfiniBand mode. So Intel, or anything other than ConnectX-3-style cards, would be fine.

I'm using an X11 board, a 3x NVMe stripe, and a 5x 18 TB SATA RAIDZ1. It seems like my biggest issue is getting the fiber to work. I know I'm not going to get 40 Gb, but at least 10x better than now… hopefully.

UPDATE: thanks to your advice, I'm getting the Chelsio T580. I got the first one, but I need at least three to keep communications even across the machines. I'm just assuming it will be plug and play.

Bandwidth is not going to be your limiting factor. :crazy_face:

I only run a single-VDEV pool here, and between large record sizes and an sVDEV for metadata and small files, my NAS digests about 400 MB/s at best over 10 GbE fiber. But you may be aiming for something different, like two pools, and your NVMe may be a lot faster than my ancient S3610s running on a SATA bus.

But I do wonder whether your use case at 40 GbE will result in noticeably faster performance than at 10 GbE or 25 GbE network speeds. It might be less headache and a lot less expensive hardware.
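
If you want to separate the network limit from the pool limit first, a quick iperf3 run between the NAS and a client is a cheap sanity check (the IP address here is just a placeholder, and I'm assuming iperf3 is available on both ends):

```sh
# On the NAS:
iperf3 -s

# On a client on the fast segment, with a few parallel streams:
iperf3 -c 192.168.1.10 -P 4 -t 30
```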

That's a really good question. My current max over 1 Gb is around 101-120 MB/s,
but it's running on Cat6 through a router and a straight switch.

The SX6036 was such good value that I thought I'd skip 10 Gb and just jump to 40-56 Gb. I also learned about the limits of the X11, but I had to work with what I have; the max RAM possible is 64 GB ECC. I was also considering using caching. I have two VDEVs… the second one was going to be the transport, set to replicate internally to the mechanicals: 3x 2 TB drives in a stripe (6,600 MB/s x3), and they are Crucial P5 Plus.

But there are some limitations: the X11 is only PCIe 3.0, not 4.0. I avoided a single card to unify all the drives; one is direct on board and two are on separate x4 PCIe NVMe cards, so four PCIe lanes per NVMe.

I will be just under 20 GB/s in a perfect world… but I doubt it, of course… we will see. I will keep you posted… since I was transferring so much data it made sense; it's only 65 TB on the 1st VDEV.

Well, you could look into building a single pool with that three-drive SSD set as a 3-way mirrored sVDEV. With large files, an sVDEV, and 1M record sizes, you will get close to double my speed, i.e. about 800 MB/s. That's close to the limit of 10 GbE.
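
Roughly what that would look like at the CLI, assuming the existing HDD pool is called tank and the three NVMe drives show up as nvd0-nvd2 (all placeholder names; on TrueNAS you would normally do this through the GUI instead):

```sh
# Add the three NVMe drives as a 3-way mirrored special (metadata / small-block) vdev
zpool add tank special mirror nvd0 nvd1 nvd2

# Large-file datasets benefit from 1M records
zfs set recordsize=1M tank/media
```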

Small files and fast datasets can be "forced" onto the sVDEV by setting the small-block cutoff of each dataset appropriately. (For example, set the cutoff for the VM dataset equal to the record size of that dataset.)
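
In OpenZFS terms that cutoff is the `special_small_blocks` dataset property; a minimal sketch, with dataset names assumed:

```sh
# VM dataset: set the cutoff equal to the record size,
# so every block of this dataset lands on the sVDEV
zfs set recordsize=16K tank/vms
zfs set special_small_blocks=16K tank/vms

# Bulk dataset: only metadata and small blocks (64K and under) go to the sVDEV
zfs set special_small_blocks=64K tank/media
```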

That in turn reduces your pool count to one and gives you the benefit of very fast metadata, and so on. If you need a SLOG, it also reduces your SLOG count by one.

The big thing with an sVDEV is that if the sVDEV fails, so does your pool. Better to use excellent disks and mirror them 3-way. I'm not sure how much slower an sVDEV is versus a dedicated 3-way pool, NVMe or SATA; I reckon it would not be that different.

Yes, unfortunately it's a home network and my funds are limited; one redundant drive is all I could afford. However, I am setting up a 2nd unit to act as a backup TrueNAS, designed to hold the same data. Big drives get expensive real fast…

I'm trying to get the Chelsio card now. One good thing I can use my InfiniBand card for is to configure my switch; it can be unlocked and made hybrid, and it's a Layer 3 switch. But the way I intended to use it is as an intranet transport between 3 or 4 stations, so I probably could have gone smaller. I think I have to console into the switch over the serial cable… we will see how that goes…
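
For the serial console I'm planning on something like this (port name and baud rate are assumptions; the switch manual will have the real settings):

```sh
# From a Linux box with a USB serial adapter:
screen /dev/ttyUSB0 115200

# Or from FreeBSD:
cu -l /dev/cuaU0 -s 115200
```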

Apologies, I misread your earlier post, I thought you had a two-VDEV HDD pool and were contemplating a second all SSD pool for the VMs.

With a single VDEV, even large files are unlikely to write faster than 400 MB/s, and that is with an sVDEV taking over all the metadata and the record size set to 1M. So 10 GbE would be plenty.

The SSD pool might be a lot faster and be able to max out 10 GbE, but I kind of doubt it. Let us know how the drives / pools / etc. work out for you.


I believe these cards have always been problematic in CORE. I've read many topics on the old forums saying pretty much as much.

In SCALE they appear to work just fine in Ethernet mode. At least mine seem to function without issue (ConnectX-4 cards in my case). Not sure if there are any caveats I've missed, though. I'm sure there are.

InfiniBand mode… any particular reason why? Is your ConnectX-3 card only capable of InfiniBand mode? ConnectX-4 cards are dirt cheap, and as others have alluded to, there is better-documented working hardware for use on CORE.


I assume you have tried setting the card to ETH in Windows or Linux and then moving it over?

The setting should be saved to flash (so it's not just a driver-level issue).
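
On a Linux box with the Mellanox firmware tools (MFT) installed, the usual sequence is something like this; the exact /dev/mst device path will differ on your system:

```sh
# Start the Mellanox service tools and list the device
mst start
mst status

# Query the current port configuration, then force both ports to Ethernet (2 = ETH, 1 = IB)
mlxconfig -d /dev/mst/mt4099_pciconf0 query
mlxconfig -d /dev/mst/mt4099_pciconf0 set LINK_TYPE_P1=2 LINK_TYPE_P2=2

# Reboot or reload the driver afterwards so the new link type takes effect
```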

Yes, I realized that was the issue with TrueNAS: InfiniBand is disabled, in TrueNAS Core at least. That's why the OS can see the card, but it fails to work inside TrueNAS.

I downloaded the tools in Windows; Windows 10 has a built-in driver and it works with no problems… Some cards are not seen in the shell, and some are seen, like the IBM CB194A on firmware 4.27. After looking in the forums, some say down-flashing to 4.22 will allow Ethernet mode. A card that is not ConnectX (no InfiniBand) would be ideal.
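
If I do try the 4.22 down-flash, it looks like the flint tool from the same MFT package is the way to do it (the firmware file name below is just a placeholder, and the image has to match the card's PSID or you can brick it):

```sh
# Check the current firmware version and PSID first
flint -d /dev/mst/mt4099_pci_cr0 query

# Burn the older image (placeholder file name)
flint -d /dev/mst/mt4099_pci_cr0 -i fw-ConnectX3-rel-4_22_xxxx.bin burn
```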

Achievable speed will depend a lot on your workload.

Just having NVMe and a big pipe will not let you magically use the bandwidth.
You will need multiple parallel writes to utilize that bandwidth.

I ran an X11 with a Gold 5122 before, with two NVMe mirrors, and was "only" able to get to some 3 GB/s (OK, sync NFS writes, so maybe that was the limiter). But that was while running multiple vMotions at the same time, which is the point here: a single process is usually limited to a single VDEV's write speed (it may be different on reads).
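
One quick way to check whether sync writes are the limiter (dataset name assumed, and only for a benchmark run, never for real VM data):

```sh
# Temporarily turn off sync on a scratch dataset and rerun the same transfer
zfs set sync=disabled tank/scratch

# ...compare throughput, then put it back immediately
zfs set sync=standard tank/scratch
```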

Also, don't forget the remote side needs to support the higher speed too.


Very true. Initially it will be only one NVMe, but it's a good experiment.