Tri-Mode HBA?

I currently run the latest TrueNAS SCALE on a Supermicro X11SRL-F (PCIe 3.0) with 128 GB of RAM, an LSI 3000-8i HBA connected to four 4 TB Kingston DC600M SSDs in one pool, and three 8 TB Seagate IronWolf drives in a second pool. I was reading about tri-mode HBAs (Broadcom 9400 and 9500) and was wondering whether such an HBA would boost the performance of my SSDs. I don't think so, but I would like some input. I suspect that for more performance I should just go for an ASUS Hyper NVMe card, since my motherboard supports the necessary bifurcation. However, given how much RAM is available, I suspect the Hyper card might not be that much of an improvement. I run 10 Gb Ethernet.
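As a rough sanity check on whether the SSD pool is even the bottleneck, here's the back-of-the-envelope math (assuming the DC600M's rated ~530 MB/s sequential read per drive and an ideal striped read across all four; real pool layouts will read slower):

```python
# Does the SATA SSD pool already outrun a 10 GbE link?
# Assumptions: ~530 MB/s rated sequential read per DC600M (vendor spec),
# ideal 4-wide striped read, protocol overhead ignored on both sides.

DRIVES = 4
PER_DRIVE_MBPS = 530                 # MB/s per drive (assumed rating)
pool_read = DRIVES * PER_DRIVE_MBPS  # best-case pool sequential read

TEN_GBE = 10_000 / 8                 # 10 Gb/s -> 1250 MB/s line rate

print(f"pool ~{pool_read} MB/s vs 10 GbE ~{TEN_GBE:.0f} MB/s")
print("network-bound" if pool_read > TEN_GBE else "disk-bound")
```

By these numbers the four SATA SSDs can already saturate 10 GbE, so a faster HBA wouldn't show up over the network anyway.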

Tri-mode HBAs only accelerate (and allow) NVMe SSDs; they don't make SAS or SATA drives any faster.

It's not something we recommend unless you have developer skills… we've added tri-mode support internally for TrueNAS H-Series products, but we had to work out how to manage these specific devices. SAS/SATA-only is much easier.


If you really need full NVMe speed, you should go for something like this:

Here each slot has its own four lanes to one of the EPYC CPUs, and there are so many lanes that no PCIe switches are needed. There are still additional x16 slots on top of that.

Tri-Mode HBAs are a bit of an afterthought.

But if you look here…

… you may run into different brick walls.

I bet Linus has $60K worth of hardware there. While I'm a tech junkie, I am not that far gone. An EPYC board would be nice (I dream about one), but I just can't justify it. My X11SRL-F has one 16-lane slot and four 8-lane slots, all to the CPU. There's also a 4-lane slot to the PCH. The HBA and the Intel X710 take two of the 8-lane slots, so I have room for four NVMe drives in the 16-lane slot and two each in the remaining two 8-lane slots. I suspect that if I put an NVMe drive in the 4-lane slot on the PCH, it would slow down any vdev whose other members are attached directly to the CPU. I think I'll start with the two free 8-lane slots and see how it goes. I suspect this will saturate my 10G link. 25G Ethernet cards are relatively cheap, but I haven't seen any reasonably priced 25G switches (at least for my home deployment).
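For reference, the raw link rates behind that guess (PCIe 3.0 delivers roughly 985 MB/s per lane after 128b/130b encoding; real-world throughput will be lower):

```python
# Rough PCIe 3.0 link rates vs. the network speeds discussed above.
# ~985 MB/s per Gen3 lane after 128b/130b encoding; assumes full-rate drives.
LANE_MBPS = 985

nvme_x4 = 4 * LANE_MBPS          # one NVMe drive on an x4 link
two_in_x8 = 2 * nvme_x4          # two drives bifurcated out of one x8 slot

ten_gbe = 10_000 / 8             # 1250 MB/s line rate
twenty_five_gbe = 25_000 / 8     # 3125 MB/s line rate

print(f"one Gen3 x4 NVMe : ~{nvme_x4} MB/s")
print(f"two in an x8 slot: ~{two_in_x8} MB/s")
print(f"10 GbE line rate : ~{ten_gbe:.0f} MB/s")
print(f"25 GbE line rate : ~{twenty_five_gbe:.0f} MB/s")
```

Even a single Gen3 x4 NVMe drive (~3.9 GB/s of link bandwidth) is enough to saturate 10 GbE, and exceeds 25 GbE line rate too, so two drives in one x8 slot is plenty for this network.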

This is an update on my previous comments. I decided to dig further into my X11SRL-F motherboard before I ordered any additional hardware and NVMe drives. The documentation I had seemed to indicate that bifurcation was available, but at first I could not find the options in the BIOS that the docs referred to. I kept digging and found the reference. There are three bridges in the CPU that support this: one for the 16-lane slot, and two others that each support a pair of the 8-lane slots. The bridge for the 16-lane slot I can just set to 4x4x4x4; however, for the other two bridges, it's unclear whether 8x4x4 or 4x4x8 would be the right choice for a two-NVMe card. Since each of those bridges handles two slots, and each pair already has one slot in use, I was reluctant to play with it. Therefore, I'm going to get the ASUS Hyper M.2 Gen 4 card for the 16-lane slot; at least that probably won't screw things up. The setting in the BIOS turned out to be:
Advanced->Chipset->Northbridge->IIO->CPU1->. You need the block diagram to determine the proper bridge. Parts should be here in a few days; I will report back.
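Once the card is installed, a quick way to confirm the bifurcation setting took is to count the NVMe controllers Linux enumerated; a minimal sketch (uses the standard Linux sysfs path, which TrueNAS SCALE should expose, though I haven't confirmed on my box yet):

```python
# Count enumerated NVMe controllers. Each drive on the Hyper card should
# show up as its own nvmeN controller; seeing fewer than expected usually
# means the bifurcation setting didn't take for that slot.
from pathlib import Path

# Path.glob on a missing directory just yields nothing, so this is safe
# to run on any machine.
ctrls = sorted(p.name for p in Path("/sys/class/nvme").glob("nvme[0-9]*"))
print(f"{len(ctrls)} NVMe controller(s) found: {ctrls}")
```

`lspci` would show the same thing, but this avoids eyeballing a long device list.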
