Recommended way to use empty PCIe slots for SSDs?

I am building out a new TrueNAS SCALE server to replace my old corn-cob/junk-gaming-PC rig. After over a year of good performance, I decided to put TrueNAS in a slightly upgraded home.

The specs of what I ordered are below, and while I welcome general “woah, dude, that won’t work” or any other comments on it, my specific question is about using empty PCIe slots for SSDs.

My motherboard does not support PCIe bifurcation, and I also don’t have the budget for fancy stuff, so my plan is to put one SSD in each of the 3 empty PCIe 3.0 x8 slots. But how?

Having in the past made the mistake of using a “dodgy SATA port multiplier” instead of an enterprise HBA, I don’t want to make the same kind of mistake now with a “dodgy PCIe NVMe SSD adapter” :sweat_smile:.

I have read up on it a little, and I think there are all sorts of interesting options in this space now, where you can pack a bunch of NVMe drives onto a card and let TrueNAS SCALE (or Linux in general) see them… but not without PCIe bifurcation support, and not without complexity/risk.

If I just want 1 SSD per PCIe slot, that seems… easy? I think with only 1 SSD per slot involved, the adapters can be a kind of transparent, no-driver affair, right?
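
Assuming that’s true, I figure I can sanity-check it after installation. The rough Python sketch below (my assumption: the standard Linux sysfs layout that TrueNAS SCALE exposes) just lists every NVMe controller the kernel sees together with the PCIe address it enumerated at; with truly passive adapters, each SSD should show up here on the stock nvme driver with nothing extra installed.

```python
#!/usr/bin/env python3
# Rough sanity check (assumes the standard Linux sysfs layout on
# TrueNAS SCALE): list every NVMe controller the kernel sees and the
# PCIe address it enumerated at. With passive adapters, each SSD should
# appear as its own PCIe device using only the in-kernel nvme driver.
from pathlib import Path

NVME_CLASS = Path("/sys/class/nvme")

for ctrl in sorted(NVME_CLASS.glob("nvme*")):
    pci_dev = (ctrl / "device").resolve()      # e.g. .../0000:82:00.0
    model = (ctrl / "model").read_text().strip()
    print(f"{ctrl.name}: {model} at PCIe address {pci_dev.name}")
```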

I do need it to fit into my 2U server chassis, so it needs to be fairly low profile. ChatGPT suggested the Ableconn PEXM2-130 Low-Profile M.2 NVMe SSD to PCIe 3.0 x4 Adapter, but where I live (Japan) that costs almost as much as the (used) server itself. Then again, I see the GLOTRENDS PA05 M.2 PCIe NVMe 4.0/3.0 adapter for about ten dollars.

So my questions are:

  1. Is there a recommended product in this space for TrueNAS SCALE?

  2. Am I wrong, by any chance, that I need motherboard support for PCIe bifurcation to use the multi-SSD PCIe cards?

Thanks for any pointers!

P.S. My new server specs are as follows; if I have calculated correctly, I will have three PCIe 3.0 x8 slots available after installing the HBA.

  • Chassis: SuperMicro SuperServer 6028U-TR4T+ 2U with 12 x 3.5” hot-swap bays
  • Motherboard: SuperMicro X10DRU-i+ (dual LGA2011-3 sockets)
  • CPUs: 2 x Intel Xeon E5-2690 v4 (14 cores, 28 threads, 2.6 GHz)
  • RAM: 8 x 16GB DDR4-2400MHz ECC RDIMM (Total 128GB)
  • HBA: SuperMicro SAS AOC-S3008L-L8I (LSI SAS3008 chipset, IT mode)
  • Storage Drives: 12 x HDDs (specifications TBD)
  • NVMe SSDs: 3 x NVMe SSDs in PCIe 3.0 x8 slots via… something? :woman_shrugging:
  • Networking: 4 x 10GbE RJ45 ports (via AOC-2UR6-I4XT riser card)
  • Power Supplies: Dual redundant 1000W units
  • PCIe riser slots: Configured with risers (AOC-2UR6-I4XT, RSC-R2UW-2E8E16, RSC-R1UW-E8R)
  • No GPU

There are PCIe-to-NVMe cards that can handle 4 drives without bifurcation, but they are a lot more expensive (~$200).
In my limited experience, for lack of a native NVMe slot, I’m using 2 cheap adapters like the one you posted (the GLOTRENDS), paid about €2 each, without any issue. And since I switched to a CPU without integrated graphics, I no longer use the main PCIe x16 slot, so I can drop a GPU in for maintenance if needed (I don’t have IPMI or a KVM).
So IMO it’s viable and worth it.
I’ve never tested with 3 drives; I don’t need to, and honestly I don’t want to put more in the server than I need.

1 Like

I am pretty sure this motherboard does support bifurcation. Even my X9SRi-F does with the latest BIOS.

3 Likes

There’s not really any such thing as a “dodgy PCIe NVMe SSD adapter”.
There are PCIe switches, which are expensive but reliable, and would let you run multiple drives from a single slot without bifurcation.
And there are passive adapters, which are basically just traces from one connector to another, so “a kind of transparent, no-driver affair” indeed.
With x8 → x4x4 bifurcation handled by the board, you can use these for multiple drives per slot as well.
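
If you go the bifurcation route, one way to confirm it actually took effect is to print the negotiated link for each NVMe controller. A minimal sketch, assuming the usual current_link_width / current_link_speed attributes the Linux kernel exposes under /sys; after an x8 → x4x4 split, each drive should report a x4 link:

```python
#!/usr/bin/env python3
# Minimal sketch: print the negotiated PCIe link speed and width for each
# NVMe controller, using the sysfs attributes the Linux kernel exposes.
# After an x8 -> x4x4 split, each SSD should report a x4 link.
from pathlib import Path

for ctrl in sorted(Path("/sys/class/nvme").glob("nvme*")):
    pci = (ctrl / "device").resolve()                  # PCI device directory
    width = (pci / "current_link_width").read_text().strip()
    speed = (pci / "current_link_speed").read_text().strip()
    print(f"{ctrl.name} ({pci.name}): x{width} @ {speed}")
```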

1 Like