NVMe Server Setup

Hello Guys,

So, I want to install a couple of Intel D7-P5600 6.4TB U.2 NVMe drives in a new server I’m planning to build. To do that, I’ve found a few ways:

  • Use a direct cable from the motherboard’s onboard NVMe connectors to the drives
  • Use an HBA card to connect the NVMe drives, either with a direct cable or through a backplane
  • Use a plain PCIe card with SFF ports to connect the NVMe drives, either with a direct cable or through a backplane
  • Use a PCIe carrier card that holds the NVMe drives and plugs directly into a PCIe slot on the motherboard.

Now, I have some questions about this. I’ve come to know that the fewer devices there are between the NVMe drive and the CPU, the better the NVMe performance. For example, connecting an NVMe drive via an HBA is considered a very poor setup by forum users, since it adds extra latency and is reportedly prone to being unstable.
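(Side note: on a Linux box you can actually count what sits between a drive and the CPU by resolving the drive’s sysfs path. Just a rough sketch I’d use, nothing specific to any board:)

```python
# Rough sketch (Linux only): list every PCI function between each NVMe
# controller and the CPU's root complex by resolving its sysfs path.
import glob
import os

for ctrl in sorted(glob.glob("/sys/class/nvme/nvme[0-9]*")):
    # /sys/class/nvme/nvmeX/device is a symlink into the PCI device tree,
    # e.g. /sys/devices/pci0000:00/0000:00:03.1/0000:41:00.0
    pci_path = os.path.realpath(os.path.join(ctrl, "device"))
    parts = pci_path.split("/")
    root = next(i for i, p in enumerate(parts) if p.startswith("pci"))
    hops = parts[root + 1:]  # root port, any bridges/switches, then the drive
    print(f"{os.path.basename(ctrl)}: {len(hops)} hop(s): {' -> '.join(hops)}")
```

A drive on CPU lanes typically shows only a root port plus the drive itself; anything behind a switch or HBA shows extra entries in between.)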

Even with a backplane, it still has to be connected to either a plain U.2 card with SFF ports or an HBA, and I’ve seen a lot of users out there who use HBAs. I’m not sure I’m remembering this right, but I think I’ve also seen 45Homelab using LSI HBA cards with their NVMe backplanes. So my question is: wouldn’t the HBA add latency here and reduce performance? Or does it not, because the drives are installed via a backplane?

I’m quite confused about this and am looking for a proper solution!

Thanks

Tri-Mode HBAs are a very bad idea, that’s for sure.

All you need are eight PCIe lanes, in two sets of four. What’s your motherboard? Which PCIe slots (or even M.2 slots) are available?
Ideally, there’s an x8 PCIe slot that can be bifurcated, in which case all you need is a passive adapter.
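Once the drives are in, it’s easy to confirm each one actually trained at x4 and full speed. A minimal check on Linux (rough sketch using the standard PCIe sysfs attributes; a Gen4 drive like the D7-P5600 should report 16.0 GT/s):

```python
# Quick sanity check (Linux): confirm each NVMe controller trained at the
# expected width/speed, e.g. x4 @ 16.0 GT/s for a Gen4 U.2 drive sitting
# on one half of a bifurcated x8 slot.
import glob
import os

def read_attr(dev: str, attr: str) -> str:
    with open(os.path.join(dev, attr)) as f:
        return f.read().strip()

for ctrl in sorted(glob.glob("/sys/class/nvme/nvme[0-9]*")):
    dev = os.path.join(ctrl, "device")  # symlink to the drive's PCI function
    print(f"{os.path.basename(ctrl)}: "
          f"x{read_attr(dev, 'current_link_width')} @ {read_attr(dev, 'current_link_speed')} "
          f"(max: x{read_attr(dev, 'max_link_width')} @ {read_attr(dev, 'max_link_speed')})")
```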

In general or in terms of NVMe only?

For installing two disks, right?

Planning to get X12SPI-TF or X12SPM-LN6TF from Supermicro

Haven’t decided yet on this.

So, if one has to install U.2 NVMe drives, the passive adapter is the best and preferred way?

That makes sense when installing a small number of NVMe drives, but what if one needs something like 16 of them? What options are there in that case?

You wrote “a couple”. I took that at face value.

16*4 = 64 lanes. EPYC or Sapphire Rapids Xeon can provide that from a single CPU; older Xeon Scalable, from dual CPU. Otherwise this will involve some PLX switches.
And probably a U.2 backplane to simplify cabling.
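The lane math itself is trivial: drive count times four, compared against however many CPU lanes you can actually route to slots or a backplane. A throwaway sketch with made-up numbers, just to illustrate:

```python
# Back-of-the-envelope lane budget. Numbers are illustrative only,
# not taken from any specific board or CPU.
LANES_PER_DRIVE = 4  # each U.2 NVMe drive wants a x4 link

def lane_budget(drives: int, cpu_lanes: int) -> None:
    needed = drives * LANES_PER_DRIVE
    print(f"{drives} drives x {LANES_PER_DRIVE} = {needed} lanes needed, {cpu_lanes} usable")
    if needed <= cpu_lanes:
        print("  fits: direct attach / bifurcated slots are enough")
    else:
        print(f"  short by {needed - cpu_lanes} lanes: the rest goes behind a PLX switch (or a second CPU)")

lane_budget(2, 64)    # the original "couple of drives" case
lane_budget(16, 48)   # 16 drives against 48 lanes actually routed to slots
```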

So, if I understand correctly: for a few NVMe drives, if you have the onboard option, you’re good to go.

If you don’t, use a passive adapter and no cabling is required.

If the slots are maxed out and you want to install more NVMe drives, a PLX switch is the way to go, right? And a backplane for easier cable management. Is that correct?

It gets more complicated if you’re using PCIe 4+.