U.2 NVMe backplanes

Hi all,

I’m trying to spec some hardware to replace a spinning rust TrueNAS Core system with U.2 NVMe drives and running TrueNAS Scale.

I know what drives I want, I know what HBA(s) I want, but I need a little advice on backplanes and enclosures.

I’m currently looking at a ZhenLoong “8 Bay 2.5-inch NVMe U.2 SSD Hot-Swappable Module Cage Gen4 32Gb Backplane SFF8654” (it installs in the space of two DVD-ROM bays) from ZhenLoong on AliExpress (sorry, can’t add URLs here), but I was concerned by the “32Gb backplane” tag. As far as I’m aware, 32Gb (Gbit/s? or GByte/s?) isn’t a huge amount for a backplane serving 8 drives, is it? But since the drives are paired so that 8 drives get presented on 4 SFF-8654 interfaces, is it 32Gb per interface?
Has anyone got one of these who can do some benchmarking? I primarily want IOPS from this setup, so I was considering two HBAs such as the Broadcom 9670W, but if the backplane hamstrings the install then I may need to think again.
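As a sanity check on that “32Gb” figure, here’s a hedged back-of-envelope calculation (my own numbers, not the vendor’s): PCIe Gen4 runs 16 GT/s per lane with 128b/130b encoding, and a U.2 drive normally gets a x4 link. If the backplane pairs drives so that each one only gets x2, a per-drive cap of roughly 32 Gbit/s would line up with the listing.

```python
# Back-of-envelope PCIe Gen4 bandwidth math (assumptions, not vendor specs):
# 16 GT/s per lane, 128b/130b line coding, so ~15.75 Gbit/s usable per lane.

GEN4_GTS = 16.0          # GT/s per Gen4 lane
ENCODING = 128 / 130     # 128b/130b coding overhead

def lane_gbits(lanes: int) -> float:
    """Usable Gbit/s for a Gen4 link of the given width."""
    return GEN4_GTS * ENCODING * lanes

# One U.2 drive at its full x4 width:
x4 = lane_gbits(4)   # ≈ 63 Gbit/s ≈ 7.9 GB/s
# If "32Gb" means each drive only gets x2 (two drives sharing one x4 link):
x2 = lane_gbits(2)   # ≈ 31.5 Gbit/s ≈ 3.9 GB/s

print(f"x4 per drive: {x4:.1f} Gbit/s ({x4 / 8:.1f} GB/s)")
print(f"x2 per drive: {x2:.1f} Gbit/s ({x2 / 8:.1f} GB/s)")
```

So “32Gb” is most plausibly a per-drive (x2) or per-connector figure rather than a total for all 8 drives; only benchmarking would confirm which.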

When you joined the forum, you got a PM from @TrueNAS-Bot inviting you to take a new-user tutorial. Find that PM, respond to it, and complete the tutorial; that will increase your user trust level and allow you to do a variety of things, including posting links and images.

1 Like

Hi! To find out what I can do, say @truenas-bot display help.

OK so the URL is https://www.aliexpress.com/item/1005008902570782.html?spm=a2g0o.productlist.main.2.5d37WYoCWYoClY&algo_pvid=7b06708e-2908-47d0-bcf5-cba5ef0e55ad&algo_exp_id=7b06708e-2908-47d0-bcf5-cba5ef0e55ad-1&pdp_ext_f={"order"%3A"-1"%2C"eval"%3A"1"}&pdp_npi=4%40dis!GBP!133.19!133.19!!!172.10!172.10!%4021010c9a17472212782236807e1699!12000047133463293!sea!UK!0!ABX&curPageLogUid=kH6FmP7CtOJZ&utparam-url=scene%3Asearch|query_from%3A

With NVMe drives you just want PCIe lanes, or a PCIe switch, but NOT an HBA. Forget Tri-Mode.

2 Likes

This. Stay away from the tri-mode adapters. In most cases you are hindering performance. Not only that, but it will likely be cheaper in the end to just buy an NVMe backplane.

So did you make a choice yet? I’m looking for ways to add a U.2 drive to my Asus W680. I believe I have a SlimSAS port on my board, but if I remember correctly it’s capped at 2 GB/s, though still useful. I still have a PCIe 5.0 x8 slot left. So is HBA-to-U.2 possible or not?

Unfortunately I learned the word bifurcation after buying my current motherboard.

Possible, (no) thanks to Broadcom’s greed, but advised against.
Wire the lanes with suitable cables or adapters, or use a PCIe switch.

You must remember incorrectly because the “cap” does not make sense…

The Asus WS W680-ACE has a few PCIe slots, and a SlimSAS port (chipset lanes). So use an adapter


or get an SFF-8654 4i to SFF-8639 cable

Sorry, I phrased that like a two-year-old. I meant to say that I believe the Asus W680 SlimSAS port is capped at 2 GB per second, though I forget where I read it. I wish the PCIe 5.0 adapters weren’t absolutely ridiculous in price for the PLX switch cards, or even the 4.0 cards. My next motherboard will definitely have full x4x4x4x4 bifurcation; it really saves you money when everything is said and done. But great advice though… much appreciated!

Have a look at the Icy Dock stuff… the tri-mode HBAs are silly for U.2.


I think others here have correctly warned me (and you) off of HBAs for U.2.
I went for a Supermicro A+ system in the end, which has the drives directly connected.
I’m now unsure what RAID level I should be using with U.2s. Normally with spinning rust I’d go RAID10, but will RAIDZ1 or Z2 be performant enough that I don’t have to give away half my capacity?

RAIDZ1 would be fine, in smaller (3-wide) vdevs. How many drives?

8 drives of 7.68 TB each, enterprise drives so full power-loss protection.

Why?

Is the issue the Tri-Mode HBA or the Tri-Mode backplane?

2x 4-wide Z1 vdevs would be a place to start without sacrificing too much to redundancy or performance. It will depend on the workload.
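For a rough feel of what each layout costs in capacity, here’s a hedged comparison for 8 × 7.68 TB drives (raw space only; it ignores ZFS metadata, RAIDZ padding, and slop space, which shave a bit more off in practice):

```python
# Rough usable-capacity comparison for 8 x 7.68 TB drives,
# ignoring ZFS metadata, RAIDZ padding, and slop space.

DRIVES, SIZE_TB = 8, 7.68

def raidz_usable(vdevs: int, width: int, parity: int) -> float:
    """Raw usable TB for `vdevs` RAIDZ vdevs of `width` drives each."""
    return vdevs * (width - parity) * SIZE_TB

mirrors = (DRIVES // 2) * SIZE_TB      # 4 x 2-way mirrors (RAID10-style)
z1_2x4  = raidz_usable(2, 4, 1)        # 2 x 4-wide RAIDZ1
z2_1x8  = raidz_usable(1, 8, 2)        # 1 x 8-wide RAIDZ2

print(f"4x mirrors : {mirrors:.2f} TB")   # 30.72 TB
print(f"2x 4w Z1   : {z1_2x4:.2f} TB")    # 46.08 TB
print(f"1x 8w Z2   : {z2_1x8:.2f} TB")    # 46.08 TB
```

The 2×4-wide Z1 and single 8-wide Z2 layouts give the same raw capacity, but the two Z1 vdevs give you two vdevs’ worth of IOPS, while the Z2 tolerates any two drive failures.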

With respect to performance, it’s the HBA. NVMe was designed to provide direct access to storage; Tri-Mode ruins that by running NVMe drives through (or should I say “under”?) the SCSI bus… It works, until it doesn’t.


But Tri-Mode backplanes are an issue as well because they require U.3 drives and are incompatible with U.2. Wiring drives to a PCIe switch or directly to PCIe lanes takes U.2 or U.3.

Don’t feed the Broadcom Beast.

A single PCIe Gen 4/5 NVMe drive has more raw I/O than the 12G SAS bus. So setting up a new build with 100% NVMe behind an HBA is leaving a whole bunch of performance on the table. The Tri-Mode stuff is more for adding some NVMe performance to a setup that’s mostly a pile of rust.
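To put numbers on that claim, a hedged back-of-envelope comparison (nominal link rates and encoding overheads only, not real-world drive throughput):

```python
# Rough per-link throughput comparison (nominal rates, not benchmarks):
# SAS-3: 12 Gbit/s per lane with 8b/10b encoding  -> ~1.2 GB/s usable.
# PCIe Gen4: 16 GT/s per lane with 128b/130b     -> ~1.97 GB/s usable.

sas3_gbs  = 12 * (8 / 10) / 8      # GB/s per 12G SAS lane
gen4_lane = 16 * (128 / 130) / 8   # GB/s per PCIe Gen4 lane
nvme_x4   = gen4_lane * 4          # one U.2 drive at its full x4 width

print(f"12G SAS lane : {sas3_gbs:.2f} GB/s")
print(f"Gen4 x4 NVMe : {nvme_x4:.2f} GB/s")
```

So one Gen4 x4 U.2 drive has roughly 6–7× the link bandwidth of a single 12G SAS lane, before you even consider the latency and queueing advantages of skipping the SCSI stack.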

Thanks - that’s what I was going to benchmark.