Ideas / Advice for upgrading my NAS with ~10 nvme drives?

Hi,

With the help of the TN community I built and have been using my Dell Precision T7500 based NAS since 2018. It’s been great, but I’m about to pick up ten 2TB WD Black SN850X NVMe drives, and I could use advice on the best way to use them with TrueNAS.

I can buy some junk adapter PCBs online that go M.2 to SATA, but that doesn’t seem like a good way to maximize SSD speeds. Are there PCIe adapters worth looking into? Should I be considering building an entirely different machine at this point?

Thanks in advance for any input!

There was a similar post recently. If your motherboard supports bifurcation, you can buy cheap cards that split a PCIe slot into M.2 NVMe slots, up to 4 disks per card. For the full claimed sequential speed of 7300 MB/s, you will need 4 PCIe 4.0 lanes per disk. For 10 disks, you will need 40 lanes of PCIe 4.0.

Does your CPU have 40+ lanes of PCIe 4.0? If not, I would recommend a smaller number of larger-capacity drives.
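
To make the lane math concrete, here is a rough sketch (my own back-of-envelope numbers, not from the thread: PCIe 4.0 runs at 16 GT/s per lane with 128b/130b encoding, roughly 1970 MB/s usable per lane):

```python
# Rough PCIe bandwidth math for 10 NVMe drives (approximate figures).
# PCIe 4.0: 16 GT/s per lane, 128b/130b encoding -> ~1969 MB/s usable per lane.
PCIE4_MBPS_PER_LANE = 16_000 * 128 / 130 / 8  # ~1969 MB/s

drives = 10
lanes_per_drive = 4
claimed_seq_mbps = 7300  # WD SN850X claimed sequential read

lanes_needed = drives * lanes_per_drive           # 40 lanes total
per_drive_mbps = lanes_per_drive * PCIE4_MBPS_PER_LANE

print(f"lanes needed: {lanes_needed}")
print(f"x4 Gen 4 ceiling per drive: {per_drive_mbps:.0f} MB/s "
      f"(claimed: {claimed_seq_mbps} MB/s)")
```

An x4 Gen 4 link tops out just under 7900 MB/s, which is why each drive needs its own four lanes to hit the claimed 7300 MB/s.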

1 Like

Thank you Jorsher.

My T7500 does not appear to support bifurcation, which would have been a convenient option (buy 8 of the NVMe drives and just replace my SATA SSDs).

So I guess I’m looking at building a second NAS. I found this reddit post about doing so, and then from some searching found this supermicro box:

24 NVMe disks would put me close to my existing NAS capacity. That’s pretty tempting right off the bat.

Listing the specifications would have been nice rather than assuming that readers know what a Precision T7500 is, or are willing to browse through the whole old thread to find out.
Fewer (second-hand) U.2 drives of larger capacity would likely serve you better than a lot of small M.2 drives.

You can look into PCIe switches, but these are costly.

If I correctly understand that this is a DDR3 server and you’re considering going full NVMe, an upgrade is probably in order… (But quad-node Xeon Scalable may be pushing it a bit too far.)

There are PCIe adapters that do not require bifurcation. A bit expensive though.

2 Likes

You can spend a couple hundred on one, connect 4 × M.2/U.2, and buy cheaper drives, since it won’t have the bandwidth for the drives he selected :slight_smile:

1 Like

Thank you all for the advice. I apologize for not having more information ready, but the opportunity to buy these new-in-box drives at 1/2 price appeared suddenly and I was caught unprepared.

So at this point I’ve had a breather and (hopefully) stopped jumping the gun on a spur-of-the-moment NAS build. I understand now that the complete blade servers I was looking at were all U.2 NVMe format, and converting each M.2 SSD with an adapter more or less removes the price incentive of buying an armful of them.

And instead of talking about a “replacement NAS” I should have made a distinction: my existing TrueNAS box has spinning storage, but also “apps” running on a pool of 500 GB SATA SSDs, and that pool is tapped out.

So a smaller-scope, near-term project would be to see if I can replace those four SATA disks with these M.2 drives to expand the pool. That would be cool.

According to the Dell forum, my Precision T7500 has 32 PCIe lanes:

  • PCIe x8 Gen 2
  • PCIe x16 Gen 2 (x8 Gen 2)
  • PCIe x8 Gen 2
  • PCIe x16 Gen 2 (x8 Gen 2)
  • PCIe x4 Gen 1

I have two of these Dell H310 SAS HBA cards (pdf specs) installed. The PDF says 8-lane PCIe 2.0. One of them currently handles only the four SATA disks.

I’m considering keeping the two H310 HBA cards for future spinning-disk expansion and adding two x8 PCIe to dual-M.2 adapters (possible product?) to get four of these NVMe drives in the system. But now I’m unsure how to confirm compatibility.
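
One low-risk way to see what a card actually negotiates once it’s seated (a hypothetical sketch, assuming a Linux shell on the TrueNAS box; `current_link_speed` and `current_link_width` are standard sysfs attributes on modern kernels) is to read the link state from sysfs:

```shell
# Print the negotiated PCIe link speed and width for each PCI device.
# A card seated in one of the x8 Gen 2 slots should report "5.0 GT/s" and x8.
for dev in /sys/bus/pci/devices/*; do
    if [ -r "$dev/current_link_speed" ] && [ -r "$dev/current_link_width" ]; then
        printf '%s: %s x%s\n' "${dev##*/}" \
            "$(cat "$dev/current_link_speed")" \
            "$(cat "$dev/current_link_width")"
    fi
done
```

If an adapter negotiates a narrower width or lower speed than the slot advertises, that’s usually the first sign of a compatibility problem.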

You can find switched adapter cards holding 4 M.2 drives from a x8 slot, or possibly 8 M.2 from a x16 slot, so you’d be almost there. But your Precision T7500 will run everything at Gen 2 speed, which may be less than satisfactory overall.

Browse through BIOS options for bifurcation…

1 Like

Anyone ever seen bifurcation on a PCIe Gen 2 board?

2 Likes

Ultimately, the initial goal is not worth pursuing with your current hardware. You don’t have enough lanes for 10 × U.2/M.2 without spending a lot of money on switching cards. Even if you do that, you’ll be limited to PCIe 2.0 speeds (500 MB/s per lane).
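
To put numbers on that (a quick sketch using the 500 MB/s-per-lane figure above; the slot and drive-count values are just the T7500 scenario from this thread):

```python
# Shared-bandwidth math for a switch card in a PCIe 2.0 x8 slot.
GEN2_MBPS_PER_LANE = 500   # ~500 MB/s usable per PCIe 2.0 lane

slot_lanes = 8             # one x8 Gen 2 slot on the T7500
drives_on_card = 4         # four M.2 drives behind a switch card
claimed_mbps = 7300        # SN850X claimed sequential read

slot_ceiling = slot_lanes * GEN2_MBPS_PER_LANE   # 4000 MB/s for the whole card
per_drive = slot_ceiling / drives_on_card        # 1000 MB/s per drive if all busy

print(f"slot ceiling: {slot_ceiling} MB/s; "
      f"~{per_drive:.0f} MB/s per drive vs {claimed_mbps} MB/s claimed")
```

So even a single Gen 4 drive would saturate the entire x8 Gen 2 slot, and four drives sharing it would each see well under a seventh of their rated speed.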

If I were you and wanted to improve the performance and capacity of the app pool, I would just buy 2-4 enterprise 1.92TB SATA SSDs from eBay and mirror them. If I wanted to add additional disks and had the physical space, I’d buy a cheap HBA and add some SATA SSDs.

The server you linked is not really intended for what you want to do. I don’t think spending $100-200 on an M.2/U.2 switching card only to be limited to PCIe 2.0 speeds would be worth the trouble. For the price of the card and disks, you could probably find a more modern used server.

2 Likes

Supports 12 M.2

1 Like