Has anyone tried the Dell R730xd or R740xd with NVMe?

i’m looking to get the SFF R730xd and also get the NVMe kit for the 4 bays at the end. the regular SFF bays will be for VM storage and the U.2s will be for a Steam library.

i’m not finding much from google about people who have tried. has anyone made this work? will truenas recognize the drives and be able to pool them? will it work with the regular NVMe kit and an HBA330, or will i have to get a tri-mode HBA card?

edit1: has anyone installed truenas on the BOSS cards from the 14th gen? i heard they run in hardware RAID 1 automatically, wondering if that’s gonna be an issue

There’s no reason for it not to work, with the caveat that Gen 13s are picky about booting from NVMe.

Yes, PCIe and SATA/SAS are completely independent in U.2 backplanes, as used by Gen 13 and Gen 14 Dells.

Hell no, stay away. Not only would it not work, it’d be a piece of crap if it did. Tri-mode is nothing but a scam.

I’m actually going to be deploying a few R630s (same basic concept) with U.2 SSDs around the end of this week. Feel free to remind me then to let you know how it went.

i’m booting from the rear SSDs or the BOSS cards if i go with the R740xd so i think i should be fine. i am a little worried about the BOSS card though because i hear it does hardware RAID 1 automatically

ok i wasn’t sure how truenas would pick them up. if they show up as individual drives then that’s perfect for truenas, right?

i’ll remind you then. i don’t have the server yet to do any testing so it’d be much appreciated, thank you

Yes, it’s really the only way.

I have an R630 with the NVMe kit and HBA330, though I’m using it as a Proxmox host rather than a NAS. The U.2 drive just shows up like any other NVMe drive:

root@pve3 ➜  ~ lsblk
NAME                                                                                           MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda                                                                                              8:0    0 232.9G  0 disk
├─sda1                                                                                           8:1    0  1007K  0 part
├─sda2                                                                                           8:2    0   512M  0 part
└─sda3                                                                                           8:3    0 223.1G  0 part
sdb                                                                                              8:16   0 232.9G  0 disk
├─sdb1                                                                                           8:17   0  1007K  0 part
├─sdb2                                                                                           8:18   0   512M  0 part
└─sdb3                                                                                           8:19   0 223.1G  0 part
sdc                                                                                              8:32   0   3.6T  0 disk
├─sdc1                                                                                           8:33   0   3.6T  0 part
└─sdc9                                                                                           8:41   0     8M  0 part
sdd                                                                                              8:48   0   3.6T  0 disk
├─sdd1                                                                                           8:49   0   3.6T  0 part
└─sdd9                                                                                           8:57   0     8M  0 part
sde                                                                                              8:64   0   3.6T  0 disk
├─sde1                                                                                           8:65   0   3.6T  0 part
└─sde9                                                                                           8:73   0     8M  0 part
rbd0                                                                                           251:0    0    18G  0 disk
rbd1                                                                                           251:16   0     4G  0 disk
rbd2                                                                                           251:32   0     2G  0 disk
rbd3                                                                                           251:48   0     8G  0 disk
rbd6                                                                                           251:96   0     4M  0 disk
nvme0n1                                                                                        259:0    0   3.6T  0 disk
└─ceph--c6e2ac5f--0c46--4151--9ff5--c234b63a8277-osd--block--19a4db5e--8cab--408e--b02e--f67caa56bac6
                                                                                               252:0    0   3.6T  0 lvm
root@pve3 ➜  ~
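If you want to sanity-check from the shell once TrueNAS is on the box, the drives should also enumerate with the usual NVMe tooling; something along these lines (nvme-cli is available on SCALE/Debian as far as I know, and the output will obviously differ on your hardware):

nvme list                          # one line per NVMe controller/namespace with model and capacity
lspci | grep -i "non-volatile"     # the U.2 drives show up as PCIe storage devices behind the switch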

ohhhh awesome! i think if it shows up in proxmox it’ll show up in truenas, since my understanding is that they’re both Debian with ZFS on top of it

That part shouldn’t really be surprising; they’re just PCIe devices connected (via a switch, for signal integrity reasons) to a PCIe port.

i almost forgot about this but has anyone installed truenas on the BOSS cards from the 14th gen? i heard they run in hardware RAID 1 automatically, wondering if that’s gonna be an issue

Those things are too rich for my tastes, so no first-hand experience. They’re probably fine for a boot device, with advantages and disadvantages relative to mirroring using ZFS.

The 14G Dell line is still using the BOSS-S1 I believe; the Marvell 88SE9230 on it is capable of independent drive mode (and even passes TRIM commands) but boot devices are a case where I’m not opposed to running things in hardware-backed RAID modes.

It’s got some quirks under CORE/BSD but seems to be fine under SCALE/Linux. Enable UEFI mode, configure your virtual disk through the iDRAC lifecycle manager or menus, and install to it - you’ll then get the iDRAC-based alerts if an M.2 device fails.
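Once the virtual disk is built it presents to the OS as a single disk rather than two separate M.2 sticks, so don’t be alarmed when only one device shows up. A quick way to confirm what the OS sees (the model string here is from memory and may differ by firmware):

lsblk -o NAME,SIZE,MODEL           # the BOSS virtual disk appears as one drive, typically with a DELLBOSS VD model string
smartctl --scan                    # lists what sits behind the Marvell controller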


thank you! i will remember this when i do the truenas install

I have an R730xd with the rear enablement kit, H330 in IT mode and the NVME enablement kit installed.

The rear bays have a couple of Samsung 860 Pros for the OS, bays 0-19 are full of SAS and SATA drives, and bays 20-23 have the Kioxia NVMe drives.

Everything shows up as it should. I set up the Kioxias in a RAID-Z1. Performance has been an issue over an NFS share with a 25G NIC. I think I am having NIC issues and have another on order.
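A raw iperf3 run between the TrueNAS box and the XCP-ng host should show whether the 25G link itself is part of the problem before the replacement NIC goes in (the hostname below is a placeholder):

iperf3 -s                          # on the TrueNAS side
iperf3 -c truenas.lan -P 4 -t 30   # on the XCP-ng side; parallel streams help fill a 25G link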

Also, I have 8 additional NVMe drives in cheap Amazon PCIe adapters. The drives are configured in two other pools. NVMe performance is amazing and I can’t wait for prices on the R740xd to come down a bit more. The more NVMe the better!

thank you for the amazing intel! what do you use your NVMe pool for just out of curiosity?

Note that while the Dell H330 does passthrough natively, the OEM firmware has poor queue depth - crossflashing or replacing with an HBA330 might improve this.

Typically NFS will demand sync writes, which could be impacted by network latency and the cache flush performance of the Kioxia NVMe drives. What model are they?
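If you want to confirm that sync writes are the culprit, compare a synchronous and an asynchronous fio run locally on the pool before blaming the network (the dataset path and job sizes below are just examples):

zfs get sync tank/nfs-share        # what the shared dataset is currently set to
fio --name=syncwrite --directory=/mnt/tank/nfs-share --rw=write --bs=128k --size=2G --fsync=1
fio --name=asyncwrite --directory=/mnt/tank/nfs-share --rw=write --bs=128k --size=2G

A large gap between the two runs points at flush latency on the drives rather than the 25G network.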

I use 4 Samsung 970 EVO Plus M.2 PCIe drives in the cheap adapter cards for app storage.

4 Samsung 980 Pros in the same adapter cards for a lightning-fast SMB network share to my workstations.

The 4 Kioxia U.2 drives in the front are for an NFS share for XCP-ng hypervisors.

Thanks for the response @HoneyBadger

I think the H330 I am running is crossflashed. I bought it from ArtofServer on eBay.

Thanks for the tips. They are KCM5DRUG3T84.

I mistyped; I am running 25G networking: a Ubiquiti aggregation switch and DAC cables to all the servers. Performance on the Kioxia NFS share to an XCP hypervisor has been miserable. The XCP box has four 1TB M500 drives in a RAID 10 for local VM storage. The NFS performance is waaaay slower than that array.

Don’t use the BOSS RAID. If you look at the specs, it doesn’t support patrol read, which means it doesn’t do automatic background consistency checks. You’d be better off using a ZFS mirror across the 2 BOSS SSDs for the TrueNAS boot pool.
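For what it’s worth, the SCALE installer will build that mirror for you if you just select both M.2 devices at install time, and the state of the boot mirror can be checked afterwards with (boot-pool is the default pool name on SCALE):

zpool status boot-pool             # should show a mirror vdev with both BOSS SSDs online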

If the OP doesn’t mind, I have a similar question and hope that @ericloewe can shed some light… I’m building an off-lease Dell now and want to use Gen 3 2.5" NVMe SSDs.
Specs:
Dell PowerEdge R740xd 24-Bay NVMe 2.5" 2U Rackmount Server
1024GB [16x 64GB] DDR4 PC4-2666V ECC RDIMM
2x Intel Xeon Gold 6246 3.3GHz 12 Core 24.75MB 10.4GT/s 165W 2nd Gen Processor
16x Dell 3.2TB NVMe SSD 2.5" Gen3 MU Solid State Drive [51.2TB Raw]
Dell Dual Port 10GBASE-T + Dual Port 1GBASE-T rNDC | Intel X550 I350
PCIe Slot: Dell Dual Port 25GB SFP28 PCI-E CNA | Intel XXV710-DA2
PCIe Slot: Dell Dual M.2 6G PCI-E BOSS-S1 Controller + 2x Dell PE 120GB SATA SSD M.2 6Gbps RI Solid State Drives
PCIe Slot: Dell PE PCI-E 12x NVMe Drive Expander Card
PCIe Slot: Dell PE PCI-E 12x NVMe Drive Expander Card
2x Dell PE 1600W 100-240V 80+ Platinum AC Power Supplies

Does the Dell PE PCI-E 12x NVMe Drive Expander Card pass the disks through? It seems to be an HBA. Thanks!

We’ve had an NVMe-based PE R740xd system for more than 4 years now. In its current configuration it has 24 x 15.36TB Micron 9300 Pro NVMe drives. We use it mostly as high-performance virtual disk storage via NFSv3. Several years ago there were some issues around the number of drives the system would recognize, hot-swap behavior, and overall performance, but those were mostly the result of how new NVMe drivers were to FreeBSD at the time and have since been rectified.

“Does the Dell PE PCI-E 12x NVMe Drive Expander Card pass the disks through? It seems to be an HBA. Thanks!”

Yes, it does. The PCIe “switches” or “expander cards” that Dell ships with NVMe-enabled servers of that generation pass the drives through on TrueNAS Core without any issues.

1st and 2nd gen Intel Xeon Scalable CPUs don’t have enough spare PCIe lanes to give all 24 drives their own dedicated x4 links, so the expander cards are a requirement: each card hangs off a x16 slot, which means you only get 32 upstream lanes in total from those two cards. In practice, most workloads won’t use that kind of bandwidth regularly, so it should be just fine.
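If you want to verify the upstream link on those switch cards once the system is running, lspci can show the negotiated width on each switch’s upstream port (the bus address below is a placeholder; find yours with the first command):

lspci | grep -i "pci bridge"               # the NVMe switch cards enumerate as PCIe bridges
lspci -s 3b:00.0 -vv | grep -i lnksta      # LnkSta should report Width x16 on the upstream port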