SCALE with JBOD and NVMe - Question about setup

Hello everyone,

I've been using TrueNAS CORE for a long time, mostly on older hardware with the famous H200 HBA in a Dell R510. :wink:

Now I have a newer system with hardware from Lenovo.

It’s an SR635 V1 with a D1212 JBOD attached via a SAS HBA 440-8e, which is based on a Broadcom SAS3808 IOC.

My issue is that the 8 NL-SAS disks inside the JBOD sometimes show up as 8 disks, and in some other important places as 16 disks. :worried:

- lsblk correctly shows only 8 disks from the JBOD, although the sd* names run all the way from sda to sdq.
- /dev/disk/by-uuid/ correctly shows only 8 SCSI disks.
- The TrueNAS “Disks” page also shows only 8 disks (mostly sdj to sdq).
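
For reference, this is roughly how I compared the entries; duplicate paths to the same drive share serial and WWN (the device output here is from my box, yours will differ):

```sh
# List whole disks with serial and WWN; a second path to the same
# drive shows up as an extra sdX with identical SERIAL/WWN.
lsblk -d -o NAME,SIZE,TRAN,SERIAL,WWN

# Stable by-id names sidestep the sdX ambiguity entirely.
ls -l /dev/disk/by-id/ | grep -v part
```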

When I start to create a pool, I see 16 disks. Of course it would be a mistake to create the pool with the wrong disks (the duplicates share serial numbers). I checked lsblk and managed to add the correct 8 disks to a new pool via the GUI. I tried the same via the CLI with zpool, but then TrueNAS behaves a bit strangely.

I read a bit and checked storcli. Load-balancing mode is not supported, and multipathing cannot be disabled there, at least with the drivers shipped with TrueNAS SCALE.
I still need to test what happens if I disconnect a cable or switch off one of the controller boards inside the JBOD, but maybe you can advise how to solve this issue. (Maybe create the zpool on the CLI based on those UUIDs and import it?)
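
What I had in mind is something like this (just a sketch; the pool name, raidz2 layout, and WWNs are placeholders, and I know CLI-created pools may miss some settings the GUI would apply):

```sh
# Create the pool against stable by-id paths instead of sdX names.
zpool create -o ashift=12 tank raidz2 \
  /dev/disk/by-id/wwn-0x5000c500aaaaaaa1 \
  /dev/disk/by-id/wwn-0x5000c500aaaaaaa2 \
  /dev/disk/by-id/wwn-0x5000c500aaaaaaa3 \
  /dev/disk/by-id/wwn-0x5000c500aaaaaaa4 \
  /dev/disk/by-id/wwn-0x5000c500aaaaaaa5 \
  /dev/disk/by-id/wwn-0x5000c500aaaaaaa6 \
  /dev/disk/by-id/wwn-0x5000c500aaaaaaa7 \
  /dev/disk/by-id/wwn-0x5000c500aaaaaaa8

# Export, then import through the GUI (Storage -> Import Pool).
zpool export tank
```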

Related to this: I would like to use NVMe for the ZIL and metadata, maybe also cache (L2ARC). But as NVMe drives start at 960 GB, dedicating two or three of them to a ZIL and cache is simply overkill. The system currently has four 1.75 TiB NVMe drives.
Can I safely partition those NVMe drives and add the partitions as vdevs to the pool via the CLI?
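
Roughly like this, as a sketch only; the partition sizes, type codes, and device names are placeholders from my notes:

```sh
# Carve each NVMe into a small SLOG partition and a large remainder;
# bf01 is a commonly used ZFS partition type code.
sgdisk -n 1:0:+32G -t 1:bf01 /dev/nvme0n1
sgdisk -n 2:0:0    -t 2:bf01 /dev/nvme0n1
sgdisk -n 1:0:+32G -t 1:bf01 /dev/nvme1n1
sgdisk -n 2:0:0    -t 2:bf01 /dev/nvme1n1

# SLOG and cache can be single partitions; metadata needs a mirror.
zpool add tank log /dev/nvme0n1p1
zpool add tank cache /dev/nvme1n1p1
zpool add -f tank special mirror /dev/nvme0n1p2 /dev/nvme1n1p2
# -f because the mirror's redundancy differs from the raidz2 data vdevs.
```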

I have actually done this already, but I hesitate to put it into production, as the TrueNAS box will end up as a hardened Linux repository for Veeam. So it is a system where I need data integrity and, at times, high write or read loads. Essentially what Veeam describes in their vlog [How to build Secure Linux Repositories] with Ubuntu and ZFS, but with the beauty of a great GUI and overall system.

Thank you for any advice, help, and input.

I hope this is actually a 3008 HBA.

Dual pathing? Check the wiring, and use only one port.

Proper terminology would be “SLOG”.

Neither SLOG nor L2ARC requires redundancy. A special vdev would.

SLOG+L2ARC would be “safe” in the sense that both could always be removed to sanitise the pool, but the setup is unsupported. However, anything other than Optane is not likely to react well to simultaneous double duty for read-intensive (L2ARC) and write-intensive (SLOG) tasks.
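
Both can be removed from a live pool at any time, e.g. (device names are only examples):

```sh
# Log and cache vdevs are removable without destroying the pool.
zpool remove tank /dev/nvme0n1p1   # SLOG
zpool remove tank /dev/nvme0n1p2   # L2ARC
```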

Hello,

Thank you for your answers.

According to the docs, the Lenovo external SAS adapter is equivalent to a Broadcom HBA 9500-8e.

I would like to use two ports for redundancy. The point here is not the SAS cables, but that each controller board in the JBOD has only a single power source. So if one PDU fails, the JBOD is dead. With two cables I would have a redundant system.
The SAS cabling is fine; I double-checked the docs and storcli. storcli correctly shows 8 disks.

Somewhere I read something about multipath, but I didn't find the multipath daemon on TrueNAS.
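
This is how I looked for it (assuming the standard multipath-tools naming):

```sh
# Check whether multipathd is present on the system at all.
systemctl status multipathd
which multipath multipathd
```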

The hardware is already there, and it is an 8-core AMD EPYC 7003.
There is no chance to get Intel Optane. The NVMe drives are direct-attached PCIe 4.0 drives from Intel. I know from an S2D setup that they deliver 1.5 Gbit/s reads while simultaneously writing at 300 Mbit/s, with write latency below 3 ms.
To me that sounds reasonably fast, especially since I will use it as backup storage: either high writes or high reads, but rarely both at the same time. Or did I get that wrong?

Multipath is no longer supported by TrueNAS CORE/SCALE/Community Edition. You cannot have “redundancy” this way; if the wiring is set up for dual paths, look no further for the cause of the duplicate disks.

Why?

Then you do NOT need a SLOG. And L2ARC is, at best, questionable.