Hello everyone,
I’ve been using TrueNAS Core for a long time, mostly on older hardware with the famous H200 HBA in a Dell R510.
Now I have a newer system with hardware from Lenovo.
It’s an SR635 V1 with a D1212 JBOD attached via a 440-8e SAS HBA, which is based on a Broadcom SAS3808 IOC.
My issue is that the 8 NL-SAS disks inside the JBOD sometimes show up as 8 disks, and in some other important places as 16 disks.
lsblk correctly shows only 8 disks from the JBOD, while the sd* device names run from sda to sdq.
/dev/disk/by-uuid/ correctly shows only 8 disks.
The TrueNAS “Disks” page also shows only 8 disks (mostly sdj to sdq).
When I start to create a pool, however, I see 16 disks. Of course I must not build the pool with the wrong disks (the ones with duplicated serial numbers). I cross-checked with lsblk and managed to add the correct 8 disks to a new pool via the GUI. I tried the same via the CLI with zpool, but then TrueNAS behaves a bit strangely.
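For reference, this is roughly how I cross-checked the device names against serial numbers and WWNs (just a generic lsblk call, nothing TrueNAS-specific; the duplicated disks show up twice with the same serial/WWN, once per path):

    lsblk -d -o NAME,MODEL,SERIAL,WWN,SIZE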
I read a bit and checked storcli. Load-balancing mode is not supported, and multipathing cannot be disabled there, at least with the drivers shipped with TrueNAS SCALE.
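For completeness, this is the kind of storcli query I looked at (the controller index /c0 is an assumption, and the binary may be named storcli or storcli64 depending on how it is installed; on this HBA the relevant settings simply report as unsupported):

    storcli64 /c0 show all | grep -i -E 'load|path'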
I still need to test what happens if I disconnect a cable or switch off one of the boards inside the JBOD, but maybe you can advise how to solve this issue. (Maybe create the pool via the CLI based on those UUIDs and then import it, as sketched below?)
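Regarding the CLI idea, this is roughly what I have in mind; the pool name, layout and by-id names are only placeholders, not my real devices:

    zpool create -o ashift=12 tank raidz2 \
      /dev/disk/by-id/wwn-0x5000c500aaaaaaaa \
      /dev/disk/by-id/wwn-0x5000c500bbbbbbbb \
      /dev/disk/by-id/wwn-0x5000c500cccccccc
    zpool export tank

and then import the pool through the TrueNAS GUI afterwards. I am not sure how well TrueNAS copes with a pool it did not create itself, which is part of my question.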
Related to this: I would like to use NVMe for the ZIL (SLOG) and metadata, and maybe cache (L2ARC). But since NVMe drives start at 960 GB, dedicating two or three of them just to a SLOG and cache is simply overkill. The system currently has four 1.75 TiB NVMe drives.
Can I safely use this with TrueNAS if I partition those NVMe drives and add the partitions to the pool as vdevs via the CLI?
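To make the question concrete, here is a rough sketch of what I mean; the partition sizes, pool name and device names are made up for illustration:

    # one small SLOG slice, one larger special-vdev slice, the rest as L2ARC (sizes are examples)
    sgdisk -n 1:0:+32G  -c 1:slog    /dev/nvme0n1
    sgdisk -n 2:0:+800G -c 2:special /dev/nvme0n1
    sgdisk -n 3:0:0     -c 3:l2arc   /dev/nvme0n1
    # same layout on the second NVMe, then attach the partitions to the pool
    zpool add tank log     mirror /dev/nvme0n1p1 /dev/nvme1n1p1
    zpool add tank special mirror /dev/nvme0n1p2 /dev/nvme1n1p2
    zpool add tank cache          /dev/nvme0n1p3 /dev/nvme1n1p3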
I have actually done this already, but I hesitate to put it into production, because in the end this TrueNAS box will be used as a hardened Linux repository for Veeam. So it’s a system where I need data integrity and, at times, high write or read loads; basically what Veeam describes in their vlog [How to build Secure Linux Repositories] with Ubuntu and ZFS, but with the beauty of a great GUI and an integrated overall system.
Thank you for any advice, help, and input.