Advice on storage design, please

Hi, I’m new to TrueNAS and I’m looking for advice on the right configuration for my needs.
I’m after maximum usable capacity with protection against a single drive failure, and I don’t want to mirror. I’m not sure (a) whether I’m better served by RAIDZ or dRAID, or (b) which VDEVs I should be creating beyond the data VDEV.
I will be running several containers, 1-2 VM guests, and some SMB or NFS network shares.

I’ve installed ElectricEel-24.10.2.1 on NVMe and I have a PC with a Broadcom 9207-8e (LSI SAS2308) connected to an HPE Nimble ES1 disk shelf. In the shelf I have:

  • 5 x 20 TB SATA
  • 5 x 6 TB SATA
  • 1 x 4 TB

In the PC I also have two NVMe drives:

  • 500 GB NVMe with TrueNAS installed
  • 1 TB NVMe with Ubuntu from the previous install, which I want to virtualise; the NVMe will then be free

dRAID is only relevant for enterprise deployments with several tens of drives.

Probably none. But you should actually deploy TWO pools:

  • HDDs in raidz# for SMB and NFS shares;
  • SSD in mirror (or single drive backed up frequently to the HDD pool) for VMs. VMs fare badly on raidz.

You can stripe two 5-wide raidz vdevs. I’d actually recommend raidz2 given the size of these drives.
The 4 TB drive is useless.
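
To put rough numbers on this, here is a quick back-of-the-envelope sketch. It assumes the two 5-wide vdevs proposed above, one per drive size, and ignores metadata, padding and the usual free-space headroom, so real usable figures will come in lower:

```python
# Approximate usable capacity of a pool made of two 5-wide raidz vdevs
# (5 x 20 TB and 5 x 6 TB), comparing raidz1 and raidz2.
# Rough numbers only: metadata, padding and free-space headroom are ignored.

def raidz_usable(drive_tb: float, width: int, parity: int) -> float:
    """Usable TB of one raidz vdev = (data drives) x (drive size)."""
    return (width - parity) * drive_tb

for parity, label in [(1, "raidz1"), (2, "raidz2")]:
    big = raidz_usable(20, 5, parity)    # the 5 x 20 TB vdev
    small = raidz_usable(6, 5, parity)   # the 5 x 6 TB vdev
    print(f"{label}: ~{big:.0f} TB + ~{small:.0f} TB = ~{big + small:.0f} TB usable")

# raidz1: ~80 TB + ~24 TB = ~104 TB usable
# raidz2: ~60 TB + ~18 TB = ~78 TB usable
```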


I agree entirely with @etorix and in addition…

  • 500GB NVMe is wasted on TrueNAS boot - get yourself a small SATA SSD for this.

  • IMO you really want two matched NVMe drives so that you can mirror them for your VMs and apps (in what I would call an ssd-pool cf. a hdd-pool).

The idea of the SSD pool is to put on it everything which is actively used and where, because of frequent access, the I/O response time will make a significant difference. So this would include the VM OSes, any apps or containers you intend to run, and their actively used data.

All the inactive, rarely accessed, at-rest data goes on HDD.

For your Ubuntu VM …

I would personally recommend (i.e. this is a personal view) splitting the data into OS, active data and inactive data.

  • The OS needs to be on a virtual disk, i.e. a zvol with synchronous writes = always.
  • The active data should be accessed by an NFS share and be on an SSD dataset.
  • The inactive data should be accessed by an NFS share and be on a HDD dataset.

(The reason for putting sequential data files in datasets and accessing them over NFS etc. is to avoid the need for synchronous writes for them, because synchronous writes have a significant performance impact.)
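
If it helps to see that layout expressed concretely, here is a minimal sketch using the stock zfs command line from Python. The pool names (ssd-pool, hdd-pool), the dataset names and the 32G zvol size are placeholders I’ve made up for illustration; in practice you would create the same things through the TrueNAS UI.

```python
# Sketch only: creates the three pieces described above with the standard
# `zfs` CLI. Pool/dataset names and the zvol size are placeholders.
import subprocess

def zfs(*args: str) -> None:
    """Run a zfs command, raising if it fails."""
    subprocess.run(["zfs", *args], check=True)

# 1. VM OS disk: a zvol with synchronous writes forced on, so the guest's
#    virtual block device gets the write guarantees it expects.
zfs("create", "-V", "32G", "-o", "sync=always", "ssd-pool/ubuntu-os")

# 2. Active data: a normal dataset on the SSD pool, to be shared to the
#    guest over NFS (share it from the UI or via the sharenfs property).
zfs("create", "ssd-pool/ubuntu-active")

# 3. Inactive data: a normal dataset on the HDD pool, also shared over NFS.
zfs("create", "hdd-pool/ubuntu-inactive")
```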


Thank you @etorix and @Protopia! Exactly the kind of insight I’m looking for.

  1. RAIDZ
    I will lock in on RAIDZ#, thanks. Can I ask what is behind your recommendation for Z2?
    As I understand it, Z2 will reduce my usable capacity but will increase protection from one failed drive to two simultaneous failed drives, such as a failure during rebuild/resilvering.
    Are there other reasons I should consider Z2?

  2. HDDs in the pools
    I should add as context that I’m migrating from a Synology DS1815+ with 8 x 6 TB drives running BTRFS RAID 6. This is where the five 6 TB drives will come from once they are freed up.
    Am I correct in assuming that if I create the pool with just the 20 TB drives I won’t be able to add the 6 TB drives to the pool later?
    i.e. I need to start the pool with the smaller drives

  3. Boot drive
    I take your point that the 500 GB NVMe drive is wasted on the TrueNAS OS. I got it on sale, but I am limited by NVMe slots, so I will re-think. I will look at something like a Netac N600S 2.5" 128 GB internal SSD instead - any concerns using NAND flash?

  4. SSD for active
    I like your call-out for an SSD mirror for VMs, containers, and active data. That is what I’m planning to use the 1 TB NVMe for once I copy the existing data off, and if I swap TrueNAS to a SATA SSD then I could mirror the 1 TB NVMe with a second one. If I can’t, I will back up to HDD as you recommend.

  5. Ubuntu VM
    Thanks @Protopia - I use a similar storage layout of OS vs active vs inactive, but I currently place the first two on separate partitions on SSD and the third on an NFS (HDD) share. I will re-think my approach and use your recommendation.

I feel I’ve stretched this subject. I will start a new thread for my other question… read/write cache.

Thank you again - awesome to get your input

  1. You were previously using RAID6, and RAIDZ2 is the equivalent - why are you thinking of stepping down to RAIDZ1? But yes, tolerating two simultaneous drive failures is the reason - same as on your Synology.

  2. You need to put the 6TB and 20TB drives into separate vdevs (in the same pool). If you put them into the same vdev then you would need to start with the 6TB drives, and when you add the 20TB drives they will effectively become 6TB drives, because a raidz vdev treats every member as if it were the size of its smallest drive - hence putting them in separate vdevs (rough numbers in the sketch at the end of this post).

  3. So long as your boot drive is SSD and not a USB flash stick, you will be fine.

  4. Exactly.

  5. On ZFS, separate datasets in the same pool are the equivalent of separate partitions on the same disk in e.g. Windows.

Good luck.
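
To put rough numbers on point 2 (a sketch only - it assumes raidz2 throughout and ignores metadata, padding and free-space headroom):

```python
# Why separate vdevs matter when drive sizes differ: a raidz vdev treats
# every member as if it were the size of its smallest drive.

def raidz2_usable(width: int, smallest_tb: float) -> float:
    """Approximate usable TB of one raidz2 vdev ((width - 2) data drives)."""
    return (width - 2) * smallest_tb

# Option A: one 10-wide raidz2 vdev mixing 5 x 20 TB and 5 x 6 TB drives.
# The 20 TB drives are clipped to 6 TB, the smallest member.
mixed = raidz2_usable(10, 6)                              # ~48 TB usable

# Option B: two 5-wide raidz2 vdevs, one per drive size, striped in one pool.
separate = raidz2_usable(5, 20) + raidz2_usable(5, 6)     # ~60 + ~18 = ~78 TB

print(f"one mixed 10-wide vdev  : ~{mixed:.0f} TB usable")
print(f"two separate 5-wide vdevs: ~{separate:.0f} TB usable")
```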


Don’t be obsessed with “right sizing”. TrueNAS does not need much space to boot from, but if you got it cheap, your pick is fine.

What’s the hardware? No more PCIe or M.2 slots?


I agree that if you have a larger SSD than needed and if you have no other better use for either the drive or the drive slot, then go ahead and use it anyway.

My points were primarily that NVMe performance isn’t needed for the boot pool, and that if you are running apps you would benefit from an SSD pool to run them from - so you might have a better use for both the NVMe drive and the NVMe slot.


Home-built PC; the motherboard is an Asus TUF Z390M-PRO GAMING (WI-FI). She’s old but still serves me well.
It has two M.2 NVMe slots (22110 & 2280), two PCI Express 3.0 x16 slots, and one PCI Express 3.0 x1 slot.

The two PCIe x16 slots are taken up by a dual SFP+ NIC and the HBA.

Yep, that was my take from your posts - don’t worry, I’m not tearing my hair out, but it’s still good feedback/knowledge to have.

Absolutely take your point that RAIDZ2 is akin to RAID6.
My risk appetite for array failure has changed: when I built the RAID6 array I didn’t have decent data protection/recovery methods available, and now I do.

Would an array failure suck = yes, a lot
Can I get it back = yes

The x1 slot is perfect for boot. That leaves the two on-board M.2 slots off the chipset, and the x16 slot from the CPU could host an x8/x4/x4 bifurcation adapter for two more M.2 drives alongside a half-height HBA (or NIC).