Recommended ZFS layout for NAS with 4 large HDDs

Hi, what ZFS layout should I use for a system with 2x24 TB and 2x22 TB HDDs?

I’d like to use the space of 3 HDDs (leaving 1 for parity). My data isn’t super important, so a mirror or RAID10 would be a waste of space for me.

I believe I have a choice between RAIDZ1 and dRAID1 with 1 spare, is that correct?

I think dRAID1 would be better due to the large capacity of my disks and the fact that 90% of my files will be very large. Future pool expansion is very unlikely, so it’s not a concern.

I need to plan the migration of data from the above-mentioned 2x22TB drives, currently in a mirrored pool, to the new pool. It seems that TrueNAS SCALE 25 won’t let me create a RAIDZ1 pool with just 2 drives (even though I’d plan to expand it to 4 after the data migration).
I think I’d need to remove 1 disk from my mirror and create a 3-disk RAIDZ1 pool, then copy the data and expand to 4 disks. Doesn’t this feel a bit risky? That’s 30h+ of data copying.

Does anyone know if dRAID1 has any similar limitations, i.e. creating an initial pool with just 2 drives and then expanding to 3 and 4?

Do you have a backup of your current data or can you replace it from another source?

dRAID is meant for a LOT of disks. Skip that.

Raid-Z1 is not recommended with disks of that size: if a disk fails and the pool is resilvering, one more disk failure and the pool and data are gone.

If you are happy with the risk, you can remove one 22TB from your mirror pair, making it a single-disk stripe VDEV. Then you can use that 22TB and the 2x 24TB to create a Raid-Z1 VDEV (it will only use the size of the smallest disk, so it is as if you had 3x 22TB drives in Raid-Z1). Copy your data to the Raid-Z1 pool. Verify all your data is there. Move the system dataset from the old mirror to the new Raid-Z1 and you should be fine.
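The procedure above could be sketched roughly as below. All pool and device names (oldpool, newpool, /dev/sdX) are placeholders, not anything from this thread; adapt them to your system before running anything.

```shell
# 1. Detach one 22TB disk from the existing mirror. The old pool keeps
#    running on the remaining disk, with no redundancy from here on.
zpool detach oldpool /dev/sdb

# 2. Create a 3-wide Raid-Z1 from the freed 22TB disk and the two 24TB
#    disks. Capacity is limited by the smallest member, so usable space
#    is roughly 2x 22TB.
zpool create newpool raidz1 /dev/sdb /dev/sdc /dev/sdd

# 3. Replicate the data with a recursive snapshot and zfs send/receive.
zfs snapshot -r oldpool@migrate
zfs send -R oldpool@migrate | zfs receive -F newpool/data

# 4. After verifying the copy, destroy the old pool and grow the
#    Raid-Z1 with the last 22TB disk via raidz expansion (OpenZFS 2.3+).
zpool attach newpool raidz1-0 /dev/sda
```

Note that step 4 relies on the raidz expansion feature shipped in recent TrueNAS SCALE releases; the expansion runs in the background and the pool stays online throughout.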

This all assumes you ONLY have data to move and no Apps or VMs.

dRAID does not expand, and it makes no sense with fewer than a few tens of drives.
Besides, dRAID1 with one spare would give you the space of… two drives. Raidz2 would be better (and the best choice overall).

Then don’t ask: you have already decided on raidz1. :wink:

3-wide raidz1, then expand
or a degraded 4-wide raidz1 (3 drives + 1 sparse file, using the CLI), then resilver after replicating the data
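The sparse-file trick in the second option could look something like the sketch below (CLI only; all names are placeholders I picked for illustration). The idea is that a sparse file stands in for the missing 4th disk at pool creation, then is immediately taken offline so no real data ever lands on it.

```shell
# Create a 22TB sparse file: it reports the size of a real disk but
# consumes almost no actual space.
truncate -s 22T /tmp/fake-disk.img

# Build the 4-wide raidz1 with 3 real drives plus the sparse file.
zpool create newpool raidz1 /dev/sdb /dev/sdc /dev/sdd /tmp/fake-disk.img

# Take the fake member offline; the pool now runs degraded on purpose,
# with no parity protection until the real disk is added.
zpool offline newpool /tmp/fake-disk.img
rm /tmp/fake-disk.img

# ...replicate your data, verify it, then swap in the real 4th disk,
# which triggers a resilver back to full redundancy:
zpool replace newpool /tmp/fake-disk.img /dev/sda
```

The upside over the 3-wide-then-expand route is that you end up with a natively 4-wide raidz1, avoiding the parity-ratio quirk of raidz expansion; the downside is that the pool has zero redundancy until the final resilver completes.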

A bit risky it is. But since you “don’t have super important data” it shouldn’t matter. Or does it?


Thanks for your answer and please forgive my lack of ZFS knowledge.
If dRAID does not expand, then that leaves me with just one option for a 3+1 setup: RAIDZ1 (with RAIDZ2 I’d lose the capacity of 2 drives, which is too wasteful).
As for the risk during data migration: not being afraid of dying doesn’t mean you should just put your head on the block. The migration should still be done in the most reasonable way available (within the given constraints).

The more information on your system and setup, the better the advice. I don’t know where your knowledge gaps are, so I’m linking some basics.

Read the documents so you understand the trade-offs of using Raid-Z expansion. Expand the ‘Overview and Considerations’ section for the explanation.

BASICS

iX Systems pool layout whitepaper


I’d personally advocate for mirrors if you can afford the cost. In this case you’d get 2 vdevs instead of 1, and you’re not calculating parity… so you’re inherently going to have a somewhat (<50%) faster pool. Since IOPS are low on hard drives in general, mirrors are the better choice if performance matters.

There’s also more ZFS metadata written to disk in that case, since there are two vdevs. That makes recovering from bad situations (multiple hardware failures) easier in general.

Growing mirrors 2 at a time is also a benefit; I think it’s a little less messy and time-consuming than raidz expansion and zfs rewrite. (Note that zfs rewrite would be helpful with mirrors too.) It’s also a battle-hardened strategy that has existed since the dawn of ZFS.
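For what it’s worth, growing a pool of mirrors is a one-liner (pool and device names below are placeholders):

```shell
# Add a new 2-disk mirror vdev to an existing pool; ZFS stripes new
# writes across all vdevs, so capacity and throughput both grow with
# no resilver or expansion pass needed.
zpool add tank mirror /dev/sde /dev/sdf
```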
