Can a 3 drive Z2 be set up in the UI? Does this make sense as part of my migration?

In Goldeye 25.10.2, I have a 3x mirror consisting of 3 Seagate Ironwolf Pro 12TB HDDs for 12TB total capacity. It’s been in place for a while but is up to almost 70 percent full. So, I’m considering buying a fourth 12TB Ironwolf and moving to a 4 drive Z2 single VDEV pool, for 24TB total capacity.

The way I’m considering migrating involves a transitional step with a 3-drive Z2 layout. I would like to confirm that this can be created in the TrueNAS UI. Thoughts?

Here is how I plan to migrate:

1. Install the new 12TB drive.
2. Detach two drives from the existing 3x mirror, so the existing pool becomes a single 12TB drive holding all datasets.
3. There will now be three free drives in the server. From them, create a 3-drive Z2 pool (the step I want to confirm is possible).
4. Replicate all datasets from the old, now single-drive pool to the new 3-drive Z2 pool, preserving all parent/child hierarchies.
5. Destroy the old pool, freeing up the one remaining drive.
6. Use the “expand” feature to grow the 3-drive Z2 pool to a 4-drive Z2 pool.
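In rough CLI terms (pool and device names here are just placeholders I’m imagining, and step 3 is the one I want to confirm), I picture it something like:

```shell
# (2) shrink the 3-way mirror "tank" down to a single disk
zpool detach tank sdb
zpool detach tank sdc

# (3) create the new 3-drive RAIDZ2 from the two freed disks plus the new one
zpool create newtank raidz2 sdb sdc sdd

# (4) replicate everything recursively, preserving the dataset hierarchy
zfs snapshot -r tank@migrate
zfs send -R tank@migrate | zfs recv -Fdu newtank

# (5) destroy the old single-disk pool, freeing its last drive
zpool destroy tank

# (6) raidz expansion: attach the freed drive to the raidz2 vdev
zpool attach newtank raidz2-0 sda
```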

Does the above migration make sense?

Also, please note that I’m not worried about data loss despite going down to a one drive pool at one point in the process because I have numerous backups, both on-site and in the cloud.

This won’t be possible from the UI, and not directly from the CLI either, as you cannot (afaik) create a pool that is degraded right at creation.
You can, however, create a 12TB loop device, create the RAIDZ2 using your three drives plus the loop device, and then disconnect it immediately, before filling the pool. That leaves the pool degraded but working.
Then replicate, and replace the missing drive with the now-free one. This also saves you the trouble, the time, and the potentially misreported sizes of an expansion.

How would it be degraded on creation? I’m talking about a three-drive (single-VDEV) pool with Z2 architecture. It would be three 12TB HDDs, so with Z2 that would be 12TB of capacity. I don’t see how it would be degraded, unless I’m missing something.
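To spell out the arithmetic (usable space of a RAIDZ vdev being (drives - parity) x drive size):

```shell
# usable TB of a raidz vdev = (drives - parity) * drive_size_tb
raidz_usable() { echo $(( ($1 - $2) * $3 )); }

raidz_usable 3 2 12   # 3-drive RAIDZ2 of 12TB drives -> 12
raidz_usable 4 2 12   # 4-drive RAIDZ2 after expansion -> 24
```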

It should work in theory. I just don’t know if the TrueNAS UI allows it. Frankly, I’ve used TrueNAS, and FreeNAS before it, for nine years, but all I’ve ever had until now is mirror pools, so I don’t know much about other layouts, which is why I’d like to confirm it’s possible in the UI before buying a new HDD and beginning the migration.

Of course, I have numerous backups outside the TrueNAS server, so I could just destroy everything and restore from backups. But since the backups aren’t on ZFS, I would lose the parent/child dataset hierarchy and would need to recreate it, so I’d rather keep everything on ZFS and do it this way if possible.

RAIDZ1 requires 3 disks and RAIDZ2 requires 4 disks, per the GUI tooltips on 25.10.2.


RAIDZ1 requires 3 disks and RAIDZ2 requires 4 disks, per the GUI tooltips on 25.10.2.

Thanks. I’m not sure I can access the tooltips for RAIDZ2, since all I have is mirror pools, but it sounds like a 3-disk RAIDZ2 pool cannot be created in the GUI.

Interestingly, it’s my understanding that OpenZFS allows a three-drive RAIDZ2 pool, so this can probably be done in the CLI, although not in the GUI.

@Patrick Sorry, you’re actually right: creating a RAIDZ2 from three disks should work on the CLI even though the UI won’t allow it, as it normally doesn’t make any sense. I never really thought about it either, for that reason; I just had a minimum of three disks in my head for RAIDZ1 and four for RAIDZ2. But of course the actual minimum is “one more than the number of parity disks”, per zpoolconcepts.7 — OpenZFS documentation
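So on the CLI, something like this should be accepted (completely hypothetical device names, and I haven’t actually run it):

```shell
# 2 parity disks + 1 data disk satisfies the "parity + 1" minimum width
# that zpoolconcepts.7 describes for raidz2
zpool create tank raidz2 /dev/sdb /dev/sdc /dev/sdd
```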

I would probably still go with my original recommendation, though: skip the expansion in favour of a “normal” disk replacement and resilver.
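Concretely, with made-up pool and device names: once replication to the degraded 4-wide pool is done and the old pool is destroyed, the last step would just be:

```shell
# the raidz2 vdev was created with /dev/loop0 as its fourth member and is
# now running degraded; swap the freed physical disk into that slot
zpool replace newpool /dev/loop0 /dev/sdd
zpool status newpool   # watch the resilver complete
```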

Are you suggesting creating a fake 12TB drive at the CLI? I did a search and saw that this is theoretically possible. Then TrueNAS should see 4 drives and allow me to create a Z2 pool in the UI, correct? And then what: detach the loop device and run the Z2 pool degraded temporarily for the duration of the migration?

That’s interesting, and sounds like an approach that might work.

I asked a chatbot how to create a loop device in Linux, and it came back with this:

truncate -s 12T /mnt/somepool/fake_disk.img
losetup /dev/loop0 /mnt/somepool/fake_disk.img

Does that sound right?

That’s the idea. This approach is sometimes used in the risky endeavour of creating a RAIDZ1 from a previous mirror.
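One thing worth noting: the 12T image file is sparse, so it doesn’t need anywhere near 12TB of actual free space on the pool hosting it. A quick sanity check (assumes GNU coreutils):

```shell
# a sparse "12TB" file has a 12T apparent size but allocates almost nothing
truncate -s 12T fake_disk.img
stat -c 'bytes=%s blocks=%b' fake_disk.img   # huge size, near-zero blocks
rm fake_disk.img
```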

It does (from taking a quick look at it, nothing more :sweat_smile:).
