In Dragonfish or earlier, or in an Electric Eel upgrade where you haven’t upgraded the pool to support RAIDZ expansion, the minimum number of devices for RAIDZ2 is indeed 4.
With EE in the O/S and ZFS, and with a pool which supports expansion, the minimum number of devices for a RAIDZ2 is supposed to be 3 - but the TrueNAS UI may not yet allow this, and you may need to use the CLI to create a 3-device RAIDZ2.
I am using the word “devices” here because they can be drives, partitions or even files in an existing dataset. (The TrueNAS UI only deals with drives - it always creates a partition and uses that to build the pool - so if you want to use files, you need to do it via the CLI.)
So with 2 new disks and a file, from the CLI you can create a RAIDZ2. If you offline and delete the file, you then have a degraded RAIDZ2 with the same redundancy as RAIDZ1, consisting of two drives.
BUT, we need to be clear that a 3x RAIDZ2 is NOT the same as a 3x mirror: the RAIDZ2 can be expanded in size by adding a drive but its redundancy level can never be changed, whilst the mirror vDev cannot be changed in size by adding a drive but can only have its redundancy level changed (by attaching or detaching disks).
You are definitely right - if you are going to use the command line (and indeed do so as root) then you absolutely need to be sure that you are issuing exactly the right commands to achieve what you want. The good news, however, is that you will only lose your data if you do something from the command line to your existing pool. If you only work on the new disks and new pool, it is unlikely that you will screw anything up.
I don’t have time to work out and test the exact commands you will need to enter, but expanding my previous post, with rough command sketches that you should sanity-check before running anything…
Physically add 2 new disks to the server and boot it.
Do an lsblk ... with appropriate parameters to see what the new disks are now called.
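Something like this should do the job (the column list is just a reasonable choice - adjust to taste):

```
# List all block devices with size, model and serial numbers so you
# can pick out the two new disks
lsblk -o NAME,SIZE,MODEL,SERIAL,PARTUUID
```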
On each of the 2 new disks, create a single partition spanning the whole disk. Run lsblk again to get the PARTUUID of these partitions.
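As a rough sketch, assuming the new disks came up as /dev/sdx and /dev/sdy (substitute your real device names), sgdisk is one way to do it:

```
# Create partition 1 spanning each whole disk; BF01 is the
# conventional Solaris/ZFS partition type code
sgdisk -n 1:0:0 -t 1:BF01 /dev/sdx
sgdisk -n 1:0:0 -t 1:BF01 /dev/sdy

# Re-run lsblk and note the PARTUUID of sdx1 and sdy1
lsblk -o NAME,SIZE,PARTUUID
```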
Create an empty, sparse file in the root dataset of your existing pool - it needs to be at least as large as the new disk partitions, otherwise it will limit the size of the RAIDZ2.
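For example, assuming the old pool is mounted at /mnt/oldpool and the new disks are 20TB (both placeholders):

```
# Create a 20T sparse file - it occupies (almost) no real space
truncate -s 20T /mnt/oldpool/sparsefile
```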
Create a new RAIDZ2 pool using zpool create with /dev/disk/by-partuuid/uuid1 and /dev/disk/by-partuuid/uuid2 and /mnt/oldpool/sparsefile.
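Roughly like this - uuid1/uuid2 are the PARTUUIDs noted earlier and “newpool” is just an example name; -f will probably be needed because ZFS objects to mixing files and disks in one vDev:

```
# Create a 3-wide RAIDZ2 from two real partitions and the sparse file
zpool create -f -o ashift=12 newpool raidz2 \
    /dev/disk/by-partuuid/uuid1 \
    /dev/disk/by-partuuid/uuid2 \
    /mnt/oldpool/sparsefile
```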
At this point I think you can then export the pool using the CLI zpool export and re-import it using the UI.
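i.e. something like:

```
# Export the new pool so the TrueNAS UI can import and manage it
zpool export newpool
```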
In the UI, offline the sparse file from the new pool and you have a degraded RAIDZ2 pool equivalent to RAIDZ1.
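If the UI refuses to offline a file vDev, the CLI equivalent should be along these lines:

```
# Take the sparse file vDev offline, leaving a degraded RAIDZ2
zpool offline newpool /mnt/oldpool/sparsefile
# The sparse file can then be deleted from the old pool
rm /mnt/oldpool/sparsefile
```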
Replicate your data from the old pool to the new one. Many ways to do this, but the best is probably using ZFS replication. I am not sure what you will need to do with the ElectricEel Docker applications dataset if you have one on your old pool - you will need advice from others on how to move this to the new pool before you do the replication.
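If you do it from the CLI rather than with a UI Replication Task, the general shape is a recursive snapshot followed by a full recursive send - pool and snapshot names here are placeholders:

```
# Take a recursive snapshot of everything on the old pool
zfs snapshot -r oldpool@migrate

# Send all datasets, properties and snapshots into the new pool;
# -F lets the receive overwrite the (empty) target as needed
zfs send -R oldpool@migrate | zfs recv -F newpool
```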
Once the replication is complete and all your data has been copied from the old pool to the new one, you will need to: make the replica read-write (if you didn’t tell it to replicate as read-write); probably either make the replica the master in the replication relationship or remove the replication relationship completely; and update any System Settings / Advanced configurations you have created (e.g. cron jobs, Init/Shutdown scripts etc.) to reference the new pool rather than the old one.
In the UI, remove one drive from the old pool’s mirror and add it to the RAIDZ2 pool in place of the offlined sparse file, making the RAIDZ2 pool non-degraded, and wait for the resilver to complete.
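If the UI fights you on this, the CLI equivalent would be roughly as below (device names/UUIDs are placeholders - use whatever zpool status actually shows):

```
# Detach one disk from the old pool's mirror (the old pool keeps
# working, but with no redundancy from here on)
zpool detach oldpool /dev/disk/by-partuuid/old-uuid1

# Wipe and repartition the freed disk as before, then use it to
# replace the offlined sparse file; this starts the resilver
zpool replace newpool /mnt/oldpool/sparsefile /dev/disk/by-partuuid/uuid3

# Watch resilver progress
zpool status newpool
```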
At this point I would probably export the old pool and do a reboot to ensure that the old pool is no longer being used. After the reboot and checking that everything is now working and all your data has been copied across, you can now destroy the old pool and reuse the disk to add it to the new pool. I am not sure how best to destroy the old pool - if I couldn’t destroy the exported pool from the UI, I would probably re-import it and then destroy it.
You can now use the UI to add the last drive to the new RAIDZ2 vDev using RAIDZ expansion.
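If the UI won’t do it, RAIDZ expansion is driven from the CLI by zpool attach against the RAIDZ vDev itself (the vDev name and UUID below are placeholders - check zpool status for the real vDev name):

```
# Attach the final disk to the existing RAIDZ2 vDev (RAIDZ expansion)
zpool attach newpool raidz2-0 /dev/disk/by-partuuid/uuid4

# Expansion progress is reported by zpool status
zpool status newpool
```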
Hopefully @etorix or @stux or @dan or someone else equally skilled can give the above a good sanity check, and perhaps craft a draft set of commands you will need to run.
@deafhobbit I guess you need to weigh up the risks of:
Doing (almost?) everything via the UI, i.e. removing the mirror and making a 3x RAIDZ2 from the old drives (which might still need to be done from the CLI if the UI says it is too few drives), accepting the risk of no redundancy for a period;
Doing the degraded pool creation from the CLI, and the rest from the UI, retaining 1 level of redundancy at all times.
Two things I forgot:
Make sure you do a scrub of your old pool before you start anything to ensure that bit-rot errors are corrected before you remove the mirror;
Make sure you do burn in tests on the new drives before you use them for real data.
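For reference, the scrub can be started and checked from the CLI, and a long SMART self-test is a reasonable minimum burn-in (device names are examples):

```
# Scrub the old pool and check for errors before touching the mirror
zpool scrub oldpool
zpool status -v oldpool

# Long SMART self-test on each new drive as a minimum burn-in
smartctl -t long /dev/sdx
smartctl -t long /dev/sdy
# Review the results once the tests have finished
smartctl -a /dev/sdx
```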
Alternative solutions which might be easier:
Split the mirror into two identical pools (one of which will have a new name and mount point) - keep the split-off copy as the source for copying your data later, and destroy the original pool so that its disk can join the 2x new disks to create the 3x RAIDZ2. (ZFS can do this split natively - see the sketch after the next point.)
Or if you can’t split the pool this way, create a non-redundant temp pool on a single new drive and copy everything there before destroying the old pool and creating the new one.
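For the first alternative, the native tool is zpool split, which detaches one side of each mirror into a new, separately importable pool - “oldcopy” is just an example name:

```
# One disk stays in oldpool, the other becomes a new single-disk
# pool called "oldcopy" (left exported by default)
zpool split oldpool oldcopy

# Import it (here via the CLI; the UI can also import it) so it can
# be used as the source for copying your data
zpool import oldcopy
```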
so, i did end up picking up that 20 tb external drive that was on sale at best buy. i already had a smaller external drive with my really life critical stuff backed up to it, but being able to have an offline backup of everything else was appealing.
i’m backing everything up to it now, which will take a while. once that’s done, i’ll put the new drives in and build a 4 drive raidz2 pool.
i know that means my data will be on a single disk for a bit, which carries some risk. however, the only way to avoid that without buying yet another drive would be building a degraded pool with the command line, and i think the risk i mess something up doing that is greater.