RAIDZ1 to RAIDZ2 without doubling drives

I have a 4x8 TB RAIDZ1 array on TrueNAS. I’ve reached 80% capacity and would like to expand, but I don’t want to grow a RAIDZ1 vdev beyond 4 drives, so I’d like to migrate to RAIDZ2.

The problem is that I don’t have a spare 16 TB of space available to hold my data while I destroy my existing pool and rebuild it as RAIDZ2.

I know the simplest solution would be to buy five more drives, make a RAIDZ2 vdev, destroy my old pool, then add my old disks to the new RAIDZ2 vdev. But it seems like overkill to buy five drives just for this temporary migration.

I have an idea for doing it with only three drives, and I wanted to see if it’s sensible, since I haven’t seen anyone discuss it.

Here’s my plan (with a rough command sketch after the list):

  1. Create a RAIDZ2 vdev with three new 8 TB HDDs
  2. Set my old RAIDZ1 vdev to read-only
  3. Remove the weakest HDD from my existing RAIDZ1 vdev
  4. Add the old HDD as the fourth drive to my new RAIDZ2 vdev (giving the new vdev about 16 TB, enough to migrate my data)
  5. Migrate all datasets from my old RAIDZ1 pool to my new RAIDZ2 pool
  6. Delete my old RAIDZ1 pool
  7. Add the remaining three disks to my new RAIDZ2 pool
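Roughly, I imagine the commands looking something like this (the pool names and disk paths are placeholders, and the attach steps assume RAIDZ expansion support, i.e. OpenZFS 2.3+ / a recent TrueNAS):

# Step 1: new 3-wide RAIDZ2 pool ("tank2" and the by-id paths are hypothetical)
zpool create tank2 raidz2 \
  /dev/disk/by-id/ata-NEW_A /dev/disk/by-id/ata-NEW_B /dev/disk/by-id/ata-NEW_C

# Steps 2-3: take the weakest old disk out of service, then re-import the old pool read-only
zpool offline oldpool /dev/disk/by-id/ata-OLD_D
zpool export oldpool
zpool import -o readonly=on oldpool

# Step 4: widen the new vdev to 4 disks with the freed drive (RAIDZ expansion)
zpool attach tank2 raidz2-0 /dev/disk/by-id/ata-OLD_D

# Step 5: migrate everything, e.g. with a recursive snapshot and send/receive
zfs snapshot -r oldpool@migrate
zfs send -R oldpool@migrate | zfs recv -Fu tank2

# Steps 6-7: destroy the old pool and widen the new vdev with the remaining disks
zpool destroy oldpool
zpool attach tank2 raidz2-0 /dev/disk/by-id/ata-OLD_A
zpool attach tank2 raidz2-0 /dev/disk/by-id/ata-OLD_B   # wait for each expansion
zpool attach tank2 raidz2-0 /dev/disk/by-id/ata-OLD_C   # to finish before the next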

I know I’m taking a risk: if any disk in my old RAIDZ1 vdev fails during steps 3-5, I lose all data on the old pool. But I have backups via restic on multiple cloud storage providers, so while restoring from backup would be a pain, it would be doable. My understanding is that the risk I’m taking is on par with the old RAIDZ1 vdev suffering an ordinary single-drive failure.

Aside from the risk of data loss in steps 3-5 if a drive fails, is there anything wrong with this plan?

Create the new Z2 array with 3 new disks PLUS a sparse zvol, then remove the zvol after creation so the array runs degraded (effectively acting as a Z1). This gives you ~16 TB whilst maintaining parity in both arrays.
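Something like this (names are hypothetical; a sparse file works just as well as a sparse zvol for the placeholder):

# 8 TB sparse placeholder on the old pool; it is never actually written to
# (a sparse file made with "truncate -s 8T /root/placeholder.img" also works)
zfs create -s -V 8T oldpool/placeholder

# Create the new pool as a full-width 4-disk Z2: 3 real disks + the placeholder
zpool create tank2 raidz2 \
  /dev/disk/by-id/ata-NEW_A /dev/disk/by-id/ata-NEW_B /dev/disk/by-id/ata-NEW_C \
  /dev/zvol/oldpool/placeholder

# Immediately take the placeholder offline and delete it; the vdev is now a
# degraded 4-wide RAIDZ2 (~16 TB usable) that still tolerates one real-disk failure
zpool offline tank2 /dev/zvol/oldpool/placeholder
zfs destroy oldpool/placeholder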

Then, once the data is transferred, use the old disks to:

  1. Replace the missing placeholder disk to “upgrade” the array back to a healthy Z2
  2. Add the remaining disks, one by one, to the vdev to increase capacity (rough commands below)
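In command form (hypothetical names again; the per-disk attach relies on RAIDZ expansion, available in OpenZFS 2.3+ / current TrueNAS):

# 1. Replace the offlined placeholder (use whatever name zpool status shows for it)
#    with one of the freed 8 TB disks; after the resilver the vdev is a healthy Z2
zpool replace tank2 /dev/zvol/oldpool/placeholder /dev/disk/by-id/ata-OLD_A

# 2. Widen the vdev with the remaining old disks, one at a time, letting each
#    expansion finish before starting the next
zpool attach tank2 raidz2-0 /dev/disk/by-id/ata-OLD_B
zpool attach tank2 raidz2-0 /dev/disk/by-id/ata-OLD_C
zpool attach tank2 raidz2-0 /dev/disk/by-id/ata-OLD_D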

Is the purpose of the sparse zvol to allow me to create the RAIDZ2 vdev with <4 disks? I keep seeing people say that the minimum number of disks for RAIDZ2 is 4, but I’ve tested it on some USB drives, and it creates a vdev fine with just 3:

$ zpool create \
  -f \
  usbpool \
  raidz2 \
  -m /mnt/usbpool \
  /dev/sdc \
  /dev/sdd \
  /dev/sde
$  zpool status usbpool
  pool: usbpool
 state: ONLINE
config:

        NAME        STATE     READ WRITE CKSUM
        usbpool     ONLINE       0     0     0
          raidz2-0  ONLINE       0     0     0
            sdc     ONLINE       0     0     0
            sdd     ONLINE       0     0     0
            sde     ONLINE       0     0     0

errors: No known data errors
$ zpool list usbpool
NAME      SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
usbpool  85.5G   378K  85.5G        -         -     0%     0%  1.00x    ONLINE  -

Not really.

The purpose is to create the Z2 as a 4-wide vdev while only using 3 physical disks, so you get 16 TB (ish) of usable space rather than the ~8 TB a 3-wide Z2 would give.

The reason you see people recommend against raidz2 for 3 disks is probably because it would make more sense to have a three-way mirror, for performance reasons. However, that’s for situations where you aren’t planning to expand beyond 3 disks.

In your case, you plan to add more disks to the vdev.

Oh, gotcha!

That’s clever, thanks! Since the old RAIDZ1 keeps all four of its disks, that removes the risk of losing the old pool to a single drive failure during the migration.

Note that it is recommended to use stable identifiers (UUIDs or /dev/disk/by-id paths) rather than sdX device names when creating a pool from the command line.
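For example (the disk IDs here are hypothetical), you can list the stable names and either create the pool with them or re-import an existing pool so it records them:

# Find the stable identifiers for your disks
ls -l /dev/disk/by-id/

# Re-import an existing pool so it records by-id paths instead of sdX names
zpool export usbpool
zpool import -d /dev/disk/by-id usbpool

# Or create the pool with them directly
zpool create -m /mnt/usbpool usbpool raidz2 \
  /dev/disk/by-id/usb-Vendor_Model_SERIAL1 \
  /dev/disk/by-id/usb-Vendor_Model_SERIAL2 \
  /dev/disk/by-id/usb-Vendor_Model_SERIAL3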
