Adding a RAIDZ2 vdev to a pool of mirrors, then removing mirror vdevs

Hello everyone,

I could use a sanity check on a pool expansion / migration plan, because I think I might be misunderstanding what modern ZFS can and can’t do.

I’m on TrueNAS CORE. I have an SSD pool made of 8 mirror vdevs (so 16 SSDs total), plus a single NVMe used as a SLOG. This pool is basically “VM storage” for me - mainly NFS for VM disks (Xen + Proxmox), plus backups/snapshots.

I bought 8 new SSDs that are 2× larger, and my end goal is to end up with an 8-disk RAIDZ2 layout. (Retiring/reusing current pool SSDs for something else).

I’m trying to do this with as little downtime and manual copying as possible, because migrating VM storage by hand always turns into a long weekend and doing that without mistakes could be challenging.

The plan I thought would work:

  • Add the new 8-disk RAIDZ2 vdev to the existing pool
  • Let the pool reshuffle / migrate data
  • Remove the old mirror vdevs one by one until the pool is only the RAIDZ2 vdev

After more reading, I’m seeing comments that sound like: you can only remove mirror vdevs if the pool is made only of mirrors, and once there’s a RAIDZ vdev in the pool, removing mirrors is no longer possible / supported.
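That matches how `zpool remove` behaves, as far as I can tell: top-level device removal is refused once any raidz vdev exists in the pool. A quick way to sanity-check a pool before planning around removal is to look for raidz rows in `zpool status` output. This is only a sketch: `check_removal` is a hypothetical helper, and the status text below is made up, not from the actual pool.

```shell
# Hypothetical helper: reads `zpool status`-style text on stdin and reports
# whether top-level vdev removal is still an option. Top-level raidz vdevs
# show up as indented rows named raidz1-N / raidz2-N / raidz3-N.
check_removal() {
  if grep -qE '^[[:space:]]*raidz[123]-[0-9]+' ; then
    echo "raidz vdev present: zpool remove of top-level vdevs not supported"
  else
    echo "mirrors only: top-level vdev removal is possible"
  fi
}

# Made-up example: a pool of mirrors after a RAIDZ2 vdev was added.
check_removal <<'EOF'
        NAME        STATE
        SSD         ONLINE
          mirror-0  ONLINE
            ada0    ONLINE
            ada1    ONLINE
          raidz2-1  ONLINE
            ada2    ONLINE
            ada3    ONLINE
EOF
# prints "raidz vdev present: zpool remove of top-level vdevs not supported"
```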

If that’s true, then my whole plan is dead.

  • Am I correct that “add RAIDZ2 vdev + remove mirror vdevs” won’t work (or isn’t supported) once RAIDZ exists in the pool?
  • If that approach is not valid, what’s the standard/recommended way to migrate from mirrors to RAIDZ2 with minimal downtime?
  • Is creating a new pool and migrating everything basically my only realistic option?

My pool is fairly full, so I don’t really have the luxury of “temporarily moving data somewhere else” unless there’s a clever approach people use for this.

Thanks a lot, fellow humans (an AI was super confident that a mirror vdev can be removed no matter what, which is quite a scary mistake to make).

yeah, she’s dead jim… and normally, if all that pool is used for is vm storage, i’d say raidz2 is not a good fit because of the limited iops…


Even on decent SSDs? I mean, the current pool’s performance is OK-ish, and I understand it is mirrors (we basically started with 4 vdevs, I think) - but modern SSDs can sustain quite a lot of IOPS. Unless I am missing something absolutely terrible here?

Apologies for double reply, not being able to edit post is taking some getting used to.

Would RAIDZ1 with 1 hot spare provide better performance? My concern is having drives from the same batch and losing 2 drives during resilvering, to be honest.

No, since raidz has the iops of a single disk; if you want better performance, stick with mirrors.

Yes, but the IOPS of a pool track the combined IOPS of all its vdevs - so with an 8-drive RAIDZ2 I would get the IOPS of a single drive with the space of ~6 drives. Assuming I keep adding 8 drives at a time in the future, there would be no additional “penalty”.
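For what it’s worth, the back-of-envelope numbers behind that trade-off can be sketched like this (the drive size is a made-up placeholder, not the actual SSDs, and this ignores padding, ashift and compression effects):

```shell
# Rough capacity/IOPS math for 8 equal drives.
D=4000   # hypothetical per-drive size in GB (e.g. 4 TB SSDs)
N=8

# One 8-wide RAIDZ2 vdev: ~(N-2) drives of usable space, but random IOPS
# of roughly a single drive, since the whole vdev acts as one.
echo "raidz2:  $(( D * (N - 2) )) GB usable, ~1x single-drive random IOPS"

# The same 8 drives as 4 mirror vdevs: half the raw space, but IOPS
# scale with the number of vdevs.
echo "mirrors: $(( D * N / 2 )) GB usable, ~$(( N / 2 ))x single-drive write IOPS"
```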

My real issue is figuring out how to move/migrate my existing pool into new vdev layout.

P.S. Update:

But I agree with your point that something like 2xRAIDZ1 (with 4 drives each) is more suitable for VMs.

This seems like a viable option to migrate everything:

# Step 1: Create new pool (after connecting Nytro drives)
zpool create -o ashift=12 \
  -O compression=lz4 \
  -O atime=off \
  newpool \
  raidz1 <drive1> <drive2> <drive3> <drive4> \
  raidz1 <drive5> <drive6> <drive7> <drive8>

# Step 2: Bulk transfer (VMs stay running)
zfs snapshot -r SSD@migrate1
zfs send -Rv SSD@migrate1 | zfs recv -Fv newpool

# Step 3: Monitor progress (separate terminal)
watch -n 30 "zfs list -r newpool | grep -E 'pve-vm|vm '"

# Step 4: When done, stop Proxmox VMs and any other consumers
# Then final incremental:
zfs snapshot -r SSD@migrate2
zfs send -Rv -i SSD@migrate1 SSD@migrate2 | zfs recv -Fv newpool

# Step 5: Swap pool names
zpool export SSD
zpool export newpool
zpool import newpool SSD

# Step 6: Bring everything back up

As I only use NFS shares for the VM disks, this should (in theory) work without reconfiguring the clients, by the way.
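One reason the rename in Step 5 matters for NFS: on TrueNAS the export paths live under /mnt/&lt;pool&gt;/&lt;dataset&gt;, so importing newpool under the old name keeps every export path identical and clients should reconnect without changes. A trivial sketch (the dataset name vmstore and the `export_path` helper are made up for illustration):

```shell
# Sketch: an NFS export path is derived from pool name + dataset name.
# "vmstore" is a hypothetical dataset, not from the original pool.
export_path() { echo "/mnt/$1/$2"; }

before=$(export_path SSD vmstore)   # path on the old pool of mirrors
after=$(export_path SSD vmstore)    # same path after `zpool import newpool SSD`
[ "$before" = "$after" ] && echo "NFS clients keep the same path: $after"
# prints "NFS clients keep the same path: /mnt/SSD/vmstore"
```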