I could use a sanity check on a pool expansion / migration plan, because I think I might be misunderstanding what modern ZFS can and can’t do.
I’m on TrueNAS CORE. I have an SSD pool made of 8 mirror vdevs (so 16 SSDs total), plus a single NVMe used as a SLOG. This pool is basically “VM storage” for me - mainly NFS for VM disks (Xen + Proxmox), plus backups/snapshots.
I bought 8 new SSDs that are 2× larger, and my end goal is to end up with an 8-disk RAIDZ2 layout. (Retiring/reusing current pool SSDs for something else).
I’m trying to do this with as little downtime and manual copying as possible, because migrating VM storage by hand always turns into a long weekend and doing that without mistakes could be challenging.
The plan I thought would work:
1. Add the new 8-disk RAIDZ2 vdev to the existing pool
2. Let the pool reshuffle / migrate the data onto it
3. Remove the old mirror vdevs one by one until the pool is only the RAIDZ2 vdev
After more reading, I’m seeing comments that sound like: you can only remove mirror vdevs if the pool is made only of mirrors, and once there’s a RAIDZ vdev in the pool, removing mirrors is no longer possible / supported.
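Concretely, this is what the plan looks like on the CLI, and where it apparently breaks (pool and device names below are hypothetical, and the error text is from memory, so treat it as approximate):

```shell
# Step 1 of the plan: add the new 8-wide RAIDZ2 vdev (hypothetical names)
zpool add tank raidz2 da16 da17 da18 da19 da20 da21 da22 da23

# Step 3 of the plan: evacuate one of the old mirror vdevs
zpool remove tank mirror-0
# Once a raidz top-level vdev is part of the pool, this reportedly fails
# with something like:
#   cannot remove mirror-0: invalid config; all top-level vdevs must
#   have the same sector size and not be raidz
```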
If that’s true, then my whole plan is dead.
Am I correct that “add RAIDZ2 vdev + remove mirror vdevs” won’t work (or isn’t supported) once RAIDZ exists in the pool?
If that approach is not valid, what’s the standard/recommended way to migrate from mirrors to RAIDZ2 with minimal downtime?
Is creating a new pool and migrating everything basically my only realistic option?
My pool is fairly full, so I don’t really have the luxury of “temporarily moving data somewhere else” unless there’s a clever approach people use for this.
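For what it’s worth, the route I’ve seen suggested for minimal downtime is replication to a second pool built from the 8 new SSDs, with a short cutover at the end. A rough sketch, with hypothetical pool/device names:

```shell
# Build the new pool from the 8 new (2x larger) SSDs - names are hypothetical
zpool create newpool raidz2 da16 da17 da18 da19 da20 da21 da22 da23

# Initial full replication while the VMs keep running on the old pool
zfs snapshot -r tank@migrate1
zfs send -R tank@migrate1 | zfs recv -F newpool

# Cutover: pause the VMs, send only the small incremental delta,
# then repoint the NFS shares at newpool and resume
zfs snapshot -r tank@migrate2
zfs send -R -i @migrate1 tank@migrate2 | zfs recv -F newpool
```

The point of the two-pass send is that the second, incremental pass is small, so the VMs only need to be down for that final delta plus the NFS repointing.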
Thanks a lot, fellow humans. (The AI was super confident that a mirror vdev can be removed no matter what - that’s quite a scary mistake to make.)
Even on decent SSDs? I mean, the current pool’s performance is OK-ish, and I understand it’s mirrors (we basically started with 4 vdevs, I think) - but modern SSDs can provide quite a lot of sustained IOPS. Unless I’m missing something absolutely terrible here?
Apologies for double reply, not being able to edit post is taking some getting used to.
Would having 1 hot spare and RAIDZ1 provide better performance? My concern, to be honest, is having drives from the same batch and losing 2 drives during resilvering.
Yes, but the IOPS of a pool tracks the combined IOPS of its vdevs - so with an 8-drive RAIDZ2 I would get roughly the IOPS of a single drive, with the usable space of ~6 drives. Assuming I’d be adding 8 drives at a time in the future, there would be no additional “penalty”.
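To put numbers on that (the per-drive figure below is a made-up placeholder, not a benchmark of my actual SSDs):

```shell
# Back-of-envelope vdev IOPS comparison; pool IOPS scale with vdev
# count, and a raidz vdev performs roughly like one member drive.
drive_iops=50000                        # assumed sustained IOPS per SSD
mirror_pool_iops=$((8 * drive_iops))    # 8 mirror vdevs -> ~8 drives' worth
raidz2_pool_iops=$((1 * drive_iops))    # one 8-wide RAIDZ2 -> ~1 drive's worth
echo "mirrors: ${mirror_pool_iops} IOPS, raidz2: ${raidz2_pool_iops} IOPS"
```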
My real issue is figuring out how to move/migrate my existing pool into new vdev layout.
P.S. Update: that said, I agree with your point that something like 2xRAIDZ1 (with 4 drives each) is more suitable for VMs.