Migrating from mirrors to raidz2

Hello,

I have a pool with 36.22 TiB of storage and need to expand once again.
My pool is laid out as follows:

Mirror 1 -
-- 8TB
-- 8TB

Mirror 2 -
-- 16TB
-- 16TB 

Mirror 3 -
-- 16TB
-- 16TB

I am planning to add 2 additional 16TB drives, but I don’t want to add them as another mirrored vdev, as I would be losing a significant amount of usable storage, and that loss will only worsen the next time I expand.

My goal is to rearrange the layout and use the disks in mirrors 2 and 3 in a 6-disk raidz2. The pool will then be:

Mirror 1 -
-- 8TB
-- 8TB

Raidz2 -
-- 16TB
-- 16TB 
-- 16TB
-- 16TB
-- 16TB 
-- 16TB

I understand there are performance implications with mixing vdev types in the same pool, but I’m not that bothered; the 8TB drives are several years old and I will look to replace them later.

My question is: how can I utilise my existing disks to migrate from the mirrors to a raidz2? I have been reading about the zpool split command but I don’t fully understand it.
Can I please get some guidance?

Many thanks!

Pool split is not likely what you want.

Further, mixing RAID-Z2 and 2 way Mirror is not a good idea for a different reason than you may think. The RAID-Z2 vDev can withstand 2 disks lost before data loss. However, a 2 way Mirror can only withstand 1 disk lost before data loss. This makes your 2 way Mirror vDev the weak link.

There may be complicated (and error prone) methods to make the change you want. But my thoughts are that if you have to ask, you don’t have enough skill to complete the task without data loss.


ZFS is just not as flexible as people would like. While ZFS is a first-class file system, volume manager and software RAID system, it is not perfect. TrueNAS hides some of the complexity behind a GUI and command line interface, and on the other hand it prevents some extreme customizations of pool, dataset and vDev configurations because those would be prone to errors.

Thanks for that rather unhelpful reply.

Firstly, I am aware that a mirror can only withstand one disk loss, but it makes no difference, since the entire pool currently comprises mirror vdevs, so the risk is the same after I convert the 16TB mirrors to raidz2.
It does mean, however, that when I finally get around to migrating the 8TB mirror to the raidz2 I will have more resilience.

There may be complicated (and error prone) methods to make the change you want. But my thoughts are that if you have to ask, you don’t have enough skill to complete the task without data loss.

What is the point of even saying this? You have absolutely no idea what level of skill I have nor how long I’ve been using TrueNAS. The whole point of this community is to support one another in using this software, not to brush off someone else’s question because you think they won’t understand how things work.

With that said, can anyone else (who actually wants to provide some help) offer some guidance on the split tool, please? My understanding is that I can use it to remove one disk per mirror from the vdev, reutilise that disk in a new vdev, and slowly migrate the data away.
If that’s not the case, could you please let me know what I can use to move the disks and data off?

I do not have the resources to back up 36TiB of storage somewhere, rebuild the entire pool and then add the data back again.

Any help appreciated.

Thanks

I don’t believe TrueNAS currently supports zpool split, but it essentially takes a mirror (or multiple mirror vdevs) and splits it into two discrete pools, each becoming a stripe (both retaining all of the original data). What you decide to do after that is your choice.

Here is an example of what your setup would look like from the CLI.
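Something along these lines, assuming the pool is named tank; the device names are illustrative and will differ on your system:

```
# tank is currently three 2-way mirrors. By default, zpool split
# takes the second disk of each mirror and forms a new,
# independent pool from them.
zpool split tank tank2

# The new pool starts out exported; import it to use it.
zpool import tank2

# Both pools now hold the same data, each as a 3-disk stripe
# with no redundancy.
zpool status tank tank2

# Export again if you want to re-import via the UI.
zpool export tank2
```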


Once exported from the CLI, the pools can then be imported via the UI, where they appear as two separate stripe pools.

For my test I used GiB volumes instead of TiB, for obvious reasons.

You could now potentially destroy one of the pools, and that (together with your two new drives) would give you the following to play with:

1 x 8TB
4 x 16TB
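
For example, assuming the split-off copy was named tank2:

```
# Destroys the split-off pool, freeing its three disks
# (1 x 8TB and 2 x 16TB; the other two 16TB drives are the new ones).
zpool destroy tank2
```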

BTW @Arwen is very well respected on the forum, has contributed a significant amount over the years, and I’m sure the advice was given with the best intentions.

Yes, it was rather unhelpful, mostly because there is no easy way to do what you want.

My point was to say that such a procedure could result in data loss.

We have had people here in the forums want such procedures, but then not want to accept the risks. If you came back and said you have good & well-tested backups, and want clues, then it is possible a procedure could be developed.

The reason I said splitting mirrors would not likely work for what you intended is that you can’t get the hybrid pool layout you originally wrote about from splitting.

However, a simple Mirror break on all Mirrors would free up half your disks, just like @Johnny_Fartpants wrote about split Mirrors. Both splitting Mirrors and breaking Mirrors remove all redundancy from your pool during the transition.

Once you have the free disks, you can use them to create a new pool using RAID-Zx. However, ZFS vDevs (Mirror or RAID-Zx) use the size of the smallest disk for all disks. Thus, the 2 x 8TB disks are not helpful in this context; somewhat like what you initially wrote, they belong in a separate Mirror vDev.
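
For example, breaking the Mirrors and creating the new pool might look like the following sketch. The pool name tank and the device names are only placeholders:

```
# Detach the second disk from each 16TB Mirror in the old pool.
zpool detach tank sdd
zpool detach tank sdf

# Create the new pool as a 4-disk RAID-Z2: the two freed 16TB
# disks plus the two new 16TB drives.
zpool create tank2 raidz2 sdd sdf sdg sdh
```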

Then, with a new RAID-Zx, you can copy the data over from the old pool.
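
A minimal sketch of the copy, assuming the pools are named tank and tank2; the exact send/receive flags depend on your dataset layout:

```
# Snapshot everything, then replicate recursively to the new pool.
zfs snapshot -r tank@migrate
zfs send -R tank@migrate | zfs receive -Fdu tank2
```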

You have to check the space used in the old pool, and determine if 5 x 16TB disks in a RAID-Z1 or RAID-Z2 will be enough. It is generally discouraged to use very large disks with RAID-Z1, yet you may not get enough space using RAID-Z2.
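
Checking is simple enough, for example:

```
zpool list -v tank          # size and allocated space per vDev
zfs list -r -o space tank   # usage per dataset, including snapshots
```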

So a staged migration might be needed. After you have migrated enough data, you can shrink the old pool by 1 x 16TB disk and expand the new pool with that freed up disk. Repeat if needed.
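
One stage of that might look like this, assuming the freed single-disk vDev is sdc and a ZFS version with RAID-Z expansion (OpenZFS 2.3 or newer):

```
# Evacuate and remove one 16TB single-disk vDev from the old pool;
# zpool status shows the evacuation progress.
zpool remove tank sdc

# Once the disk is free, grow the new pool's RAID-Z2 vDev with it.
zpool attach tank2 raidz2-0 sdc
```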

Even more “optimizations” can be made. Since you can’t really use the freed-up 8TB disk in the new RAID-Zx pool, you can add it as a stripe to the old pool, as sketched below. This may be helpful if you have space issues removing one of the 16TB disks from the old pool.
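
For example; the -f is needed because a lone disk does not match the pool’s Mirror redundancy:

```
# Add the freed 8TB disk to the old pool as a stripe vDev.
zpool add -f tank sdb
```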

Complicated and error prone, so some here in the forum would not recommend such a procedure. For example, loss of 1 disk in the old pool during the transition means loss of all remaining data on that pool.

Edit: One last note. Attempting to make the asymmetric pool from the original post, using split Mirrors or detached Mirrors, in the same original pool won’t work. Once a RAID-Zx vDev is added to the old pool using the freed-up disks, it is no longer possible to remove any of the striped 16TB disks.

So, if the original intent is to use as much storage as possible, including asymmetrically having a 2 way Mirror of the 8TB disks, creating a new pool is required, along with all the data migration.

Thanks for the replies.

I understand that losing a disk during this migration will mean significant data loss but that is a risk I am fine with. None of the data is precious to me and can be sourced again.

If zpool split is not supported within TrueNAS, then I guess I could proceed by detaching a disk from the mirrors, leaving them in a degraded state, and moving the data to a newly created pool (obviously risking data loss).
My understanding is that, since all vdevs are mirrors, I can use zpool remove to migrate the data onto the remaining vdevs and take the drives away, and that a raidz2 can now be expanded with as many disks as required.

If this is correct, could you verify my logic, please? (I have sketched the commands I expect to run after the list.)

  1. Detach 1 disk from each 16TB mirror
  2. Create raidz2 vdev from above disks plus 2 new 16TB disks
  3. Create a pool from the raidz2, which will have roughly 28TiB of usable space
  4. Migrate ~28TiB of data from original pool to new pool leaving around 6TiB of data on original pool
  5. Use zpool remove for remaining 16TB disks
  6. Remove vdevs from pool
  7. Expand the raidz2 with the two freed-up 16TB disks
  8. Migrate remaining data
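
Roughly, I imagine the commands would look something like this; the pool and device names are placeholders, and I understand step 7 needs a ZFS version with raidz expansion:

```
# 1. Detach one disk from each 16TB mirror
zpool detach tank sdd
zpool detach tank sdf

# 2/3. Create the new pool with a 4-disk raidz2 vdev
zpool create tank2 raidz2 sdd sdf sdg sdh

# 4. Migrate ~28TiB, dataset by dataset (tank/media is a placeholder)
zfs snapshot -r tank/media@migrate
zfs send -R tank/media@migrate | zfs receive tank2/media

# 5/6. Evacuate and remove the remaining single-disk 16TB vdevs
zpool remove tank sdc
zpool remove tank sde

# 7. Expand the raidz2 one disk at a time
zpool attach tank2 raidz2-0 sdc
zpool attach tank2 raidz2-0 sde

# 8. Migrate the remaining data
```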

This should then leave me with a new pool with about 56TiB of space and another pool with 8TB; significantly more than if I had just added another 16TB mirror.

Let me know what you think.

Many thanks for the help so far!

Seems to work in test, but remember your numbers may be a little off when it comes to expansion because of the statement below.

The expanded vdev uses the pre-expanded parity ratio, which reduces the total vdev capacity. To reset the vdev parity ratio and fully use the new capacity, manually rewrite all data in the vdev. This process takes time and is irreversible.
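
If that rewrite is ever needed, one common approach is a local send/receive and a rename; a rough sketch, assuming a dataset tank2/media:

```
# Copy the dataset within the pool so its blocks are rewritten
# at the post-expansion data:parity ratio.
zfs snapshot -r tank2/media@rebalance
zfs send -R tank2/media@rebalance | zfs receive tank2/media.new

# After verifying the copy, swap the datasets.
zfs destroy -r tank2/media
zfs rename tank2/media.new tank2/media
```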

Yes.

Steps 2 & 3 are part of the same thing, in the sense that you have to create a first vDev when creating a pool.

Step 5 removes the stripe vDevs (aka single disks) from the pool once the data has been migrated off. Thus, step 6 is actually part of step 5.

Otherwise, it is pretty complete.

Thanks, I’ve ordered the disks and will run the process when they arrive.

Does this guide still apply for burn-in testing?

Thanks