RAIDZ expansion - multiple disks at once?

Question’s pretty much what it says on the tin: is it possible to expand a RAIDZn vdev by, say, two disks simultaneously? So, e.g., if I have a six-disk RAIDZ2 and want to turn it into an eight-disk RAIDZ2, can I do that in a single step, or do I need to add one disk, wait (however many days) for that to finish, then add the second one?

Haven’t seen this question here (though I could have missed it), nor do I see anything about it in the docs.

I suspect it is: add one, add another, and wait and wait, with the second expansion only proceeding once the first expansion is complete.
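If that's right, on current OpenZFS each expansion would be a separate `zpool attach` against the raidz vdev. A sketch, with hypothetical pool and device names:

```shell
# Hypothetical pool "tank" whose single vdev is named raidz2-0.
# Each attach triggers one expansion; -w blocks until it completes,
# so the second command only starts after the first expansion finishes.
zpool attach -w tank raidz2-0 /dev/sdg
zpool attach -w tank raidz2-0 /dev/sdh

# Without -w, expansion progress is reported by:
zpool status tank
```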

I believe the UI requires you to choose one disk to extend.

This doesn’t go into detail about it, but says:

Extend a RAIDZ VDEV to add additional disks one at a time, expanding capacity incrementally.


It is my understanding that only one disk at a time can be added.

This feature allows disks to be added one at a time to a RAID-Z group, expanding its capacity incrementally

Additionally, I found this feature request…


Yes, while it would be nice to allow adding multiple storage devices (HDD, SSD, or NVMe) to a RAID-Zx at one time, to reduce the copying time, it may be less important now:

Once that hits the mainstream, I would think a Parity Add feature for RAID-Z1/2 would be more important. We are starting to see people using RAID-Z1 with huge disks (>10TB) and wanting to expand, even to 8 or more disks.

Now some occasionally look at that and say, “Okay, I have >=8 disks in a RAID-Z1; how can I change that to RAID-Z2?” Experienced OpenZFS users know the answer: “backup, re-create, and restore”. But a Parity Add feature, using a new disk, would be nice. I’m not saying it is easy or even possible, but it would solve some of the RAID-Zx expansion problems.


I may be wrong, but increasing parity should be at most as complicated as raidz expansion, and possibly easier. The most straightforward implementation would be to just fill the new drive with blocks for the second/third parity, without reflowing or redistributing existing blocks.

I’m far from an expert in the guts of ZFS, but I don’t think this is correct. RAIDZ expansion reads in the existing data/parity and writes it out unchanged across the newly-expanded vdev (which is one of the reasons that the data:parity ratio doesn’t change for existing data when you add a disk to a vdev). Parity expansion would require doing the same thing as well as recomputing parity (and therefore checksums).
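To make that ratio point concrete, a quick back-of-envelope calculation using the disk counts from the original question (integer shell arithmetic, so the percentages are approximate):

```shell
# 6-wide RAIDZ2 grown to 8-wide. Blocks written before the expansion
# keep their 4 data : 2 parity layout; only data written afterwards
# uses the wider 6 data : 2 parity ratio.
disks_old=6; disks_new=8; parity=2

old_pct=$(( (disks_old - parity) * 100 / disks_old ))  # ~66% usable
new_pct=$(( (disks_new - parity) * 100 / disks_new ))  # 75% usable
echo "old blocks: ${old_pct}% usable; new writes: ${new_pct}% usable"
```

So until old data is rewritten, the expanded vdev yields less usable space than a freshly created 8-wide RAIDZ2 would.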

From what I can tell: nope, MUCH, MUCH (add 10 more) harder.

Adding parity will involve block pointer updates. Not the de-fragment or fragment that some people want, just a plain block pointer update for the new location. So, VERY HARD because of:

  • hard links
  • block clones
  • snapshots
  • dataset clones
  • bookmarks?

Now why does that need a new location?
Simple: a full-width RAID-Zx stripe without a spare block immediately after it can’t support an additional parity. Remember, adding a column to RAID-Zx does not add free space at the column level, but at the end of the vdev.

Further, during the disk add routine, exactly as with RAID-Zx expansion, any new writes would need to be restricted to the old maximum width, or perhaps written with the new parity level. This is needed because we must not let anything at the old parity level be written at the new maximum width; that would prevent additional parity from being added to that RAID-Zx stripe (without adding yet another disk!).

This does assume that RAID-Zx stripes are contiguous. But I think they are (just hedging my bets in case they don’t have to be…).

The biggest stumbling block for this feature is the “block pointer update”. As soon as word gets out that you are working on such a thing, people will jump all over you and ask / demand / threaten for the Holy Grail of de-fragmenting or fragmenting. Those are in a whole other universe of HARD, right up there with changing compression, encryption, and de-duplication in place.


Now the first step could be taking a single-disk vdev and adding single parity. Initially it would have all the new parity on the new disk, but future writes would be distributed between the 2 disks. And of course, you could then expand that 2-disk RAID-Z1 as desired.

People may ask: why?
Because converting a Mirror pool to RAID-Zx might then be possible.

Of course, this does not get us normal Add Parity to RAID-Z1/2.