Restructuring my vdev


I just set up a system a couple of months ago and mistakenly configured it as a single wide stripe vdev of 54 × 9.1 TB disks. I'm currently using under a third of the total capacity. I want to build in some redundancy, so I'm hoping to remove several disks from the current vdev, set them up in some RAIDZ config, and migrate to that.

I already removed one disk in the GUI, and I think it was successful. But it took quite a while, so I was wondering whether the CLI would be better, and whether I could select multiple disks at a time.


I'm not sure, but it should be possible to remove a disk from a stripe vdev as long as there is enough free space to copy the removed disk's contents onto the remaining disks (this is needed so you don't lose all the data in your pool).
I think doing it one disk at a time from the web interface is the best (slow but safer) way.
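For what it's worth, the same removal can be driven from the CLI. A rough sketch, assuming a pool named `tank` and a disk `da5` (substitute your own pool and device names); note that ZFS only processes one top-level device removal at a time, so disks come out one by one even from the shell:

```shell
# Check the pool layout and confirm which disk you want to remove.
zpool status tank

# Ask ZFS to evacuate one top-level device from the stripe.
zpool remove tank da5

# Watch the evacuation progress; the "remove:" line in the status
# output shows how much data has been copied so far.
zpool status tank
```

So the CLI won't make the copy itself any faster; the evacuation speed is bound by how much data has to be moved off the disk.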
Best Regards,


That’s a typo, right? :flushed:

1 Like

No typo. I got my hands on a UCS 3260 for free.

It’s not the sheer number of disks that’s insane.

It’s that you would even build a vdev with 54 disks total.

Mirror? Would have been overkill and completely wasteful.

RAIDZX? Performance notwithstanding, resilvering would be a nightmare.

Stripe? No comment.

I believe you just have to carefully and safely remove one disk at a time, while crossing your fingers that you don’t lose the entire pool. It’s technically feasible with only 30% of the pool full.

1 Like

That’s been my dilemma. How should I carve it up?

That’s up to you, and maybe some users in here might suggest their preference for how to build a ZFS pool from 54 disks, based on certain scenarios and use-cases.

But the real priority now is to make sure you don’t lose everything because of the precarious 54-wide STRIPE vdev you created.

1 Like

A third.

It's probably safest to keep removing disks (in a stripe, each disk is its own top-level vdev), then attach each removed drive back as a mirror of one of the remaining single-disk vdevs.

So, once you’ve removed half of the disks you’ll have full redundancy.
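The attach step might look like this on the CLI. A sketch, assuming a pool named `tank`, an existing single-disk vdev `da3`, and a freed-up disk `da5` (all hypothetical names):

```shell
# Attach the freed disk to an existing single-disk vdev,
# converting that vdev into a two-way mirror.
zpool attach tank da3 da5

# Resilvering starts immediately; watch it complete before
# attaching the next pair.
zpool status tank
```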

And you know what, that’s a fine situation. 54 disks in mirrors is not that unusual.

And you’ll only be using 66% of the space.

BUT if you want to go for some sort of RAIDZ after that, you could keep removing vdevs, and then begin migrating across to a new pool with RAIDZ vdevs.
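That migration is typically done with ZFS replication. A rough sketch, assuming the old pool is `tank`, the new pool is `tank2`, a snapshot named `migrate1`, and freed disks `da10`–`da21` (all hypothetical names and layout — size your RAIDZ vdevs to suit):

```shell
# Build the new pool from the freed disks, e.g. two 6-wide RAIDZ2 vdevs.
zpool create tank2 \
    raidz2 da10 da11 da12 da13 da14 da15 \
    raidz2 da16 da17 da18 da19 da20 da21

# Snapshot the old pool recursively and replicate everything across.
zfs snapshot -r tank@migrate1
zfs send -R tank@migrate1 | zfs recv -F tank2
```

Once the copy is verified, the old pool can be destroyed and its remaining disks added to the new pool as further vdevs.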


I got this error during the removal of the first drive, and again for the second.

Probably a different form of this bug I reported.


1 Like