Moving RAIDZ2 pool to new drives on SCALE - best practices?

I’ve got an 8x10TB RAIDZ2 pool that I want to migrate to a new 8x18TB RAIDZ2 VDEV to take advantage of the extra space. I have enough power and data ports to have all 16 drives plugged in at the same time.

I read this post where @etorix recommended simply replacing each drive in the old VDEV with a drive in the new VDEV, one by one. That said, the OP was going from a 2-drive mirrored VDEV to another of the same size. Does this advice hold true for my situation, or is there a different recommended approach?

This holds, and will keep your pool and your shares in the process.
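In case it helps, a rough sketch of what one round of the drive-by-drive swap looks like from the shell (pool and device names are made up; on SCALE you'd normally do this from the Storage UI):

```sh
# Replace one old 10TB disk (sda here) with one new 18TB disk (sdi here), then wait.
zpool replace oldpool /dev/sda /dev/sdi
zpool status oldpool              # watch the resilver finish before starting the next disk

# The extra capacity only shows up once every disk is replaced and autoexpand is on:
zpool set autoexpand=on oldpool
```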

As you could host two full pools, an alternative would be to simply create “newpool” (8*18TB) and replicate from “oldpool” to “newpool”. Benefit: You keep “oldpool” as backup, and could use it as a pre-filled replication target. Drawback: “newpool” has a new name, so you have to either readjust all your shares to the new name or rename “newpool” to “oldpool”.
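For the replication route, a minimal sketch (pool names as above; the snapshot name "migrate" is made up, and on SCALE a local replication task in the UI gets you the same result):

```sh
zfs snapshot -r oldpool@migrate                        # recursive snapshot of everything on the old pool
zfs send -R oldpool@migrate | zfs receive -dF newpool  # replicate datasets, properties and snapshots
```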

Did you do a drive burn in test on all the new ones? Better now to find out if there are stinkers in the bunch.
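Not the only way to do it, but a typical manual burn-in (run before the drives hold any data, since it is destructive) looks roughly like this; /dev/sdX is a placeholder:

```sh
badblocks -b 8192 -ws /dev/sdX   # full write/read/verify pass; -b 8192 keeps the block count in range on 18TB drives
smartctl -t long /dev/sdX        # follow up with a SMART extended self-test
smartctl -a /dev/sdX             # review attributes and the self-test log once it completes
```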

Is this still the preferred method for testing?

While this is true, you are likely best served using zfs send / receive rather than doing 8 resilvers.

I believe so. I go by browsing the Resources category; for any article in there with links to the old forum, I figure the info is still valid.

I prefer using a script.

Use one of the two scripts linked from the following thread.

(I couldn’t find a thread about @dak180’s script to link to… is there one?)

Also, this would remove all current fragmentation from the pool.

It’s up to you.

You could replace all the disks at once.

Once you’ve scheduled all the replacements, run zpool resilver <pool> to restart the resilver as a single concurrent resilver.
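Something like this, with made-up device names:

```sh
# Queue every replacement, one zpool replace per old/new disk pair.
zpool replace oldpool /dev/sda /dev/sdi
zpool replace oldpool /dev/sdb /dev/sdj
# ...and so on for the remaining six pairs...
zpool resilver oldpool   # restart so all replacements run in one concurrent resilver
```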

Appreciate the link to the script, thank you! Is there any reason I can’t run this against all 8 drives simultaneously, each in their own TMUX session?

No, the script should do just that: Concurrent tests, each in its own session.
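If you ever want to drive it by hand instead of letting the script manage sessions, something along these lines works (session name, test command and device are placeholders):

```sh
tmux new-session -d -s burnin-sdi 'badblocks -b 8192 -ws /dev/sdi'   # one detached session per drive
tmux ls                                                              # see what's still running
tmux attach -t burnin-sdi                                            # peek at a specific drive
```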

Nope. That’s the preferred mode.

In fact, I think Dak’s script has that as a feature (-tm flag iirc)

Not currently; I suppose I should do something about that given the uptick in interest in it; maybe later this week.

It does indeed.

Everything worked like a charm! zfs send/receive was shockingly fast IMO (~36hrs for ~44TB), and the pool rename process was a bit tricky but that linked post had the keys to the kingdom. Thanks to all for the assist.
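For anyone following along later, the rename boils down to an export/import cycle once the original pool is out of the way (CLI sketch only; the linked post has the TrueNAS-specific details):

```sh
zpool export newpool
zpool import newpool oldpool   # re-import "newpool" under the name "oldpool"
```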