I’ve got a TrueNAS SCALE system that I’m using as a hobbyist (meaning downtime is acceptable). The main storage pool consists of a four-wide RAIDZ1 vdev populated with a mix of WD Red Plus and Seagate IronWolf drives, all 4 TB, plus a set of 500 GiB Samsung 980 NVMe drives as a metadata vdev (which I thought was a good idea at the time, though I’m no longer convinced).
As I’m approaching the capacity limit of my current pool, I’ve bought five used HGST Ultrastar He12 12 TB drives, and the question now is: what’s the best migration strategy?
The rig is a Kaby Lake motherboard with a Broadcom 9207-8i (SAS2308) 6 Gb/s SATA/SAS HBA driving the spindles in the pool. That leaves four ports free, so I could mount four of the new drives and copy the data over, but I could also replace one disk at a time.
I’m reasonably happy with my current datasets and layout, so there’s no need as such to reshuffle anything.
Your old drives aren’t that large, so replacing them one by one shouldn’t take too long. Make sure, though, to use in-place replacement so you always keep your redundancy. The advantage of this method is that all the settings for that pool (e.g. periodic maintenance tasks, shares) are preserved.
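For reference, a rough sketch of what the disk-by-disk swap looks like from the shell; the pool name and disk IDs below are placeholders, and the GUI’s replace flow accomplishes the same thing:

```
# Placeholder pool/disk names throughout.
# Let the pool grow automatically once the last member has been swapped.
zpool set autoexpand=on tank

# Replace one member at a time and wait for each resilver to finish
# before pulling the next old disk.
zpool replace tank ata-WDC_WD40EFZX_OLD1 ata-HGST_HUH721212ALE600_NEW1
zpool status tank      # watch the resilver progress

# Repeat for the remaining three disks, then check the new capacity.
zpool list tank
```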
On the other hand, starting a new pool with 4 HDDs could be a good opportunity to get rid of that special vdev, if you care to do that. It would also be an opportunity to adjust other parameters of your pool (e.g. recordsize), if you feel the need to do so.
One thing I forgot to mention in my previous post: raidz expansion currently has a couple of (rather unfortunate) side effects:
The original data still uses the original ratio of data blocks to parity blocks, and you’ll need to run an in-place rebalancing script if you want to reclaim that extra space.
Currently the free-space calculations of some ZFS tools are still based on the original data-to-parity ratio, so the free-space figure shown in TrueNAS is likely not going to be accurate after a raidz expansion. There are, however, CLI commands that show the correct amount of free space.
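If it helps, these are the two views I’d compare after an expansion (pool name is a placeholder):

```
# "tank" is a placeholder pool name.
# Raw capacity and allocation per vdev, straight from the pool layer.
zpool list -v tank

# Used/available space as ZFS estimates it at the dataset level.
zfs list -o space tank
```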
You’re right that existing data aren’t rebalanced on RAIDZ widening, but I’m “only” talking about replacing all disks, one at a time, with larger drives.
Unless, that is, I conclude that I made some error when building the pool in the first place that means I need to replace it.
Whenever possible, avoid headaches by NOT using raidz expansion, and instead build a new pool at the desired width.
As raidz1 becomes increasingly risky with larger drives and greater width (resilvers take longer, so the window for a second failure grows), this seems like a good opportunity to move to a 5-wide raidz2.
Sounds increasingly like the best path forward. I’ll take some time to study the nitty-gritty details, such as ensuring that ashift really matches the physical sector size of the drives, etc.
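If it helps, a quick way to sanity-check the sector sizes and the resulting ashift from the shell (device and pool names are placeholders):

```
# Placeholder device/pool names.
# Report logical and physical sector sizes of a new drive.
lsblk -o NAME,MODEL,LOG-SEC,PHY-SEC /dev/sda

# smartctl prints the same in its "Sector Sizes" line.
smartctl -i /dev/sda | grep -i 'sector size'

# Once the pool exists, confirm which ashift it was created with.
zpool get ashift tank
```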
A 5-wide raidz2 pool would indeed be ideal, but the way I understood OP’s situation, they only have 4 SATA ports free at the moment. So without new hardware I don’t see how they can set up a new 5-wide pool without raidz expansion.
If the 4-wide goes away after migration, then it’s always possible to create the 5-wide with 4 disks and one sparse file, offline the sparse file, migrate the data with single redundancy, and then, once the migration has been confirmed and the old pool is gone, reuse one of the now-free ports to plug in the fifth drive and resilver, replacing the sparse file with a real disk.
That does require some CLI shenanigans, as the GUI, wisely, doesn’t offer building vdevs from sparse files instead of disks.
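Roughly like this, with pool and disk names as placeholders and assuming roughly 12 TB members:

```
# Placeholder pool/disk names throughout.
# Sparse placeholder file, same nominal size as a real member.
truncate -s 12T /root/fake12tb.img

# Build the 5-wide raidz2 from the four real disks plus the file.
zpool create -o ashift=12 newtank raidz2 \
    /dev/disk/by-id/ata-NEW1 /dev/disk/by-id/ata-NEW2 \
    /dev/disk/by-id/ata-NEW3 /dev/disk/by-id/ata-NEW4 \
    /root/fake12tb.img

# Offline the file right away so nothing ever gets written to it.
zpool offline newtank /root/fake12tb.img

# ...replicate the data, retire the old pool, free up a port...

# Finally swap the placeholder for the real fifth disk and resilver.
zpool replace newtank /root/fake12tb.img /dev/disk/by-id/ata-NEW5
```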
You only need to have 8 drives online to still have redundancy on both pools, i.e. you can pull one of the new raidz2 drives and still transfer all your data to the new pool.
But do you not have any motherboard SATA ports available?
If there’s a hard limit of 8 ports: export the old pool and unplug one of its drives. Plug in the new drives and create a 5-wide raidz2, then offline one of its members and remove it. Plug the 4th drive of the old pool back in and replicate from the old pool to the (degraded) new pool; there’s still one degree of redundancy all along. Export and remove the old pool. Add the fifth drive and let the new pool resilver. Done: no expansion, no CLI shenanigans, no dragons.
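Sketched out with placeholder pool and disk names (the replication step could presumably also be set up as a local replication task in the GUI):

```
# Placeholder pool/disk names throughout.
# Old pool offline while one of its disks is temporarily unplugged.
zpool export tank

# With all five new disks connected, build the raidz2...
zpool create -o ashift=12 newtank raidz2 \
    /dev/disk/by-id/ata-NEW1 /dev/disk/by-id/ata-NEW2 \
    /dev/disk/by-id/ata-NEW3 /dev/disk/by-id/ata-NEW4 \
    /dev/disk/by-id/ata-NEW5

# ...then offline one member and pull it to free a port.
zpool offline newtank /dev/disk/by-id/ata-NEW5

# Reconnect the 4th old disk, import the old pool and replicate.
zpool import tank
zfs snapshot -r tank@migrate
zfs send -R tank@migrate | zfs recv -F newtank

# Retire the old pool, reconnect the fifth disk, and let it resilver.
zpool export tank
zpool online newtank /dev/disk/by-id/ata-NEW5
```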