I originally had 8x 6TB in RAIDZ2, but I have been replacing each of those with 16TB ones. The last one is currently resilvering.
This is the first time I have expanded my pool, so what should I expect? Perhaps naively, I have been assuming that ZFS will see there is more space to use and just auto-expand the pool. Is that the case, or is there anything else I need to do once the resilver is finished?
It has worked that way since forever with FreeNAS and with CORE. But with SCALE, it seems iX forgot how to partition a disk or something. I know this issue was present in 23.10 and 24.04, and I think it was present in Bluefin as well, despite a number of bug reports in each case saying it’d be fixed in the next release. I hope they truly have it fixed now.
I really can’t believe it would be a deliberate decision to change that in SCALE, but then I’ve never understood the purpose of the “Expand” button in the GUI either. But based on what OP’s said, sounds like it is working in EE.
Turns out (at least in Dragonfish 24.04.2) how the drive is partitioned depends on how it was introduced into the pool.
So, if you have a mirror of two 4T disks, and you “replace” one with an 8T disk, the 8T disk is partitioned into one 4T partition.
But if you attached the disk instead (by “extending” the mirror), then it’s partitioned into an 8T partition.
And if you instead attach the disk as a spare… and then that spare is used to auto-replace the 4T disk… then you end up with an 8T replacement, rather than a 4T replacement.
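You can verify which of these cases you hit by comparing the disk size to the size of its ZFS data partition. A minimal sketch (device name `sdc` and pool name `tank` are placeholders; adjust for your system):

```shell
# Show the disk and its partitions; on SCALE the ZFS data
# partition is usually the large second partition.
lsblk -o NAME,SIZE,TYPE /dev/sdc

# Cross-check which partition the pool is actually using.
zpool status -P tank

# If the data partition is ~4T on an 8T disk, you hit the
# replace-time partitioning issue described above.
```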
And for ZFS auto-expand to work the disk needs to be partitioned to its full size… not the size of the disk it originally replaced.
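Assuming the partitions do cover the full disks, the remaining piece is the pool's `autoexpand` property. A hedged sketch of the usual checks (`tank` and `sdc` are placeholder names):

```shell
# Check whether the pool will grow automatically when all
# vdev members have been replaced with larger disks.
zpool get autoexpand tank

# Enable it if it's off.
zpool set autoexpand=on tank

# If the pool still reports the old size after the last
# resilver, tell ZFS explicitly to use the expanded space
# on each device.
zpool online -e tank sdc
```

With `autoexpand=on` set before the last resilver finishes, the extra capacity should appear on its own; `zpool online -e` is the manual fallback.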