So I recently upgraded my old backup NAS (Node 304) to a Jonsbo N3, giving me room for 2 additional disks. The original setup was a 6x8TB Z2, so I added two more 8TB disks, although initially only one of them showed up under “Unused Disks” in the UI. Since the expansion has to be done one disk at a time anyway, I pushed on.
From the demos I'd seen, I wasn't aware of how long the expansion would take. My pool was 80% full, and after 24 hours the expansion was still only at 68%. When I woke the next morning, one of the original disks had died and the expansion was paused at 95%. To remove the dead disk and swap in the one that hadn't shown up, I had to power off the system. I swapped the dead disk for the one that hadn't appeared before and rebuilt the pool, and while the UI now says it's 7 wide instead of 6, it still reports 80% full. Wouldn't adding 8TB reduce that number? I've done several reboots in case it was stale data, or has something stuffed up? Is there a way to re-do the expansion?
Is it worth running the ZFS rebalance script listed here? Again, I'm not great with scripts/commands, so before I go down that rabbit hole I'm just wondering if it will help.
Too many “but”…
Space reporting after raidz expansion is fundamentally broken: capacity is still calculated with the old data-to-parity ratio, so available space on the expanded vdev is under-reported. Rebalancing would only partially help, and there's not enough free space here.
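For reference, the usual in-place rebalance approach is simply to rewrite each existing file so its blocks land on the new 7-wide layout: copy the file, then replace the original with the copy. A rough illustration with a made-up path (the real scripts also verify checksums and preserve attributes):

# copy the file so its blocks are rewritten across all 7 disks
cp -a /mnt/tank/dataset/somefile /mnt/tank/dataset/somefile.rebalance
# replace the original with the rewritten copy
mv /mnt/tank/dataset/somefile.rebalance /mnt/tank/dataset/somefile

That's also why it needs free space: every file briefly exists twice while it's being rewritten.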
For now let’s just check the pool condition with sudo zpool status -v
(please post the output as formatted text, with the </> button).
Oh OK, then it's most likely just a reporting error. With that, and the zpool status being OK with no errors, I don't think I have much to worry about. Thanks for the help @etorix
It looks like, at the time you posted (before you got the extra space), the expansion process hadn't quite finished: your zpool status shows it was still running but was really close, at 98% with “55 minutes” to go.
There is an existing bug where, after a reboot, the UI doesn't show the resumed expansion process.
I got around this myself by running a looped zpool status with grep to show the progress of the expansion process.
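Something along these lines, assuming a pool called tank (adjust the pool name and the grep pattern to whatever your zpool status output shows):

# print the "expand:" line and the progress line under it, once a minute
while true; do
    sudo zpool status tank | grep -A 1 expand
    sleep 60
done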