Issues extending a ZFS pool

So I recently upgraded my old backup NAS (Node 304) to a Jonsbo N3, giving me room for 2 additional disks. The original setup was a 6x8TB Z2, so I added two 8TB disks, although initially only one displayed as "unused disks" in the UI, but since the expansion had to be done one disk at a time anyway, I pushed on.

From the demos I'd seen, I wasn't aware of how long the expansion would take, but my pool was 80% full, and after 24 hours it was still only at 68%. But when I woke the next morning, one of the original disks had died and the expansion was paused at 95%, and to remove the dead disk and swap in the one that didn't show up, I had to turn off the system. I swapped the dead disk with the one that hadn't shown up before and let the pool rebuild (resilver), but in the UI, while it does say the vdev is 7 wide instead of 6, it still reports 80% full. Shouldn't adding 8TB reduce that number? I've done several reboots in case it was old data, or has something stuffed up? Is there a way to re-do the expansion?

I'm not great with commands. I'll try, and I can run any specific commands you suggest, but I'm more comfortable with the UI, i.e. I'm a newb.

Is it worth doing a ZFS rebalance, like the script listed here? Again, I'm not great with scripts/commands, but before I go down that rabbit hole I'm just wondering if it will help.

Too many “but”…
Space reporting after raidz expansion is fundamentally broken: free space is still estimated with the pre-expansion data-to-parity ratio, so the percentage won't drop the way you'd expect. Rebalancing would only partially help, and there's not enough free space here for it anyway.

For now let’s just check the pool condition with
sudo zpool status -v
(please post the output as formatted text, with the </> button).
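
If you want to double-check the numbers yourself once the expansion has actually finished, comparing the pool-level and dataset-level views can help; roughly like this (replace POOLNAME with your pool's name):
sudo zpool list POOLNAME
sudo zfs list POOLNAME
zpool list reports raw capacity including parity, while zfs list (and the UI) report usable space, which after a raidz expansion is still estimated with the old data-to-parity ratio.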


Seems no errors

root@minime[~]# sudo zpool status -v
  pool: Z2_Volume
 state: ONLINE
  scan: resilvered 5.03T in 12:05:15 with 0 errors on Thu Dec 26 10:14:29 2024
expand: expansion of raidz2-0 in progress since Mon Dec 23 22:36:45 2024
        34.3T / 34.8T copied at 142M/s, 98.69% done, 00:55:52 to go
config:

        NAME                                      STATE     READ WRITE CKSUM
        Z2_Volume                                 ONLINE       0     0     0
          raidz2-0                                ONLINE       0     0     0
            1f5a9cc5-fa81-11e7-85a5-ac1f6b1c041c  ONLINE       0     0     0
            20411418-fa81-11e7-85a5-ac1f6b1c041c  ONLINE       0     0     0
            2125ead4-fa81-11e7-85a5-ac1f6b1c041c  ONLINE       0     0     0
            dabc4935-f355-4989-a70c-6e243f5336f2  ONLINE       0     0     0
            22d585f2-fa81-11e7-85a5-ac1f6b1c041c  ONLINE       0     0     0
            23c39a35-fa81-11e7-85a5-ac1f6b1c041c  ONLINE       0     0     0
            21fe283f-5fff-4287-a9e5-899b4ffe7d56  ONLINE       0     0     0

errors: No known data errors

  pool: boot-pool
 state: ONLINE
  scan: scrub repaired 0B in 00:00:59 with 0 errors on Tue Dec 24 03:46:00 2024
config:

        NAME        STATE     READ WRITE CKSUM
        boot-pool   ONLINE       0     0     0
          sda3      ONLINE       0     0     0

errors: No known data errors

Oh OK, then it's most likely just a reporting error. That, plus the zpool status coming back clean with no errors, means I don't think I have much to worry about. Thanks for the help @etorix


As soon as I posted my reply, I went back to the dashboard to find this!

Thanks again @etorix

It looks like, at the time you posted (before you got the extra space), the expansion process hadn't quite finished: your zpool status shows it was still going, but it was really close to the end at 98% with about 55 minutes to go.

There is an existing bug where, after a reboot, the UI doesn't show the resumed expansion process.

I got around this myself by running a looped zpool status with grep to show the progress of the expansion process.
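
For anyone who wants to do the same, it was along these lines (just a rough sketch; substitute your own pool name and adjust the interval to suit, using Z2_Volume from the output above as an example):
while true; do sudo zpool status Z2_Volume | grep -A1 expand; sleep 60; done
That prints the "expand:" line plus the progress line under it every minute, so you can watch the percentage tick up even while the UI isn't showing it.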

Good to see your expansion completed successfully.


Resilvered and expanded, all drives identified by UUID. All in order. :+1:
