Degraded Pool During RAIDZ Expansion

Running 24.10.0.2.

I went to test out RAIDZ expansion. I have a RAIDZ2 12-disk single-VDEV pool and am trying to expand it by another disk. The disks are 12 TB (10.91 TiB).

About 25% into the expansion process I got a notification that the “ATA error count increased from 0 to 1” on one of the existing disks in the pool, that the pool was degraded, and that the expansion process was paused until the error was cleared. I put in another disk, replaced the faulty one, and the pool is back ONLINE.

But the expansion job did not seem to resume automatically and I couldn’t see any option in the Job list to resume it. I ended up rebooting the server.

The GUI says it is a 13-wide VDEV, but it reports the usable capacity as 99.61 TiB. I don’t think 99.61 TiB can be correct for an expanded vdev; I think that’s the same usable space as before the expansion attempt.

Is there some way I can drill into the state of the failed expansion and either complete it or undo it? Can I tell whether blocks are actually being written to the 13th disk? Am I going to have to restore the whole pool from a backup? The pool is ONLINE and working, but I’m concerned it is borked in some way.

There are already quite a few posts about GUI and available-space reporting issues after RAIDZ expansion.
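
If you want to see what ZFS itself thinks rather than what the GUI shows, something along these lines from a shell should tell you whether the expansion is still running and whether the new disk is actually taking writes (substitute your own pool name for “pool”):

$ sudo zpool status -v pool
$ sudo zpool iostat -v pool 5

The expand: line in the status output gives progress and an ETA, and the per-disk iostat output should show write activity on the 13th disk while the reflow is running.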

1 Like

Thanks. If I go to the command line, I do now see some indication that the expansion is continuing, slowly, despite the GUI getting stuck:

$ sudo zpool status
pool: pool
state: ONLINE
scan: resilvered 6.58T in 14:46:52 with 0 errors on Sat Nov 16 13:26:14 2024
expand: expansion of raidz2-0 in progress since Fri Nov 15 21:05:39 2024
6.31T / 82.9T copied at 91.9M/s, 7.61% done, 10 days 02:44:05 to go

Seems like it won’t be done 'til Thanksgiving though. I didn’t think adding a disk would take that long, at least not based on the initial progress before the disk faulted.

Expansion only works on a healthy pool. So here the expansion was suspended until the 12-wide raidz2 had fully resilvered, and then it picked up where it left off. And expansion does rewrite all existing data across the disks, even though it does not reflow existing data to the new stripe width, so it is bound to take quite some time for a pool which holds over 80 TB.
As for space accounting after expansion, that’s indeed deeply borked.
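
A back-of-the-envelope sketch, ignoring slop space, allocation padding and whatever other overhead the GUI subtracts:

12-wide raidz2: 12 x 10.91 TiB raw ≈ 131 TiB, data fraction 10/12 ≈ 109 TiB
13-wide raidz2: 13 x 10.91 TiB raw ≈ 142 TiB, data fraction 11/13 ≈ 120 TiB

As far as I understand, the reported capacity keeps using the old 10/12 ratio (and already-written blocks genuinely keep that ratio until they are rewritten), which is why the GUI still shows the pre-expansion number.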

12-wide raidz2 is already at the upper boundary of what’s considered safe, so you’re pushing it. Any further resilver and/or expansion is going to take forever and some days.

3 Likes

It’s going now and the ETA is dropping, so I suspect the rate is being calculated from the start of the expansion, without accounting for the time spent resilvering the faulted drive. The resilvering took about 15 hours, I think. I understand a 13-disk vdev isn’t ideal, but the 12 was working OK. I’m looking into the best way (economically) to migrate to a 2x 8-drive vdev layout.
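
Rough numbers seem to back that up, assuming the reported rate is just total bytes copied divided by wall-clock time since the expansion started (pause included):

6.31 TiB / 91.9 MiB/s ≈ 20 h, i.e. wall clock since the start, ~15 h resilver pause included
6.31 TiB / ~5 h of actual copying ≈ 350–370 MiB/s
(82.9 − 6.31) TiB at that rate ≈ 2.5–3 days, not 10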

I wasn’t imminently running out of space on the pool, so mostly this was just to try out the feature. If it gets through the expansion uncorrupted despite the pool degradation in the middle of the process, I’m impressed.

1 Like

I have a very similar issue: a drive failed during the expansion and I had to replace and resilver it. The expansion is still going… but it is so slow!

After 19 days I’m only at 30.72%.

Now it won’t even calculate an ETA, but based on rough math it seems I have about 41 more days remaining. The sad part is that after this is done, I plan to add one more disk… I fear how long that will take…

root@truenas[~]# zpool status data
pool: data
state: ONLINE
scan: resilvered 3.90T in 16:53:25 with 0 errors on Mon Nov 4 10:49:32 2024
expand: expansion of raidz1-0 in progress since Fri Nov 1 17:43:17 2024
4.94T / 16.1T copied at 3.13M/s, 30.72% done, (copy is slow, no estimated time)
config:

    NAME                                      STATE     READ WRITE CKSUM
    data                                      ONLINE       0     0     0
      raidz1-0                                ONLINE       0     0     0
        e077b967-57eb-4cb5-abb2-149d99dfdae0  ONLINE       0     0     0
        b7683a54-6aff-8b45-b68b-0e7fd4361d93  ONLINE       0     0     0
        d8390cb5-0673-3742-a323-f8bd762ea8ec  ONLINE       0     0     0
        13178070-5ded-1a42-ab01-950485d986be  ONLINE       0     0     0
        473195e3-0769-4497-ae57-c9720ed5278d  ONLINE       0     0     0

errors: No known data errors
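
For what it’s worth, the rough math is just the remaining (16.1 − 4.94) ≈ 11.2 TiB divided by the reported 3.13 MiB/s, which comes out to a bit over 40 days.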

1 Like

Based on that performance, I’d suspect you have SMR drives, which are particularly ill-suited for use with ZFS, especially when resilvering or reflowing data (as during expansion).
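
If you’re not sure what you have, the drive models are easy to pull from a shell (the device name below is just a placeholder):

$ lsblk -d -o NAME,MODEL,SIZE
$ sudo smartctl -i /dev/sda

You can then check the model numbers against the manufacturer’s published SMR/CMR lists.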

1 Like

My expansion did complete. The “copied at” speed kept increasing, and the actual elapsed time, excluding the paused time, ended up being a bit under 3 days IIRC for adding a 12 TB ST12000VN0007 to my original 12-disk RAIDZ2 array. I’m guessing the throughput calculation doesn’t account for any time the expansion is paused. I have an LSI 9300-16i HBA.

If you only have 4 TB disks, I think either something is wrong or you have SMR drives, as the previous poster indicated. I don’t know your system details, but based on my one experience I’d expect adding a 4 TB drive to take about a day of expansion time, excluding any pauses. I’m not sure why the process is slower than resilvering a failed drive.

All data is read and reflowed across all drives…

In a resilver, only the one drive is written to.
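
Using the numbers earlier in this thread as a rough illustration: the resilver only had to rebuild one disk’s worth of data (about 6.58 T in ~15 hours), while the expansion has to read and rewrite everything the pool holds (about 82.9 T), roughly twelve times as much, so even at healthy throughput it is measured in days.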

1 Like

Can RAID-Z expansion only add one disk at a time? I initially thought it was possible to add multiple disks simultaneously, but it seems to support only one at a time. I tried adding a new 18 TB disk to an existing 6 x 18 TB RAID-Z2, and the speed was about 40 MB/s, which works out to nearly a month. Moreover, during this period the entire pool’s responsiveness is extremely slow, to the point where it is almost unusable.

Do you happen to have a scrub going at the same time?
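
If there is one, it will show up in the scan: line, e.g. (“tank” here is a placeholder for your pool name):

$ sudo zpool status tank | grep -A 2 scan:

A scrub competing with the reflow for the same disks would go a long way toward explaining both the low rate and the sluggishness.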

1 Like