Power failure during pool expansion

Hello TrueNAS guys,
I was expanding a raidz1 pool from 6 wide to 7 wide and was 80% complete when I had a power failure and was forced to shut down the server. Once I was able to reboot, TrueNAS did not restart the pool expansion.

The 7th drive is visible under the Topology / Manage Devices tab, but the extra drive space (4 TB) is not showing. Is there a way to resume the pool expansion from where it was prior to the forced shutdown?

Many thanks in advance

@Makaone I was actually testing this in an earlier thread about space accounting - expansion survives even a hard poweroff, so the expansion has probably happened, but the space reporting might not be correct.

Can you drop to a shell (System → Shell) and post the output of zpool status -v? It should show the state of your expansion operation in the header. Assuming the expansion has completed, I'd suggest firing off a scrub task manually (Storage → ZFS Health → Scrub) and seeing if that helps.
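For reference, the shell equivalents would be roughly the following, with "yourpool" standing in for your actual pool name:

# show pool status; an in-progress raidz expansion appears on the "expand:" line
zpool status -v yourpool

# start a scrub manually if the GUI task doesn't kick off
zpool scrub yourpool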

@HoneyBadger, thanks for the quick reply. Below is the output as requested. As you can see, it still says the pool expansion is in progress, but nothing is showing in the web UI.

root@truenas[~]# zpool status -v Media
pool: Media
state: ONLINE
scan: scrub in progress since Tue Oct 29 02:52:37 2024
1.54T / 2.49T scanned at 823M/s, 182G / 2.49T issued at 94.8M/s
0B repaired, 7.13% done, 07:06:52 to go
expand: expansion of raidz1-0 in progress since Sun Oct 27 00:05:42 2024
2.11T / 2.49T copied at 12.0M/s, 84.53% done, 09:23:50 to go
config:

    NAME                                      STATE     READ WRITE CKSUM
    Media                                     ONLINE       0     0     0
      raidz1-0                                ONLINE       0     0     0
        sdb2                                  ONLINE       0     0     0
        sdc2                                  ONLINE       0     0     0
        sdd2                                  ONLINE       0     0     0
        sde2                                  ONLINE       0     0     0
        sdf2                                  ONLINE       0     0     0
        sdg2                                  ONLINE       0     0     0
        9fcf6d80-c89a-4d63-9c2a-9742edb40fa3  ONLINE       0     0     0

Probably a bug in the web GUI.

@HoneyBadger - Did you want to verify, and if confirmed, report it for the user?

You're running a scrub at the same time as the expansion - that's definitely going to slow things down. I'd suggest stopping the scrub for now, letting the expansion finish, and then re-running the scrub.
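If the GUI doesn't give you a way to stop it, the shell commands should be roughly:

# stop the running scrub so the expansion gets the disk bandwidth back
zpool scrub -s Media

# once the "expand:" line reports the expansion has finished, start the scrub again
zpool scrub Media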

@Arwen I'll check on this to see whether the job/status isn't preserved after a reboot. I admit I didn't look for that before - I was testing with pretty small SSDs, so the expansions would often complete at almost the same time as the middleware came up and I was able to log in :stuck_out_tongue:


I ran a scrub once the expansion completed, and everything is now showing as expected.

One thing is for sure: it takes quite a while to add a 4 TB SAS drive to a pool.

Patience is a virtue that unfortunately I don't possess. :wink:

I don't like the look of that zpool status result. It's showing the sd?2 device names six times and a UUID once, where it should show all UUIDs.

I would suggest exporting the pool and then importing from the GUI - but please make sure that you have a backup first - just in case.

@NugentS I did what you suggested, but it still shows the same as the previous output. The only thing that I can think of is that the first 6 drives are identical Seagate drives, and the last one is a Hitachi drive.
I also have another pool of Seagate 600m drives, 6 wide, and they show as

root@truenas[~]# zpool status -v Data
pool: Data
state: ONLINE
scan: scrub repaired 0B in 00:01:00 with 0 errors on Sun Oct 27 00:01:02 2024
config:

    NAME        STATE     READ WRITE CKSUM
    Data        ONLINE       0     0     0
      raidz1-0  ONLINE       0     0     0
        sdk2    ONLINE       0     0     0
        sdl2    ONLINE       0     0     0
        sdm2    ONLINE       0     0     0
        sdn2    ONLINE       0     0     0
        sdo2    ONLINE       0     0     0
        sdp2    ONLINE       0     0     0

errors: No known data errors

So I really have no idea what's going on.

When I type zpool status -v BigPool (at an SSH prompt), I get:
root@NewNAS[/mnt/BigPool/SMB/NewNAS-Scripts]# zpool status -v BigPool
pool: BigPool
state: ONLINE
status: Some supported and requested features are not enabled on the pool.
The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
the pool may no longer be accessible by software that does not support
the features. See zpool-features(7) for details.
scan: scrub repaired 0B in 08:43:51 with 0 errors on Mon Oct 28 08:44:02 2024
config:

    NAME                                      STATE     READ WRITE CKSUM
    BigPool                                   ONLINE       0     0     0
      mirror-0                                ONLINE       0     0     0
        a28f66e6-c1c2-43bd-9242-0fa87b834983  ONLINE       0     0     0
        a1ed67bb-d434-45f4-9fdd-17d063a096d2  ONLINE       0     0     0
      mirror-1                                ONLINE       0     0     0
        b8f03a13-3686-4363-b7d5-84307822db42  ONLINE       0     0     0
        42f0f7d7-3df4-4ef4-ab5d-174d3acd1d70  ONLINE       0     0     0
      mirror-2                                ONLINE       0     0     0
        f4f1266d-e031-42df-bea1-24c4feceb500  ONLINE       0     0     0
        8a12053c-25bd-4b19-9387-4a3f668e32f1  ONLINE       0     0     0
      mirror-3                                ONLINE       0     0     0
        7c9403fc-955f-4c99-a24c-24e8d70d228b  ONLINE       0     0     0
        8f6e2346-9cdf-4679-ae83-2112ec68cb59  ONLINE       0     0     0
      mirror-7                                ONLINE       0     0     0
        f9055bc1-acbe-4f42-974f-ff8fb8095d4a  ONLINE       0     0     0
        5fe1e4ae-7362-4fa4-8805-c962bbe1378b  ONLINE       0     0     0
    special
      mirror-4                                ONLINE       0     0     0
        9a6c58bc-702f-4701-b82b-93d1d8a6cddf  ONLINE       0     0     0
        e64de128-9242-4f0b-85d3-4022579826b5  ONLINE       0     0     0
    logs
      c02745a1-9e52-48fd-81b1-abcb2daaa459    ONLINE       0     0     0

errors: No known data errors
root@NewNAS[/mnt/BigPool/SMB/NewNAS-Scripts]#

This tells me that something is wrong at your end. If you have exported and then imported (in the GUI), then it should be showing as mine does.

Thanks for the quick reply, but unfortunately I cannot get it to display all UUIDs.
It seems to work fine, but I don't know how to proceed to correct the issue…

I vaguely recall doing something like this:

  • Use the GUI to export the pool.
  • Manually import the pool using the "-d" option to scan only the GUID disk directory.
  • If that worked and the pool imported using the GUIDs, manually export the pool again.
  • A new GUI import should "remember" how the pool was previously imported and preserve the GUID paths.

I think the problem is that the "zpool.cache" file remembers the pool and its configuration. Manually forcing a change should solve the problem.

But, your mileage may vary…
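As a rough, untested sketch (assuming the GUID partition links live under /dev/disk/by-partuuid on your install), it would look something like:

# export the pool first (the GUI export/disconnect does the same thing)
zpool export Media

# re-import, telling ZFS to scan only the GUID-named partition links
zpool import -d /dev/disk/by-partuuid Media

# confirm the pool members now show up by GUID
zpool status -v Media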

If you try it, let us know the results.

@Arwen @HoneyBadger I have noticed that the drives not showing a UUID have a 2 GB swap partition, whereas the one drive showing the UUID does not have the 2 GB partition.
Would that have anything to do with it?
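For what it's worth, the partition layouts can be compared side by side with lsblk (assuming the standard util-linux column names):

# list every disk's partitions with their sizes and partition UUIDs
lsblk -o NAME,SIZE,TYPE,PARTUUID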