Hello TrueNAS guys,
I was expanding a RAIDZ1 pool from 6 wide to 7 wide and was about 80% complete when I had a power failure and was forced to shut down the server. Once I was able to reboot, TrueNAS did not restart the pool expansion.
The 7th drive is visible under the Topology / Manage Devices tab, but the extra drive space (4 TB) is not showing. Is there a way to resume the pool expansion from where it was prior to the forced shutdown?
@Makaone I was actually testing this in an earlier thread around space accounting - expansion survives even a hard poweroff, so the expansion has probably happened, but the space reporting might not be correct.
Can you drop to a shell (System → Shell) and post the output of zpool status -v? It should show the state of your expansion operation in the header. Assuming the expansion has completed, I'd suggest firing off a scrub task manually (Storage → ZFS Health → Scrub) and seeing if that helps.
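If you prefer the command line, the equivalents should be something like the below - just a sketch, with POOLNAME standing in for your actual pool name:
root@truenas[~]# zpool status -v POOLNAME
root@truenas[~]# zpool scrub POOLNAME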
@HoneyBadger, thanks for the quick reply. Below is the output as requested. As you can see, it still says the pool expansion is in progress, but nothing is showing in the web UI.
root@truenas[~]# zpool status -v Media
pool: Media
state: ONLINE
scan: scrub in progress since Tue Oct 29 02:52:37 2024
1.54T / 2.49T scanned at 823M/s, 182G / 2.49T issued at 94.8M/s
0B repaired, 7.13% done, 07:06:52 to go
expand: expansion of raidz1-0 in progress since Sun Oct 27 00:05:42 2024
2.11T / 2.49T copied at 12.0M/s, 84.53% done, 09:23:50 to go
config:
You're running a scrub at the same time as the expansion - that's definitely going to slow things down. I'd suggest stopping the scrub for now, letting the expansion finish, and then re-running the scrub.
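From the shell, stopping and later restarting the scrub should look something like this (assuming the pool name Media from your output; zpool scrub -s stops an in-progress scrub):
root@truenas[~]# zpool scrub -s Media
root@truenas[~]# zpool scrub Media
The second command would only be run once zpool status shows the expansion has finished.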
@Arwen I'll check on this to see whether the job/status isn't preserved after a reboot. I admit I didn't look for that before; I was testing with pretty small SSDs, so the expansions would often complete almost as soon as the middleware came up and I was able to log in.
@NugentS I did what you suggested, but it still shows as per the previous output. The only thing that I can think of is that the first 6 drives are identical Seagate drives, and the last one is a Hitachi drive.
I also have another pool of Seagate 600m drives (6 wide), and it shows as:
root@truenas[~]# zpool status -v Data
pool: Data
state: ONLINE
scan: scrub repaired 0B in 00:01:00 with 0 errors on Sun Oct 27 00:01:02 2024
config:
I get:
root@NewNAS[/mnt/BigPool/SMB/NewNAS-Scripts]# zpool status -v BigPool
pool: BigPool
state: ONLINE
status: Some supported and requested features are not enabled on the pool.
The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
the pool may no longer be accessible by software that does not support
the features. See zpool-features(7) for details.
scan: scrub repaired 0B in 08:43:51 with 0 errors on Mon Oct 28 08:44:02 2024
config:
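(Side note: that status/action text just means newer ZFS feature flags haven't been enabled on this pool. If you ever want to enable them, the command it refers to would be run roughly as below - with the caveat it mentions that older software may then be unable to import the pool. It's unrelated to the expansion issue.)
root@NewNAS[~]# zpool upgrade BigPool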
Thanks for the quick reply, but unfortunately I cannot get it to display all UUIDs.
It seems to work fine, but I don't know how to proceed to correct the issue...
@Arwen @HoneyBadger I have noticed that the drives not showing the UUID have a 2 GB swap partition, whereas the one drive that shows the UUID does not have the 2 GB partition.
Would that have anything to do with it?
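In case it helps, the partition layout difference can be seen with something like this (a sketch assuming TrueNAS SCALE / Linux, with sda as a placeholder for one of the actual disks; on CORE the equivalent would be gpart show da0):
root@truenas[~]# lsblk -o NAME,SIZE,TYPE,PARTUUID /dev/sda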