Unable to grow pool after replacing some smaller disks in pool

Hi!

Sorry if I'm placing this in the wrong category, but I hope General can be used for troubleshooting.

I “think” I'm a victim of the bug related to disk expansion. However, I'm currently running TrueNAS-SCALE-23.10.2, where the bug should be fixed according to this thread? Here is my zpool list -v output:

NAME                                       SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
boot-pool                                  220G  3.00G   217G        -         -     1%     1%  1.00x    ONLINE  -
  mirror-0                                 220G  3.00G   217G        -         -     1%  1.36%      -    ONLINE
    nvme0n1p3                              222G      -      -        -         -      -      -      -    ONLINE
    nvme1n1p3                              222G      -      -        -         -      -      -      -    ONLINE
sata                                      14.5T  5.93T  8.62T        -         -     1%    40%  1.00x    ONLINE  /mnt
  raidz2-0                                14.5T  5.93T  8.62T        -         -     1%  40.8%      -    ONLINE
    f9600503-dd2e-42bd-beab-7e02a9aa413d  1.82T      -      -        -         -      -      -      -    ONLINE
    9d92549d-d9d5-4128-b6d9-cd3b10aad433  1.82T      -      -        -         -      -      -      -    ONLINE
    a261ed21-a76d-414b-9481-cd751031f957  1.82T      -      -        -         -      -      -      -    ONLINE
    e52c12b8-5aff-4d49-8597-00c8db9170f8  2.73T      -      -        -         -      -      -      -    ONLINE
    a65976bb-497f-470b-b332-8f57023e72eb  2.73T      -      -        -         -      -      -      -    ONLINE
    f2cb26f4-93c4-4d2e-9568-517c4b929d00  2.73T      -      -        -         -      -      -      -    ONLINE
    a0b85b8d-b75b-4fda-93c4-9d766eb6f07f  2.73T      -      -        -         -      -      -      -    ONLINE
    9a6bfe21-0cf0-497e-a14b-feeed007ccab  2.73T      -      -        -         -      -      -      -    ONLINE

Autoexpand is on for the sata pool.
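For reference, this can be confirmed with the standard zpool property query (sata being my pool name):

sudo zpool get autoexpand sata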

I have run the following commands, without success:
sudo zpool online -e sata f9600503-dd2e-42bd-beab-7e02a9aa413d
sudo zpool online -e sata 9d92549d-d9d5-4128-b6d9-cd3b10aad433
sudo zpool online -e sata a261ed21-a76d-414b-9481-cd751031f957

There was a tip in the thread above to detach, reattach, and resilver, but I wasn't allowed to detach from the terminal, since detach only works on mirrors and my vdev is raidz2.

Any help would be very much appreciated!

Best regards,
Erlend

It seems like the partition size is the problem. Would it be possible to grow the partitions easily, or must I do the whole detach, wipe, reimport, resilver cycle? Here is the lsblk output:

NAME        MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINTS
sda           8:0    0   2.7T  0 disk  
└─sda1        8:1    0   2.7T  0 part  
sdb           8:16   0   2.7T  0 disk  
└─sdb1        8:17   0   2.7T  0 part  
sdc           8:32   0   2.7T  0 disk  
└─sdc1        8:33   0   2.7T  0 part  
sdd           8:48   0   2.7T  0 disk  
└─sdd1        8:49   0   2.7T  0 part  
sde           8:64   0   2.7T  0 disk  
└─sde1        8:65   0   2.7T  0 part  
sdf           8:80   0   2.7T  0 disk  
└─sdf1        8:81   0   1.8T  0 part  
sdg           8:96   0   2.7T  0 disk  
└─sdg1        8:97   0   1.8T  0 part  
sdh           8:112  0   2.7T  0 disk  
└─sdh1        8:113  0   1.8T  0 part 
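(To map the pool's partuuid labels from zpool list -v onto these device names, lsblk can print them as well:

sudo lsblk -o NAME,SIZE,PARTUUID)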

Unfortunately, it wasn’t fixed in 23.10.2.

See this link for a solution:

https://wiki.familybrown.org/en/manual-replacement


@erlendfalch

I am definitely not a ZFS expert, but according to this page:

“After rebooting, you can use zpool offline pool partition followed by zpool online -e pool partition for each partition to expand the pool to use all available space.”

So the reason your zpool online -e commands failed is probably that you hadn't preceded each one with a matching zpool offline.

Important note: Only offline/online a single drive at a time and make sure that the pool has full redundancy restored before you offline the next drive.
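As a rough sketch of one round (I'm guessing at the pairing here: the partuuid is one of your 1.82T members and /dev/sdf one of the disks with a 1.8T partition; match them with lsblk -o NAME,PARTUUID before running anything):

sudo zpool offline sata f9600503-dd2e-42bd-beab-7e02a9aa413d
sudo parted /dev/sdf resizepart 1 100%
sudo zpool online -e sata f9600503-dd2e-42bd-beab-7e02a9aa413d

Then wait for the pool to return to full redundancy before moving on to the next disk.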

Very much appreciated, I’ll attempt this later today!

Worked like a charm following the commands described in the article! Note to anyone else reading this: the console threw an error, but the partition size was still corrected.

root@zeus[/nonexistent]# parted /dev/sdf resizepart 1 100%
Failed to add inotify watch for /run/udev: Too many open files
Failed to add inotify watch for /run/udev: Too many open files
Information: You may need to update /etc/fstab.
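For anyone following along: the same offline / resizepart / online -e round has to be repeated for each of the three 1.8T partitions, and (assuming the same pool name) the result can be checked with:

sudo zpool list -v sata

A raidz2 vdev only grows once every member disk has been expanded.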
