How to expand a zpool on pre-existing disks with more capacity?

About 18 months ago I replaced my four 2TB disks with four 4TB disks. When I replaced the last disk, instead of expanding the pool it showed the disk as offline. (I can’t remember the exact details; I had some screenshots and logs of the commands and UI, but I’ve misplaced them.) It was very similar to this: https://ixsystems.atlassian.net/browse/NAS-126809.

My pool is a 4-wide RAIDZ1 of 3.64TiB disks, and it shows a usable capacity of 5.16TiB.

I’ve just updated to 24.10 and RTFM’d (https://www.truenas.com/docs/scale/24.10/scaletutorials/storage/managepoolsscale/#expanding-a-pool), but I can’t see anything in there about my specific situation.

Some Googling led me to https://postgres.ai/docs/how-to-guides/administration/add-disk-space-to-zfs-pool, but that guide comes unstuck at the growpart step:

root@truenas[/home/admin]# growpart /dev/sde1 1
zsh: command not found: growpart
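
As an aside, growpart apparently comes from Debian’s cloud-guest-utils package, which doesn’t seem to be shipped with SCALE (and its usage is growpart <disk> <partition-number>, e.g. growpart /dev/sde 1, not the partition device). The stock parted can at least show whether there is unallocated space sitting after the ZFS partition; something like this, assuming /dev/sde is one of the 4TB disks:

# Print the partition table in sectors, including free space,
# for one of the 4TB members (read-only, nothing is changed):
parted /dev/sde unit s print free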

Is it possible to expand my pool on the existing disks, and if so, how?

For information, here’s zpool status:

root@truenas[/home/admin]# zpool status big-pool
  pool: big-pool
 state: ONLINE
  scan: resilvered 432K in 00:00:02 with 0 errors on Thu Nov  7 20:52:42 2024
config:

        NAME                                      STATE     READ WRITE CKSUM
        big-pool                                  ONLINE       0     0     0
          raidz1-0                                ONLINE       0     0     0
            sdi1                                  ONLINE       0     0     0
            5b2d8f7e-ee35-4e7a-961a-7c310ab336d7  ONLINE       0     0     0
            1ed1947d-a191-48b4-992a-2a3e1d25074c  ONLINE       0     0     0
            bb9ec3ee-545f-4e6a-a4a4-92b71411924b  ONLINE       0     0     0

errors: No known data errors

and lsblk:

root@truenas[/home/admin]# lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda      8:0    0 931.5G  0 disk
└─sda1   8:1    0 931.5G  0 part
sdb      8:16   0 931.5G  0 disk
└─sdb1   8:17   0 931.5G  0 part
sdc      8:32   0 931.5G  0 disk
└─sdc1   8:33   0 931.5G  0 part
sdd      8:48   0 931.5G  0 disk
└─sdd1   8:49   0 931.5G  0 part
sde      8:64   0   3.6T  0 disk
└─sde1   8:65   0   1.8T  0 part
sdf      8:80   0   3.6T  0 disk
└─sdf1   8:81   0   1.8T  0 part
sdg      8:96   0   3.6T  0 disk
└─sdg1   8:97   0   1.8T  0 part
sdh      8:112  0   7.4G  0 disk
└─sdh1   8:113  0   7.4G  0 part
sdi      8:128  0   3.6T  0 disk
└─sdi1   8:129  0   1.8T  0 part
sdj      8:144  0 238.5G  0 disk
├─sdj1   8:145  0     1M  0 part
├─sdj2   8:146  0   512M  0 part
├─sdj3   8:147  0   222G  0 part
└─sdj4   8:148  0    16G  0 part
zd0    230:0    0     2T  0 disk

The disks in question are sde, sdf, sdg and sdi.

Fixed your URLs @GregBrrr :slight_smile:

big-pool being online is good. You’ve obviously rebooted several times since then and the pool is importing properly, so the disks and ZFS labels themselves seem to be healthy.

Have you tried the “Expand” option in the Storage Dashboard?

[Screenshot of the Expand button on the Storage Dashboard]

I suspect that the SCALE equivalent is parted /dev/sde resizepart 1 100%.

But using the UI is preferable.
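
If the UI route didn’t work, my understanding of the manual path (untested here, so treat it as a sketch) would be roughly: grow partition 1 on each of the four members, then tell ZFS to pick up the new space on each one. With RAIDZ1 the pool only grows once every member of the vdev has been expanded.

# Repeat for each 4TB member (sde, sdf, sdg, sdi).
# parted may ask for confirmation because the partition is in use:
parted /dev/sde resizepart 1 100%
partprobe /dev/sde

# Then, for each member, using the name exactly as zpool status prints it
# (e.g. sdi1 or the PARTUUID), ask ZFS to claim the new space:
zpool online -e big-pool sdi1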

Do you know why your pool uses PARTUUIDs for 3 drives and the device name for the 4th?
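
To see how those PARTUUIDs map back to device names, lsblk can print the column directly; something like:

lsblk -o NAME,SIZE,PARTUUID /dev/sde /dev/sdf /dev/sdg /dev/sdi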

But I don’t believe there’s any way of doing this in the UI, and disk replacement to grow a pool has been broken in SCALE for a long time, despite many bugs having been filed (and closed as “fixed”). I’ve seen reports that this is fixed in EE, though.

Correct.

(Thanks for fixing the URLs)

I don’t know how I missed that button. I was going to big-pool > Manage Devices > RAIDZ1 > Extend, which was only offering to replace or add a disk.

The Expand button took seconds; big-pool is now 10.45TiB.
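
For anyone finding this later, the result can also be checked from the shell; roughly:

# Per-vdev sizes and any remaining unexpanded space (EXPANDSZ column):
zpool list -v big-pool

# Whether the pool will grow automatically onto larger devices in future:
zpool get autoexpand big-pool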

I’ve no idea about the UUIDs. I had just tried taking sdi offline to see if I could “replace” it with itself and trigger the expansion that way; I hadn’t looked at zpool status before that, so I can’t say whether it was like that before. I’ve just rebooted to see what happens, and interestingly sdi is now sde:

root@truenas[/home/admin]# zpool status big-pool
  pool: big-pool
 state: ONLINE
  scan: resilvered 432K in 00:00:02 with 0 errors on Thu Nov  7 20:52:42 2024
config:

        NAME                                      STATE     READ WRITE CKSUM
        big-pool                                  ONLINE       0     0     0
          raidz1-0                                ONLINE       0     0     0
            sde1                                  ONLINE       0     0     0
            5b2d8f7e-ee35-4e7a-961a-7c310ab336d7  ONLINE       0     0     0
            1ed1947d-a191-48b4-992a-2a3e1d25074c  ONLINE       0     0     0
            bb9ec3ee-545f-4e6a-a4a4-92b71411924b  ONLINE       0     0     0

errors: No known data errors
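
(Side note: if I ever want all four members listed by PARTUUID again, my understanding is that an export followed by a re-import pointed at /dev/disk/by-partuuid would do it, ideally via the UI’s Export/Disconnect and Import flow rather than the shell. The underlying commands look roughly like this, not something I’ve run on this pool:)

zpool export big-pool
zpool import -d /dev/disk/by-partuuid big-pool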