Expanding pool to use full disk sizes

I have a RAIDZ1 pool of four 4TB disks that isn't using the full disk capacity, because the pool originally used 2TB disks that were individually upgraded to 4TB over time.

So the pool still only sees and uses them as 2TB disks, not 4TB.

What is the process to expand the pool in this scenario? And is there any risk of data loss?

If I just click the “Expand” button, I get an error.

[EFAULT] Command sgdisk -d 1 -n 1:2048:+3904920576KiB -t 1:BF01 -u 1:0c29fd38-a52a-4ca4-ad0c-cb2fe9d807f6 /dev/sda failed (code 4): Could not create partition 1 from 2048 to 7809843199 Could not change partition 1's type code to BF01! Error encountered; not saving changes.

Try running the lsblk and zpool status commands below so we can see what your disks and pools currently look like.

Thanks to Protopia for the following.
‘I have a standard set of commands I ask people to run to provide a detailed breakdown of the hardware, so please run these and post the output here (with the output of each command inside a separate </> (Ctrl+E) preformatted text box) so that we can all see the details:’

lsblk -bo NAME,MODEL,ROTA,PTTYPE,TYPE,START,SIZE,PARTTYPENAME,PARTUUID
sudo zpool status -v
sudo zpool import
lspci
sudo storcli show all
sudo sas2flash -list
sudo sas3flash -list

Edited down to just the pool in question:

root@truenas[~]# lsblk -bo NAME,MODEL,ROTA,PTTYPE,TYPE,START,SIZE,PARTTYPENAME,PARTUUID
NAME MODEL ROTA PTTYPE TYPE      START          SIZE PARTTYPENAME PARTUUID
sda  ST400    1 gpt    disk            4000787030016              
├─sda1
│             1 gpt    part       2048 1998254506496 Solaris /usr & Apple ZFS
│                                                                 0c29fd38-a52a-4ca4-ad0c-cb2fe9d807f6
└─sda2
              1 gpt    part 3902844928    2147484160 Linux swap   dd72c6e2-e485-4f41-8069-5baf3b0f435a
sdb  ST400    1 gpt    disk            4000787030016              
├─sdb1
│             1 gpt    part       2048 1998253457920 Solaris /usr & Apple ZFS
│                                                                 2746c644-6338-4024-8385-16f3779b5390
└─sdb2
              1 gpt    part 3902842880    2147484160 Linux swap   b2c2cacd-b578-47c9-a776-ac72b7a7ae7c
sdc  ST400    1 gpt    disk            4000787030016              
├─sdc1
│             1 gpt    part        128    2147418624 Linux swap   afe1671a-3431-419b-aeb6-59098b28c476
└─sdc2
              1 gpt    part    4194432 3998639463936 Solaris /usr & Apple ZFS
                                                                  9e730103-0567-422e-9574-d432b1af1292
sdd  ST400    1 gpt    disk            4000787030016              
├─sdd1
│             1 gpt    part        128    2147418624 Linux swap   588f8179-21c4-4c66-acac-3d9f1ca25e87
└─sdd2
              1 gpt    part    4194432 3998639463936 Solaris /usr & Apple ZFS
                                                                  10df4487-6743-4c99-a3f0-07f2b88d1312
root@truenas[~]# sudo zpool status -v
  pool: pool1
 state: ONLINE
  scan: scrub repaired 0B in 04:14:36 with 0 errors on Sun Jun 15 04:14:47 2025
config:

	NAME                                      STATE     READ WRITE CKSUM
	pool1                                     ONLINE       0     0     0
	  raidz1-0                                ONLINE       0     0     0
	    0c29fd38-a52a-4ca4-ad0c-cb2fe9d807f6  ONLINE       0     0     0
	    2746c644-6338-4024-8385-16f3779b5390  ONLINE       0     0     0
	    9e730103-0567-422e-9574-d432b1af1292  ONLINE       0     0     0
	    10df4487-6743-4c99-a3f0-07f2b88d1312  ONLINE       0     0     0

errors: No known data errors

I was hoping for some others to post feedback on the lsblk results.

It looks like sdc and sdd already have full-size (or nearly full-size) data partitions, while sda and sdb are still at roughly 2TB. Strange that the GUI button didn’t work. OP, try this command as root, starting with sda: parted /dev/sda resizepart 1 100%.
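
If you want to double-check the current layout before changing anything, parted can print the partition table along with the free space; this is read-only and should show exactly where the gaps are:

parted /dev/sda unit s print free    # read-only: show partitions and free space in sectors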

Hi Dan,

Here’s what I got:

root@truenas[~]# parted /dev/sda resizepart 1 100%
Error: Can't have overlapping partitions.

I’m wondering if I need to “Offline” the disk in question first, and then perform the resize?

Do you still have your previous 2TB drives? Since you are on RAIDZ1, if you have a spare drive port you could use one of the 2TB drives to do an ‘in place’ replacement of the 4TB disk you want to work on, so the pool never drops into a degraded state while you mess with it.
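
In TrueNAS the replacement itself would normally be done through the UI’s disk Replace option so the middleware handles the partitioning, but at the ZFS level it boils down to something like this (the /dev/sdX name is just a placeholder for the temporary 2TB drive; this only works with a 2TB drive because the existing data partitions are still about 2TB each):

sudo zpool replace pool1 0c29fd38-a52a-4ca4-ad0c-cb2fe9d807f6 /dev/sdX    # resilver sda's data onto the temporary drive while sda stays online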

No, you don’t need to offline the disk. But for some idiotic reason, TrueNAS put the swap partition on that disk at the end, and I missed it when I reviewed the output of that command.

To remove the swap partition, run parted /dev/sda rm 2. If that succeeds, repeat the resize command.
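
Putting the two steps together for sda (these only rewrite the partition table and shouldn’t touch the data inside sda1, but make sure your backups are current; the last command is just an optional sanity check):

parted /dev/sda rm 2                 # delete the swap partition (sda2) that blocks the resize
parted /dev/sda resizepart 1 100%    # grow the ZFS data partition to the end of the disk
parted /dev/sda unit s print free    # optional: confirm sda1 now runs to the end of the disk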

Ok, here’s what I got:

root@truenas[~]# parted /dev/sda rm 2
Information: You may need to update /etc/fstab.

root@truenas[~]# parted /dev/sda resizepart 1 100%
Information: You may need to update /etc/fstab.

root@truenas[~]# lsblk -bo NAME,MODEL,ROTA,PTTYPE,TYPE,START,SIZE,PARTTYPENAME,PARTUUID
NAME MODEL ROTA PTTYPE TYPE      START          SIZE PARTTYPENAME PARTUUID
sda  ST400    1 gpt    disk            4000787030016              
└─sda1
              1 gpt    part       2048 4000785964544 Solaris /usr & Apple ZFS
                                                                  0c29fd38-a52a-4ca4-ad0c-cb2fe9d807f6
sdb  ST400    1 gpt    disk            4000787030016              
├─sdb1
│             1 gpt    part       2048 1998253457920 Solaris /usr & Apple ZFS
│                                                                 2746c644-6338-4024-8385-16f3779b5390
└─sdb2
              1 gpt    part 3902842880    2147484160 Linux swap   b2c2cacd-b578-47c9-a776-ac72b7a7ae7c
sdc  ST400    1 gpt    disk            4000787030016              
├─sdc1
│             1 gpt    part        128    2147418624 Linux swap   afe1671a-3431-419b-aeb6-59098b28c476
└─sdc2
              1 gpt    part    4194432 3998639463936 Solaris /usr & Apple ZFS
                                                                  9e730103-0567-422e-9574-d432b1af1292
sdd  ST400    1 gpt    disk            4000787030016              
├─sdd1
│             1 gpt    part        128    2147418624 Linux swap   588f8179-21c4-4c66-acac-3d9f1ca25e87
└─sdd2
              1 gpt    part    4194432 3998639463936 Solaris /usr & Apple ZFS
                                                                  10df4487-6743-4c99-a3f0-07f2b88d1312

I’m assuming I need to repeat this for /dev/sdb

Also, do I need to worry about the /etc/fstab message?

Correct.

Nope, it isn’t used with TrueNAS.
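
Once sdb is done too, a couple of read-only checks should show whether ZFS has picked up the extra space on its own; TrueNAS normally creates pools with autoexpand on, and if the space doesn’t show up, zpool online -e on each resized member should nudge it:

sudo zpool get autoexpand pool1     # normally 'on' for TrueNAS-created pools
sudo zpool list -v pool1            # SIZE/EXPANDSZ should reflect the larger partitions
sudo zpool online -e pool1 0c29fd38-a52a-4ca4-ad0c-cb2fe9d807f6    # only if needed: expand onto the grown partition (repeat per resized member)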

Great! This worked and the storage size on this pool has jumped up. Here’s the result:

root@truenas[~]# parted /dev/sdb rm 2
Information: You may need to update /etc/fstab.

root@truenas[~]# parted /dev/sdb resizepart 1 100%
Information: You may need to update /etc/fstab.

root@truenas[~]# lsblk -bo NAME,MODEL,ROTA,PTTYPE,TYPE,START,SIZE,PARTTYPENAME,PARTUUID
NAME MODEL ROTA PTTYPE TYPE   START          SIZE PARTTYPENAME PARTUUID
sda  ST400    1 gpt    disk         4000787030016              
└─sda1
              1 gpt    part    2048 4000785964544 Solaris /usr & Apple ZFS
                                                               0c29fd38-a52a-4ca4-ad0c-cb2fe9d807f6
sdb  ST400    1 gpt    disk         4000787030016              
└─sdb1
              1 gpt    part    2048 4000785964544 Solaris /usr & Apple ZFS
                                                               2746c644-6338-4024-8385-16f3779b5390
sdc  ST400    1 gpt    disk         4000787030016              
├─sdc1
│             1 gpt    part     128    2147418624 Linux swap   afe1671a-3431-419b-aeb6-59098b28c476
└─sdc2
              1 gpt    part 4194432 3998639463936 Solaris /usr & Apple ZFS
                                                               9e730103-0567-422e-9574-d432b1af1292
sdd  ST400    1 gpt    disk         4000787030016              
├─sdd1
│             1 gpt    part     128    2147418624 Linux swap   588f8179-21c4-4c66-acac-3d9f1ca25e87
└─sdd2
              1 gpt    part 4194432 3998639463936 Solaris /usr & Apple ZFS
                                                               10df4487-6743-4c9

Now that I’ve removed the swap partitions from sda and sdb, I’m wondering if I should do the same for sdc and sdd? If so, how would that work?

There’s no need to, but you can if you like. You’d do it the same way as you did with sda and sdb, except that the swap is partition 1 on the remaining disks.
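
Based on the lsblk output above, that would be roughly the following; note that on these two disks the swap sits in front of the data partition, so removing it tidies up the layout but won’t add any usable pool space (sdc2 and sdd2 already run to the end of the disk):

parted /dev/sdc rm 1    # swap is partition 1 on sdc
parted /dev/sdd rm 1    # same on sdd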

For the changes to sdc and sdd, I was thinking it would be good to have the same configuration across all disks, both to avoid possible future issues and to maximize disk space. If there’s no risk in doing it, I’ll go ahead with that.

As an aside, is there any purpose to a swap partition on a RAIDZ1 disk that is only used for storage? Or, in my case, on 2 of the 4 disks?

Slightly smaller drives getting used as replacements. 1TB isn’t always exactly the same size, even within the same manufacturer’s drive lines and models, so the swap partition doubled as a small buffer to absorb that difference.

SSDs were a problem for this too.

As expected, removing the swap didn’t change the partition sizes, since the data partition already runs to the end of the disk. Here’s the result after removing sdc1:

root@truenas[~]# lsblk -bo NAME,MODEL,ROTA,PTTYPE,TYPE,START,SIZE,PARTTYPENAME,PARTUUID
NAME MODEL ROTA PTTYPE TYPE   START          SIZE PARTTYPENAME PARTUUID
sda  ST400    1 gpt    disk         4000787030016              
└─sda1
              1 gpt    part    2048 4000785964544 Solaris /usr & Apple ZFS
                                                               0c29fd38-a52a-4ca4-ad0c-cb2fe9d807f6
sdb  ST400    1 gpt    disk         4000787030016              
└─sdb1
              1 gpt    part    2048 4000785964544 Solaris /usr & Apple ZFS
                                                               2746c644-6338-4024-8385-16f3779b5390
sdc  ST400    1 gpt    disk         4000787030016              
└─sdc2
              1 gpt    part 4194432 3998639463936 Solaris /usr & Apple ZFS
                                                               9e730103-0567-422e-9574-d432b1af1292
sdd  ST400    1 gpt    disk         4000787030016              
├─sdd1
│             1 gpt    part     128    2147418624 Linux swap   588f8179-21c4-4c66-acac-3d9f1ca25e87
└─sdd2
              1 gpt    part 4194432 3998639463936 Solaris /usr & Apple ZFS
                                                               10df4487-6743-4c9