Why "usable space" only at 56% of total capacity?

I have a TrueNAS Mini X+ running TrueNAS-SCALE-24.04.2.3. The NAS contains two 14TB (12.73TiB) WD Red Pro drives set up as a 2-way mirror. Per the screenshot, TrueNAS knows the drives form a 12.73TiB 2-way mirror, but it only shows 7.14TiB as “usable capacity”, which is just 56% of the actual capacity. Can anyone explain where the remaining space went? Below is the output of the zfs list command, which shows the same numbers.

NAME                                                      USED  AVAIL  REFER  MOUNTPOINT
boot-pool                                                10.5G   199G    24K  none
boot-pool/ROOT                                           10.4G   199G    24K  none
boot-pool/ROOT/13.0-U5.3                                  188K   199G  1.29G  /
boot-pool/ROOT/13.0-U6.1                                 3.88G   199G  1.29G  /
boot-pool/ROOT/23.10.1                                   2.15G   199G  2.15G  legacy
boot-pool/ROOT/23.10.2                                   2.16G   199G  2.16G  legacy
boot-pool/ROOT/24.04.2.3                                 2.20G   199G   164M  legacy
boot-pool/ROOT/24.04.2.3/audit                            613K   199G   613K  /audit
boot-pool/ROOT/24.04.2.3/conf                            42.5K   199G  42.5K  /conf
boot-pool/ROOT/24.04.2.3/data                            7.39M   199G  7.39M  /data
boot-pool/ROOT/24.04.2.3/etc                             3.46M   199G  3.02M  /etc
boot-pool/ROOT/24.04.2.3/home                              27K   199G    27K  /home
boot-pool/ROOT/24.04.2.3/mnt                               24K   199G    24K  /mnt
boot-pool/ROOT/24.04.2.3/opt                             72.0M   199G  72.0M  /opt
boot-pool/ROOT/24.04.2.3/root                              42K   199G    42K  /root
boot-pool/ROOT/24.04.2.3/usr                             1.89G   199G  1.89G  /usr
boot-pool/ROOT/24.04.2.3/var                             76.1M   199G  19.4M  /var
boot-pool/ROOT/24.04.2.3/var/ca-certificates               24K   199G    24K  /var/local/ca-certificates
boot-pool/ROOT/24.04.2.3/var/log                         56.2M   199G  56.2M  /var/log
boot-pool/ROOT/Initial-Install                              1K   199G  1.29G  legacy
boot-pool/ROOT/default                                    202K   199G  1.29G  legacy
boot-pool/grub                                           1.91M   199G  1.91M  legacy
pool-1                                                   3.67T  3.47T   112K  /mnt/pool-1
pool-1/.system                                           1.52G  3.47T  1.23G  legacy
pool-1/.system/configs-2c41d412711f430381c6bee374b3106b  28.6M  3.47T  28.6M  legacy
pool-1/.system/cores                                       96K  1024M    96K  legacy
pool-1/.system/ctdb_shared_vol                             96K  3.47T    96K  legacy
pool-1/.system/glusterd                                   104K  3.47T   104K  legacy
pool-1/.system/netdata-2c41d412711f430381c6bee374b3106b   261M  3.47T   261M  legacy
pool-1/.system/rrd-2c41d412711f430381c6bee374b3106b      7.61M  3.47T  7.61M  legacy
pool-1/.system/samba4                                    1.28M  3.47T   580K  legacy
pool-1/.system/services                                    96K  3.47T    96K  legacy
pool-1/.system/webui                                       96K  3.47T    96K  legacy
pool-1/dataset-1                                          754G  3.47T   754G  /mnt/pool-1/dataset-1
pool-1/dataset-2                                         1.25G  3.47T  1.25G  /mnt/pool-1/dataset-2
pool-1/dataset-3                                         1.88T  3.47T  1.88T  /mnt/pool-1/dataset-3
pool-1/dataset-4                                          592G  3.47T   592G  /mnt/pool-1/dataset-4
pool-1/dataset-5                                          485G   215G   485G  /mnt/pool-1/dataset-5
pool-1/dataset-6                                         27.4M  3.47T  27.4M  /mnt/pool-1/dataset-6

What do you get when you run sudo zpool list pool-1?

This is what I got for zpool list pool-1:

NAME     SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
pool-1  7.27T  3.67T  3.59T        -         -     6%    50%  1.00x    ONLINE  /mnt

Actually, now that I think of it, this pool used to be two 8TB (7.27TiB) drives, and I replaced them with 14TB drives one by one. Is it possible that TrueNAS doesn’t know that it can use the full capacity of the bigger drives?

You probably got bit by the “partition bug” on SCALE. (It has since been fixed.)

I have a hunch that your drives contain the 8-TiB partitions at the start, and likely a 2-GiB “swap” partition after that.

lsblk -o name,type,partn,size,fstype

This is the output of lsblk -o name,type,size,fstype (the partn option gave me an error). Looks like there’s no swap partition on the two 14TB disks. Not sure where the extra space went.

NAME          TYPE    SIZE FSTYPE
sda           disk   12.7T 
└─sda1        part    7.3T zfs_member
sdb           disk   12.7T 
└─sdb1        part    7.3T zfs_member
nvme0n1       disk  232.9G 
├─nvme0n1p1   part    260M vfat
├─nvme0n1p2   part  216.6G zfs_member
└─nvme0n1p3   part     16G 
  └─nvme0n1p3 crypt    16G swap

What should I do next?

Manually use the Expand button; it’s in the upper right corner of your first screenshot, as per the documentation.

Ideally, it would have auto-expanded when you replaced the disks, though. Not sure why it didn’t.

This is related to the bug that I mentioned. Rather than “use the first partition for swap and use the remainder for the zfs_member”, it explicitly set the size of the replacement partitions to match the former partitions. This happened during a transition period when SCALE was experimenting with swap, swappiness, and eventually removing swap completely. There were hiccups along the way.

Even if swap partitions are no longer being created, new replacement drives still do not simply “use the entire thing”. SCALE creates a partition sized to match the previous disk’s partition.
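If you want to see the mismatch directly, something like this should show the 8-TiB partition followed by an unallocated tail on each 14TB disk (sda and sdb are just the names from your lsblk output; double-check them first):

lsblk -b -o NAME,TYPE,SIZE /dev/sda /dev/sdb
sudo parted /dev/sda unit GiB print free

The print free output lists any unpartitioned space sitting after sda1.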


This won’t work, since the zfs_members are 8-TiB block devices (partitions). The pool is already as big as those partitions allow.


You might have to upgrade SCALE.

Then detach one of the drives from the mirror. (This will downgrade the vdev to a risky “single-drive stripe”.)

Then you add (“extend”)[1] this disk back into the mirror vdev, at which point SCALE will hopefully use the drive’s entire size for the zfs_member this time.

Then you repeat with the other drive.

Once the other drive is added back into the mirror, ZFS will autoexpand the pool to the larger capacity.
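For reference, the rough CLI equivalent of those GUI steps looks something like this (the device names are only examples taken from the lsblk output above; check zpool status for the real ones, and on TrueNAS the GUI is the supported way to do this):

sudo zpool detach pool-1 sdb1            # drop one disk from the mirror
sudo zpool attach pool-1 sda1 /dev/sdb   # attach it back and wait for the resilver to finish

Then the same detach/attach for the other disk once the resilver completes.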


  1. For some reason the GUI calls this “extend”, which is misleading and inaccurate. It should be called “attach”.

Interesting quirk, thanks for explaining!

Before risking two resilvers, try Uncle Fester’s advice:
parted /dev/sda resizepart 1 100%
parted /dev/sdb resizepart 1 100%
You’ll probably need to prefix them with sudo.


I think that might work, though you might need a reboot for ZFS to recognise the bigger partitions and do an expand/extend (the terminology is inconsistent).
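To avoid the reboot, something along these lines might work after the resizepart commands, assuming the vdev members really are sda1 and sdb1 as in the earlier lsblk output (check zpool status first):

sudo partprobe /dev/sda /dev/sdb    # re-read the partition tables
sudo zpool online -e pool-1 sda1    # tell ZFS to grow onto the enlarged partition
sudo zpool online -e pool-1 sdb1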

Thanks everyone. I detached one drive and tried the “expand” button on the remaining drive, and that seemed to fix the problem on that drive. I then “extended” the pool by adding the detached drive back. The pool is now recognized as a 2-way mirror, “usable capacity” is now 12.59TiB, and resilvering is in progress. It looks like I could have just used the “expand” button instead of detaching and reattaching.

Output of lsblk -o name,type,size,fstype:

NAME          TYPE    SIZE FSTYPE
sda           disk   12.7T 
└─sda1        part   12.7T zfs_member
sdb           disk   12.7T 
└─sdb1        part   12.7T zfs_member
nvme0n1       disk  232.9G 
├─nvme0n1p1   part    260M vfat
├─nvme0n1p2   part  216.6G zfs_member
└─nvme0n1p3   part     16G 
  └─nvme0n1p3 crypt    16G swap

Output of zpool list pool-1:

NAME     SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
pool-1  12.7T  3.67T  9.05T        -         -     3%    28%  1.00x    ONLINE  /mnt

This is good to know!

I wonder if the button in the TrueNAS GUI also uses parted in the background, along the lines of what @etorix posted above?