TrueNAS RAIDZ1 Expansion Issue – Adding a 4th Disk to a 3-Disk RAIDZ1, Capacity Unchanged

I have four drives in my PC. When I first started with TrueNAS, I created a simple striped pool on Disk 1. Later, I created a new RAIDZ1 pool (rz1_1) with Disks 2, 3, and 4. Once the new pool was stable, I destroyed the original striped pool and used Storage → rz1_1 → Manage Devices → RAIDZ1 → Extend to add Disk 1 to the RAIDZ1 vdev. However, the reported capacity did not change.
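
In case it's useful context, my understanding is that the GUI Extend step corresponds to OpenZFS 2.3's raidz expansion, i.e. attaching a disk to the existing raidz vdev. A rough CLI sketch of what I did through the GUI (rz1_1 is my pool; /dev/sdX is just a placeholder for Disk 1):

# Attach a new disk to an existing RAIDZ vdev, which starts a raidz expansion.
# -w waits until the expansion (reflow) has finished before returning.
zpool attach -w rz1_1 raidz1-0 /dev/sdX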

Platform: Generic
Version: ElectricEel-24.10.2.2

You need to show a bit more data about your pool, but this could be an extreme case of the borked space reporting after raidz expansion.
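 
If you want to confirm the expansion itself finished, zpool status should report it once the reflow is done (a quick check, using the pool name from your post):

# During and after a raidz expansion, zpool status shows the reflow progress
# and a completion line for raidz1-0 once it's done.
zpool status -v rz1_1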

My raidz1 storage pool appears to be healthy: Topology, Usage, ZFS Health, and Disk Health all show green, and I haven’t seen any warnings.

What about, er… describing your hardware and pool? :roll_eyes:
Or sharing the output of /usr/sbin/zpool list -v so we have some numbers to churn? (Please use </> to format.)

Thanks, here's the command output. After a scrub the reported capacity went up, but it still hasn't reached the expected size: I've got four 16 TB drives, so excluding one drive's worth of RAIDZ1 parity it should come to about 43.65 TiB usable (quick arithmetic after the output below):

truenas_admin@cangspirittruenas[~]$ /usr/sbin/zpool list -v
NAME                                       SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
boot-pool                                   63G  3.97G  59.0G        -         -     2%     6%  1.00x    ONLINE  -
  mirror-0                                  63G  3.97G  59.0G        -         -     2%  6.29%      -    ONLINE
    sdb3                                  63.5G      -      -        -         -      -      -      -    ONLINE
    sda3                                  63.5G      -      -        -         -      -      -      -    ONLINE
os                                        31.5G  4.61G  26.9G        -         -    14%    14%  1.00x    ONLINE  /mnt
  mirror-0                                31.5G  4.61G  26.9G        -         -    14%  14.6%      -    ONLINE
    b819d52f-5bc9-4947-aaad-c33c5f4e3a55  32.0G      -      -        -         -      -      -      -    ONLINE
    5aa07676-6f9f-4bbd-b79d-7a380a1c0d72  32.0G      -      -        -         -      -      -      -    ONLINE
rz1_1                                     58.2T  16.0T  42.2T        -         -     0%    27%  1.00x    ONLINE  /mnt
  raidz1-0                                58.2T  16.0T  42.2T        -         -     0%  27.5%      -    ONLINE
    57033dc8-354d-45db-8600-d03df5bdaf6b  14.6T      -      -        -         -      -      -      -    ONLINE
    91fba567-f8da-4e66-9e6d-323f7fd2180a  14.6T      -      -        -         -      -      -      -    ONLINE
    cb9c39d4-58d3-4d97-9ee3-e3a847274e5b  14.6T      -      -        -         -      -      -      -    ONLINE
    10d97ee4-1ddc-450e-8efd-0ac7e8f6356f  14.6T      -      -        -         -      -      -      -    ONLINE
cache                                         -      -      -        -         -      -      -      -         -
  c8b3e81f-7d3b-4cb7-95fe-659538a363bd     256G   719M   255G        -         -     0%  0.27%      -    ONLINE
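
For reference, here's the back-of-the-envelope arithmetic behind my expected figure (a sketch using bc; the drives are 16 TB decimal, while zpool list reports TiB):

# Raw pool size, parity included: 4 x 16 TB comes to ~58.2 TiB, matching SIZE above.
echo "scale=2; 4 * 16 * 10^12 / 1024^4" | bc
# Usable space after one drive's worth of RAIDZ1 parity: 3 x 16 TB is ~43.65 TiB.
echo "scale=2; 3 * 16 * 10^12 / 1024^4" | bc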

That’s not much of a discrepancy, and is likely fully explained by this.
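
As I understand it, after a raidz expansion ZFS keeps estimating available space using the pre-expansion data-to-parity ratio, and blocks written before the expansion keep their old stripe width until they are rewritten, so the usable figure only gradually approaches the three-data-disk ideal. If you want to watch the numbers yourself, something like this (a sketch; the feature name assumes the OpenZFS 2.3 base in 24.10):

zpool list -o name,size,allocated,free rz1_1    # raw pool space, parity included
zfs list -o name,used,available rz1_1           # usable space as ZFS currently estimates it
zpool get feature@raidz_expansion rz1_1         # should read "active" after an expansion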


OK, I think I’ve got a good understanding now—thanks!