ZFS extend and rewrite

Hello everyone,

I have an issue with my ZFS Pool after extending my vdev.
I had a 3-wide RAIDZ1 pool of 8TB disks, and I added one more 8TB disk. After the attach operation, the UI displayed a usable capacity of 19.26TiB and 10.72TiB of used capacity (see screenshot below).

I read that after expanding a vdev you have to do a zfs rewrite to rewrite the blocks of the pool, and that this helps regain the full usable capacity. So I did a rewrite on my whole pool, and now the UI displays this:

Doing the maths, I should have roughly 21TiB of usable capacity with a 4-wide pool of 8TB disks. Why do I only have 19.26TiB, and is that normal?

Search for the RAID-Zx-expansion tag and use the CLI / Shell for space reporting. It is, currently, the most accurate method.
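For example (the pool name here is taken from this thread; run these in the Shell): `zpool list -v` reports raw, parity-inclusive space, while `zfs list -o space` on the root dataset reports space as datasets see it, which is closer to what the UI calls usable capacity:

```shell
# Raw pool space, parity included (SIZE / ALLOC / FREE):
zpool list -v poule

# Dataset-level space; AVAIL + USED roughly approximates
# the UI's "usable capacity":
zfs list -o space poule
```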

Hi,

When I used these commands, they gave me strange outputs as well:

zpool status poule -v
  pool: poule
 state: ONLINE
  scan: scrub repaired 0B in 05:50:41 with 0 errors on Wed Apr  1 17:16:24 2026
expand: expanded raidz1-0 copied 16.0T in 15:33:06, on Wed Apr  1 11:25:43 2026
config:

        NAME                                      STATE     READ WRITE CKSUM
        poule                                     ONLINE       0     0     0
          raidz1-0                                ONLINE       0     0     0
            466f0043-0bb9-4ddc-b199-72a55f89cf10  ONLINE       0     0     0
            a5104c40-e4aa-41ff-acbd-1132e7b88858  ONLINE       0     0     0
            2f527afa-c33c-4fa2-8af5-52959b9547e7  ONLINE       0     0     0
            e424a8a5-c9c8-4e1f-83ea-d4efa9e76620  ONLINE       0     0     0

errors: No known data errors
zpool list -v poule  
NAME                                       SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
poule                                     29.1T  14.7T  14.4T        -         -     1%    50%  1.00x    ONLINE  /mnt
  raidz1-0                                29.1T  14.7T  14.4T        -         -     1%  50.5%      -    ONLINE
    466f0043-0bb9-4ddc-b199-72a55f89cf10  7.28T      -      -        -         -      -      -      -    ONLINE
    a5104c40-e4aa-41ff-acbd-1132e7b88858  7.28T      -      -        -         -      -      -      -    ONLINE
    2f527afa-c33c-4fa2-8af5-52959b9547e7  7.28T      -      -        -         -      -      -      -    ONLINE
    e424a8a5-c9c8-4e1f-83ea-d4efa9e76620  7.28T      -      -        -         -      -      -      -    ONLINE
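One thing worth noting about the output above (my reading of it, not something stated elsewhere in the thread): everything `zpool list` prints is raw space, parity included, so none of these numbers will match the UI's usable capacity directly:

```shell
# The 29.1T SIZE is simply the sum of the four member disks'
# raw capacity; parity is not subtracted at this level:
awk 'BEGIN { printf "4 x 7.28 TiB = %.1f TiB raw\n", 4 * 7.28 }'
```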

Also, on a side note, I saw in this thread that there were discrepancies when using du and ls:

admin@truenas[~]$ ls /mnt/poule/poupoule/pretty_big_file -hl
-rwxrwx--- 1 lirayah root 65G Jan  6  2024 /mnt/poule/poupoule/pretty_big_file 
admin@truenas[~]$ du /mnt/poule/poupoule/pretty_big_file  -h 
60G     /mnt/poule/poupoule/pretty_big_file 
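As a side check (a generic coreutils sketch, not ZFS-specific): `ls -l` reports a file's apparent size while `du` reports allocated blocks, which is one reason the two can disagree; GNU `du --apparent-size` should match `ls`. A sparse file makes the difference obvious even outside ZFS:

```shell
# Apparent size vs allocated blocks (GNU coreutils assumed):
f=$(mktemp)
truncate -s 1G "$f"          # 1 GiB apparent size, almost no blocks allocated
ls -lh "$f"                  # reports the 1.0G apparent size
du -h "$f"                   # reports allocated blocks (near zero here)
du -h --apparent-size "$f"   # matches ls again
rm "$f"
```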

Is that normal, and expected because of the new parity ratio?

Also, I don’t know where to find the usable capacity in the zfs commands (the number where I should see either the 19.26TiB reported in the UI, or the roughly 21TiB expected for this kind of setup).

ZFS space reporting is a bit complicated.
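One plausible explanation for the specific numbers in this thread (hedged; RAIDZ expansion is commonly described as keeping the pre-expansion data:parity ratio for free-space accounting, even after data is rewritten): the UI's 19.26TiB is close to the raw size deflated at the old 3-wide ratio, while the ~21TiB expectation assumes the new 4-wide ratio:

```shell
# Back-of-envelope check against the 29.1T raw SIZE from zpool list:
awk 'BEGIN {
  raw = 29.1                                                       # TiB, raw
  printf "old 3-wide RAIDZ1 ratio (2/3): %.1f TiB\n", raw * 2 / 3  # near the 19.26 the UI shows
  printf "new 4-wide RAIDZ1 ratio (3/4): %.1f TiB\n", raw * 3 / 4  # near the ~21 TiB expected
}'
```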

I guess imma go down the rabbit hole then…