I have an issue with my ZFS Pool after extending my vdev.
I had a 3-wide RAIDZ1 pool of 8TB disks, and I added one more 8TB disk. After the attach operation, the UI displayed a usable capacity of 19.26TiB and 10.72TiB of used capacity (see screenshot below).
I read that after extending a VDEV you have to do a zfs rewrite so that existing blocks get rewritten with the new data-to-parity ratio, which helps regain the full usable capacity. So I ran a rewrite on my whole pool, and now the UI displays this:
Also, on a side note, I saw in this thread that there were discrepancies between du and ls:
admin@truenas[~]$ ls /mnt/poule/poupoule/pretty_big_file -hl
-rwxrwx--- 1 lirayah root 65G Jan 6 2024 /mnt/poule/poupoule/pretty_big_file
admin@truenas[~]$ du /mnt/poule/poupoule/pretty_big_file -h
60G /mnt/poule/poupoule/pretty_big_file
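From what I understand, this gap is not ZFS-specific: ls -l reports the apparent file size, while du reports the blocks actually allocated on disk, so sparse regions, compression, or (on ZFS) a changed parity ratio can make the two disagree. A minimal sketch that reproduces the same kind of gap on any filesystem, using a hypothetical sparse_demo file:

```shell
# Create a 1 GiB sparse file: apparent size is 1 GiB,
# but almost no blocks are actually allocated on disk.
truncate -s 1G sparse_demo
ls -lh sparse_demo   # apparent size: 1.0G
du -h sparse_demo    # allocated blocks: close to 0
rm sparse_demo
```

In my case du reports less than ls, which (if I understand correctly) would be consistent with the rewrite having improved the parity overhead on that file.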
Is that normal and expected, given the new parity ratio?
Also, I don't know which zfs/zpool command reports the usable capacity (the figure where I should see either the 19.26 TiB reported in the UI, or the roughly 21 TiB expected for this kind of setup).
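For reference, the roughly 21 TiB figure comes from a back-of-the-envelope calculation (a sketch only; real usable space is lower because of metadata, padding, and the slop reservation):

```shell
# Rough usable space for 4 x 8 TB disks in RAIDZ1:
# one disk's worth of raw space goes to parity, and a vendor
# "8 TB" is 8e12 bytes, so convert to TiB (2^40 bytes) at the end.
awk 'BEGIN {
    raw    = 4 * 8e12          # total raw bytes across 4 disks
    usable = raw * 3 / 4       # RAIDZ1 keeps (n-1)/n of raw
    printf "%.2f TiB\n", usable / 2^40
}'
# → 21.83 TiB before metadata and slop reservation
```

My current guess is that `zpool list` shows the raw pool size including parity, while `zfs list -o used,avail <pool>` on the root dataset is closer to what the UI calls usable capacity, but I'd appreciate confirmation on which command to trust here.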