Less available disk space than expected in TrueNAS

Hi Everyone

So, I’ve used the new VDEV expansion function for my RAIDZ2 pool. It worked, which is great. I’ve expanded my VDEV of four 6 TB drives with another 6 TB drive. Now I’m supposed to get something like 6 TB × 3, or to be exact 5.46 TiB × 3 = 16.38 TiB of available disk space. But I see in my TrueNAS GUI only: Usable Capacity: 13.09 TiB
My question is: where is the remaining 3.29 TiB of disk space? Some space was probably taken by the system, but it can’t be all 3+ TiB.
For comparison, Synology with its RAID 6 or SHR-2 would have left me with exactly 16.4 TB, according to their website.

Long story short – you probably have your space, but the reporting of free/total space is currently screwed up.

Try this: GitHub - markusressel/zfs-inplace-rebalancing: Simple bash script to rebalance pool data between all mirrors when adding vdevs to a pool.

I don’t think it will help. The script is for rebalancing a pool with multiple VDEVs, and the OP apparently has a single-VDEV pool.

Mis-reported space

Do a sudo zpool status if you want to confirm that the expansion has finished.

Do a sudo zpool list if you want to confirm that the space has been added.
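For example (replace tank with your actual pool name; the exact output will of course differ per system):

```bash
sudo zpool status tank   # look for the raidz expansion status in the output to confirm it has finished
sudo zpool list tank     # SIZE/ALLOC/FREE here are raw figures that include parity overhead
```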

The space is mis-reported in TrueNAS for 3 reasons:

  1. The TrueNAS UI uses zfs stats rather than zpool stats when reporting pool usage.

  2. During expansion, the original files are all moved at the original data:redundancy ratio, i.e. for a 4-wide Z2, 2 data blocks and 2 redundancy blocks per stripe - whilst new files will be written with 3 data blocks and 2 redundancy blocks. So an old file with 24KB of data will use 3 stripes of 4 blocks = 12 blocks, whilst the same file written anew will use 2 stripes of 5 blocks = 10 blocks, which is c. 16% less space (a quick worked sketch follows this list).

  3. There is a bug / feature in the ZFS expansion whereby (AFAIK) it reports free space based on the old redundancy ratio rather than the new redundancy ratio i.e. it reports free space as 16% too low.
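To put rough numbers on point 2, here is a back-of-the-envelope sketch (a 4 KiB block size is assumed purely for illustration):

```bash
# 24 KiB of data = 6 data blocks (assuming 4 KiB blocks)
data_blocks=6

old_stripes=$(( (data_blocks + 1) / 2 ))   # 4-wide Z2: 2 data + 2 parity per stripe
new_stripes=$(( (data_blocks + 2) / 3 ))   # 5-wide Z2: 3 data + 2 parity per stripe

echo "old layout: $(( old_stripes * 4 )) blocks on disk"   # 12
echo "new layout: $(( new_stripes * 5 )) blocks on disk"   # 10, i.e. roughly 16% less
```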

The Rebalancing Script

Actually, whilst it was originally intended for rebalancing mirror vDevs, there is nothing in the script specific to mirrors - all it does is replace each file with a full copy of itself (while avoiding block cloning).

So it should work just fine to rewrite the files using the more efficient redundancy ratio.
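The core idea is roughly this - not the actual script, just a minimal sketch with a hypothetical file path (the real script is considerably more careful, e.g. about preserving attributes and verifying the copy):

```bash
f="/mnt/tank/data/somefile"                  # hypothetical path
cp --reflink=never -p "$f" "$f.rebalance"    # force a full physical copy, no block cloning
mv "$f.rebalance" "$f"                       # the rewritten copy now uses the current pool layout
```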

But you do need to understand the impact of snapshots, because rewriting a file creates a new copy, and the old one will be retained if any snapshot contains it, thus doubling your used space. To avoid this you need to remove all snapshots containing the files you are going to rebalance - which definitely means all snapshots on one or more datasets, and often means all snapshots in the pool.
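Something like this will show what you are up against (tank/data is a placeholder - use your own pool/dataset, and only destroy snapshots you are sure you no longer need):

```bash
sudo zfs list -t snapshot -r tank/data    # list every snapshot under the dataset
sudo zfs destroy tank/data@old-snapshot   # remove a snapshot you no longer need
```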

Yeah, same here - I’ve rebooted during the expansion as well
Right, so everything (my space) should be there. Thanks for the info!

And yes, I have just one VDEV pool.

“sudo zpool list” showed me that I’ve got 15.3 TiB of free space (27.3 TiB total, 12 TiB allocated, plus a 126 GB boot pool), which is more or less the expected amount of free space.

It very likely will. Although it was written for a different purpose (which, as you said, was rebalancing a pool after you add a vdev), rewriting data applies many of the ZFS changes you can make to a pool or dataset after data has been written to it. A few examples that come to mind:

  • ZFS expansion, the topic of this thread. ZFS expansion rewrites the vdev’s data across all the disks (now) in the vdev, but at its original data:parity ratio. So, a three-disk RAIDZ1 vdev would ordinarily have a ratio of 2:1,[1] and if you add a fourth disk, the existing data will still be at 2:1, just spread across four disks. Newly-written data (including data you “rebalance”) would be at 3:1. This means that rewriting the data will actually reduce your used space (though snapshots may mean you see a temporary increase).
  • Compression. If you create a dataset with, say, no compression, write data to it, and then change to lz4, existing data won’t be compressed. But if you rewrite it, it will (see the example after this list).
  • Deduplication. Same story there.
  • Checksum algorithm. Don’t know why you’d want to change this on the fly (other than enabling dedup), but same story there.

There are probably others (encryption, maybe?), but you get the idea.
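As a concrete illustration of the compression case (the dataset name and file path are hypothetical):

```bash
sudo zfs set compression=lz4 tank/data    # only affects blocks written from now on

# force a rewrite of an existing file so it picks up the new setting
# (same trick the rebalancing script uses)
cp --reflink=never -p /mnt/tank/data/bigfile /mnt/tank/data/bigfile.tmp
mv /mnt/tank/data/bigfile.tmp /mnt/tank/data/bigfile
```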


  1. Well, approximately: due to ZFS’s dynamic stripe width, this may vary slightly. ↩︎
