Help to clarify TrueNAS Scale storage usage

Dear community, could someone help me understand TrueNAS Scale storage usage?
I have a 21 TiB pool with 10 TiB of available space.

From the Datasets screen I can see:

While from the CLI:

tank                  19T  8.1T   11T  44% /mnt/tank
tank/share            12T  1.2T   11T  10% /mnt/tank/share
tank/share/MEDIA      13T  2.2T   11T  18% /mnt/tank/share/MEDIA

First question: why do I have only 11 TiB free when my data amounts to only 5 TiB?
And why do the share and MEDIA datasets show “only” 12 and 13 TiB as their size?

I checked for snapshots, and there are none on the system, so I’m going crazy trying to understand why I see 11 TiB used.
Thank you

Clearly you have data on the pool that isn’t in the share dataset, about 8 TB of it (your share dataset only has 3 TB, not 5 TB).

Because, unless you’ve set a quota or reservation on the dataset, ZFS reports its size as the amount of data it contains plus the remaining free space on the pool.
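The numbers in the df output above line up with that rule. A quick sanity check (values in TiB, taken from the CLI output quoted earlier; df rounds to whole TiB, so a little slack is allowed):

```python
# For datasets without a quota or reservation, the "size" df reports
# is roughly that dataset's own used space plus the pool's free space.
pool_free = 11.0  # AVAIL column, identical for every dataset in the pool

datasets = {
    "tank/share":       {"used": 1.2, "size": 12.0},
    "tank/share/MEDIA": {"used": 2.2, "size": 13.0},
}

for name, d in datasets.items():
    expected = d["used"] + pool_free
    # df rounds to whole TiB, so allow up to 1 TiB of rounding slack
    assert abs(expected - d["size"]) <= 1.0, name
    print(f"{name}: {d['used']} used + {pool_free} free ~= {d['size']} size")
```

So the "size" column isn't a fixed allocation at all; it shrinks and grows with the pool's free space, which is why both datasets differ by exactly their own used space.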


Oh damn! You’re right. I completely missed a directory under /mnt/tank (so outside share) that contains all my Proxmox VM backups… It’s time to reduce the retention! :slight_smile:

Thanks for the explanation, now it’s clear. I’ll move that data out of the top-level dataset and delete what I don’t need.

Even with reduced retention, never put data in the top-level dataset of your pool. Always use child datasets for sharing. That’s even in the documentation.


It’s like throwing files into the root of the C:\ drive on Windows; only crazed criminals do that.


I see. This was meant to be a temporary location, as I planned to use a separate dataset for it, but then I forgot to do it. That’s why I wasn’t able to account for the pool’s used space.
Out of curiosity, is there any specific technical reason not to use the top-level dataset of the pool, e.g. data integrity or something like that? Other than the ability to use separate snapshots/replication per dataset and better data organization.

Replication to a remote machine will prove difficult. I don’t remember all the details, but there have been various forum threads about that in the past.