Why is my capacity so far off (much higher than expected)?

You can check if there’s block-cloning happening.

That said, block-cloning should only affect the zfs command and “dataset” readouts. The pool itself (seen with zpool) should remain at a steady 16 TiB of total capacity. (Unless there’s more ZFS magic going on.)
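One quick way to check (assuming OpenZFS 2.2 or newer with the block_cloning feature enabled; TRUENASPOOL is the pool name used in this thread) is to ask for the clone-related pool properties directly:

zpool get bcloneused,bclonesaved,bcloneratio TRUENASPOOL

If bclonesaved reads anything other than zero, block-cloning has been used on the pool.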

Previously, your zpool command was reporting “raw” capacity at ~24 TiB (rounding up to make this easier), rather than “usable” capacity at ~16 TiB (also rounding).[1]

This might even be a difference between OpenZFS 2.2 and 2.3, in which the earlier version accurately reports the pool’s total (and usable) capacity.[2]
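If you want to rule the version in or out, checking what’s actually running is harmless (this just prints the userland tools and kernel module versions):

zfs version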

This will be interesting to see:

zpool list -o name,size,cap,alloc,free,bcloneratio,bcloneused,bclonesaved TRUENASPOOL
zfs list -o space TRUENASPOOL
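For reference, -o space is a shortcut for a fixed set of columns (per the zfs-list man page), which breaks USED down into snapshots, the dataset itself, refreservation, and children:

zfs list -o name,avail,used,usedsnap,usedds,usedrefreserv,usedchild TRUENASPOOL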

TL;DR

The zpool output for “capacity” should remain at a stable amount. This is your total usable capacity. If zfs (or the GUI), or even user tools, report a larger-than-expected size, it’s because they are unaware of the deduplication, snapshots, and block-cloning happening outside their purview. That’s especially true when such space savings happen across datasets.
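A rough way to put the two viewpoints side by side (standard commands; with clones or dedup in play, the dataset totals can legitimately add up to more than the pool’s ALLOC):

zpool list -o name,size,alloc,free TRUENASPOOL   # pool view: blocks actually consumed
zfs list -r -o name,used,refer TRUENASPOOL       # dataset view: each dataset is charged for what it references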

If the pool’s total capacity is changing, based on how much data is being written, then that’s just silly ZFS math.

Think about it like this: If you have a storage bucket of 1 TiB with only 5% free space remaining, and every time you make a copy of a 128-GiB file it uses “block-cloning” to consume zero extra space, you would expect that the pool’s total capacity remains at 1 TiB with 5% free space remaining.

With each subsequent “zero-consumption” copy, it wouldn’t make sense for the storage bucket’s total capacity (or even free space) to keep changing. Sure, the filesystems (“datasets”) might report larger sizes, as will the filesize properties of each “copied” file. But in reality, zero extra usable capacity has been used up. Your storage bucket is, and always was, 1 TiB of total usable capacity for all intents and purposes, whether dedup, block-cloning, or snapshots are involved.
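If you want to see this in action, here’s a rough sketch (assuming a Linux host, OpenZFS 2.2+ with block-cloning active, a cp that supports --reflink, and made-up file paths):

# Pool-level numbers before the copy
zpool list -o name,size,alloc,free TRUENASPOOL

# Clone a 128-GiB file within the same pool; no new data blocks are written
cp --reflink=always /mnt/TRUENASPOOL/data/big.img /mnt/TRUENASPOOL/data/big-copy.img

# Pool-level numbers after: SIZE, ALLOC, and FREE should be essentially unchanged,
# even though the dataset (and the file listing) now reports ~128 GiB more
zpool list -o name,size,alloc,free,bcloneused,bclonesaved TRUENASPOOL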


  1. By all means, you should expect the pool to offer 16 TiB total usable capacity, regardless of any space-saving techniques. If a dataset reports it is consuming 1,000 TiB (because you went crazy with block-cloning!), the pool’s total capacity should still remain at 16 TiB. This is because the true consumption is much, much lower than 1,000 TiB, even if that is what the dataset and files are reporting. ↩︎

  2. We still don’t know, since we missed our shot at a more “controlled” test. Too many variables and usages are changing between each test. ↩︎



So you definitely used block-cloning.

But your pool’s total capacity still reads ~24 TiB (I’m rounding up so we can always compare “16” to “24”).

You know yourself that this isn’t possible. You knew from the start that, regardless of space savings from dedup, compression, or block-cloning, you only have ~16 TiB of usable capacity. (Four RAIDZ1 vdevs, each yielding ~4 TiB of usable capacity.)
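As a back-of-the-envelope check (the layout here is my guess, not confirmed, but it lines up with the ~21.8 TiB raw figure: twelve 2-TB disks arranged as four 3-wide RAIDZ1 vdevs):

# Hypothetical layout: 4 vdevs x 3 disks x 2 TB each (an assumption for illustration)
echo $(( 4 * 3 * 2 ))        # 24 TB raw    -> ~21.8 TiB, what zpool SIZE reports
echo $(( 4 * (3 - 1) * 2 ))  # 16 TB usable -> ~14.5 TiB, what the dashboard shows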

So tonight I removed all the files from the NAS and the dashboard went back to showing 14.4 TiB in both the Storage and Pool Usage panels. So what I was seeing was indeed the block-cloning, and everything seems to be working as it should. It’s just very jarring to see wildly inflated numbers on the dashboard and (at the time) not understand why they’re different.

I don’t really see a case where I would normally copy such large amounts of data already on the NAS to a different location on the same NAS, but if ZFS has the ability to manage it all then I guess it’s fine. I’m assuming that if I load up the NAS with unique, non-cloned data, I’m only going to be able to put on the shown amount, minus any compression or other space-saving methods. Thanks again for all the help.

Does “SIZE” (the pool’s total capacity) even change in this output?

zpool list -o name,size,cap,alloc,free,bcloneratio,bcloneused,bclonesaved TRUENASPOOL

Or does it remain at ~21.8 TiB?