I ran a test on an Arch Linux system, using a dummy “test” pool, with the following versions:
- Kernel: 6.6.52
- Coreutils: 9.5
- ZFS: 2.2.6
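For reference, a throwaway pool like this can be built on a file-backed vdev. This is a sketch, not my exact setup: the backing-file path and 4 GiB size are my choices here, while the pool and dataset names match the output below. It needs root and a system with ZFS installed.

```shell
# Create a 4 GiB sparse backing file and build a disposable pool on it.
truncate -s 4G /tmp/testpool.img
zpool create testpool /tmp/testpool.img

# Two sibling datasets to copy between.
zfs create testpool/mydata
zfs create testpool/yourdata

# 1 GiB of incompressible random data as the test file.
dd if=/dev/urandom of=/testpool/mydata/bigfile.dat bs=1M count=1024
```

Destroy it afterwards with `zpool destroy testpool` and remove the backing file.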
I copied a large 1 GiB incompressible file of random data between datasets, without any special flags, simply with the cp command:
cp /testpool/mydata/bigfile.dat /testpool/yourdata/
Here are the results. As you’ll see, the zpool output is the more accurate representation of what’s actually going on.
zpool list -o name,size,capacity,alloc,free,bcloneratio,bcloneused,bclonesaved
NAME SIZE CAP ALLOC FREE BCLONE_RATIO BCLONE_USED BCLONE_SAVED
testpool 3.75G 26% 1.00G 2.75G 2.00x 1G 1G
zfs list -o space
NAME AVAIL USED USEDSNAP USEDDS USEDREFRESERV USEDCHILD
testpool 2.62G 2.00G 0B 25K 0B 2.00G
testpool/mydata 2.62G 1.00G 0B 1.00G 0B 0B
testpool/yourdata 2.62G 1.00G 0B 1.00G 0B 0B
Add up the “USED” of both datasets and you get 2 GiB, which is what the parent dataset reports. But according to the pool itself, only 1 GiB of space is actually allocated.
See? ZFS math is tricky to work with.
So if you want to know how much space is truly being used on the pool overall, rely only on the zpool command. Don’t rely on the zfs command or any (parent) dataset properties, and don’t rely on a dashboard or GUI.
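On a live system you can compare the two views directly. A sketch, using the pool name from above; the -Hp flags ask both tools for raw, script-friendly byte values:

```shell
# Pool-level truth: bytes actually allocated on the vdevs.
zpool get -Hp -o value allocated testpool

# Dataset-level view: logical usage, which double-counts cloned blocks.
zfs get -Hp -o value used testpool

# Space that block cloning is currently saving on this pool.
zpool get -Hp -o value bclonesaved testpool
```

In my test, the first number is roughly half the second, and the difference is accounted for by the third.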
Why didn’t my test require --sparse=never? It could be a combination of the ZFS, kernel, and Coreutils versions, or my test file may simply be friendly to cp’s sparseness heuristics. One likely factor: since Coreutils 9.0, cp defaults to --reflink=auto, which attempts a copy-on-write copy first, and ZFS 2.2’s block cloning can satisfy that regardless of sparseness.
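If you want to rule cloning out entirely, cp can be told never to reflink. A small demo; the paths are examples of mine, and on a ZFS 2.2 pool the --reflink=never copy should leave BCLONE_USED flat, while the default (--reflink=auto) allows cloning:

```shell
# Make a small random source file (1 MiB here, for speed).
src=$(mktemp)
dst=$(mktemp -u)
head -c 1048576 /dev/urandom > "$src"

# Force a byte-for-byte copy; no clone, no shared blocks.
cp --reflink=never "$src" "$dst"

# The contents are identical either way; only the on-disk
# accounting differs on a cloning-capable filesystem.
cmp -s "$src" "$dst" && echo "identical"

rm -f "$src" "$dst"
```

This works on any filesystem; on non-ZFS filesystems --reflink=never simply behaves like a plain copy.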