Hardlinks across Datasets?

Ignore the GUI / dashboard for now. (There have been mismatches between the GUI and command-line in the past.)

To truly see what’s going on:

zpool list -o name,size,cap,alloc,free,bcloneratio,bcloneused,bclonesaved

Also keep in mind any snapshots involved, as well as blocks that can still remain allocated even after “deleting” a particular file, if you had been testing out block cloning earlier.
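For example, to check whether snapshots are still pinning those blocks (a sketch; substitute your actual pool name for “tank”):

zfs list -t snapshot -r tank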

EDIT: Fixed the command’s properties. Thanks to @Protopia for pointing that out.


I just tried that on SCALE and got:

$ sudo zpool list -o space,bcloneratio,bcloneused,bclonesaved hdd-pool
bad property list: invalid property 'avail'
usage:
        list [-gHLpPv] [-o property[,...]] [-T d|u] [pool] ... 
            [interval [count]]

the following properties are supported:

        PROPERTY             EDIT   VALUES

        allocated              NO   <size>
        bcloneratio            NO   <1.00x or higher if cloned>
        bclonesaved            NO   <size>
        bcloneused             NO   <size>
        capacity               NO   <size>
        checkpoint             NO   <size>
        dedupratio             NO   <1.00x or higher if deduped>
        expandsize             NO   <size>
        fragmentation          NO   <percent>
        free                   NO   <size>
        freeing                NO   <size>
        guid                   NO   <guid>
        health                 NO   <state>
        leaked                 NO   <size>
        load_guid              NO   <load_guid>
        size                   NO   <size>
        altroot               YES   <path>
        ashift                YES   <ashift, 9-16, or 0=default>
        autoexpand            YES   on | off
        autoreplace           YES   on | off
        autotrim              YES   on | off
        bootfs                YES   <filesystem>
        cachefile             YES   <file> | none
        comment               YES   <comment-string>
        compatibility         YES   <file[,file...]> | off | legacy
        delegation            YES   on | off
        failmode              YES   wait | continue | panic
        listsnapshots         YES   on | off
        multihost             YES   on | off
        readonly              YES   on | off
        version               YES   <version>
        feature@...           YES   disabled | enabled | active

The feature@ properties must be appended with a feature name.
See zpool-features(7).

Since I didn’t give “avail” as a property to list, I have no idea what went wrong.


You didn’t do anything wrong. Apparently, the “combo” property space is only for datasets. (It “combines” a bunch of properties together for convenience. Ironically, it uses properties for a dataset that are named differently for a pool. Go figure.)
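For datasets, per zfs-list(8), space expands to roughly:

zfs list -o name,avail,used,usedsnap,usedds,usedrefreserv,usedchild

And avail is a dataset property; the pool-level counterpart is named free, which is why zpool rejected it.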

You’ll have to instead type out all properties:

zpool list -o name,size,cap,alloc,free,bcloneratio,bcloneused,bclonesaved

yeah - I worked it out and used:

sudo zpool list -o name,size,capacity,allocated,free,bcloneratio,bcloneused,bclonesaved

@gigagames

On an Arch Linux system using a dummy “test” pool, with the following versions, I did a test.

  • Kernel: 6.6.52
  • Coreutils: 9.5
  • ZFS: 2.2.6

Between datasets, I copied a large 1 GiB non-compressible file (comprised of random data) without any special flags, simply with the cp command.

cp /testpool/mydata/bigfile.dat /testpool/yourdata/
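
A file like that can be created with something along these lines (a sketch, not necessarily the exact command I used; reading from /dev/urandom guarantees the data won’t compress away):

dd if=/dev/urandom of=/testpool/mydata/bigfile.dat bs=1M count=1024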

Here are the results. You’ll see that the command-line output is a more accurate representation of what’s going on.

zpool list -o name,size,capacity,alloc,free,bcloneratio,bcloneused,bclonesaved
NAME       SIZE    CAP  ALLOC   FREE  BCLONE_RATIO  BCLONE_USED  BCLONE_SAVED
testpool  3.75G    26%  1.00G  2.75G         2.00x           1G            1G


zfs list -o space
NAME               AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
testpool           2.62G  2.00G        0B     25K             0B      2.00G
testpool/mydata    2.62G  1.00G        0B   1.00G             0B         0B
testpool/yourdata  2.62G  1.00G        0B   1.00G             0B         0B

If you add the “USED” of both datasets, it equals 2 GiB, which is reported by the parent dataset. But, according to the pool’s properties, only 1 GiB of space is being used.

See? ZFS math is tricky to work with. :wink:

So if you want to truly know how much space is being used on the pool overall, then you should only rely on the zpool command. Don’t rely on the zfs command or any (parent) dataset properties. Don’t rely on the dashboard or GUI.
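
If you want to see the clone accounting directly, you can also query the pool properties by name (same test pool as above):

zpool get allocated,free,bcloneratio,bcloneused,bclonesaved testpool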


Why didn’t my test require --sparse=never? It could be a combination of the ZFS version + kernel version + Coreutils version, or perhaps because my test file is “friendly” with cp’s sparseness heuristics.
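
If you’d rather not depend on those heuristics at all, cp can be told to clone-or-fail (a sketch, using the paths from my test):

cp --reflink=always /testpool/mydata/bigfile.dat /testpool/yourdata/

With --reflink=always, cp won’t silently fall back to a regular copy: you either get a block-cloned copy or an error.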


Just for fun, I did this to demonstrate how “ZFS math” is pure silliness! :crazy_face:
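
Roughly like this (a sketch; the actual file names may have differed):

for i in 1 2 3 4 5; do cp /testpool/mydata/bigfile.dat /testpool/mydata/copy$i.dat; done

That makes seven references to the same 1 GiB of blocks in total: the original, the earlier copy in yourdata, and these five.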

zpool list -o name,size,capacity,alloc,free,bcloneratio,bcloneused,bclonesaved
NAME       SIZE    CAP  ALLOC   FREE  BCLONE_RATIO  BCLONE_USED  BCLONE_SAVED
testpool  3.75G    26%  1.00G  2.75G         7.00x           1G            6G


zfs list -o space
NAME               AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
testpool           2.62G  7.00G        0B     25K             0B      7.00G
testpool/mydata    2.62G  6.00G        0B   6.00G             0B         0B
testpool/yourdata  2.62G  1.00G        0B   1.00G             0B         0B

Did you catch the “anomaly”? :wink:

My 4 GB pool is storing 7 GB of data! Wow! :laughing:


I suppose a fourth option is to create a script that invokes cp --sparse=never, and use that to copy the files, rather than Radarr’s own built-in feature.
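
A minimal sketch of such a wrapper (the name and install path are hypothetical; point Radarr at it instead of plain cp):

#!/bin/sh
# Hypothetical wrapper: always pass --sparse=never so block cloning
# isn't skipped by cp's sparse-file heuristics.
exec /usr/bin/cp --sparse=never "$@"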

But it is… and it has 2.62G free too.

How much is allocated?

1 GiB, from the one large file that was copied 7 times.

Yet “testpool” shows that “USED” is 7 GiB.

You guys are awesome.
Thank you, I have no further questions regarding this thread.

~~ (strikethrough) imo works better than spoiler for this…

Also, “can be done across different pools” needs fixing: block cloning only works within the same pool.


Thx. I changed it (though it was so long ago that probably no one new will actually read it).
