Unexpected Space Usage in Root Dataset

Version: TrueNAS-13.3-U1 Core

Issue Summary:
I’m encountering an issue on my TrueNAS system where the root dataset (NAS-main) shows 50.8GB of unexpected space usage that I cannot account for. This dataset gets replicated to the secondary pool (secondary), and the issue persists even after destroying and recreating the snapshot. It’s probably just my lack of understanding, but I would like to ask.


System Information:

  • Source System: TrueNAS Core
  • Target System: TrueNAS Core
  • Primary Pool: NAS-main
  • Secondary Pool: secondary
  • Replication Type: Recursive

Problem Details:

  1. Unexpected Space Usage:
    • Running zfs list -r NAS-main shows usedbydataset is 50.8GB, but there’s no visible data at the root level.
    • Snapshots:
      sudo zfs list -t snapshot -r NAS-main
      NAME                                                                               USED  AVAIL  REFER  MOUNTPOINT
      NAS-main@auto-2025-01-19_17-32                                                       0B      -  50.8G  -
      NAS-main/.system@manual-2025-01-19_15-23                                          1.01M      -  21.0M  -
      NAS-main/.system@auto-2025-01-19_15-25                                            1.02M      -  21.0M  -
      NAS-main/.system@auto-2025-01-19_17-32                                            16.6M      -  22.3M  -
      
    • Listing contents of /mnt/NAS-main shows only the expected datasets.
    • Running sudo zfs get reservation,refreservation NAS-main confirms no reservations are set.
    • The space is recreated whenever a full recursive snapshot of NAS-main is taken.
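In case it helps, here’s one more way I sliced it; as I understand it, -p makes zfs get print raw byte counts so the numeric sort is reliable (treat the exact flags as a sketch):

sudo zfs get -r -Hp -o name,value usedbydataset NAS-main | sort -k2 -n | tail -5

Anything unexpectedly large should stand out at the bottom of that list.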

I don’t mind it so much on NAS-main, but when I replicate to my secondary pool it always gets replicated there too.

I have NAS-main/.system and NAS-main/iocage in the exclude list for the replication.
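To see what a full recursive send would actually carry, I believe a dry run along these lines estimates the stream size without sending anything (snapshot name copied from above; note that -R ignores my exclude list, so the total would be an upper bound):

sudo zfs send -R -n -v NAS-main@auto-2025-01-19_17-32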

  2. Steps Tried:

    • Verified that no unexpected datasets exist.
    • Double-checked snapshot settings.
    • Ran zfs get used,usedbydataset,usedbysnapshots,usedbychildren NAS-main to analyze space.
    • Attempted to locate large files using ls -lah /mnt/NAS-main and partial du commands.
    • Checked for hidden datasets using zfs list -r -o name,mountpoint,canmount NAS-main.
    • Looked inside .zfs/snapshot for hidden snapshots.
    • Attempted a pool scrub.
    • Checked for processes holding files open with lsof +D /mnt/NAS-main.
  3. Findings:

    • The root dataset (NAS-main) continues to show 50.8GB of usage with no visible files.
    • Destroying the snapshots does not free the space, and it reappears with each new snapshot, which suggests the blocks belong to the live dataset itself rather than being pinned by any snapshot.
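My understanding is that written@<snapshot> reports how much the live dataset has diverged since that snapshot, so a small number here would support the idea that the 50.8GB is live data referenced by the dataset itself rather than space held only by snapshots (a sketch; property name per the zfsprops man page):

sudo zfs get written@auto-2025-01-19_17-32 NAS-main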

Questions:

  1. How can I identify what’s consuming the 50.8GB space in the root dataset when no files seem present?
  2. Is there a way to free up this space without deleting the entire dataset?

Any advice, troubleshooting tips, or suggestions would be greatly appreciated!

This could be a case where a folder with the same name as a child dataset exists (for one reason or another), so its usage is hidden when the child dataset is mounted over the top of it.
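If you want to see the effect in isolation, here is a minimal sketch on a throwaway dataset (tank/demo is purely illustrative, not one of your datasets; -O requests an overlay mount on a non-empty directory):

sudo zfs create -p tank/demo/child
sudo zfs unmount tank/demo/child
sudo mkdir -p /mnt/tank/demo/child                                 # now just a plain directory
sudo dd if=/dev/zero of=/mnt/tank/demo/child/blob bs=1m count=100  # charged to tank/demo
sudo zfs mount -O tank/demo/child                                  # dataset overlays the directory
du -hs /mnt/tank/demo/child                                        # near-zero: blob is hidden underneath
zfs list -o space tank/demo                                        # USEDDS on tank/demo still counts the 100M

To check whether that’s what’s happening here, please post the output of: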

zfs list -t filesystem,volume -r -o space,encryption,encroot,mountpoint NAS-main

zpool list -o name,size,alloc,free,bcloneused NAS-main

zfs mount | grep NAS-main

ls -l /mnt/NAS-main
jack@TruenasCore ~ $ sudo zfs list -t filesystem,volume -r -o space,encryption,encroot,mountpoint NAS-main
Password:
NAME                                                      AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD  ENCRYPTION   ENCROOT  MOUNTPOINT
NAS-main                                                  5.89T  1.03T        0B   50.8G             0B      1005G  off          -        /mnt/NAS-main
NAS-main/.system                                          5.89T  71.9M     36.5M   22.1M             0B      13.3M  off          -        legacy
NAS-main/.system/samba4                                   5.89T  1.04M      488K    575K             0B         0B  off          -        legacy
NAS-main/.system/syslog-daeb7be0eae547028f28998beacf9023  5.89T  12.2M      843K   11.4M             0B         0B  off          -        legacy
NAS-main/cavern                                           5.89T   238G     79.4M    238G             0B         0B  off          -        /mnt/NAS-main/cavern
NAS-main/home                                             5.89T  16.9G      465K   16.9G             0B         0B  off          -        /mnt/NAS-main/home
NAS-main/mandie-home                                      5.89T  1.95G      298M   1.66G             0B         0B  off          -        /mnt/NAS-main/mandie-home
NAS-main/manjaro-home                                     5.89T   334G     29.3G    305G             0B         0B  off          -        /mnt/NAS-main/manjaro-home
NAS-main/media                                            5.89T   383G     6.98G    376G             0B         0B  off          -        /mnt/NAS-main/media
NAS-main/scale-apps                                       5.89T  8.24G      430K    209K             0B      8.24G  off          -        /mnt/NAS-main/scale-apps
NAS-main/scale-apps/dockge                                5.89T  2.13M        0B    151K             0B      1.99M  off          -        /mnt/NAS-main/scale-apps/dockge
NAS-main/scale-apps/dockge/data                           5.89T   825K      651K    174K             0B         0B  off          -        /mnt/NAS-main/scale-apps/dockge/data
NAS-main/scale-apps/dockge/stacks                         5.89T  1.18M     1.03M    157K             0B         0B  off          -        /mnt/NAS-main/scale-apps/dockge/stacks
NAS-main/scale-apps/gluetun                               5.89T   668K      418K    250K             0B         0B  off          -        /mnt/NAS-main/scale-apps/gluetun
NAS-main/scale-apps/nextcloud                             5.89T  7.58G        0B    151K             0B      7.58G  off          -        /mnt/NAS-main/scale-apps/nextcloud
NAS-main/scale-apps/nextcloud/appdata                     5.89T   591M     1.82M    589M             0B         0B  off          -        /mnt/NAS-main/scale-apps/nextcloud/appdata
NAS-main/scale-apps/nextcloud/postgres                    5.89T   547M      251M    296M             0B         0B  off          -        /mnt/NAS-main/scale-apps/nextcloud/postgres
NAS-main/scale-apps/nextcloud/userdata                    5.89T  6.47G     69.5M   6.40G             0B         0B  off          -        /mnt/NAS-main/scale-apps/nextcloud/userdata
NAS-main/scale-apps/npm                                   5.89T  11.2M     81.4K    151K             0B      11.0M  off          -        /mnt/NAS-main/scale-apps/npm
NAS-main/scale-apps/npm/certs                             5.89T  1.02M      814K    232K             0B         0B  off          -        /mnt/NAS-main/scale-apps/npm/certs
NAS-main/scale-apps/npm/data                              5.89T  9.94M     8.25M   1.70M             0B         0B  off          -        /mnt/NAS-main/scale-apps/npm/data
NAS-main/scale-apps/prowlarr                              5.89T  77.1M        0B    140K             0B      76.9M  off          -        /mnt/NAS-main/scale-apps/prowlarr
NAS-main/scale-apps/prowlarr/config                       5.89T  76.9M     67.7M   9.28M             0B         0B  off          -        /mnt/NAS-main/scale-apps/prowlarr/config
NAS-main/scale-apps/qbittorrent                           5.89T   229M     13.2M    205M             0B      10.9M  off          -        /mnt/NAS-main/scale-apps/qbittorrent
NAS-main/scale-apps/qbittorrent/qbitconfig                5.89T  10.9M     4.47M   6.48M             0B         0B  off          -        /mnt/NAS-main/scale-apps/qbittorrent/qbitconfig
NAS-main/scale-apps/radarr                                5.89T   294M      105K    680K             0B       293M  off          -        /mnt/NAS-main/scale-apps/radarr
NAS-main/scale-apps/radarr/config                         5.89T   293M      139M    154M             0B         0B  off          -        /mnt/NAS-main/scale-apps/radarr/config
NAS-main/scale-apps/syncthing-config                      5.89T  59.2M     53.0M   6.20M             0B         0B  off          -        /mnt/NAS-main/scale-apps/syncthing-config
NAS-main/scale-home                                       5.89T  17.0G      355K   17.0G             0B         0B  off          -        /mnt/NAS-main/scale-home
NAS-main/syncthing-data                                   5.89T  4.83G      210M   4.63G             0B         0B  off          -        /mnt/NAS-main/syncthing-data
jack@TruenasCore ~ $ sudo zpool list -o name,size,alloc,free,bcloneused NAS-main
NAME       SIZE  ALLOC   FREE  BCLONE_USED
NAS-main  14.5T  2.13T  12.4T            0
jack@TruenasCore ~ $ zfs mount | grep NAS-main
NAS-main                        /mnt/NAS-main
NAS-main/.system                /var/db/system
NAS-main/.system/samba4         /var/db/system/samba4
NAS-main/.system/syslog-daeb7be0eae547028f28998beacf9023  /var/db/system/syslog-daeb7be0eae547028f28998beacf9023
NAS-main/manjaro-home           /mnt/NAS-main/manjaro-home
NAS-main/scale-apps             /mnt/NAS-main/scale-apps
NAS-main/scale-apps/dockge      /mnt/NAS-main/scale-apps/dockge
NAS-main/scale-apps/dockge/data  /mnt/NAS-main/scale-apps/dockge/data
NAS-main/scale-apps/dockge/stacks  /mnt/NAS-main/scale-apps/dockge/stacks
NAS-main/scale-apps/gluetun     /mnt/NAS-main/scale-apps/gluetun
NAS-main/scale-apps/nextcloud   /mnt/NAS-main/scale-apps/nextcloud
NAS-main/scale-apps/nextcloud/appdata  /mnt/NAS-main/scale-apps/nextcloud/appdata
NAS-main/scale-apps/nextcloud/postgres  /mnt/NAS-main/scale-apps/nextcloud/postgres
NAS-main/scale-apps/nextcloud/userdata  /mnt/NAS-main/scale-apps/nextcloud/userdata
NAS-main/scale-apps/npm         /mnt/NAS-main/scale-apps/npm
NAS-main/scale-apps/npm/certs   /mnt/NAS-main/scale-apps/npm/certs
NAS-main/scale-apps/npm/data    /mnt/NAS-main/scale-apps/npm/data
NAS-main/scale-apps/prowlarr    /mnt/NAS-main/scale-apps/prowlarr
NAS-main/scale-apps/prowlarr/config  /mnt/NAS-main/scale-apps/prowlarr/config
NAS-main/scale-apps/qbittorrent  /mnt/NAS-main/scale-apps/qbittorrent
NAS-main/scale-apps/qbittorrent/qbitconfig  /mnt/NAS-main/scale-apps/qbittorrent/qbitconfig
NAS-main/scale-apps/radarr      /mnt/NAS-main/scale-apps/radarr
NAS-main/scale-apps/radarr/config  /mnt/NAS-main/scale-apps/radarr/config
NAS-main/scale-apps/syncthing-config  /mnt/NAS-main/scale-apps/syncthing-config
NAS-main/syncthing-data         /mnt/NAS-main/syncthing-data
NAS-main/home                   /mnt/NAS-main/home
NAS-main/cavern                 /mnt/NAS-main/cavern
NAS-main/media                  /mnt/NAS-main/media
NAS-main/scale-home             /mnt/NAS-main/scale-home
NAS-main/mandie-home            /mnt/NAS-main/mandie-home
jack@TruenasCore ~ $ ls -l /mnt/NAS-main
total 202
drwxrwx---+  13 smbuser  builtin_administrators   12 Jan 10 16:55 cavern
drwxr-xr-x   16 jack     wheel                    26 Jan  8 10:17 home
drwxr-xr-x   16 jack     wheel                    27 Jan 19 10:57 home_backup
drwxrwx---+   7 3001     builtin_administrators   15 Jan  1 14:52 mandie-home
drwxrwx---+   7 smbuser  builtin_administrators    6 Oct 30 14:31 manjaro-home
drwxr-x---+ 103 mediax   3004                    116 Jan 18 12:44 media
drwxrwx---+  11 root     wheel                    10 Jan  4 00:26 scale-apps
drwx------+  14 3003     3003                     28 Jan 17 14:50 scale-home
drwxr-x---   12 568      568                      11 Dec 31 11:39 syncthing-data
jack@TruenasCore

There were folders there before, but I deleted them before sending a remote replication, which then created the new datasets. I’m 99% sure everything was empty.

The only way to find out for certain is to unmount the following datasets, then run du -hs on each corresponding directory within /mnt/NAS-main, except for .system and scale-apps (see the loop sketch after this list).

  • manjaro-home
  • syncthing-data
  • home
  • cavern
  • media
  • scale-home
  • mandie-home

They should all report essentially zero usage (just the directory metadata).
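In sh syntax, the whole round trip could look something like this (a sketch only; each du runs while that dataset is unmounted, and nothing is measured if the unmount fails):

for ds in manjaro-home syncthing-data home cavern media scale-home mandie-home; do
    sudo zfs unmount NAS-main/$ds && sudo du -hs /mnt/NAS-main/$ds && sudo zfs mount NAS-main/$ds
done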

I understand that this means you’ll need to stop services and shares, and possibly apps, but there’s no other way to know for certain unless you unmount them. From what I remember, the middleware doesn’t like it when you unmount datasets from the command line. (Maybe there’s a supported TrueNAS CLI command?)

I excluded .system and scale-apps, since I assume those are unlikely culprits.


There’s clearly 50 GiB sitting directly inside the root dataset, not merely referenced by snapshots.

Not too relevant, but I might as well ask: what is the vdev layout of this pool? Have you ever used deduplication?

Here’s the layout, plus my attempts at unmounting things.

No, I have never used dedup.

I could not unmount my home dataset, since I’m SSH’d into it (and probably for other reasons too), but I unmounted the others. I hope this is what you meant.

jack@TruenasCore ~ $ sudo zpool status
  pool: NAS-main
 state: ONLINE
  scan: scrub repaired 0B in 01:27:33 with 0 errors on Thu Jan  9 05:57:34 2025
config:

        NAME                                            STATE     READ WRITE CKSUM
        NAS-main                                        ONLINE       0     0     0
          raidz2-0                                      ONLINE       0     0     0
            gptid/a6a6224b-8d18-11ed-b40c-c4346b65b332  ONLINE       0     0     0
            gptid/a6be27d2-8d18-11ed-b40c-c4346b65b332  ONLINE       0     0     0
            gptid/a68db70b-8d18-11ed-b40c-c4346b65b332  ONLINE       0     0     0
            gptid/a6b252a4-8d18-11ed-b40c-c4346b65b332  ONLINE       0     0     0

errors: No known data errors

  pool: boot-pool
 state: ONLINE
status: Some supported and requested features are not enabled on the pool.
        The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(7) for details.
  scan: scrub repaired 0B in 00:00:23 with 0 errors on Sun Jan 19 03:45:23 2025
config:

        NAME        STATE     READ WRITE CKSUM
        boot-pool   ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            ada1p2  ONLINE       0     0     0
            ada0p2  ONLINE       0     0     0

errors: No known data errors

  pool: secondary
 state: ONLINE
status: Some supported and requested features are not enabled on the pool.
        The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(7) for details.
  scan: scrub repaired 0B in 01:10:33 with 0 errors on Thu Jan  9 05:40:33 2025
config:

        NAME                                            STATE     READ WRITE CKSUM
        secondary                                       ONLINE       0     0     0
          mirror-0                                      ONLINE       0     0     0
            gptid/d13e21da-96b9-11ef-8b70-a0d3c13491f6  ONLINE       0     0     0
            gptid/d14a61a2-96b9-11ef-8b70-a0d3c13491f6  ONLINE       0     0     0
            gptid/7792e84f-9938-11ef-af94-a0d3c13491f6  ONLINE       0     0     0

errors: No known data errors
jack@TruenasCore ~ $ sudo zfs unmount NAS-main/manjaro-home
Password:
jack@TruenasCore ~ $ sudo du -hs /mnt/NAS-main/manjaro-home
512B    /mnt/NAS-main/manjaro-home
jack@TruenasCore ~ $ ls -al /mnt/NAS-main/
total 202
drwxr-xr-x   12 root     wheel                    11 Jan 19 17:20 .
drwxr-xr-x    4 root     wheel                   192 Jan 19 19:05 ..
dr-xr-xr-x+   3 root     wheel                     3 Jan  5  2023 .zfs
drwxrwx---+  13 smbuser  builtin_administrators   12 Jan 10 16:55 cavern
drwxr-xr-x   16 jack     wheel                    26 Jan  8 10:17 home
drwxr-xr-x   16 jack     wheel                    27 Jan 19 10:57 home_backup
drwxrwx---+   7 3001     builtin_administrators   15 Jan  1 14:52 mandie-home
drwxr-xr-x    2 root     wheel                     2 Jan 19 12:42 manjaro-home
drwxr-x---+ 103 mediax   3004                    116 Jan 18 12:44 media
drwxrwx---+  11 root     wheel                    10 Jan  4 00:26 scale-apps
drwx------+  14 3003     3003                     28 Jan 17 14:50 scale-home
drwxr-x---   12 568      568                      11 Dec 31 11:39 syncthing-data
jack@TruenasCore ~ $ sudo zfs unmount NAS-main/syncthing-data
jack@TruenasCore ~ $ ls -al /mnt/NAS-main/
total 156
drwxr-xr-x   12 root     wheel                    11 Jan 19 17:20 .
drwxr-xr-x    4 root     wheel                   192 Jan 19 19:05 ..
dr-xr-xr-x+   3 root     wheel                     3 Jan  5  2023 .zfs
drwxrwx---+  13 smbuser  builtin_administrators   12 Jan 10 16:55 cavern
drwxr-xr-x   16 jack     wheel                    26 Jan  8 10:17 home
drwxr-xr-x   16 jack     wheel                    27 Jan 19 10:57 home_backup
drwxrwx---+   7 3001     builtin_administrators   15 Jan  1 14:52 mandie-home
drwxr-xr-x    2 root     wheel                     2 Jan 19 12:42 manjaro-home
drwxr-x---+ 103 mediax   3004                    116 Jan 18 12:44 media
drwxrwx---+  11 root     wheel                    10 Jan  4 00:26 scale-apps
drwx------+  14 3003     3003                     28 Jan 17 14:50 scale-home
drwxr-xr-x    2 root     wheel                     2 Jan 19 13:06 syncthing-data
jack@TruenasCore ~ $ sudo zfs unmount NAS-main/home
cannot unmount '/mnt/NAS-main/home': pool or dataset is busy
jack@TruenasCore ~ $ sudo zfs unmount NAS-main/cavern
jack@TruenasCore ~ $ sudo zfs unmount NAS-main/media
jack@TruenasCore ~ $ sudo zfs unmount NAS-main/scale-home
jack@TruenasCore ~ $ sudo zfs unmount NAS-main/mandie-home
jack@TruenasCore ~ $ sudo du -hs /mnt/NAS-main
^C
jack@TruenasCore ~ $ sudo du -hs /mnt/NAS-main/manjaro-home
512B    /mnt/NAS-main/manjaro-home
jack@TruenasCore ~ $ sudo du -hs /mnt/NAS-main/syncthing-data
512B    /mnt/NAS-main/syncthing-data
jack@TruenasCore ~ $ sudo du -hs /mnt/NAS-main/cavern
512B    /mnt/NAS-main/cavern
jack@TruenasCore ~ $ sudo du -hs /mnt/NAS-main/media
512B    /mnt/NAS-main/media
jack@TruenasCore ~ $ sudo du -hs /mnt/NAS-main/scale-home
512B    /mnt/NAS-main/scale-home
jack@TruenasCore ~ $ sudo du -hs /mnt/NAS-main/mandie-home
512B    /mnt/NAS-main/mandie-home
jack@TruenasCore ~ $ sudo zfs mount NAS-main/manjaro-home
jack@TruenasCore ~ $ sudo zfs mount NAS-main/syncthing-data
jack@TruenasCore ~ $ sudo zfs mount NAS-main/cavern
jack@TruenasCore ~ $ sudo zfs mount NAS-main/media
jack@TruenasCore ~ $ sudo zfs mount NAS-main/scale-home
jack@TruenasCore ~ $ sudo zfs mount NAS-main/mandie-home
jack@TruenasCore ~ $ ls -al /mnt/NAS-main/
total 214
drwxr-xr-x   12 root     wheel                    11 Jan 19 17:20 .
drwxr-xr-x    4 root     wheel                   192 Jan 19 19:05 ..
dr-xr-xr-x+   3 root     wheel                     3 Jan  5  2023 .zfs
drwxrwx---+  13 smbuser  builtin_administrators   12 Jan 10 16:55 cavern
drwxr-xr-x   16 jack     wheel                    26 Jan  8 10:17 home
drwxr-xr-x   16 jack     wheel                    27 Jan 19 10:57 home_backup
drwxrwx---+   7 3001     builtin_administrators   15 Jan  1 14:52 mandie-home
drwxrwx---+   7 smbuser  builtin_administrators    6 Oct 30 14:31 manjaro-home
drwxr-x---+ 103 mediax   3004                    116 Jan 18 12:44 media
drwxrwx---+  11 root     wheel                    10 Jan  4 00:26 scale-apps
drwx------+  14 3003     3003                     28 Jan 17 14:50 scale-home
drwxr-x---   12 568      568                      11 Dec 31 11:39 syncthing-data
jack@TruenasCore ~ $ 

Actually, thinking about it, it could well be something to do with my home folder/dataset, as I was using a plain folder as home for a short while. I did delete it (I’m sure I did :P), but I don’t see how to unmount it and see what’s “underneath”.

What about home_backup?

I notice it’s not even a dataset.
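You can confirm with something like this (the first command should fail with “dataset does not exist” if it really is just a directory):

zfs list NAS-main/home_backup
sudo du -hs /mnt/NAS-main/home_backup

If that du comes back at roughly 50G, there is your missing space, sitting directly in the root dataset.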


home_backup? I’ve never heard of it.

Because in TrueNAS there are shares, services, and even login sessions constantly “using” paths and mountpoints, the only true way to get a clean environment is to boot into an Ubuntu live ISO and mount/unmount manually from the live session.
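From the live session, something along these lines gives a clean view (a sketch, assuming the ZFS tools are available in the live environment; -N imports the pool without mounting any datasets):

sudo zpool import -f -N -R /mnt NAS-main
sudo zfs mount NAS-main              # mount only the root dataset
sudo du -hs /mnt/NAS-main/*          # children stay unmounted, so hidden files show up
sudo zpool export NAS-main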

Don’t get me started about the “System Dataset”.

What is this home_backup? It isn’t accounted for by any dataset.

Oh, actually yes, I do remember home_backup. It was when I was doing a load of testing and had to create a quick backup; it was only supposed to exist for a couple of seconds.

Ah ha, you are the man :slight_smile:

I don’t know how I forgot about that and was blind to it.

Let me tinker some more, but I think that is it; its size is right.

I’m so grateful.

I was almost right ^^

Hey there. That was a brilliant idea to start a new thread on the forums where I could swoop in with an easy solution! Glad we scheduled this in advance and coordinated together behind the scenes. It would have been terrible if someone else had interfered with our plan.

I’m still working on racking up more “solutions” to build my clout. Let me know when you want to do this again.

Thanks, Jack.

I meant to send a DM. Everything looks the same on Discourse! :expressionless:


Lol, keep an eye out for my posts in the future; I’m always asking stupid questions. :wink:

You can use the same BTC wallet as usual :coin:

Have a great evening.
