Issue Summary:
I’m encountering an issue with my TrueNAS system where the root dataset (NAS-main) is showing unexpected space usage of 50.8 GB, which I cannot account for. This dataset gets replicated to the secondary pool (secondary), and the issue persists even after destroying and recreating the snapshot. It’s probably just my lack of understanding, but I would like to ask.
System Information:
Source System: TrueNAS Core
Target System: TrueNAS Core
Primary Pool: NAS-main
Secondary Pool: secondary
Replication Type: Recursive
Problem Details:
Unexpected Space Usage:
Running zfs list -r NAS-main shows 50.8 GB used by the root dataset itself (usedbydataset), but there’s no visible data at the root level.
This could be a case where a folder with the same name as a child dataset exists (for one reason or another), and its contents are hidden when the child dataset gets overlaid on top of it at mount time.
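Purely as a hypothetical illustration (using your media dataset only as an example), this is how such a shadowed directory usually comes about:

zfs unmount NAS-main/media                  # child dataset temporarily not mounted
cp -a /some/source /mnt/NAS-main/media/     # data lands in the root dataset's directory
zfs mount NAS-main/media                    # remounting the child hides the copy underneath

To narrow it down, please post the output of: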
zfs list -t filesystem,volume -r -o space,encryption,encroot,mountpoint NAS-main
zpool list -o name,size,alloc,free,bcloneused NAS-main
zfs mount | grep NAS-main
ls -l /mnt/NAS-main
The only way to find out for certain is to unmount the following datasets, then issue du -hs for all directories within /mnt/NAS-main, except for .system and scale-apps (see the sketch after the list):
manjaro-home
syncthing-data
home
cavern
media
scale-home
mandie-home
They should all report 0 usage.
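A rough sketch of the whole check in one go (sh/bash, run as root, after stopping the services and shares mentioned below; dataset names taken from the list above):

for ds in manjaro-home syncthing-data home cavern media scale-home mandie-home; do
    zfs unmount NAS-main/"$ds"
done
du -hs /mnt/NAS-main/*    # with the children out of the way, anything large here belongs to the root dataset itself (ignore scale-apps, which stays mounted)
for ds in manjaro-home syncthing-data home cavern media scale-home mandie-home; do
    zfs mount NAS-main/"$ds"
done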
I understand that this means you’ll need to stop services and shares, and possibly apps, but there’s no other way to know for certain unless you unmount them. The middleware doesn’t like it if you unmount datasets from the command line, from what I remember. (Maybe there’s a supported TrueNAS CLI command?)
I excluded .system and scale-apps, since I assume those are unlikely culprits.
There’s clearly 50 GiB sitting directly inside the root dataset, not just referenced by snapshots.
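For reference, the split between data in the dataset itself, in its snapshots, and in its children can be read straight off the properties:

zfs get -o name,property,value used,usedbydataset,usedbysnapshots,usedbychildren NAS-main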
Not too relevant, but I might as well ask: what is the vdev layout of this pool? Have you ever used deduplication?
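(If you’re not sure about the dedup question, zpool status -D NAS-main prints the deduplication table statistics; if it reports no DDT entries, there is currently no deduplicated data on the pool.)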
I could not unmount my home dataset, as I am SSH’d into it (and probably for other reasons too), but I unmounted the others. I hope this is what you meant.
jack@TruenasCore ~ $ sudo zpool status
pool: NAS-main
state: ONLINE
scan: scrub repaired 0B in 01:27:33 with 0 errors on Thu Jan 9 05:57:34 2025
config:
  NAME                                            STATE     READ WRITE CKSUM
  NAS-main                                        ONLINE       0     0     0
    raidz2-0                                      ONLINE       0     0     0
      gptid/a6a6224b-8d18-11ed-b40c-c4346b65b332  ONLINE       0     0     0
      gptid/a6be27d2-8d18-11ed-b40c-c4346b65b332  ONLINE       0     0     0
      gptid/a68db70b-8d18-11ed-b40c-c4346b65b332  ONLINE       0     0     0
      gptid/a6b252a4-8d18-11ed-b40c-c4346b65b332  ONLINE       0     0     0
errors: No known data errors
pool: boot-pool
state: ONLINE
status: Some supported and requested features are not enabled on the pool.
The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
the pool may no longer be accessible by software that does not support
the features. See zpool-features(7) for details.
scan: scrub repaired 0B in 00:00:23 with 0 errors on Sun Jan 19 03:45:23 2025
config:
  NAME          STATE     READ WRITE CKSUM
  boot-pool     ONLINE       0     0     0
    mirror-0    ONLINE       0     0     0
      ada1p2    ONLINE       0     0     0
      ada0p2    ONLINE       0     0     0
errors: No known data errors
pool: secondary
state: ONLINE
status: Some supported and requested features are not enabled on the pool.
The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
the pool may no longer be accessible by software that does not support
the features. See zpool-features(7) for details.
scan: scrub repaired 0B in 01:10:33 with 0 errors on Thu Jan 9 05:40:33 2025
config:
  NAME                                            STATE     READ WRITE CKSUM
  secondary                                       ONLINE       0     0     0
    mirror-0                                      ONLINE       0     0     0
      gptid/d13e21da-96b9-11ef-8b70-a0d3c13491f6  ONLINE       0     0     0
      gptid/d14a61a2-96b9-11ef-8b70-a0d3c13491f6  ONLINE       0     0     0
      gptid/7792e84f-9938-11ef-af94-a0d3c13491f6  ONLINE       0     0     0
errors: No known data errors
jack@TruenasCore ~ $ sudo zfs unmount NAS-main/manjaro-home
Password:
jack@TruenasCore ~ $ sudo du -hs /mnt/NAS-main/manjaro-home
512B /mnt/NAS-main/manjaro-home
jack@TruenasCore ~ $ ls -al /mnt/NAS-main/
total 202
drwxr-xr-x 12 root wheel 11 Jan 19 17:20 .
drwxr-xr-x 4 root wheel 192 Jan 19 19:05 ..
dr-xr-xr-x+ 3 root wheel 3 Jan 5 2023 .zfs
drwxrwx---+ 13 smbuser builtin_administrators 12 Jan 10 16:55 cavern
drwxr-xr-x 16 jack wheel 26 Jan 8 10:17 home
drwxr-xr-x 16 jack wheel 27 Jan 19 10:57 home_backup
drwxrwx---+ 7 3001 builtin_administrators 15 Jan 1 14:52 mandie-home
drwxr-xr-x 2 root wheel 2 Jan 19 12:42 manjaro-home
drwxr-x---+ 103 mediax 3004 116 Jan 18 12:44 media
drwxrwx---+ 11 root wheel 10 Jan 4 00:26 scale-apps
drwx------+ 14 3003 3003 28 Jan 17 14:50 scale-home
drwxr-x--- 12 568 568 11 Dec 31 11:39 syncthing-data
jack@TruenasCore ~ $ sudo zfs unmount NAS-main/syncthing-data
jack@TruenasCore ~ $ ls -al /mnt/NAS-main/
total 156
drwxr-xr-x 12 root wheel 11 Jan 19 17:20 .
drwxr-xr-x 4 root wheel 192 Jan 19 19:05 ..
dr-xr-xr-x+ 3 root wheel 3 Jan 5 2023 .zfs
drwxrwx---+ 13 smbuser builtin_administrators 12 Jan 10 16:55 cavern
drwxr-xr-x 16 jack wheel 26 Jan 8 10:17 home
drwxr-xr-x 16 jack wheel 27 Jan 19 10:57 home_backup
drwxrwx---+ 7 3001 builtin_administrators 15 Jan 1 14:52 mandie-home
drwxr-xr-x 2 root wheel 2 Jan 19 12:42 manjaro-home
drwxr-x---+ 103 mediax 3004 116 Jan 18 12:44 media
drwxrwx---+ 11 root wheel 10 Jan 4 00:26 scale-apps
drwx------+ 14 3003 3003 28 Jan 17 14:50 scale-home
drwxr-xr-x 2 root wheel 2 Jan 19 13:06 syncthing-data
jack@TruenasCore ~ $ sudo zfs unmount NAS-main/home
cannot unmount '/mnt/NAS-main/home': pool or dataset is busy
jack@TruenasCore ~ $ sudo zfs unmount NAS-main/cavern
jack@TruenasCore ~ $ sudo zfs unmount NAS-main/media
jack@TruenasCore ~ $ sudo zfs unmount NAS-main/scale-home
jack@TruenasCore ~ $ sudo zfs unmount NAS-main/mandie-home
jack@TruenasCore ~ $ sudo du -hs /mnt/NAS-main
^C
jack@TruenasCore ~ $ sudo du -hs /mnt/NAS-main/manjaro-home
512B /mnt/NAS-main/manjaro-home
jack@TruenasCore ~ $ sudo du -hs /mnt/NAS-main/syncthing-data
512B /mnt/NAS-main/syncthing-data
jack@TruenasCore ~ $ sudo du -hs /mnt/NAS-main/cavern
512B /mnt/NAS-main/cavern
jack@TruenasCore ~ $ sudo du -hs /mnt/NAS-main/media
512B /mnt/NAS-main/media
jack@TruenasCore ~ $ sudo du -hs /mnt/NAS-main/scale-home
512B /mnt/NAS-main/scale-home
jack@TruenasCore ~ $ sudo du -hs /mnt/NAS-main/mandie-home
512B /mnt/NAS-main/mandie-home
jack@TruenasCore ~ $ sudo zfs mount NAS-main/manjaro-home
jack@TruenasCore ~ $ sudo zfs mount NAS-main/syncthing-data
jack@TruenasCore ~ $ sudo zfs mount NAS-main/cavern
jack@TruenasCore ~ $ sudo zfs mount NAS-main/media
jack@TruenasCore ~ $ sudo zfs mount NAS-main/scale-home
jack@TruenasCore ~ $ sudo zfs mount NAS-main/mandie-home
jack@TruenasCore ~ $ ls -al /mnt/NAS-main/
total 214
drwxr-xr-x 12 root wheel 11 Jan 19 17:20 .
drwxr-xr-x 4 root wheel 192 Jan 19 19:05 ..
dr-xr-xr-x+ 3 root wheel 3 Jan 5 2023 .zfs
drwxrwx---+ 13 smbuser builtin_administrators 12 Jan 10 16:55 cavern
drwxr-xr-x 16 jack wheel 26 Jan 8 10:17 home
drwxr-xr-x 16 jack wheel 27 Jan 19 10:57 home_backup
drwxrwx---+ 7 3001 builtin_administrators 15 Jan 1 14:52 mandie-home
drwxrwx---+ 7 smbuser builtin_administrators 6 Oct 30 14:31 manjaro-home
drwxr-x---+ 103 mediax 3004 116 Jan 18 12:44 media
drwxrwx---+ 11 root wheel 10 Jan 4 00:26 scale-apps
drwx------+ 14 3003 3003 28 Jan 17 14:50 scale-home
drwxr-x--- 12 568 568 11 Dec 31 11:39 syncthing-data
jack@TruenasCore ~ $
Actually, thinking about it, it could well be something to do with my home folder/dataset, as I was using a plain folder as my home for a short while, but I did delete it (I’m sure I did :P). I just don’t see how to unmount it and see what’s “underneath”.
Since in TrueNAS there are shares, services, and even login sessions constantly “using” paths and mountpoints, the only way to get a truly clean environment is to boot into an Ubuntu live ISO and manually mount/unmount from the live session.
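Roughly, from the live session (a sketch; it assumes the live environment has the ZFS tools installed and that nothing is encrypted, otherwise the keys would have to be loaded first):

sudo zpool import -N -o readonly=on NAS-main   # import the pool without mounting any datasets
sudo zfs mount NAS-main                        # mount only the root dataset at its configured mountpoint; children stay unmounted
sudo du -hs /mnt/NAS-main/*                    # every directory here now belongs to the root dataset itself, so anything large is the culprit
sudo zfs unmount NAS-main
sudo zpool export NAS-main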
Oh, actually yes, I do remember home_backup. It was when I was doing a load of testing and had to create a quick backup; it was only meant to exist for a couple of seconds.
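If home_backup is a plain directory rather than a dataset (it shows up in your ls -al output but not in the dataset list), a quick du -hs /mnt/NAS-main/home_backup should show whether that is where the missing space lives.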
Hey there. That was a brilliant idea to start a new thread on the forums where I could swoop in with an easy solution! Glad we scheduled this in advance and coordinated together behind the scenes. Would have been terrible if someone else interfered with our plan.
I’m still working on racking up more “solutions” to build my clout. Let me know when you want to do this again.
Thanks, Jack.
I meant to send a DM. Everything looks the same on Discourse!