TrueNAS SCALE not showing freed-up drive space after large delete

Hi There,

I recently copied the contents of my secondary RAIDZ1 array to my primary array temporarily, so I could destroy the secondary array and recreate it with an additional drive.

That process all went smoothly and the data was copied back onto the now larger secondary array with no problem.

However, after deleting the backup folder on my primary array, the space hasn’t been released: the pool is still showing 70% full when it should be nearer 35%.

I’ve tried restarting, running scrub tasks, and checking for snapshots that might still reference the deleted files (there are none, as I don’t use snapshots), and I still can’t work out what’s happening.

The system is based on a Supermicro X10SDV-4C-TLN4F
64GB ECC RAM
LSI 9300-16i HBA
5 x 4TB WD Red SA500 SATA SSDs in RAIDZ1 (primary array)
6 x 4TB Crucial MX500 SATA SSDs in RAIDZ1 (secondary array)

Any suggestion appreciated as this is driving me nuts now…

Mike

I have found that after deleting a large quantity of data it can take a while for the free space to show up: minutes before the space starts appearing, and a while longer before all of it is reflected. But we’re not talking hours.

From memory, a reboot made it all appear faster

How did you copy the data: from an external host, the command line on the NAS, or through snapshot replication?

Hi NugentS,

Thanks so much for the response!

It was about 6TB of data that I copied between mapped SMB network shares using my Windows PC on a 10Gb LAN. I didn’t do replication, as the data was just backed up to a subfolder on my primary array.

It’s now been over 24 hours since the 6TB of data was deleted from my primary array (just by deleting the whole folder over the network, which took over 2 hours…). I’ve done multiple restarts and scrubs, and the NAS is still showing 10TB of used space, or 70% of the whole array, when it should be nearer 5TB and 30-ish%.

Weirdly, doing right-click and Properties on the mapped drive on my PC shows the capacity as only 9.37TB, with 5.13TB used and 4.24TB free, when the array is actually 14.39TB as reported by the NAS, yet only showing 4.25TB free. This makes me wonder if Samba is the issue here?

I did read something about there being an SMB recycle bin option? But I haven’t found where that is or whether it’s enabled on my NAS.

Any advice very much appreciated!

What happens if you look at the folder using the command line? Are there any hidden folders there?
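For example, something like this (pool and dataset names are placeholders; an SMB recycle bin, if enabled, would typically show up as a hidden .recycle directory):

# -a lists hidden entries too, -l shows details
ls -la /mnt/<mainpool>/<dataset>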

I am just guessing here

Prepare for a world of pain…


For future reference, such massive operations should be done on the server itself, not over the network / SMB.[1] Simply logging in via SSH and familiarizing yourself with Linux commands will take you far.
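For example, a rough sketch of a local copy run directly on the NAS over SSH (the mount points here are made up; adjust to your pools):

# run on the NAS itself; -a preserves permissions/ownership, --info=progress2 shows overall progress
rsync -a --info=progress2 /mnt/<mainpool>/data/ /mnt/<backuppool>/backup/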


To cut down on the guesswork, what is the output of these two commands:

zpool list <mainpool>

zfs list -r -t filesystem -o space <mainpool>

  1. Not only do you hit a bottleneck with the many metadata operations involved over SMB, and not only do you suffer a sheer throughput penalty, but you also risk inadvertently “renaming” files, since SMB does not support the full character set that native *nix filesystems do. (This last point might not apply to you if everything was created over SMB in the first place.) ↩︎
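To illustrate the character-set point, a name like the one below is perfectly legal on ZFS but contains characters (: and ?) that SMB/Windows clients cannot represent (path is a placeholder):

# fine locally on the NAS, but unrepresentable as-is over SMB
touch '/mnt/<mainpool>/report 12:30 final?.txt'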


Thanks all for the help and apologies for the slow response.

I just see the files and folders I expect to see… no hidden folders or anything, as far as I can tell.

I’m pretty new to TrueNAS, so I’m guessing I should be using snapshots?

I’ve used Linux for a few years but am far from comfortable with the shell, and I’m just unsure of what I should and shouldn’t be doing there in case I mess up TrueNAS somehow…

I’ve copied the output of those commands below. Thanks again for all the help.

zpool list ZFSArray1
NAME        SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
ZFSArray1  18.2T  12.7T  5.47T        -         -     0%    69%  1.00x    ONLINE  /mnt

zfs list -r -t filesystem -o space

NAME                                                                          AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
ZFSArray1                                                                     4.25T  10.1T        0B    166K             0B      10.1T
ZFSArray1/.system                                                             4.25T  1.58G        0B   1.20G             0B       388M
ZFSArray1/.system/configs-388408e95929461db941ccc50979f8fb                    4.25T  6.22M        0B   6.22M             0B         0B
ZFSArray1/.system/configs-b09bccc19b894d4c89051b8f49af8857                    4.25T  10.5M        0B   10.5M             0B         0B
ZFSArray1/.system/cores                                                       1024M   153K        0B    153K             0B         0B
ZFSArray1/.system/ctdb_shared_vol                                             4.25T   153K        0B    153K             0B         0B
ZFSArray1/.system/glusterd                                                    4.25T   166K        0B    166K             0B         0B
ZFSArray1/.system/netdata-b09bccc19b894d4c89051b8f49af8857                    4.25T   306M        0B    306M             0B         0B
ZFSArray1/.system/rrd-388408e95929461db941ccc50979f8fb                        4.25T  29.7M        0B   29.7M             0B         0B
ZFSArray1/.system/rrd-b09bccc19b894d4c89051b8f49af8857                        4.25T  22.1M        0B   22.1M             0B         0B
ZFSArray1/.system/samba4                                                      4.25T   709K        0B    709K             0B         0B
ZFSArray1/.system/services                                                    4.25T   153K        0B    153K             0B         0B
ZFSArray1/.system/syslog-388408e95929461db941ccc50979f8fb                     4.25T  8.65M        0B   8.65M             0B         0B
ZFSArray1/.system/syslog-b09bccc19b894d4c89051b8f49af8857                     4.25T  3.69M        0B   3.69M             0B         0B
ZFSArray1/.system/webui                                                       4.25T   153K        0B    153K             0B         0B
ZFSArray1/Rocky1                                                              4.25T  10.1T     5.01T   5.13T             0B         0B
ZFSArray1/ix-applications                                                     4.25T  4.03G      262K    390K             0B      4.03G
ZFSArray1/ix-applications/catalogs                                            4.25T  1.17G        0B   1.17G             0B         0B
ZFSArray1/ix-applications/default_volumes                                     4.25T   153K        0B    153K             0B         0B
ZFSArray1/ix-applications/k3s                                                 4.25T  2.85G     1.68M   2.84G             0B      1.54M
ZFSArray1/ix-applications/k3s/kubelet                                         4.25T  1.54M        0B   1.54M             0B         0B
ZFSArray1/ix-applications/releases                                            4.25T  2.60M      102K    153K             0B      2.35M
ZFSArray1/ix-applications/releases/plex                                       4.25T  2.35M        0B    153K             0B      2.20M
ZFSArray1/ix-applications/releases/plex/charts                                4.25T   856K        0B    856K             0B         0B
ZFSArray1/ix-applications/releases/plex/volumes                               4.25T  1.37M        0B    153K             0B      1.22M
ZFSArray1/ix-applications/releases/plex/volumes/ix_volumes                    4.25T  1.22M      115K    192K             0B       939K
ZFSArray1/ix-applications/releases/plex/volumes/ix_volumes/config             4.25T   153K        0B    153K             0B         0B
ZFSArray1/ix-applications/releases/plex/volumes/ix_volumes/data               4.25T   153K        0B    153K             0B         0B
ZFSArray1/ix-applications/releases/plex/volumes/ix_volumes/ix-plex_config     4.25T   173K        0B    173K             0B         0B
ZFSArray1/ix-applications/releases/plex/volumes/ix_volumes/ix-plex_data       4.25T   153K        0B    153K             0B         0B
ZFSArray1/ix-applications/releases/plex/volumes/ix_volumes/ix-plex_transcode  4.25T   153K        0B    153K             0B         0B
ZFSArray1/ix-applications/releases/plex/volumes/ix_volumes/transcode          4.25T   153K        0B    153K             0B         0B
ZFSArray1/vms    

You are in fact using snapshots.

A total of about 5 TiB (the USEDSNAP column for Rocky1 above) is being consumed by snapshots of that dataset.

If you want an overview of your snapshots for the dataset in question (“Rocky1”), this command will clue you in:

zfs list -t snap -o name,used -S used ZFSArray1/Rocky1 

LOL, that’s just funny

That’s all fine: You can set up snapshots and replication tasks from the GUI, for recurring as well as one-time tasks.
Now, let’s find which snapshot retains your temporary data…

LOL, who knew :sweat_smile:
So the result of zfs list -t snap -o name,used -S used ZFSArray1/Rocky1 is:

ZFSArray1/Rocky1@auto-2024-05-01_17-28

I can’t seem to view this by browsing the drive through the shell, though?
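Snapshot contents are browsable from the shell, but only through the hidden .zfs directory at the root of the dataset; it’s hidden even from ls -a, so you have to name it explicitly. The path below assumes the default mount point:

ls /mnt/ZFSArray1/Rocky1/.zfs/snapshot/auto-2024-05-01_17-28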

Agreed :slight_smile:
Now I just need to know how to remove it. It doesn’t appear in the snapshot list in the GUI.
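For reference, a snapshot can be removed from the shell with zfs destroy. A dry run first (-n, plus -v for verbose) reports what would be reclaimed without deleting anything:

# dry run: show what would be destroyed and how much space it would free
zfs destroy -nv ZFSArray1/Rocky1@auto-2024-05-01_17-28
# then the real thing
zfs destroy ZFSArray1/Rocky1@auto-2024-05-01_17-28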

So, I worked it out and managed to delete the snapshot :slight_smile:

Many thanks all for the help!

Now, what would be the recommendation for using regular snapshots going forward?

Thanks again all!

Mike

Set up periodic snapshot tasks in the GUI. For example: daily, retained for 2 weeks; weekly, retained for 2 months; monthly, retained for 2 years. Possibly different sets with different values for different datasets.
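Once such tasks are running, you can keep an eye on how much space the retained snapshots are pinning down, e.g.:

# list snapshots of the dataset with their space usage, oldest first
zfs list -t snap -o name,used,creation -s creation ZFSArray1/Rocky1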

And then, if you have a second ZFS NAS, use these snapshots for periodic replication.
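Under the hood, a replication task boils down to zfs send piped into zfs receive. A rough sketch of a one-off full send (the remote hostname, target pool, and snapshot name here are all hypothetical):

# -R sends the dataset with its snapshots; -F on the receiving side forces the target to roll back first
zfs send -R ZFSArray1/Rocky1@weekly-2024-05-05 | ssh backupnas zfs recv -F Pool2/Rocky1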
