I recently copied the contents of my secondary RAIDZ1 array to my primary array temporarily so I could destroy the secondary array and recreate it with an additional drive.
That process all went smoothly and the data was copied back onto the now larger secondary array with no problem.
However, on deleting the backup folder on my primary array, the space hasn't been released and it is still showing 70% full when it should be nearer 35%.
I've tried restarting, running scrub tasks, and checking for snapshots that might be pinning deleted files (there are none, as I don't use snapshots), and I still can't work out what's happening.
System is based on Supermicro X10SDV-4C-TLN4F
64 GB ECC RAM
LSI 9300-16i HBA
5 x 4TB WD Red SA500 SATA SSD in RAIDZ1 (primary array)
6 x 4TB Crucial MX500 SATA SSD in RAIDZ1 (secondary array)
Any suggestion appreciated as this is driving me nuts now…
I have found that after deleting a large quantity of data it can take a while for the free space to show up. I'm talking minutes before the space starts appearing, and it can take a while longer for all of it to be reflected, but we're not talking hours.
From memory, a reboot made it all appear faster.
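If you want to watch that background reclaim happen, ZFS reports how much space is still queued for asynchronous freeing as a pool property. A quick check from the NAS shell (substitute your actual pool name for the placeholder):

```shell
# Space still being reclaimed in the background after a large delete/destroy;
# when "freeing" reaches 0, all reclaimed space should be visible.
zpool get freeing <poolname>
```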
How did you copy the data - from an external host, commandline on the NAS or through snapshot replication?
It was about 6 TB of data that I copied between mapped SMB network shares using my Windows PC on a 10 GbE LAN. I didn't do replication, as the data was just backed up to a subfolder on my primary array.
It's now been over 24 hours since the 6 TB of data was deleted from my primary array (just by deleting the whole folder over the network, which took over 2 hours…). I've done multiple restarts and scrubs, and the NAS is still showing 10 TB of used space, or 70% of the whole array, when it should be nearer 5 TB and roughly 30%.
Weirdly, doing right click and Properties on the mapped drive on my PC, it's only showing the capacity as 9.37 TB with 5.13 TB used and 4.24 TB free, when the array is actually 14.39 TB as reported by the NAS but only shows 4.25 TB free. This makes me wonder if Samba is the issue here?
I did read something about there being an SMB recycle bin option? But I haven't found where that is or whether it's enabled on my NAS.
For future reference, such massive operations should be done on the server itself, not over the network / SMB.[1] Simply logging in via SSH and familiarizing yourself with Linux commands will take you far.
To rule out much guesswork, what is the output of these two commands:
zpool list <mainpool>
zfs list -r -t filesystem -o space <mainpool>
Not only do you hit a bottleneck with many metadata operations over SMB, and not only do you suffer a sheer throughput penalty, but you will risk inadvertently “renaming” files, since SMB does not support the full character map that is supported by native *nix filesystems. This might not apply to you if everything was done over SMB. ↩︎
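On the SMB recycle bin question: TrueNAS SMB shares have an "Export Recycle Bin" option that, when enabled, moves files deleted over SMB into a hidden `.recycle` directory instead of actually freeing them. A quick way to check from the shell — the `/mnt/tank/share` path below is a placeholder, substitute your actual pool/dataset mountpoint:

```shell
# Hypothetical mountpoint -- substitute your actual pool/dataset path
SHARE=${SHARE:-/mnt/tank/share}

# Show everything at the share root, including hidden entries
ls -la "$SHARE" 2>/dev/null || true

# If the share's "Export Recycle Bin" option is on, deleted files land here
du -sh "$SHARE/.recycle" 2>/dev/null || echo "no .recycle directory"
```

If a `.recycle` directory turns up holding your 6 TB, deleting its contents (from the shell or the GUI) should release the space.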
Thanks all for the help and apologies for the slow response.
I just see the files and folders I expect to see… no hidden folders or anything as far as I can tell.
I'm pretty new to TrueNAS, so guessing I should be using snapshots?
I've used Linux for a few years but am far from comfortable with the shell, and I'm just unaware of what I should and shouldn't be doing there in case I mess up TrueNAS somehow…
I've copied the output of those commands below. Thanks again for all the help.
zpool list ZFSArray1
NAME        SIZE   ALLOC   FREE  CKPOINT  EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
ZFSArray1  18.2T   12.7T  5.47T        -         -    0%  69%  1.00x  ONLINE  /mnt
That’s all fine: You can set up snapshots and replication tasks from the GUI, for recurring as well as one-time tasks.
Now, let’s find which snapshot retains your temporary data…
Set up periodic snapshot tasks in the GUI. For example: daily, retained for 2 weeks; weekly, retained for 2 months; monthly, retained for 2 years. Possibly different sets with different values for different datasets.
And then, if you have a second ZFS NAS, use these snapshots for periodic replication.
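As for finding any snapshot that might be retaining the temporary data, a quick way from the shell is to list every snapshot on the pool along with the space each one holds exclusively (using the pool name from your `zpool list` output):

```shell
# All snapshots on the pool, with the space each holds exclusively ("used")
zfs list -t snapshot -r -o name,used,referenced ZFSArray1

# A snapshot pinning the deleted data could then be removed with
# (placeholder names -- substitute your actual dataset and snapshot):
# zfs destroy ZFSArray1/<dataset>@<snapshot>
```

If that list comes back empty, snapshots are ruled out and the `.recycle` directory or a hidden dataset becomes the more likely culprit.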