I am in the process of re-balancing my NAS to better take advantage of my sVDEV. As part of the process, I turned off replication and snapshot tasks, then deleted all snapshots using the GUI. That should ensure the snapshots start anew with a clean pool once the rebalance is complete.
Unfortunately, a snapshot from 2021 is refusing to be deleted. It claims it is busy, but no process exists that is writing to it (and the NAS has been restarted plenty of times since 2021). The snapshot is associated with the "Pictures" dataset and any attempt to remove it is foiled. Its filename is "auto-2021-08-11_04-00" and it's located in the usual place, i.e.
/mnt/pool/Pictures/.zfs/snapshot
So, I tried deleting it using the destroy command
zfs destroy -F Pictures@auto-2021-08-11_04-00
and I got "dataset does not exist". Do I have to include the pool name in the dataset reference, and if so, how? Or what am I doing wrong?
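For what it's worth, `zfs destroy` expects the snapshot's full dataset path, which does include the pool name. A minimal sketch, assuming the pool is named `pool` as the mount path `/mnt/pool/Pictures` suggests:

```shell
# List snapshots under the dataset to confirm the full snapshot name,
# which always takes the form pool/dataset@snapshot:
zfs list -t snapshot -r pool/Pictures

# Destroy it using the full path, pool name included:
zfs destroy pool/Pictures@auto-2021-08-11_04-00
```

Without the `pool/` prefix, ZFS looks for a top-level dataset literally named `Pictures`, which is why it reports "dataset does not exist".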
Other posts in the old forum suggest that rebooting the system may fix the issue. Should I try that next? I won't be able to do that just yet because the rebalancing is running ATM, but once that's done, I can try again.
If memory serves, you need to run `zfs release …` as many times as required until no hold remains on the snapshot. Holds and releases use counters so that multiple replications to different destinations can run while preventing the snapshot from expiring and being destroyed.
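A short sketch of checking for and clearing holds, again assuming the pool is named `pool` (the hold tag shown is a placeholder; use whatever tags `zfs holds` actually reports):

```shell
# Show any holds currently placed on the snapshot:
zfs holds pool/Pictures@auto-2021-08-11_04-00

# Release each listed hold by its tag, repeating until none remain:
zfs release some_hold_tag pool/Pictures@auto-2021-08-11_04-00

# Once the hold count reaches zero, the destroy should succeed:
zfs destroy pool/Pictures@auto-2021-08-11_04-00
```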
I'm about to send you a DM, but the output suggests nothing related to the snapshot at hand. The results of that command show nothing but datasets, iocage, and boot images.
Nothing related to Pictures other than the top-level "Pictures" dataset reference.
@winnielinnie for the win! I had to reboot to make the "busy" snapshot from 2021 eligible for deletion, which is a bug. I don't see the point in reporting it, though, since CORE is not going to get big maintenance updates and I don't know an easy path to reproducing it for the iXsystems team (unlike the snapshot UI bug that you reported).
Even better, killing that snapshot freed up 3.2 TB of data that had persistently hung around in my file system, preventing me from finishing my rebalancing efforts for the sVDEV. That final rebalancing is now under way: any record under 512 KiB will now live on the sVDEV, while the other 99% sits in compressed 1M-recordsize HDD storage.
sVDEV is a really nifty way to create tiered storage on TrueNAS - i.e. leave most datasets at the default recordsize, but set the recordsize and the sVDEV small-block cutoff of a "fast" dataset such that the entire dataset resides solely on the sVDEV. That avoids the need for separate "fast" pools. Super nifty…
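A sketch of that tiering setup, assuming the cutoff in question is the `special_small_blocks` dataset property and using illustrative dataset names and sizes:

```shell
# "Fast" dataset: make recordsize no larger than the small-block cutoff,
# so every block it writes qualifies for the special (sVDEV) class:
zfs set recordsize=64K pool/fast
zfs set special_small_blocks=64K pool/fast

# Bulk dataset: large records stay on the HDD vdevs, compressed:
zfs set recordsize=1M pool/Pictures
zfs set compression=lz4 pool/Pictures
```

The key relationship is simply `recordsize <= special_small_blocks` on the dataset you want pinned to the sVDEV; everything else lands on spinning disks as usual.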