How to force-delete a fake-"busy" snapshot

Good afternoon,

I am in the process of re-balancing my NAS to better take advantage of my sVDEV. As part of the process, I turned off replication and snapshot tasks, followed by deleting all snapshots using the GUI. That in turn should ensure that the snapshots start anew with a clean pool once the rebalance is complete.

Unfortunately, a snapshot from 2021 is refusing to be deleted. It claims to be busy, but no process exists that is writing to it (and the NAS has been restarted plenty of times since 2021). The snapshot is associated with the ‘Pictures’ dataset and any attempt to remove it is foiled. Its name is ‘auto-2021-08-11_04-00’ and it’s located in the usual place, i.e.

/mnt/pool/Pictures/.zfs/snapshot

So, I tried deleting it using the destroy command

zfs destroy -f Pictures@auto-2021-08-11_04-00

and I got “dataset does not exist”. Do I have to include the Pool name in the dataset reference and if so, how? Or what am I doing wrong?

Yes.

The proper format is:

zfs destroy <pool>/dataset@snapshot

With your specific instance:

zfs destroy pool/Pictures@auto-2021-08-11_04-00
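
If the exact pool or dataset path is in doubt, listing that dataset’s snapshots first will show the exact name to pass to zfs destroy (assuming the pool really is called “pool”):

zfs list -t snapshot -r pool/Pictures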

You might have, in the past, “protected” this snapshot.

You can check with:

zfs holds pool/Pictures@auto-2021-08-11_04-00

(If the output is empty, it means the snapshot is not protected with a “hold”.)


Here’s an example of an unprotected snapshot:

zfs holds mypool/archives@manual-20221225

NAME                             TAG   TIMESTAMP

Here’s an example of a protected snapshot:

zfs holds mypool/archives@manual-20221101

NAME                             TAG   TIMESTAMP
mypool/archives@manual-20221101  save  Tue Nov 01 10:00 2022

It is held! Argh! But it won’t release either if I implore the system with

zfs release -r keep pool/Pictures@auto-2021-08-11_04-00

as suggested over at Oracle. Do I drop the “keep”?

It depends on what tag you used. The zfs holds command will reveal the tag. You must match the tag when you issue a zfs release.

In my previous example, the tag was “save”. So I would have to invoke “save” in my release command, like this:

zfs release save mypool/archives@manual-20221101
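
For your own snapshot, and assuming the tag that zfs holds reports really is “keep” (it may be something else entirely), the matching release would be:

zfs release keep pool/Pictures@auto-2021-08-11_04-00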

*As you know, you can add the -r flag to apply the release recursively for all matching snapshot names (and tags) “down the nest”.

**Protip: To check all protected snapshots across all pools:

zfs list -H -t snap -o name -r | xargs zfs holds

Don’t leave us hanging, Vampire Pig! :scream:

Did it work?

You are amazing! The hold is released, but the snapshot is still impossible to destroy, i.e. busy.

Apologies, we were temporarily raptured by the eclipse. But now that it’s over, we find ourselves back at home again. Sorry to keep you hanging.

Other posts in the old forum suggest that rebooting the system may fix the issue. Should I try that next? I won’t be able to do that just yet due to the rebalancing running ATM, but once that’s done, I can try again.

If memory serves, you need to run “zfs release …” as many times as required until the hold is no longer applied. Holds and releases work like counters, in case multiple replications to various destinations need to run while preventing the snapshot from expiring and being destroyed.
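
To illustrate that counter-like behaviour: a snapshot can carry several holds under different tags (the tags and timestamps below are purely hypothetical), and every one of them has to be released before zfs destroy will succeed:

zfs holds pool/Pictures@auto-2021-08-11_04-00

NAME                                 TAG          TIMESTAMP
pool/Pictures@auto-2021-08-11_04-00  keep         Wed Aug 11 04:00 2021
pool/Pictures@auto-2021-08-11_04-00  replication  Wed Aug 11 04:05 2021

zfs release keep pool/Pictures@auto-2021-08-11_04-00
zfs release replication pool/Pictures@auto-2021-08-11_04-00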

This can further clue you in:

It’s also possible that you have a dependent clone tethered to this particular snapshot.

zfs get -r -t filesystem origin pool
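
If a clone does turn up, its origin property will point at this exact snapshot. A quick way to search for it directly, assuming the pool really is called “pool” (adjust the snapshot name as needed):

zfs get -r -H -o name,value origin pool | grep auto-2021-08-11_04-00

Any dataset that matches is a clone keeping the snapshot busy; it would have to be destroyed (or the relationship untangled with zfs promote) before the snapshot itself can be removed.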
zfs list -H -t snap -o name -r | xargs zfs holds

returns zilch… nothing… :frowning:

zfs get -r -t filesystem origin pool

That returns a whole bunch of snapshots associated with top level stuff but nothing with the auto backup. I think rebooting is in order…

Wait, what? It shouldn’t show any snapshots, unless you do in fact have dependent clones created from origin snapshots…

I’m about to send you a DM, but the content suggests nothing related to the snapshot at hand. The results of that command show nothing but datasets, iocage, and boot images.

Nothing related to Pictures other than the top-level “Pictures” dataset reference.

Okay. There is one snapshot, but it’s related to a jail.

Perhaps a reboot is needed after all.

Will do later tonight. First I have to finish my rebalance and re-enable snapshots.

Almost there. Thank you!!!

Postscript:

@winnielinnie for the win! I had to reboot to make the “busy” snapshot from 2021 eligible for deletion, which is a bug. I don’t see the point in reporting it, though, since CORE is not going to get big maintenance updates and I don’t know an easy path to reproducing it for the iXsystems team (unlike the snapshot UI bug that you reported).

Even better, killing that snapshot freed up 3.2TB of data that had persistently hung on in my file system, preventing me from finishing my rebalancing efforts for the sVDEV. That final rebalancing is now under way: any block smaller than 512 KiB will now live on the sVDEV, while 99% of the data will sit in compressed 1M-recordsize HDD storage.

sVDEV is a really nifty way to create tiered storage on a TrueNAS - i.e. create your datasets with the default recordsize, but then set the recordsize and sVDEV cutoff parameter of the “fast” dataset so that the entire dataset resides solely on the sVDEV. That avoids the need for separate “fast” pools. Super nifty…
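
As a rough sketch of that idea (dataset names are made up, and I’m assuming the “sVDEV cutoff” referred to is the special_small_blocks property): setting the cutoff equal to the recordsize on the “fast” dataset sends every block it writes to the sVDEV, while the bulk dataset keeps a large recordsize so only small blocks and metadata land there:

# bulk dataset: 1M records stay on HDD, blocks of 512K or smaller go to the sVDEV
zfs set recordsize=1M pool/Pictures
zfs set special_small_blocks=512K pool/Pictures

# "fast" dataset: cutoff >= recordsize, so every block it writes lands on the sVDEV
zfs set recordsize=64K pool/fast
zfs set special_small_blocks=64K pool/fast

Only newly written blocks follow these settings, which is why rewriting the data (the rebalance) is needed for existing files to move.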
