Can't destroy empty pool?

Can anyone help me figure out why I can't destroy a now-empty pool after migrating from CORE to SCALE?

I have two pools, tank and backup. The latter is on the same server and serves as the replication target for tank. While making config changes, I stopped the automatic replication and deleted all snapshots on this pool. It should be empty, but there appear to be ghosts…

root@bosnas[/mnt/backup]# ll
total 31
drwxr-xr-x 10 root 9 Jan 14 16:26 ./
drwxr-xr-x  5 root 5 Feb  2 12:14 ../
drwxr-xr-x  7   80 8 Nov 16  2014 .freenas/
drwxrwxrwx  1 root 0 Feb 10 08:14 .zfs/
drwxr-xr-x  2 root 2 Dec 29  2021 apps/
drwxr-xr-x  2 root 2 Dec 29  2021 bin/
drwxr-xr-x  2 root 2 Dec 29  2021 iocage/
drwxr-xr-x  2 root 2 Jan 14 16:26 ix-applications/
drwxr-xr-x  2 root 2 Dec 29  2021 media/
drwxr-xr-x  2 root 2 Jan 26  2023 timemachine/

These appear to be read-only filesystems:

root@bosnas[/mnt/backup]# rm -rf *
zsh: sure you want to delete all 6 files in /mnt/backup [yn]? y
rm: cannot remove 'apps': Read-only file system
rm: cannot remove 'bin': Read-only file system
rm: cannot remove 'iocage': Read-only file system
rm: cannot remove 'ix-applications': Read-only file system
rm: cannot remove 'media': Read-only file system
rm: cannot remove 'timemachine': Read-only file system
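One way to check whether these entries are plain directories or child datasets (which would explain the read-only failures, since a mounted filesystem cannot be removed with rm) is to list everything under the pool. A sketch, assuming the pool is named backup:

```shell
# List the pool and every child dataset recursively.
# Anything that appears here is a ZFS filesystem, not a directory.
zfs list -r backup

# Show the readonly and mounted properties for the whole tree.
zfs get -r readonly,mounted backup
```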

If I try to remount as read-write, I get:

root@bosnas[/mnt/backup]# mount -o remount,rw backup
filesystem 'backup' cannot be mounted due to invalid option 'nfs4acl'.
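For ZFS filesystems, the read-only state is normally controlled through dataset properties rather than mount(8) options, which is why a plain remount trips over TrueNAS-specific options like nfs4acl. A sketch of the property-based route (dataset names here are assumptions based on the listing above):

```shell
# Clear the readonly property on the pool's root dataset
# (children inherit unless they set it locally).
zfs set readonly=off backup

# Or target a specific child dataset:
zfs set readonly=off backup/apps

# Then let ZFS handle mounting itself.
zfs mount -a
```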

Linux thinks only 256K is used:

root@bosnas[/mnt/backup]# df -ah --total /mnt/backup
Filesystem      Size  Used Avail Use% Mounted on
backup           13T  256K   13T   1% /mnt/backup
total            13T  256K   13T   1% -

but ZFS thinks it's 622M:

root@bosnas[/mnt/backup]# zfs list backup
NAME     USED  AVAIL  REFER  MOUNTPOINT
backup   622M  12.9T   256K  /mnt/backup

and, yes, I checked that there are no snapshots:

root@bosnas[/mnt/backup]# zfs list -t snapshot backup
no datasets available
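Worth noting: zfs list -t snapshot with a dataset name only shows snapshots of that dataset itself, not of its children. Replication snapshots usually live on the child datasets, so a recursive listing is needed to rule them out. A sketch:

```shell
# Recursively list snapshots on the pool and every child dataset.
zfs list -r -t snapshot backup
```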

If I try Export/Disconnect on the backup pool via the UI (with or without checking the destroy option), I get an error:

[EFAULT] cannot unmount '/mnt/backup': pool or dataset is busy

So before I go behind the UI's back and zpool destroy -f backup, I thought I'd ask if anyone has any ideas. Where'd my 622M go? Is it just ZFS overhead?
How do I blow away the readonly filesystems?
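The "pool or dataset is busy" error usually means some process still has an open file or a working directory under the mountpoint. Standard Linux tooling can identify the culprit; a sketch:

```shell
# Show processes holding files or a cwd on the filesystem.
fuser -vm /mnt/backup

# Alternatively, list open files under the mountpoint.
lsof +D /mnt/backup
```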

Thanks

Why are you even doing all this?

Why would you consider using rm -rf * at all? Had you been in the wrong directory, you could have deleted everything.

If you want to start over with an empty backup pool, just create a new pool using the existing drives.


Thanks @winnielinnie. I'm well aware of the dangers of rm -rf. Not my first rodeo (I've been sysadmining UNIX systems since before Linux's first release - I even sometimes cross the road without holding someone's hand :upside_down_face:).

Not sure how you can create a new pool using drives that are already in use by an existing pool. Surely you have to either export or destroy the pool first :slightly_smiling_face:

FWIW I managed to remount with the sloppy option and also figured out why the filesystem was in use. I have now successfully exported the pool, and the disks can now be used to create a new pool.

Anyway - thanks for chiming in.

That should have been the first thing you did.

No destroying or rm’ing. Just export the pool from the GUI before you try anything. :+1:

It was. That’s where the yak shaving began :slight_smile:

From your post, it looks like you tried it after doing rm -rf.

And yet nothing I said indicated it was a chronological narrative :slightly_smiling_face:

Even though I managed to achieve my goal, I had to do it from the command line; I had to use -f to force the export; and I still don't know why it was mounted ro in the first place, or where >600MB went ¯\_(ツ)_/¯

Which was why I posted.

  1. Every dataset on your backup pool is a separate filesystem from a Linux perspective, which is why rm -rf doesn’t work (you cannot rm a filesystem, only files and directories).

  2. If your questions were really “why it was mounted ro in the first place” or “where the >600MB went”, and you wanted help e.g. making it rw or recovering the >600MB, then you should probably have asked about that rather than about how to destroy your pool.
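Following on from point 1: since the entries are datasets, the ZFS-level equivalent of the attempted rm -rf is a recursive destroy (though on TrueNAS the supported route is still the UI export with the destroy option). A sketch, assuming a child dataset named backup/apps; this is destructive, so the dry run comes first:

```shell
# -n is a dry run (with -v to show what would be destroyed),
# -r recurses into child datasets and snapshots.
zfs destroy -rnv backup/apps   # preview only, nothing is deleted
zfs destroy -r backup/apps     # actually destroy the dataset tree
```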

My best guesses at the answers to these questions are:

  1. It was mounted ro because you set up the replication task to create ro datasets.
  2. The 600MB may not be missing at all. zfs list and zpool list account for space differently - use zpool list to get an accurate statement of block usage at the pool level.
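The two views can be reconciled from the command line. A sketch for the second point:

```shell
# Pool-level block accounting (raw allocated space).
zpool list backup

# Dataset-level accounting, broken out by consumer:
# the dataset itself, its snapshots, children, and refreservations.
zfs list -o space -r backup
```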

Thanks @Protopia. I appreciate your thoughts.

Yes - thanks. I am aware of datasets being filesystems. It turns out the “directories” in question were in fact datasets, though when I tried to do the right thing and use the UI, they didn’t show up in the list of datasets. They seem to be created automatically. That was one of the key pieces of info I didn’t realise.

Actually, if you read the start of the post, I was asking about why I wasn’t able to destroy the pool - not how to do it. Hence I gave not a chronological narrative but a set of observations, trying to find the gaps in my understanding. I realise many people post just wanting “how do I achieve x”. As such, the questions at the end, and in replies to other comments, are in line with my opening question :slightly_smiling_face:

That makes sense. Thank you. I did have a replication task - I had deleted it, but of course that wouldn’t have removed the ro mount. Again, the key thing I was missing was that they were datasets, not just directories.

Good point. I think I already knew that - but it can be a long time between dusting off the ZFS chops when the system has been running well for so long.

Thanks again for your thoughts.

No problem.