Problem: my SCALE system is mounting datasets in a locked dataset

I have two pools in my system: Primus (which holds the system and apps datasets) and Velox. I also have an eSATA drive containing a pool, Maior, whose root dataset is encrypted. A replication task, fed by a snapshot task, replicates both Primus and Velox into Maior. Neither Velox nor Primus is encrypted, but I set “Inherit Encryption” in the replication task. I run the task manually; the idea is to replicate the pools weekly and take the external drive offsite until the next weekly replication.

My process:

  1. Connect the drive.
  2. Import the pool using the web UI.
  3. “Unlock” the pool, including child datasets, using the web UI.
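
For reference, steps 2 and 3 above correspond roughly to this CLI sequence (the web UI drives the middleware, so this is only an approximation):

```
# import the pool; locked datasets stay unmounted
sudo zpool import Maior

# load keys for the root dataset and any encrypted children
# (prompts for the passphrase, or reads the configured keylocation)
sudo zfs load-key -r Maior

# mount the now-unlocked datasets
sudo zfs mount -a
```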

I’m finding that after the initial replication, attempting to unlock Maior results in an error that it can’t be unlocked because /mnt/Maior isn’t empty.

An inspection shows that the directory is, in fact, not empty and contains several mounts from ix-applications:

```
$ mount | grep Maior
/mnt/Maior/Primus on /mnt/Primus type zfs (rw,relatime,xattr,nfs4acl,casesensitive)
/mnt/Maior/Primus/ix-applications on /mnt/Primus/ix-applications type zfs (rw,noatime,xattr,posixacl,casesensitive)
/mnt/Maior/Primus/ix-applications/releases on /mnt/Primus/ix-applications/releases type zfs (rw,noatime,xattr,posixacl,casesensitive)
/mnt/Maior/Primus/ix-applications/releases/tautulli on /mnt/Primus/ix-applications/releases/tautulli type zfs (rw,noatime,xattr,posixacl,casesensitive)
/mnt/Maior/Primus/ix-applications/releases/overseerr on /mnt/Primus/ix-applications/releases/overseerr type zfs (rw,noatime,xattr,posixacl,casesensitive)
/mnt/Maior/Primus/ix-applications/releases/syncthing on /mnt/Primus/ix-applications/releases/syncthing type zfs (rw,noatime,xattr,posixacl,casesensitive)
/mnt/Maior/Primus/ix-applications/releases/wireguard-server on /mnt/Primus/ix-applications/releases/wireguard-server type zfs (rw,noatime,xattr,posixacl,casesensitive)
/mnt/Maior/Primus/ix-applications/releases/syncthing/volumes on /mnt/Primus/ix-applications/releases/syncthing/volumes type zfs (rw,noatime,xattr,posixacl,casesensitive)
/mnt/Maior/Primus/ix-applications/releases/tautulli/volumes on /mnt/Primus/ix-applications/releases/tautulli/volumes type zfs (rw,noatime,xattr,posixacl,casesensitive)
/mnt/Maior/Primus/ix-applications/releases/wireguard-server/volumes on /mnt/Primus/ix-applications/releases/wireguard-server/volumes type zfs (rw,noatime,xattr,posixacl,casesensitive)
/mnt/Maior/Primus/ix-applications/releases/overseerr/volumes on /mnt/Primus/ix-applications/releases/overseerr/volumes type zfs (rw,noatime,xattr,posixacl,casesensitive)
/mnt/Maior/Primus/ix-applications/releases/wireguard-server/volumes/ix_volumes on /mnt/Primus/ix-applications/releases/wireguard-server/volumes/ix_volumes type zfs (rw,noatime,xattr,posixacl,casesensitive)
/mnt/Maior/Primus/ix-applications/releases/tautulli/volumes/ix_volumes on /mnt/Primus/ix-applications/releases/tautulli/volumes/ix_volumes type zfs (rw,noatime,xattr,posixacl,casesensitive)
/mnt/Maior/Primus/ix-applications/releases/overseerr/volumes/ix_volumes on /mnt/Primus/ix-applications/releases/overseerr/volumes/ix_volumes type zfs (rw,noatime,xattr,posixacl,casesensitive)
/mnt/Maior/Primus/ix-applications/releases/wireguard-server/volumes/ix_volumes/config on /mnt/Primus/ix-applications/releases/wireguard-server/volumes/ix_volumes/config type zfs (rw,noatime,xattr,posixacl,casesensitive)
/mnt/Maior/Primus/ix-applications/releases/overseerr/volumes/ix_volumes/config on /mnt/Primus/ix-applications/releases/overseerr/volumes/ix_volumes/config type zfs (rw,noatime,xattr,posixacl,casesensitive)
/mnt/Maior/Primus/ix-applications/releases/syncthing/volumes/ix_volumes on /mnt/Primus/ix-applications/releases/syncthing/volumes/ix_volumes type zfs (rw,noatime,xattr,posixacl,casesensitive)
/mnt/Maior/Primus/ix-applications/releases/syncthing/volumes/ix_volumes/config on /mnt/Primus/ix-applications/releases/syncthing/volumes/ix_volumes/config type zfs (rw,noatime,xattr,posixacl,casesensitive)
Maior/Primus/ix-applications/releases/syncthing/volumes/ix_volumes/config on /mnt/Maior/Primus/ix-applications/releases/syncthing/volumes/ix_volumes/config type zfs (ro,noatime,xattr,posixacl,casesensitive)
Maior/Primus/ix-applications/releases/tautulli/volumes/ix_volumes/config on /mnt/Maior/Primus/ix-applications/releases/tautulli/volumes/ix_volumes/config type zfs (ro,noatime,xattr,posixacl,casesensitive)
Maior/Primus/ix-applications/releases/wireguard-server/volumes/ix_volumes/config on /mnt/Maior/Primus/ix-applications/releases/wireguard-server/volumes/ix_volumes/config type zfs (ro,noatime,xattr,posixacl,casesensitive)
Maior/Primus/ix-applications/releases/overseerr/volumes/ix_volumes/config on /mnt/Maior/Primus/ix-applications/releases/overseerr/volumes/ix_volumes/config type zfs (ro,noatime,xattr,posixacl,casesensitive)
```

These mounts appear even if I haven’t tried to unlock the datasets after import. None of the directories in the tree below /mnt/Maior contains any files. I can `umount` each of the config directories, but I can’t unmount /mnt/Maior/Primus even though it’s empty. Oddly, I have many more apps installed than just these four, so I’m not sure why only these four mount this way.
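
For concreteness, this is the kind of sequence involved (one leaf shown; paths are from the listing above, and the `zfs get` line is just a sanity cross-check):

```
# a leaf config mount unmounts cleanly
sudo umount /mnt/Maior/Primus/ix-applications/releases/syncthing/volumes/ix_volumes/config

# but the parent refuses, even once the directory is empty
sudo umount /mnt/Maior/Primus

# cross-checking what ZFS itself believes about the dataset
zfs get mounted,canmount Maior/Primus
```

In any case, I don’t see anything unusual about the mountpoint property: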

```
$ sudo zfs get mountpoint Maior/Primus/ix-applications/releases/syncthing/volumes/ix_volumes/config
NAME                                                                        PROPERTY    VALUE                                                                            SOURCE
Maior/Primus/ix-applications/releases/syncthing/volumes/ix_volumes/config  mountpoint  /mnt/Maior/Primus/ix-applications/releases/syncthing/volumes/ix_volumes/config  default

$ sudo zfs get mountpoint Primus/ix-applications/releases/syncthing/volumes/ix_volumes/config
NAME                                                                  PROPERTY    VALUE                                                                      SOURCE
Primus/ix-applications/releases/syncthing/volumes/ix_volumes/config  mountpoint  /mnt/Primus/ix-applications/releases/syncthing/volumes/ix_volumes/config  default
```
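
For completeness, the same check can be run across the whole replicated tree in one go, which also surfaces `canmount` and the live `mounted` state:

```
zfs get -r -t filesystem mountpoint,canmount,mounted Maior/Primus
```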

And the configuration of the replication task:

```
+-------------------------------------+----------------------+
|                                  id | 5                    |
|                      target_dataset | Maior                |
|                           recursive | true                 |
|                         compression | <null>               |
|                         speed_limit | <null>               |
|                             enabled | true                 |
|                           direction | PUSH                 |
|                           transport | LOCAL                |
|                                sudo | false                |
|                  netcat_active_side | <null>               |
|         netcat_active_side_port_min | <null>               |
|         netcat_active_side_port_max | <null>               |
|                     source_datasets | Velox                |
|                                     | Primus               |
|                             exclude | <empty list>         |
|                       naming_schema | <empty list>         |
|                          name_regex | <null>               |
|                                auto | true                 |
|              only_matching_schedule | false                |
|                            readonly | SET                  |
|                  allow_from_scratch | true                 |
|              hold_pending_snapshots | false                |
|                    retention_policy | SOURCE               |
|                       lifetime_unit | <null>               |
|                      lifetime_value | <null>               |
|                           lifetimes | <empty list>         |
|                         large_block | true                 |
|                               embed | false                |
|                          compressed | true                 |
|                             retries | 5                    |
|   netcat_active_side_listen_address | <null>               |
| netcat_passive_side_connect_address | <null>               |
|                       logging_level | <null>               |
|                                name | Velox,Primus - Maior |
|                               state | <dict>               |
|                          properties | true                 |
|                  properties_exclude | <empty list>         |
|                 properties_override | <dict>               |
|                           replicate | true                 |
|                          encryption | true                 |
|                  encryption_inherit | true                 |
|                      encryption_key | <null>               |
|               encryption_key_format | <null>               |
|             encryption_key_location | <null>               |
|                     ssh_credentials | <null>               |
|             periodic_snapshot_tasks | <list>               |
|          also_include_naming_schema | <empty list>         |
|                            schedule | <null>               |
|                   restrict_schedule | <null>               |
|                                 job | <null>               |
+-------------------------------------+----------------------+
```

Any recommendations so I don’t have to blow away and recreate Maior/Primus from scratch every week?
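
One idea I’ve considered but haven’t tested: since `replicate` and `properties` are both true, this is a full-filesystem replication (the entire dataset tree plus its properties), which is presumably why the ix-applications children exist under Maior at all. Perhaps setting `canmount=noauto` across the destination tree would keep it from auto-mounting on import/unlock. `canmount` isn’t inheritable, so it would have to be applied per dataset, something like:

```
# untested sketch: stop every filesystem in the backup tree from auto-mounting
zfs list -rH -t filesystem -o name Maior | xargs -I{} sudo zfs set canmount=noauto {}
```

I don’t know whether local property overrides like this would survive subsequent replication runs, though.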

These entries don’t make sense to me:

```
/mnt/Maior/Primus on /mnt/Primus type zfs (rw,relatime,xattr,nfs4acl,casesensitive)
```

In fact, I’d be worried that the locations where you thought you had been saving files might not actually be inside the intended pool/dataset.
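
If in doubt, you can check which dataset actually backs a given path before trusting data to it, e.g.:

```
# either of these shows the filesystem a path really lives on
# (path is just an example)
df --output=source /mnt/Primus/ix-applications
findmnt -T /mnt/Primus/ix-applications
```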


Rather than grepping only for the particular pool’s name, what about these two plain commands:

`zfs mount`

and

`mount`
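
If the full listings get unwieldy, `findmnt` presents the same mounts as a tree, which makes nesting like the above easier to spot:

```
findmnt -t zfs
```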

Thanks for the response. Sounds like a reasonable concern. Fortunately, those config directories are empty. I assume they’re supposed to be PVCs, but I’m using host path bind mounts for the config volumes.

After posting, I realized I could work around the issue by enabling the Force option during unlock, but that’s just a temporary solution.

I went ahead and ran a replication to get caught up. I then exported Maior, confirmed that /mnt/Maior no longer existed, and imported it, leaving the datasets locked.

I captured `zfs mount` and `mount` output both after the replication and again after the export/import. Unfortunately, Pastebin flagged the `mount` pastes for moderation.

You’ll note that Tautulli has disappeared from the list of inappropriate mounts. I believe this is because I used the `cli` command `app chart_release remove_ix_volume` to delete its config volume.
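
For anyone who finds this later, that was run from the TrueNAS shell; from memory, something like the following (the argument name is from memory too — tab completion inside `cli` will confirm it):

```
sudo cli
# inside the cli session; "tautulli" is the release name in my case
app chart_release remove_ix_volume release_name=tautulli
```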

I likely used a PVC while first testing these apps. Once testing was complete, I believe I deleted the app instances and recreated them using a host path for their config mounts. Perhaps I hit some bug. If you or anyone else has other theories, feel free to drop them here.

For the time being, I’ll leave my environment in a broken but usable state in case I need to submit a bug report. I’ll also try to reproduce the issue on a clean install.

I wasn’t able to reproduce the problem with a fresh deployment in a VM.