[EFAULT] Failed to stop docker service: [EFAULT] Failed to umount dataset

TrueNAS version: ElectricEel-24.10.2.4

Hi everyone,

I’m trying to export/disconnect a pool so that I can rename it. When I try to do this, I get the following error:

[EFAULT] Failed to stop docker service: [EFAULT] Failed to umount dataset: cannot unmount '/mnt/.ix-apps/app_mounts/tailscale-1.2.7-clone': unmount failed

This error causes the app services to shut down and become unusable until the system is restarted.

I have deleted Tailscale from the apps list, but this didn’t fix the issue; the directory still exists at /mnt/.ix-apps/app_mounts/tailscale-1.2.7-clone. I’m fine with removing it and reinstalling everything from scratch, I just wasn’t sure how to go about it, or whether that would even be the right thing to do.
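For anyone in the same spot, here’s a quick sanity check I could have run first: is the leftover path still an active mount, or just an empty directory left behind? (This is just a sketch; the path is from my system, and `findmnt` is part of util-linux, which ships with SCALE.)

```shell
# Check whether the leftover path is still an active mount or just a plain
# directory. (Path is from my system; adjust to yours.)
P=/mnt/.ix-apps/app_mounts/tailscale-1.2.7-clone
if findmnt -n "$P" >/dev/null 2>&1; then
  state="still mounted"
else
  state="not mounted"
fi
echo "$P: $state"
```

If it reports "not mounted", the directory is just an empty leftover and is safe to remove; if it’s still mounted, something is still holding the dataset.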

If anyone has similar experience or any tips, I’d greatly appreciate it.

Full error:

concurrent.futures.process._RemoteTraceback: 
"""
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/middlewared/plugins/zfs_/dataset_actions.py", line 77, in umount
    with libzfs.ZFS() as zfs:
  File "libzfs.pyx", line 534, in libzfs.ZFS.__exit__
  File "/usr/lib/python3/dist-packages/middlewared/plugins/zfs_/dataset_actions.py", line 79, in umount
    dataset.umount(force=options['force'])
  File "libzfs.pyx", line 4287, in libzfs.ZFSDataset.umount
libzfs.ZFSException: cannot unmount '/mnt/.ix-apps/app_mounts/tailscale-1.2.7-clone': unmount failed

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3.11/concurrent/futures/process.py", line 261, in _process_worker
    r = call_item.fn(*call_item.args, **call_item.kwargs)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/worker.py", line 112, in main_worker
    res = MIDDLEWARE._run(*call_args)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/worker.py", line 46, in _run
    return self._call(name, serviceobj, methodobj, args, job=job)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/worker.py", line 34, in _call
    with Client(f'ws+unix://{MIDDLEWARE_RUN_DIR}/middlewared-internal.sock', py_exceptions=True) as c:
  File "/usr/lib/python3/dist-packages/middlewared/worker.py", line 40, in _call
    return methodobj(*params)
           ^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/schema/processor.py", line 183, in nf
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/plugins/zfs_/dataset_actions.py", line 82, in umount
    raise CallError(f'Failed to umount dataset: {e}')
middlewared.service_exception.CallError: [EFAULT] Failed to umount dataset: cannot unmount '/mnt/.ix-apps/app_mounts/tailscale-1.2.7-clone': unmount failed
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/middlewared/plugins/docker/update.py", line 97, in do_update
    await self.middleware.call('service.stop', 'docker')
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1629, in call
    return await self._call(
           ^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1460, in _call
    return await methodobj(*prepared_call.args)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/schema/processor.py", line 179, in nf
    return await func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/schema/processor.py", line 49, in nf
    res = await f(*args, **kwargs)
          ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/plugins/service.py", line 267, in stop
    await service_object.after_stop()
  File "/usr/lib/python3/dist-packages/middlewared/plugins/service_/services/docker.py", line 72, in after_stop
    await self.mount_umount_ix_apps(False)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/service_/services/docker.py", line 19, in mount_umount_ix_apps
    await self.middleware.call('zfs.dataset.umount', docker_ds, {'force': True})
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1629, in call
    return await self._call(
           ^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1468, in _call
    return await self._call_worker(name, *prepared_call.args)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1474, in _call_worker
    return await self.run_in_proc(main_worker, name, args, job)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1380, in run_in_proc
    return await self.run_in_executor(self.__procpool, method, *args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1364, in run_in_executor
    return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
middlewared.service_exception.CallError: [EFAULT] Failed to umount dataset: cannot unmount '/mnt/.ix-apps/app_mounts/tailscale-1.2.7-clone': unmount failed

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/middlewared/job.py", line 509, in run
    await self.future
  File "/usr/lib/python3/dist-packages/middlewared/job.py", line 554, in __run_body
    rv = await self.method(*args)
         ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/schema/processor.py", line 179, in nf
    return await func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/schema/processor.py", line 49, in nf
    res = await f(*args, **kwargs)
          ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/plugins/pool_/export.py", line 105, in export
    await delegate.delete(attachments)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/docker/attachments.py", line 31, in delete
    await (await self.middleware.call('docker.update', {'pool': None})).wait(raise_error=True)
  File "/usr/lib/python3/dist-packages/middlewared/job.py", line 457, in wait
    raise self.exc_info[1]
  File "/usr/lib/python3/dist-packages/middlewared/job.py", line 509, in run
    await self.future
  File "/usr/lib/python3/dist-packages/middlewared/job.py", line 554, in __run_body
    rv = await self.method(*args)
         ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/schema/processor.py", line 49, in nf
    res = await f(*args, **kwargs)
          ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/schema/processor.py", line 179, in nf
    return await func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/plugins/docker/update.py", line 111, in do_update
    raise CallError(f'Failed to stop docker service: {e}')
middlewared.service_exception.CallError: [EFAULT] Failed to stop docker service: [EFAULT] Failed to umount dataset: cannot unmount '/mnt/.ix-apps/app_mounts/tailscale-1.2.7-clone': unmount failed

So I removed the /mnt/.ix-apps/app_mounts/tailscale-1.2.7-clone directory with rmdir. It’s fully gone, but now I get this error:

[EFAULT] Failed to stop docker service: [EFAULT] Failed to umount dataset: cannot unmount '/mnt/.ix-apps/app_mounts/tailscale-1.2.7-clone': no such pool or dataset

I get the following when I run lsof /mnt/.ix-apps/app_mounts/tailscale-1.2.7-clone

truenas_admin@truenas[~]$ lsof /mnt/.ix-apps/app_mounts/tailscale-1.2.7-clone
lsof: WARNING: can't stat() zfs file system /mnt/.ix-apps/app_mounts/tailscale-1.2.7-clone
      Output information may be incomplete.
lsof: WARNING: can't stat() overlay file system /mnt/.ix-apps/docker/overlay2/f56620b6357deba3b2a7d57c51d54dce5c6c177ee69968748646ae9317f7a442/merged
      Output information may be incomplete.
lsof: WARNING: can't stat() overlay file system /mnt/.ix-apps/docker/overlay2/617cc887933be8799d2c339266d3de490ae9f7937408dcb98584df8e28c40b2f/merged
      Output information may be incomplete.
lsof: WARNING: can't stat() overlay file system /mnt/.ix-apps/docker/overlay2/bdd3ee48521063f1955e8b51e9405a94b3cc736d3d1aeb7cb309272886703b8b/merged
      Output information may be incomplete.
lsof: WARNING: can't stat() nsfs file system /run/docker/netns/55cf075613b6
      Output information may be incomplete.
lsof: status error on /mnt/.ix-apps/app_mounts/tailscale-1.2.7-clone: No such file or directory
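One caveat with lsof here: it only sees the host’s mount namespace, so a Docker container can keep the mount pinned inside its own namespace without showing up. A sketch of a namespace-wide check instead (the path is from my system):

```shell
# Scan every process's mount table (/proc/PID/mountinfo) for the path; this
# catches holders in other mount namespaces, e.g. Docker containers, that
# lsof on the host cannot see. (Path is from my system.)
P=/mnt/.ix-apps/app_mounts/tailscale-1.2.7-clone
holders=""
for f in /proc/[0-9]*/mountinfo; do
  if grep -qF "$P" "$f" 2>/dev/null; then
    pid=${f#/proc/}
    pid=${pid%/mountinfo}
    holders="$holders $pid($(cat "/proc/$pid/comm" 2>/dev/null))"
  fi
done
echo "processes holding the mount:${holders:- none}"
```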

I am now trying to delete the snapshot the clone was created from, and I get this error:

Warning: 1 of 1 snapshots could not be deleted.

*** [EINVAL] options.defer: Please set this attribute as ‘NAS 16TB 2 Vdevs Mirrored /ix-apps/app_mounts/tailscale@1.2.7’ snapshot has dependent clones: NAS 16TB 2 Vdevs Mirrored /ix-apps/app_mounts/tailscale-1.2.7-clone
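To see exactly which snapshot a clone came from and which clones depend on a snapshot, the ZFS `origin` and `clones` properties can be inspected before destroying anything. A sketch (the dataset names are from my pool; the `command -v` guard just keeps this harmless to paste on a machine without ZFS):

```shell
# Inspect the clone relationship: which snapshot the clone came from (origin),
# and which clones depend on the snapshot (clones). Dataset names are from my
# pool; substitute your own.
DS='NAS 16TB 2 Vdevs Mirrored /ix-apps/app_mounts/tailscale-1.2.7-clone'
SNAP='NAS 16TB 2 Vdevs Mirrored /ix-apps/app_mounts/tailscale@1.2.7'
if command -v zfs >/dev/null 2>&1; then
  msg="origin: $(zfs get -H -o value origin "$DS" 2>/dev/null)"
  msg="$msg; dependent clones: $(zfs get -H -o value clones "$SNAP" 2>/dev/null)"
else
  msg="zfs not available on this system"
fi
echo "$msg"
```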

No idea what’s holding the mount. Running:

root@truenas[/mnt/.ix-apps/app_mounts]# lsof +D /mnt/.ix-apps/app_mounts

COMMAND   PID          USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
zsh     27361 truenas_admin  cwd    DIR   0,82       17   34 /mnt/.ix-apps/app_mounts
sudo    35661          root  cwd    DIR   0,82       17   34 /mnt/.ix-apps/app_mounts
zsh     35662          root  cwd    DIR   0,82       17   34 /mnt/.ix-apps/app_mounts
lsof    44551          root  cwd    DIR   0,82       17   34 /mnt/.ix-apps/app_mounts
lsof    44552          root  cwd    DIR   0,82       17   34 /mnt/.ix-apps/app_mounts

These results suggest Tailscale isn’t running and isn’t holding any files open, which has me thoroughly confused; I have no idea what’s causing all of this.

When I try to rmdir tailscale, I get:

root@truenas[/mnt/.ix-apps/app_mounts]# rmdir tailscale
rmdir: failed to remove 'tailscale': Device or resource busy
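"Device or resource busy" from rmdir usually means the directory is still a mountpoint rather than a plain directory. A sketch of the check (path from my system; `umount -l` is a last resort, since it detaches the mount immediately and defers cleanup until the last user goes away):

```shell
# rmdir refuses to remove a directory that is currently a mountpoint.
# Check first; only consider a lazy unmount if it is. (Path is from my system.)
P=/mnt/.ix-apps/app_mounts/tailscale
if mountpoint -q "$P" 2>/dev/null; then
  msg="$P is a mountpoint; unmount it before rmdir (umount -l as a last resort)"
else
  msg="$P is not a mountpoint"
fi
echo "$msg"
```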

Running this command got me what I think is the process that’s messing everything up:

root@truenas[/mnt/.ix-apps/app_mounts/tailscale]# ps -ef | grep /mnt/.ix-apps/app_mounts/tailscale
root 67106 56490 0 18:20 pts/4 00:00:00 grep /mnt/.ix-apps/app_mounts/tailscale

However, I can’t for the life of me kill it. The only thing that’s worked was sudo kill -9 56490, but that just spawned a new process:

truenas+ 73592 56099 0 18:25 pts/4 00:00:00 grep /mnt/.ix-apps/app_mounts/tailscale

When I try sudo kill -9 56099, the shell just freezes.
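Worth noting for anyone reading along: `ps -ef | grep PATTERN` always matches at least the grep itself, because the pattern appears in grep’s own command line, which is what the output above is showing. Two standard workarounds, sketched with Tailscale as the example pattern:

```shell
# "ps | grep" always finds the grep process itself. Two common fixes:
matches=$(ps -ef | grep '[t]ailscale' || true)  # bracket trick: the regex no longer matches its own cmdline
pat='tail''scale'                               # built in two pieces so this demo cannot match itself either
pids=$(pgrep -f "$pat" || true)                 # pgrep always excludes its own process from results
echo "ps|grep matches: ${matches:-none}; pgrep pids: ${pids:-none}"
```

So the "new process" after each kill was just each new grep invocation, and the unkillable "parent" was my own shell.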

Restarting TrueNAS again. This has been one of the most frustrating troubleshooting sessions I’ve ever had, all caused by a simple pool disconnect.

I finally managed to promote /ix-apps/app_mounts/tailscale-1.2.7-clone with “zfs promote”, and then ran zfs destroy on the clone. It destroyed the snapshot, which no longer appears in the Snapshots tab, but when I try to disconnect the pool I still get “[EFAULT] Failed to stop docker service: [EFAULT] Failed to umount dataset: cannot unmount '/mnt/.ix-apps/app_mounts/tailscale-1.2.7-clone': unmount failed”.

Here’s my full shell:

root@truenas[/mnt/.ix-apps/app_mounts]# zfs promote NAS\ 16TB\ 2\ Vdevs\ Mirrored\ /ix-apps/app_mounts/tailscale-1.2.7-clone
root@truenas[/mnt/.ix-apps/app_mounts]# zfs promote NAS\ 16TB\ 2\ Vdevs\ Mirrored\ /ix-apps/app_mounts/tailscale-1.2.7-clone
cannot promote 'NAS 16TB 2 Vdevs Mirrored /ix-apps/app_mounts/tailscale-1.2.7-clone': not a cloned filesystem
root@truenas[/mnt/.ix-apps/app_mounts]# ls
actual-budget  dockge  flaresolverr  immich  jellyseerr  joplin  metube  pihole  prowlarr  radarr  sonarr  tailscale  tailscale-1.2.7-clone  tftpd-hpa  urbackup
root@truenas[/mnt/.ix-apps/app_mounts]# zfs promote NAS\ 16TB\ 2\ Vdevs\ Mirrored\ /ix-apps/app_mounts/tailscale            
root@truenas[/mnt/.ix-apps/app_mounts]# zfs promote NAS\ 16TB\ 2\ Vdevs\ Mirrored\ /ix-apps/app_mounts/tailscale-1.2.7-clone
root@truenas[/mnt/.ix-apps/app_mounts]# zfs destroy NAS\ 16TB\ 2\ Vdevs\ Mirrored\ /ix-apps/app_mounts/tailscale-1.2.7-clone
cannot destroy 'NAS 16TB 2 Vdevs Mirrored /ix-apps/app_mounts/tailscale-1.2.7-clone': filesystem has children
use '-r' to destroy the following datasets:
NAS 16TB 2 Vdevs Mirrored /ix-apps/app_mounts/tailscale-1.2.7-clone@1.2.7
root@truenas[/mnt/.ix-apps/app_mounts]# zfs -r destroy NAS\ 16TB\ 2\ Vdevs\ Mirrored\ /ix-apps/app_mounts/tailscale-1.2.7-clone
unrecognized command '-r'
usage: zfs command args ...
where 'command' is one of the following:

        version [-j]

        create [-Pnpuv] [-o property=value] ... <filesystem>
        create [-Pnpsv] [-b blocksize] [-o property=value] ... -V <size> <volume>
        destroy [-fnpRrv] <filesystem|volume>
        destroy [-dnpRrv] <filesystem|volume>@<snap>[%<snap>][,...]
        destroy <filesystem|volume>#<bookmark>

        snapshot [-r] [-o property=value] ... <filesystem|volume>@<snap> ...
        rollback [-rRf] <snapshot>
        clone [-p] [-o property=value] ... <snapshot> <filesystem|volume>
        promote <clone-filesystem>
        rename [-f] <filesystem|volume|snapshot> <filesystem|volume|snapshot>
        rename -p [-f] <filesystem|volume> <filesystem|volume>
        rename -u [-f] <filesystem> <filesystem>
        rename -r <snapshot> <snapshot>
        bookmark <snapshot|bookmark> <newbookmark>
        program [-jn] [-t <instruction limit>] [-m <memory limit (b)>]
            <pool> <program file> [lua args...]

        list [-Hp] [-j [--json-int]] [-r|-d max] [-o property[,...]] [-s property]...
            [-S property]... [-t type[,...]] [filesystem|volume|snapshot] ...

        set [-u] <property=value> ... <filesystem|volume|snapshot> ...
        get [-rHp] [-j [--json-int]] [-d max] [-o "all" | field[,...]]
            [-t type[,...]] [-s source[,...]]
            <"all" | property[,...]> [filesystem|volume|snapshot|bookmark] ...
        inherit [-rS] <property> <filesystem|volume|snapshot> ...
        upgrade [-v]
        upgrade [-r] [-V version] <-a | filesystem ...>

        userspace [-Hinp] [-o field[,...]] [-s field] ...
            [-S field] ... [-t type[,...]] <filesystem|snapshot|path>
        groupspace [-Hinp] [-o field[,...]] [-s field] ...
            [-S field] ... [-t type[,...]] <filesystem|snapshot|path>
        projectspace [-Hp] [-o field[,...]] [-s field] ... 
            [-S field] ... <filesystem|snapshot|path>

        project [-d|-r] <directory|file ...>
        project -c [-0] [-d|-r] [-p id] <directory|file ...>
        project -C [-k] [-r] <directory ...>
        project [-p id] [-r] [-s] <directory ...>

        mount [-j]
        mount [-flvO] [-o opts] <-a|-R filesystem|filesystem>
        unmount [-fu] <-a | filesystem|mountpoint>
        share [-l] <-a [nfs|smb] | filesystem>
        unshare <-a [nfs|smb] | filesystem|mountpoint>

        send [-DLPbcehnpsVvw] [-i|-I snapshot]
             [-R [-X dataset[,dataset]...]]     <snapshot>
        send [-DnVvPLecw] [-i snapshot|bookmark] <filesystem|volume|snapshot>
        send [-DnPpVvLec] [-i bookmark|snapshot] --redact <bookmark> <snapshot>
        send [-nVvPe] -t <receive_resume_token>
        send [-PnVv] --saved filesystem
        receive [-vMnsFhu] [-o <property>=<value>] ... [-x <property>] ...
            <filesystem|volume|snapshot>
        receive [-vMnsFhu] [-o <property>=<value>] ... [-x <property>] ... 
            [-d | -e] <filesystem>
        receive -A <filesystem|volume>

        allow <filesystem|volume>
        allow [-ldug] <"everyone"|user|group>[,...] <perm|@setname>[,...]
            <filesystem|volume>
        allow [-ld] -e <perm|@setname>[,...] <filesystem|volume>
        allow -c <perm|@setname>[,...] <filesystem|volume>
        allow -s @setname <perm|@setname>[,...] <filesystem|volume>

        unallow [-rldug] <"everyone"|user|group>[,...]
            [<perm|@setname>[,...]] <filesystem|volume>
        unallow [-rld] -e [<perm|@setname>[,...]] <filesystem|volume>
        unallow [-r] -c [<perm|@setname>[,...]] <filesystem|volume>
        unallow [-r] -s @setname [<perm|@setname>[,...]] <filesystem|volume>

        hold [-r] <tag> <snapshot> ...
        holds [-rHp] <snapshot> ...
        release [-r] <tag> <snapshot> ...
        diff [-FHth] <snapshot> [snapshot|filesystem]
        load-key [-rn] [-L <keylocation>] <-a | filesystem|volume>
        unload-key [-r] <-a | filesystem|volume>
        change-key [-l] [-o keyformat=<value>]
            [-o keylocation=<value>] [-o pbkdf2iters=<value>]
            <filesystem|volume>
        change-key -i [-l] <filesystem|volume>
        redact <snapshot> <bookmark> <redaction_snapshot> ...
        wait [-t <activity>] <filesystem>
        zone <nsfile> <filesystem>
        unzone <nsfile> <filesystem>

Each dataset is of the form: pool/[dataset/]*dataset[@name]

For the property list, run: zfs set|get

For the delegated permission list, run: zfs allow|unallow

For further help on a command or topic, run: zfs help [<topic>]
root@truenas[/mnt/.ix-apps/app_mounts]# zfs destroy NAS\ 16TB\ 2\ Vdevs\ Mirrored\ /ix-apps/app_mounts/tailscale-1.2.7-clone 
cannot destroy 'NAS 16TB 2 Vdevs Mirrored /ix-apps/app_mounts/tailscale-1.2.7-clone': filesystem has children
use '-r' to destroy the following datasets:
NAS 16TB 2 Vdevs Mirrored /ix-apps/app_mounts/tailscale-1.2.7-clone@1.2.7
root@truenas[/mnt/.ix-apps/app_mounts]# zfs -r destroy NAS\ 16TB\ 2\ Vdevs\ Mirrored\ /ix-apps/app_mounts/tailscale-1.2.7-clone
unrecognized command '-r'
[... same zfs usage output as above, trimmed ...]
root@truenas[/mnt/.ix-apps/app_mounts]# zfs destroy -r NAS\ 16TB\ 2\ Vdevs\ Mirrored\ /ix-apps/app_mounts/tailscale-1.2.7-clone
cannot destroy 'NAS 16TB 2 Vdevs Mirrored /ix-apps/app_mounts/tailscale-1.2.7-clone': filesystem has dependent clones
use '-R' to destroy the following datasets:
NAS 16TB 2 Vdevs Mirrored /ix-apps/app_mounts/tailscale
root@truenas[/mnt/.ix-apps/app_mounts]# zfs destroy -R NAS\ 16TB\ 2\ Vdevs\ Mirrored\ /ix-apps/app_mounts/tailscale-1.2.7-clone
cannot unmount '/mnt/.ix-apps/app_mounts/tailscale-1.2.7-clone': unmount failed
root@truenas[/mnt/.ix-apps/app_mounts]# zfs promote NAS\ 16TB\ 2\ Vdevs\ Mirrored\ /ix-apps/app_mounts/tailscale-1.2.7-clone   
cannot promote 'NAS 16TB 2 Vdevs Mirrored /ix-apps/app_mounts/tailscale-1.2.7-clone': not a cloned filesystem
root@truenas[/mnt/.ix-apps/app_mounts]# zfs -r destroy NAS\ 16TB\ 2\ Vdevs\ Mirrored\ /ix-apps/app_mounts/tailscale-1.2.7-clone
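One more thing worth checking at this point: whether the kernel still holds a stale entry for the clone’s mountpoint, which would explain an “unmount failed” on a path that no longer exists. A sketch (path is from my system):

```shell
# Look for the destroyed clone's mountpoint directly in the kernel mount
# table; a leftover entry here means the mount record outlived the dataset.
# (Path is from my system.)
P=/mnt/.ix-apps/app_mounts/tailscale-1.2.7-clone
stale=$(grep -F "$P" /proc/mounts || true)
echo "stale mount entry: ${stale:-none}"
```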

Also tried killing the process again; here’s the shell output:

root@truenas[/mnt/.ix-apps/app_mounts]# ps ax | grep tailscale
1155464 pts/3 S+ 0:00 grep tailscale
root@truenas[/mnt/.ix-apps/app_mounts]# kill 1155464
kill: kill 1155464 failed: no such process

I then ran a command to see if I could locate the parent process. I think I found a parent process ID that doesn’t change, but I can’t kill it no matter what:

root@truenas[/mnt/.ix-apps/app_mounts]# ps -o pid,ppid,cmd -U root | grep tailscale
1169788 1126922 grep tailscale

Going to repost everything to a new thread that’s better formatted.