[EFAULT] Failed to stop docker service: [EFAULT] Failed to umount dataset:

TN version: ElectricEel-24.10.2.4

Hello everyone,

I am having issues disconnecting a specific pool and destroying its data. Initially I just wanted to disconnect the pool, change its name, and reconnect it. I assumed that disconnecting it would automatically stop all Docker apps first, but when I tried to disconnect it, I got an error message: [EFAULT] Failed to stop docker service: [EFAULT] Failed to umount dataset: cannot unmount '/mnt/.ix-apps/app_mounts/tailscale-1.2.7-clone': unmount failed
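For context, the disconnect wizard has to stop Docker and unmount everything under the hidden .ix-apps dataset before it can export the pool. A quick, non-destructive way to see what the kernel still has mounted there (the path is the standard SCALE apps location; adjust if yours differs):

```shell
# Print device, mount point, and filesystem type for every mount
# still held under /mnt/.ix-apps:
awk '$2 ~ "^/mnt/.ix-apps" { print $1, $2, $3 }' /proc/mounts
```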

After receiving this error, I:

  1. Deleted Tailscale from the apps; that didn’t fix it.
  2. Removed the /mnt/.ix-apps/app_mounts/tailscale-1.2.7-clone dir, but when I tried to disconnect the pool, I got a new error: [EFAULT] Failed to stop docker service: [EFAULT] Failed to umount dataset: cannot unmount '/mnt/.ix-apps/app_mounts/tailscale-1.2.7-clone': no such pool or dataset
  3. I then ran lsof /mnt/.ix-apps/app_mounts/tailscale-1.2.7-clone and got the following output:
lsof: WARNING: can't stat() zfs file system /mnt/.ix-apps/app_mounts/tailscale-1.2.7-clone
Output information may be incomplete.
lsof: WARNING: can't stat() overlay file system /mnt/.ix-apps/docker/overlay2/f56620b6357deba3b2a7d57c51d54dce5c6c177ee69968748646ae9317f7a442/merged
Output information may be incomplete.
lsof: WARNING: can't stat() overlay file system /mnt/.ix-apps/docker/overlay2/617cc887933be8799d2c339266d3de490ae9f7937408dcb98584df8e28c40b2f/merged
Output information may be incomplete.
lsof: WARNING: can't stat() overlay file system /mnt/.ix-apps/docker/overlay2/bdd3ee48521063f1955e8b51e9405a94b3cc736d3d1aeb7cb309272886703b8b/merged
Output information may be incomplete.
lsof: WARNING: can't stat() nsfs file system /run/docker/netns/55cf075613b6
Output information may be incomplete.
lsof: status error on /mnt/.ix-apps/app_mounts/tailscale-1.2.7-clone: No such file or directory
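Those warnings suggest lsof can't see into the mounts rather than proving nothing holds them. Two other ways to look for a holder; a sketch using the mount point from the error above (fuser is from the psmisc package):

```shell
MNT=/mnt/.ix-apps/app_mounts/tailscale-1.2.7-clone

# Report any PIDs with open files or working directories under the mount
# (prints nothing if the path is gone or nothing holds it):
fuser -vm "$MNT" 2>/dev/null || true

# Docker containers run in private mount namespaces, so a holder can be
# invisible to lsof on the host; search every process's mount table instead:
grep -l "$MNT" /proc/[0-9]*/mountinfo 2>/dev/null || true
```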

  4. I then tried deleting all the Tailscale snapshots, but there was one that I wasn’t able to remove; I would get the following error:

Warning: 1 of 1 snapshots could not be deleted.

*** [EINVAL] options.defer: Please set this attribute as 'NAS 16TB 2 Vdevs Mirrored /ix-apps/app_mounts/tailscale@1.2.7' snapshot has dependent clones: NAS 16TB 2 Vdevs Mirrored /ix-apps/app_mounts/tailscale-1.2.7-clone

  5. I was finally able to promote "/ix-apps/app_mounts/tailscale-1.2.7-clone" with "zfs promote", then ran "zfs destroy -r mnt/ix-apps/app_mounts/tailscale-1.2.7-clone". That destroyed the snapshot, and it no longer appears in the Snapshots tab, but when I try to disconnect the pool, I still get the following error:

[EFAULT] Failed to stop docker service: [EFAULT] Failed to umount dataset: cannot unmount '/mnt/.ix-apps/app_mounts/tailscale-1.2.7-clone': unmount failed
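For anyone hitting the same "dependent clones" error: a snapshot cannot be destroyed while a clone depends on it, and zfs promote reverses that dependency so the clone takes ownership of the shared snapshot. A sketch of the sequence (pool name shortened to tank for readability; the real pool here is "NAS 16TB 2 Vdevs Mirrored", which would need quoting because of the spaces):

```shell
# Show which clones depend on the snapshot (the 'clones' property):
zfs list -t snapshot -o name,clones tank/ix-apps/app_mounts/tailscale@1.2.7

# Promote the clone: it takes over the snapshot it was created from...
zfs promote tank/ix-apps/app_mounts/tailscale-1.2.7-clone

# ...after which the clone, and the snapshot it now owns, can be destroyed:
zfs destroy -r tank/ix-apps/app_mounts/tailscale-1.2.7-clone
```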

  6. I tried running rmdir tailscale and got the following output: rmdir: failed to remove 'tailscale': Device or resource busy
  7. Running ps -ef | grep /mnt/.ix-apps/app_mounts/tailscale, I get the following output:

root 67106 56490 0 18:20 pts/4 00:00:00 grep /mnt/.ix-apps/app_mounts/tailscale

I think that’s the process that’s running, but no matter what I do I can’t kill it; every time I run the ps -ef command, there’s a new PID.
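A note on that output: ps -ef | grep PATTERN always matches the grep process itself, because grep's own command line contains the pattern. The "new PID every time" is each fresh grep, not a surviving process. The usual workaround is the bracket trick:

```shell
# When you run: ps -ef | grep '[t]ailscale'
# ps lists the grep itself as "grep [t]ailscale". The regex [t]ailscale
# still matches the string "tailscale", but NOT the literal string
# "[t]ailscale", so grep's own entry drops out. Demonstration on two
# simulated ps lines:
printf 'tailscaled --state=x\ngrep [t]ailscale\n' | grep '[t]ailscale'
# Only "tailscaled --state=x" is printed.

# pgrep avoids the problem entirely (it never matches itself):
# pgrep -af tailscale
```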

  8. I then ran a command to try to locate the parent process. I think I found a parent PID that doesn’t change, but I can’t kill it no matter what I try:

root@truenas[/mnt/.ix-apps/app_mounts]# ps -o pid,ppid,cmd -U root | grep tailscale

1169788 1126922 grep tailscale

  9. I still had tailscale-1.2.7-clone in /mnt/.ix-apps/app_mounts, but I no longer have the regular tailscale folder.
  10. I removed the /mnt/.ix-apps/app_mounts/tailscale-1.2.7-clone dir with rmdir, but that didn’t fix the issue; the error changed to say it couldn’t locate the dir when unmounting, so I re-created the dir, but now it’s empty.
  11. At this point I got sick of messing with it and decided to cut my losses: I ordered a new pair of drives, moved the data to a pool on the new drives, and tried deleting the pool on the old drives. No luck, I got the same error:
[EFAULT] Failed to stop docker service: [EFAULT] Failed to umount dataset: cannot unmount '/mnt/.ix-apps/app_mounts/tailscale-1.2.7-clone': unmount failed
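One trap in this state: the directory and the mount are separate things, so rmdir/mkdir on the path doesn't touch the dataset, and the middleware can still try to unmount a mount that no longer exists. mountpoint (from util-linux) tells the two apart; a sketch using the path from the error:

```shell
d=/mnt/.ix-apps/app_mounts/tailscale-1.2.7-clone
if mountpoint -q "$d"; then
    echo "$d is still a live mount"
else
    echo "$d is just a directory (or missing) - unmounting it can only fail"
fi
```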

  12. When I run sudo zfs list | grep ix-apps, I see the tailscale clone that’s causing issues:

NAS 16TB 2 Vdevs Mirrored /ix-apps/app_mounts/tailscale-1.2.7-clone 96K 5.77T 96K /mnt/.ix-apps/app_mounts/tailscale-1.2.7-clone

  13. When I try unmounting it with sudo zfs unmount -f /mnt/.ix-apps/app_mounts/tailscale-1.2.7-clone, I get "unmount failed".
  14. I went to the snapshots list in TrueNAS and was able to locate the tailscale clone snapshot, which I deleted; the clone dir is gone from the app mounts. But now when I try to disconnect the pool, I get: [EFAULT] Failed to stop docker service: [EFAULT] Failed to umount dataset: cannot unmount '/mnt/.ix-apps/app_mounts/tailscale-1.2.7-clone': no such pool or dataset
  15. At this point I have no clue what to try other than resetting my whole TrueNAS instance, which I doubt would even work.
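Before resorting to a reinstall, two lower-level things worth trying in this spot. Both assume the dataset name from the zfs list output; the lazy unmount is a last resort that detaches the mount immediately and cleans up once the last holder exits:

```shell
# zfs unmount accepts the dataset name as well as the mount point;
# a pool name with spaces must be quoted:
sudo zfs unmount -f 'NAS 16TB 2 Vdevs Mirrored/ix-apps/app_mounts/tailscale-1.2.7-clone'

# If the forced unmount still fails, a lazy unmount at the VFS level
# often succeeds where -f does not:
sudo umount -l /mnt/.ix-apps/app_mounts/tailscale-1.2.7-clone
```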

I’m willing to pay for help at this point if someone can fix this; it’s insanely annoying. Anything is appreciated.

If anyone knows how to perform a hard reset or force it to wipe the pool and drives, please let me know.
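For the force-wipe question: once a pool's disks are out of use, the ZFS labels and other on-disk signatures can be cleared directly. A destructive sketch; /dev/sdX is a placeholder for each old member disk, so double-check device names before running anything like this:

```shell
# If the pool still imports anywhere, export it first:
# zpool export 'NAS 16TB 2 Vdevs Mirrored'

# Erase the ZFS labels from the data partition of each old disk:
zpool labelclear -f /dev/sdX1

# Erase any remaining filesystem/RAID signatures from the whole disk:
wipefs -a /dev/sdX
```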

Update: for anyone having this issue, I just exported the new data pool and did a fresh install of TrueNAS on the system. It sucks, but I’m glad it fixed the issue.
