SCALE 25.04.2.6 - Persistent [EFAULT] 'tank/ix-apps' not mounted error after zfs destroy/reboot

Hello,

I am running TrueNAS SCALE version 25.04.2.6 and I am encountering a persistent and critical issue with the Applications service that I cannot resolve with standard troubleshooting.

The Problem

When I attempt to select a pool for the Apps service (via Choose Pool), or when the service tries to start automatically, it consistently fails with the following error:

[EFAULT] 'tank/ix-apps' dataset is not mounted on '/mnt/.ix-apps'

My custom application configurations are stored separately under /mnt/tank/configs. The system appears to be mounting, or attempting to mount, the ix-apps dataset at /mnt/mnt/.ix-apps instead of /mnt/.ix-apps.
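
One thing I have not ruled out is the pool's altroot. If the pool is imported with an altroot of /mnt (I am not certain whether TrueNAS does this on my system), then a mountpoint property of /mnt/.ix-apps would get that prefix applied a second time and the dataset would end up at /mnt/mnt/.ix-apps, which matches what I am seeing. These are the checks I can run and post output for:

# Does the pool carry an altroot that gets prefixed onto every mountpoint?
sudo zpool get altroot tank

# What does ZFS think the mountpoint is, and is the dataset actually mounted?
sudo zfs get -o name,property,value,source mountpoint,canmount,mounted tank/ix-apps

# What, if anything, is mounted at the path the middleware expects?
findmnt /mnt/.ix-apps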

Extensive Troubleshooting Performed

To fix the persistent incorrect mount, I performed the following steps, none of which resolved the issue:

  1. Corrected ZFS Properties:

    • sudo zfs set mountpoint=/mnt/.ix-apps tank/ix-apps

    • sudo zfs set canmount=on tank/ix-apps

  2. Clearing Locks/Cache: Attempted forceful unmounts and pool operations:

    • Used sudo zpool export tank (failed due to ‘busy’ dataset).

    • Used sudo zfs unmount -f tank/ix-apps multiple times.

    • Used sudo zpool clear tank and sudo mount -a -F, hoping to clear cached kernel/ZFS state before rebooting.

  3. Full Data Reset (Final Attempt):

    • Unset the Apps pool in the GUI.

    • Executed sudo zfs destroy tank/ix-apps (to remove the corrupted dataset).

    • Executed sudo rm -rf /mnt/.ix-apps (to remove any lingering directory).

    • Performed a clean system reboot via sudo shutdown -r now.

  4. Result: After destroying the dataset and rebooting, re-selecting the pool fails with the exact same error, which suggests the system failed to create and mount a fresh tank/ix-apps dataset correctly (the verification checks I plan to run are shown below).
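
If it helps with diagnosis, these are the checks I intend to run, and can post output for, immediately after re-selecting the pool, assuming the middleware recreates the dataset as tank/ix-apps:

# Was the dataset recreated, and where does ZFS think it should mount?
sudo zfs list -o name,mountpoint,canmount,mounted -r tank | grep ix-apps

# Is anything mounted at the expected path, or at the doubled path?
findmnt /mnt/.ix-apps
findmnt /mnt/mnt/.ix-apps

# Does the directory the middleware expects actually exist?
ls -ld /mnt/.ix-apps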

The issue appears to be a bug where the TrueNAS middleware is actively overriding the correct ZFS property during the pool configuration/startup process.
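
To confirm or rule out that hypothesis, I can also report which property values are set locally on the dataset and what commands have been run against the pool, so it is visible whether my manual zfs set values survive the pool configuration step:

# List every property on the dataset that carries a locally-set value;
# anything rewritten by the middleware (or set by hand) shows up here
sudo zfs get -s local all tank/ix-apps

# zpool history records every zfs/zpool command run against the pool,
# including any zfs set issued during the "Choose Pool" step
sudo zpool history tank | tail -n 50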

Request

I am requesting assistance in diagnosing this core issue. I have a debug file ready and would appreciate instructions on how to securely upload or share it with a staff member or developer for deeper analysis.

Thank you for your help.

ZFS errors like this tend to linger after a dataset is destroyed, especially when SCALE still thinks an apps pool should exist. I've run into similar problems where leftover mounts, stale ix-apps dataset records, or references from the old Kubernetes-based apps kept throwing EFAULT even after a reboot. Cleaning out the remaining dataset entries and refreshing the apps configuration usually clears the stale state so the pool can be selected and mounted properly again.
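
In that situation, the first things I would look at are leftover mounts and any datasets that still reference the old apps setup. This is just the checklist I use, not an official procedure, and the pool name needs adjusting to match your system:

# Any ix-apps related datasets still present on the pool?
sudo zfs list -r tank -o name,mountpoint,mounted | grep -i ix

# Any stale mounts still referencing the apps path?
grep ix-apps /proc/mounts

# Is the Docker service (the apps backend on 25.04) running, or failing to start?
sudo systemctl status docker --no-pager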