Nested NFS Dataset gives access denied error when mounting

  1. Steps to recreate
  • Created a Generic type dataset (1) under the pool.
  • Created a dataset (2) under that with type Multiprotocol with an NFS share
  • Added ACE for NAS user with Modify permissions
  • Connected to the NFS share with NAS user credentials successfully with write permission
  • Created another multiprotocol dataset (3) under existing dataset (2) with an NFS share and added the same NAS user with Modify permissions
  2. Expected

I’d expect the client to be able to connect to both datasets using the same credentials

  3. Actual

When trying to connect to the NFS share on dataset (3) I get the error “mount.nfs: access denied by server while mounting [share]”
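For reference, I’m mounting from the Ubuntu client roughly like this (the hostname and dataset names below are placeholders, not my exact layout):

sudo mount -t nfs nas.local:/mnt/pool/dataset1/dataset2 /mnt/ds2    # works, the NAS user can write
sudo mount -t nfs nas.local:/mnt/pool/dataset1/dataset2/dataset3 /mnt/ds3    # fails: access denied by server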

Hi and welcome to the forums.

You may need to open the permissions for dataset (2) from within the UI, check the option to apply permissions recursively, and also check the option to include child datasets.

Thanks for the quick response.

I’ve tried your suggestion with no luck.
Any other ideas?

I also raised a ticket with the support team and, to be fair, they responded really quickly too, but I don’t understand their response and they’ve closed the ticket with…

If a dataset is not explicitly exported then it will not be accessible over NFS protocol.

What do they mean “explicitly exported”?

I think they are saying that if you export dataset (2) then you won’t be able to see dataset (3) unless that is also exported. But you’ve exported both, haven’t you?

Can you share screenshots of your permissions for each dataset, please?
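If you want to double-check what the server is actually exporting, you can also look at the export table from the TrueNAS shell (assuming SCALE here, which uses the standard nfs-kernel-server tooling):

sudo exportfs -v    # lists every active export and its options
cat /etc/exports    # the exports file TrueNAS generates from your NFS shares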

If by exported you mean created NFS shares, then yes, for both datasets.


Here you go…
The ‘parent’ dataset (which backupuser can access):


The ‘child’ dataset (for which user backupuser gets access denied)

I wonder if it’s because your backupuser doesn’t have traversal rights on the first dataset?

I thought that too, then I checked here - Permissions | TrueNAS Documentation Hub

It says…

  • Modify (rwxpDdaARWc--s): adjust file or directory contents, attributes, and named attributes. Create new files or subdirectories. Includes the Traverse permission. Changing the ACL contents or owner is not allowed.

But that may mean traversing directories within the dataset, not traversing datasets.

  1. Did you verify via su <username> in a shell session that the user can actually access those paths? That’s pretty deeply nested. Your user needs, at a minimum, access to all path components, otherwise access will be denied.
  2. Does access to any of those paths rely on group membership? This is relevant because you can configure nfsd to either get group membership from the client or determine it server-side.
  3. Does the uid for “backupuser” match on the client and server? NFS relies on uids, not names (a quick way to compare is shown below).
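For the uid check, comparing the numeric ids on both machines is enough (substitute your actual username):

id backupuser    # run on the TrueNAS shell
id backupuser    # run on the Ubuntu client - the uid= and gid= values should match

If the numbers differ, the server maps the client’s requests to whatever user owns that uid on the server, regardless of the name.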

Do note that our bugtracker isn’t a support channel.

Thanks for the clarification questions - here we go…

  1. I don’t know the su command, sorry, but I tried it in a shell session on the NAS for both folders and got the same error: “This account is currently not available.” These are just test datasets I’ve been playing with, as I had issues setting up some production folders.
  2. I’m sorry, I don’t know - I didn’t change any defaults when I set up the NFS shares, but I have disabled NFSv3 at the service level (see the version check below).
  3. Yes, I manually set the UID and GID to be the same on both the NAS and the client Ubuntu machine.
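In case it’s relevant, I think I can also confirm which version the working mount negotiated, and force v4 on the failing one, with something like this (paths are placeholders again):

nfsstat -m    # shows the vers= option for each mounted NFS share
sudo mount -t nfs -o vers=4.2 nas.local:/mnt/pool/dataset1/dataset2/dataset3 /mnt/ds3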

Yes, sorry, I raised a bug ticket as I think I’ve followed the guidance correctly - sorry if it is user error!

Yes, the account needs to have a valid shell before you can test this way. The gist is that you need to check that the uid can access every path component:
/mnt/tank
/mnt/tank/foo
/mnt/tank/foo/bar
/mnt/tank/foo/bar/tar

If they lack execute permission on any of those directories, they can’t access the most deeply nested one.
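Since the account doesn’t currently have an interactive shell, you can override the shell for a single command instead (or use sudo -u), for example:

sudo su -s /bin/sh backupuser -c 'ls /mnt/tank/foo/bar/tar'
sudo -u backupuser ls /mnt/tank/foo/bar/tar

Whichever path component returns “Permission denied” is the dataset whose permissions need fixing.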

Note also that some NFS clients are buggy and will give spurious access errors if you remove everyone@ entries from the v4 ACL.

Thanks, I understand what you mean on the permissions side, but I don’t have a lot of Linux experience so I’m not completely sure how I’d test it. This is what I have done…

Logged into the shell as the user account.
Used the su command and typed in the password when prompted.
Navigated to each dataset (as per your example above).
Ran the ls -l command to confirm the permissions on the dataset below.

Here are the results:
drwxr-xr-x 3 root root 3 Jul 3 15:51 TestRootStore
drwxr-xr-x 3 root root 3 Jul 3 15:52 TestSubStore
drwx--x--- 3 root root 3 Jul 3 15:54 TestDataStore3
drwx--x--- 2 root root 2 Jul 3 15:54 TestDataStore4

Why are the permissions missing in the last two when I have specifically set them in the GUI in the screenshots above?

Try nfs4xdr_getfacl /mnt/DataPool1/TestRootStore/ etc

The ‘parent’ datasets are using default POSIX permissions, so I get an error for those (“Failed to get NFSv4 ACL”), but here are the two ‘child’ dataset outputs:

# File: /mnt/DataPool1/TestRootStore/TestSubStore/TestDataStore3
# owner: 0
# group: 0
# mode: 0o40710
# trivial_acl: false
# ACL flags: none
    owner@:rwxpDdaARWcCos:fd-----:allow
    group@:--x---a-R-c---:fd-----:allow
    group:builtin_administrators:rwxpDdaARWcCos:fd-----:allow
    user:backupuser:rwxpDdaARWc--s:fd-----:allow

# File: /mnt/DataPool1/TestRootStore/TestSubStore/TestDataStore3/TestDataStore4
# owner: 0
# group: 0
# mode: 0o40710
# trivial_acl: false
# ACL flags: none
    owner@:rwxpDdaARWcCos:fd----I:allow
    group@:--x---a-R-c---:fd----I:allow
    group:builtin_administrators:rwxpDdaARWcCos:fd----I:allow
    user:backupuser:rwxpDdaARWc--s:fd----I:allow

Does that help?

What is the getfacl output, then, of the parents (with POSIX ACL type)? Once again, this is required to determine whether the user in question can even access the path.
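For example, run these on the TrueNAS shell:

getfacl /mnt/DataPool1/TestRootStore
getfacl /mnt/DataPool1/TestRootStore/TestSubStore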

Here are the outputs from getfacl on the two parent datasets:
# file: mnt/DataPool1/TestRootStore
# owner: root
# group: root
user::rwx
group::r-x
other::r-x

# file: mnt/DataPool1/TestRootStore/TestSubStore
# owner: root
# group: root
user::rwx
group::r-x
other::r-x

This means that the user should have read and execute on the two parent folders, right?

Can you share the mount command you’re using from the client side and also the NFS export page in TrueNAS?
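Something along these lines is what I’m after (the hostname and mount point here are placeholders; include any options you pass, especially a vers= setting since you’ve disabled NFSv3):

sudo mount -t nfs -o vers=4.2 nas.local:/mnt/DataPool1/TestRootStore/TestSubStore/TestDataStore3 /mnt/test3

plus a screenshot of the share’s Path and the Maproot/Mapall user settings from the NFS shares page.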