NFSv4 + Kerberos: idmap / access problems

NFS with Kerberos used to work fine. After upgrading both my TrueNAS SCALE to 25.10.0.1 and my Fedora client from version 42 to 43, I no longer have my user’s privileges when accessing NFS with Kerberos. I just have a small home setup with one Kerberos client only (and some other VMs using sec=sys).

Mount works, but I effectively have the privileges of the nobody user.

Here’s what I’ve checked so far:

  • Kerberos authentication works with my KDC VM (client obtains nfs/<hostname>@REALM ticket)

  • DNS forward + reverse are correct

  • Hostname is correct (jonas.lan.fa2k.net)

  • Keytab contains the correct principal: nfs/jonas.lan.fa2k.net@LAN.FA2K.NET

  • rpc.idmapd maps UID ↔ user@domain correctly

  • Local filesystem permissions allow write for my user

  • gssproxy is running and configured (kernel_nfsd = yes)

  • Client mounts with sec=krb5 (verified via mount command output)
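
For reference, the gssproxy setup described above would look roughly like the following gssproxy.conf fragment (a sketch based on the gssproxy documentation; the socket path and keytab location are typical defaults, not taken from this system):

```
[service/nfs-server]
  mechs = krb5
  kernel_nfsd = yes
  socket = /run/gssproxy.sock
  cred_store = keytab:/etc/krb5.keytab
  trusted = yes
```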


Problems

  • Writes from the client give “Permission denied” unless the permissions allow writing by anyone (chmod 777 …). If the permissions are open for write, newly created files are owned by nobody / nogroup.

  • Server never reports UID mapping failures

  • /proc/net/rpc/auth.rpcsec.gss does not exist on the server

ChatGPT suggests that the last point is the root problem. My server shows:

lsmod | grep rpcsec
rpcsec_gss_krb5
auth_rpcgss

ls /proc/net/rpc
# (various files)
ls /proc/net/rpc/auth.rpcsec.gss
# No such file or directory

Anyone else seen these issues or have any ideas?

It works when I set the mapall user and mapall group to my own user. That works well and is secure enough for my use case, but I’m quite surprised to learn that multiuser NFS with krb5 is another level of difficulty beyond just getting Kerberos itself working. I thought I had it working before the upgrade, though… I don’t know. I have now tested from an Ubuntu client as well, and I get the same behavior: permission errors unless mapall is specified.

I didn’t mention which version I upgraded from when I went to 25.10.0.1. That’s because I basically skipped the 25.04 series (due to the VM issues). I was on 24.10, stopped briefly at one of the 25.04 releases on the way, but didn’t really use the NFS client on it. Essentially, it stopped working somewhere between 24.10 and 25.10.

Hi,

I’ve had precisely this issue since upgrading to 25.10. In summary, it looks like the kernel logic that uses gssproxy to authenticate the Kerberos client is not working (i.e. it is disabled).

Some observations:

Even with kernel_nfsd = yes set in gssproxy.conf, /proc/net/rpc/use-gss-proxy still shows a value of zero.
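
As a quick sketch of how to check that toggle (the path is the standard Linux nfsd one; whether the file exists at all depends on the auth_rpcgss module having registered it):

```shell
# /proc/net/rpc/use-gss-proxy reads 1 when nfsd hands RPCSEC_GSS
# context establishment to gssproxy, 0 when it does not, and is
# absent if the auth_rpcgss upcall machinery never registered it.
f=/proc/net/rpc/use-gss-proxy
if [ -r "$f" ]; then
  msg="use-gss-proxy=$(cat "$f")"
else
  msg="use-gss-proxy file not present"
fi
echo "$msg"
```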

If I stop the systemd-started gssproxy and run it manually with debugging and tracing, it looks like no request is received while trying to mount a filesystem with sec=krb5p:

KRB5_TRACE=/dev/stderr gssproxy -i -d --debug-level=3 --syslog-status
[2025/11/04 17:20:45]: Debug Level changed to 3

If instead of gssproxy, I run rpc.svcgssd, mounting with sec=krb5p works normally and I see activity in rpc.svcgssd.

I’ve worked around the issue (and I must stress this is just a hacky workaround to get me running again, which may or may not work for others) by masking gssproxy so that the legacy rpc.svcgssd runs instead:

systemctl disable --now gssproxy; systemctl mask gssproxy

Then after rebooting I see:

ps auxww | grep gss

root 23935 0.0 0.0 13180 2068 ? Ssl 18:13 0:00 /usr/sbin/rpc.gssd
root 23939 0.0 0.0 5096 2548 ? Ss 18:13 0:00 /usr/sbin/rpc.svcgssd


Thanks joed96, this work-around fixed it for me too!