Can someone help me with NFS?

I have tried off and on for the past few days to set up an NFS share to Proxmox for added storage, and also to mount it directly inside an LXC container. I have tried multiple different things to get this working and have yet to be successful.

I know some users have experienced permission errors, but right now I cannot even get the mount to work or show up at all. I have multiple SMB shares that work fine within Proxmox, and my Mac can access them too.

This is what my settings look like:

Some of those settings are not default; I changed them while troubleshooting. I have also tried changing the dataset to “root” privileges, and setting “Maproot User” and “Maproot Group” to “root” or to the user that has permissions on that dataset, both together and one at a time.

The system hangs if I try to run the mount command from inside the LXC container. If I go to the add-storage section within Proxmox and fill in the server’s IP, the dropdown menu that should list the available exports shows nothing.
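For what it’s worth, that export dropdown in the Proxmox GUI is just a scan of what the server advertises, and the same scan can be run from the Proxmox host shell. A minimal sketch (192.168.1.10 is a placeholder for the TrueNAS IP, not from this thread):

```shell
# List the exports the NFS server advertises (IP is a placeholder):
pvesm scan nfs 192.168.1.10

# If that hangs or returns nothing, query the NFS RPC services directly:
rpcinfo -p 192.168.1.10      # should list portmapper, mountd, and nfs
showmount -e 192.168.1.10    # should list the exported dataset paths
```

If `showmount -e` returns nothing, the problem is on the server/export side rather than in Proxmox.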

Another thing I have tried is putting the server’s IP address in the “allowed networks” field in TrueNAS, and also my router’s IP, and still nothing.

I do not know how to troubleshoot things any further. I still consider myself new to the world of Linux, and input would be helpful. Thank you!!!

Where is that LXC container running and what is your network?


The LXC is at
TrueNas is at
Proxmox is at
Everything, as far as I can tell, is on the same /24 subnet.

Edit: the LXC is running inside Proxmox.

I do have 2 NICs on the TrueNAS host, but I made sure to bind the NFS service to the one that has network access, as you can see in the photo.

Not sure if this was the info you were after…let me know if you need something else.

Does the LXC have “privileged” ticked?

Not sure if my experience with ESXi can help; I also had lots of issues with ESXi NFS shares. The first thing I would try is to ping your TrueNAS from wherever you want to access the share. If you can’t ping between both ends, that tells you you are probably firewalled or some part of the network setup isn’t right. If you can ping, try both IPv4 and IPv6, and check whether Proxmox requires any special format for IPv6 NFS addresses. I find virtualized networks sometimes have lots of layers of security that can block access, specifically the vSwitch in ESXi. In ESXi I can’t access an NFSv4 share over IPv6 at all, and it requires the address in the format ipv6%“virtual nic adapter name” to be accessible; I don’t have that issue with NFSv3 over IPv6, or with IPv4.
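The ping-first checks above can be sketched like this (all IPs are placeholders, not the redacted addresses from this thread):

```shell
# Basic reachability from the Proxmox host (or LXC) to the TrueNAS box:
ping -c 3 192.168.1.10

# Confirm the NFS server is actually answering and exporting something:
rpcinfo -p 192.168.1.10      # lists the RPC services (portmapper, mountd, nfs)
showmount -e 192.168.1.10    # lists the export paths clients are allowed to see
```

If ping works but `rpcinfo`/`showmount` hang or fail, the network is fine and the problem is NFS-side (service bindings, allowed networks, or protocol version).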

Since you mentioned you have 2 physical NICs, I assume the one with network access is connected directly to the router and getting its IP from the router’s DHCP server, assuming your router is also on that subnet; if not, that could be problematic. If your router is on a different subnet than what you manually assigned for your VMs and environment, then I think you have 2 options to try: use DHCP for IPv4 and let your router assign IPs for your LXC, TrueNAS, and Proxmox, or disconnect from your router and manually assign a subnet for testing. I don’t think you need internet access for the devices to see each other; if you don’t have DHCP, the only thing you need is for them all to be in the same subnet, like you have right now, to talk to each other. Also, for ESXi to access NFS, the mapping I use is maproot user: root, maproot group: nogroup. Proxmox I haven’t tried. Hope this helps.
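On the TrueNAS side, the Maproot and allowed-network settings end up as a standard exports(5) line under the hood (TrueNAS CORE is FreeBSD-based). Roughly what that looks like; the dataset path and subnet here are placeholders, not values from this thread:

```
# Export with Maproot User "root" restricted to one /24 network
# (path and subnet are placeholders -- substitute your own):
/mnt/tank/share -maproot=root -network 192.168.1.0 -mask 255.255.255.0
```

If a client is outside the `-network` range, it simply won’t see the export, which looks exactly like the empty dropdown described above.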


Yes, it is a privileged LXC.

It turns out it was not really a network issue; it was security settings on TrueNAS. I got it to work by going to Settings > Services > NFS and disabling NFSv4 (only using NFSv3).
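A less drastic alternative to disabling NFSv4 server-wide is to force v3 from the client side. A sketch, assuming a placeholder IP and export path (substitute your own):

```shell
# Force an NFSv3 mount from the Proxmox host or a privileged LXC
# (server IP and export path are placeholders):
mkdir -p /mnt/truenas
mount -t nfs -o vers=3 192.168.1.10:/mnt/tank/share /mnt/truenas
```

If I remember right, the Proxmox add-storage dialog also has an “NFS Version” setting under Advanced that does the same thing for GUI-managed storage.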

Also, after doing some more reading, it looks like there is an ACL type for NFSv4. Go to Datasets > Edit Dataset > Advanced, and you will see “ACL Type” with an option for NFSv4. I have not tried that yet; I am not doing any production-type work. Hope this helps anyone else who comes across this.
