Unable to write to child datasets within NFS share

Hi, I am having problems setting up my NFS share to be used for Docker volumes. The Docker host is on another machine, and my intention is to create an NFS share on the TrueNAS and then create child datasets within that share for each Docker volume. I have been able to create the share and mount it on the host to test. Once finished, I wouldn't need to keep it mounted.
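For reference, this is roughly how I'm doing the test mount on the Docker host (the IP address and dataset path below are just placeholders for my setup):

```bash
# Placeholder IP and dataset path -- substitute your own TrueNAS address and export
sudo mkdir -p /mnt/docker-volumes
sudo mount -t nfs 192.168.1.10:/mnt/tank/docker /mnt/docker-volumes

# Check what the client actually sees under the export
ls -la /mnt/docker-volumes
```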

The first oddity is that when I create the child dataset, I can see it in the mount on the host and I can write a file into it, but that file doesn't increase the child dataset's used storage; it increases the parent's instead. I also can't find this file in the TrueNAS shell. The same applies vice versa: if I create a file there in the TrueNAS shell, it cannot be seen in the directory when mounted on the host.
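This is roughly how I'm checking where the space is going (again, the paths and dataset names are just examples from my setup):

```bash
# On the Docker host (client side) -- the child path reports the parent export's usage
df -h /mnt/docker-volumes/testapp

# On the TrueNAS shell (server side) -- compare used space per dataset
zfs list -o name,used,mountpoint -r tank/docker
```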

I have run a test Ubuntu container and can create a volume against this dataset. I then exec into the container and can create files in it, but again, any data seems to be counted only against the parent dataset. I can see the file I created in the host mount, but not the file created in the TrueNAS shell.
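In case it matters, this is roughly how I'm creating the test volume and container (volume name, IP, and dataset path are placeholders, not a recommendation):

```bash
# NFS-backed Docker volume pointing at the child dataset's path on the TrueNAS
docker volume create --driver local \
  --opt type=nfs \
  --opt o=addr=192.168.1.10,rw \
  --opt device=:/mnt/tank/docker/testapp \
  testapp-data

# Throwaway Ubuntu container writing into that volume
docker run -it --rm -v testapp-data:/data ubuntu bash
# inside the container: touch /data/hello.txt
```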

It seems that I am somehow creating ghost directories of the same name under the NFS dataset at the point of trying to write to them. I really can't figure it out!

I have tried with my main share's owner:group set to root:root and to user:user. The child datasets are owned by the same user:user that runs the target container. The user's UID and GID exist on the host, in the container, and on the TrueNAS.

I have set the Maproot User to user and the Maproot Group to wheel in the advanced NFS settings, as I gather this is the way to do no_root_squash.

I haven't set the Mapall User or Mapall Group, as I gather this forces all access through a single user for the NFS share. If I get this working, I will be adding containers that need specific users, and I was planning to make those users the owners of their specific datasets.

Maybe I have this wrong. I am migrating from a similar setup on OpenMediaVault (obviously no datasets there, so it is just directories), but I wanted to be able to take separate snapshots for each container. I am sure I have seen others do this on YouTube but can't find any of those videos now! I have run it through AI several times and it seems convinced that I need to export each of the child datasets as its own NFS share to get this to work. I feel that is wrong.

Any help very appreciated!

Replying here to my own post…

After looking at the TrueNAS documentation, I did see that the NFS shares section actually describes the exact scenario I am looking at. It's headed in blue as a note. So it is correct behaviour that a child dataset within an NFS share cannot be used through the parent's share. I definitely should have gone to the documentation earlier… but as I could see the dataset appearing as a directory under the share, it really did seem possible and that my issues were down to permissions… as is usually the case…

So to have my Docker volumes managed in this way, I now have to share each child dataset as its own NFS share. I'm unsure if this is going to be a good idea in the long run, as it means I will have many more shares, but it does mean I can still have ZFS snapshots per container, which was what I wanted. One NFS share plus plain directories for all volumes seems useless, as I could only ever restore all volumes at the same time. That would be the same whether they were bind mounts or Docker volumes; a sketch of the per-container approach is below.
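For anyone following along, this is the sort of thing I'm after, assuming one dataset (and one NFS share) per container — dataset names here are just examples:

```bash
# On the TrueNAS shell: one child dataset per container, each exported as its own NFS share
zfs list -r tank/docker

# Snapshot and roll back a single container's volume without touching the others
zfs snapshot tank/docker/testapp@before-upgrade
zfs rollback tank/docker/testapp@before-upgrade
```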

I do wonder if this behaviour has always been this way, as I have seen a YouTube video by Christian Lempa that appeared to show a single NFS share with container datasets underneath.

It seems that the AI was actually right when I asked about this problem, and it was me that may have been hallucinating!

I would be interested to know what others do with their Docker volumes stored on TrueNAS. My TrueNAS and the Docker engine are both VMs on the same host. I replicate this ZFS pool to another TrueNAS on a bare-metal install.