Hi, I am having problems setting up an NFS share to be used for Docker volumes. The Docker host is on another machine, and my intention is to create an NFS share on the TrueNAS box and then create child datasets within it, one for each Docker volume. I have been able to create the share and mount it on the host to test; once finished, I wouldn't need to keep it mounted.
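For context, this is roughly how I mount it on the host for testing (the server name and paths here are placeholders, not my real ones):

```
# Test mount on the Docker host; "truenas.local" and the paths are examples
sudo mkdir -p /mnt/nfs/docker
sudo mount -t nfs truenas.local:/mnt/tank/docker /mnt/nfs/docker
```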
First oddity: when I create a child dataset, I can see it in the share mounted on the host and I can write a file into it, but that file doesn't increase the child dataset's used storage; it increases the parent's instead. I also can't find the file in the TrueNAS shell. The same happens in reverse: if I create a file there from the TrueNAS shell, it can't be seen in the directory when the share is mounted on the host.
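To show what I mean, this is how I've been checking (dataset and mount names are examples matching the test mount above):

```
# On TrueNAS: space accounting per dataset
zfs list -r -o name,used,mountpoint tank/docker

# On the Docker host: compare filesystem IDs of the share root and the
# child directory; if both report the same filesystem ID, the client is
# writing into the parent dataset, not the child
stat -f /mnt/nfs/docker
stat -f /mnt/nfs/docker/myapp
```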
I have run a test Ubuntu container and can create a volume against this dataset. Then I exec into the container and can create files in it, but again, any data only seems to be counted against the parent dataset. I can see the file created from inside the container in the host mount, but not the file created in the TrueNAS shell.
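For reference, the test looked roughly like this (volume and dataset names are illustrative):

```
# Create a volume backed by the child dataset over NFS
docker volume create \
  --driver local \
  --opt type=nfs \
  --opt o=addr=truenas.local,rw \
  --opt device=:/mnt/tank/docker/myapp \
  myapp-data

# Run a throwaway Ubuntu container against it and write a file
docker run -it --rm -v myapp-data:/data ubuntu bash
# inside the container:
#   echo hello > /data/test.txt
```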
It seems that I am somehow creating ghost directories of the same name under the NFS dataset at the point of trying to write to them. I really can't figure it out!
I have tried with the main share dataset owned root:root and owned user:user. The child datasets are owned by the same user:user that runs the target container. That user's UID and GID exist on the host, in the container, and on TrueNAS.
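Roughly how the ownership is set up (pool, dataset, and user names are examples; the same UID/GID is used everywhere):

```
# On TrueNAS: child dataset owned by the container's user
chown user:user /mnt/tank/docker/myapp

# Sanity check on the host, in the container, and on TrueNAS:
id user    # should print the same uid/gid on all three
```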
I have set the Maproot User to user and the Maproot Group to wheel in the advanced NFS settings, as I gather this is the way to do no_root_squash.
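This is how I've been checking what root actually maps to through the share (paths as in the test mount above):

```
# Write a file as root through the mount, then look at its ownership;
# ls -ln shows the numeric uid/gid that root was mapped to by maproot
sudo touch /mnt/nfs/docker/roottest
ls -ln /mnt/nfs/docker/roottest
```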
I haven't set the Mapall User or Mapall Group, as I gather that forces the use of a single user for the whole NFS share. If I get this working, I will be adding containers that need specific users, and I was planning to make those users the owners of their specific datasets.
Maybe I have this wrong. I am migrating from a similar setup on OpenMediaVault (no datasets there, obviously, just directories), but I wanted to be able to take separate snapshots for each container. I am sure I have seen others do this on YouTube but can't find any of those videos now! I have run it through AI several times and it seems convinced that I need to export each of the child datasets as its own NFS share to get this to work. I feel that is wrong.
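To be clear about the goal: the reason for per-container child datasets is being able to snapshot and roll back each app separately, e.g. (dataset names are examples):

```
# Snapshot and roll back one app's data independently of the others
zfs snapshot tank/docker/myapp@pre-update
zfs rollback tank/docker/myapp@pre-update
```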
Any help is very much appreciated!