If there is a documented, safe alternative, I can test it after hours. I have a few servers directly accessible via SSH, so breaking Tailscale on them wouldn't be a big problem.
It’s supposed to be a non-breaking update. And even after removing the docker_compatible option in the future there’s nothing stopping users from manually configuring equivalent options.
Question: I have a container with systemd_nspawn_user_args=--bind='/mnt/Vortex/Media:/mnt/media' --bind=/dev/fuse. Inside it I mount /mnt/media with rar2fs under /mnt/rar2fs using AutoFS, and at that point /mnt/rar2fs works as expected.
Now, is it possible to somehow make the /mnt/rar2fs mount visible in the host? Or is there a different way I can use a container with rar2fs where the output filesystem is visible in the host?
I don't think this is possible, even if you were to somehow take the jail's isolation out of the equation. SMB on SCALE integrates heavily with ZFS datasets: your AutoFS mount is not a ZFS dataset, so you can't share it from the TrueNAS GUI. Your best bet is to share it (via SMB or otherwise) from inside the jail. Otherwise you'll have to extract the RAR archives, or copy the contents of the mount, to a dataset that can be shared via SMB.
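If you go the share-from-inside-the-jail route, the Samba side is a small config file. A minimal sketch (the share name, path, and settings below are examples; adjust valid users and read only to your setup):

```
[global]
   server role = standalone server
   map to guest = never

[rar2fs]
   path = /mnt/rar2fs
   browseable = yes
   read only = yes
   valid users = youruser
```

After editing /etc/samba/smb.conf, restart smbd and add the user with smbpasswd -a as usual.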
Is there something special I missed when trying to do so?
TrueNAS is using x.x.x.124 and the rar2fs container x.x.x.125. I've installed Samba and configured the share like I always do, and added a new SMB user (smbpasswd -a <username>), but I can't mount it from any Debian VM or access it from Windows. (I can ping the container from Windows, but when I try to access the share I get an instant error message: Windows cannot access \\x.x.x.125.)
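An instant "Windows cannot access" error (rather than a password prompt) usually means nothing is answering on the SMB port at that address, not an authentication problem. A few checks worth running inside the container, as a sketch (the service name smbd and the interface details may differ by distro):

```shell
# Is smbd running, and listening on the SMB ports?
systemctl status smbd
ss -tlnp | grep -E ':445|:139'

# Does the config parse, and is the share actually defined?
testparm -s

# Can the share be reached locally, bypassing the network path entirely?
smbclient -L //127.0.0.1 -U <username>
```

If the local smbclient test works but access from another host doesn't, the problem is on the networking side (e.g. smbd bound to the wrong interface, or the container's network isolation) rather than in the share config.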
Sorry it took me a week to do any testing; I got ready last week and was pulled away by a work project. I cloned my docker config dataset, pulled the dev jlmkr.py, created a test jail, and used the GitHub docker config file, swapping macvlan for bridge and also turning off the NVIDIA passthrough default (is that intentional, Jip-Hop?). Home Assistant in host network mode works fine for me. This is not a comprehensive test at all, but I'm planning to rebuild my production jail using that methodology once the new version is released, so please chime in if I did something wrong.
Users who are still using host networking will run into issues when they upgrade Docker to v26.0.1 or above. I recommend that anyone running Docker inside their jail stop using the docker_compatible config option ASAP and switch to macvlan or bridge networking. See the full announcement on GitHub.
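For reference, macvlan or bridge networking in a jail config comes down to a systemd-nspawn flag. A sketch (eno1 and br1 are example names; substitute your actual NIC or an existing host bridge):

```
# macvlan: the jail gets its own MAC address on the physical interface.
# Note: the host and the jail cannot reach each other directly over macvlan.
systemd_nspawn_user_args=--network-macvlan=eno1

# Bridge alternative: attach the jail to an existing bridge on the host.
# systemd_nspawn_user_args=--network-bridge=br1
```

Inside the jail you then configure DHCP or a static IP on the new interface, the same as for any other host on your LAN.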
Can someone explain to me the appeal of using docker / containers within a Linux “jail”, when the jail is already a container itself?
Is it really that inconvenient for users to occasionally run apt update && apt upgrade or pacman -Syyu within a jail's sandboxed environment to keep their "apps" up to date?