Novice myself but I’m posting on the basis of Cunningham’s Law.
I’m using dockge as well, on TrueNAS SCALE 24.10.2.
This is my write-up of my learning so far (which may be wrong!)
Datasets
Create a dataset for each app and use this as the host path for the container. This gives better visibility and transferability, and enables snapshots to be taken for each app, supporting roll-back.
I’ve been using pool > applications > app
I keep a separate stacks dataset for dockge as well, where the YAML files are stored.
Can’t see any need to nest any deeper or split storage beyond this
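As a sketch of how this fits together, a stack’s compose file would then mount paths from the app’s dataset (the pool and dataset names below are just my example layout, and the image is a placeholder):

```yaml
# Hypothetical compose.yaml for a single app, using its own dataset as host path
services:
  app:
    image: nginx:latest   # placeholder image
    volumes:
      # host paths = the per-app dataset created above
      - /mnt/pool/applications/app/config:/config
      - /mnt/pool/applications/app/data:/data
```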
Permissions
If no user is specified then Docker will run commands as the root user. This is generally described as inadvisable, a security risk, and not conforming to the principle of least privilege.
My understanding is that if the container is compromised the attacker will have root-level access, but in principle only to those resources allocated to the container.
568 is obviously the default for apps in TrueNAS
The downside of assigning this to all apps would seem to be that if one app was ever compromised, it could impact all the other apps running as that user, plus any storage that user has permissions on. However, at the end of the day I’m not building Fort Knox.
My basic approach is to run it with 568, unless I can see a reason to create a separate user.
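In compose terms that looks something like this (a sketch; whether `user:` is honoured depends on the image, so treat the details as assumptions):

```yaml
# Sketch: run the container as the TrueNAS apps user instead of root
services:
  app:
    image: nginx:latest    # placeholder image
    user: "568:568"        # uid:gid; the app's dataset must be writable by 568
    # Note: some images (e.g. linuxserver.io ones) ignore `user:` and expect
    # environment variables instead:
    # environment:
    #   - PUID=568
    #   - PGID=568
```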
I can, however, see that a number of the built-in apps seem to run as root, and I can’t see a way to change this without moving them over to dockge, or I guess editing the YAML file under .ix-apps from the command line, but I’m assuming that would get overwritten by updates?
Container approach
I’ve read some mixed views on how to structure apps within and between containers
Best practice for containers is that each container should do one thing and do it well.
However, some people seem to say it is better to separate apps into their own stacks, as this gives better segmentation and allows multiple apps to make use of the same service (for example a VPN). That is more resource-efficient than creating a VPN per service within each stack, though query whether the overhead difference would be noticeable in a home lab environment. The counter-argument seems to be that packing a stack of apps together is neater once up and running: fewer points of failure and things to keep track of, and no need to specify a start order across stacks. On the other hand, if the shared central container fails, the apps behind it become unresponsive, and more containers mean more inter-dependencies to keep track of when performing maintenance. It also seems good practice to comment ports so it is clear what is being used by what.
My emerging rule of thumb would be if services are only used by each other (so a WordPress with a database) then put them in the same container. If they are used by multiple apps (or like with a reverse proxy this is the only way it makes sense) then separate the containers.
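For the “only used by each other” case, a minimal WordPress-plus-database stack might look like this (a sketch using the official images’ standard environment variables; passwords and the host port are placeholders):

```yaml
# Sketch: WordPress and its database kept in one stack
services:
  wordpress:
    image: wordpress:latest
    ports:
      - "8080:80"   # 8080 = WordPress web UI (commented so usage is clear)
    environment:
      WORDPRESS_DB_HOST: db             # service name resolves within the stack
      WORDPRESS_DB_USER: wp
      WORDPRESS_DB_PASSWORD: changeme   # placeholder
      WORDPRESS_DB_NAME: wordpress
    depends_on:
      - db
  db:
    image: mariadb:11
    environment:
      MARIADB_DATABASE: wordpress
      MARIADB_USER: wp
      MARIADB_PASSWORD: changeme        # placeholder
      MARIADB_ROOT_PASSWORD: changeme   # placeholder
```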
Networks
This is the one where I am currently haziest as I’m looking at setting up Traefik
I understand that TrueNAS currently has no support for Docker networks in the UI, so you can’t bind a container to a specific network there.
You can allow containers to connect to each other by naming the container within the compose.yaml of the receiving container
e.g. container_name: gluetun
then using that name in the network_mode setting in the ‘sending’ container’s YAML file.
network_mode: container:gluetun
The relevant ports must be declared on the receiving container, since the sending container shares its network namespace.
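Put together, a sketch of both sides (the gluetun image name and NET_ADMIN capability are from gluetun’s docs; VPN provider settings are omitted and the app image is a placeholder):

```yaml
# Receiving container: gluetun publishes the ports for apps routed through it
services:
  gluetun:
    image: qmcgaw/gluetun
    container_name: gluetun
    cap_add:
      - NET_ADMIN     # required by gluetun
    # environment: VPN provider credentials go here (omitted)
    ports:
      - "8080:8080"   # port for the app below, published via gluetun

# Sending container (in its own stack): shares gluetun's network namespace
#   services:
#     myapp:
#       image: some/app                   # placeholder
#       network_mode: container:gluetun   # no `ports:` here; gluetun owns them
```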
I’ve also read that if a network is not specified then each compose stack creates its own bridge network, and Docker runs out of its default address pools after 30+ stacks, producing an error. This can be fixed by pointing stacks at a shared default network (or allocating Docker more IP ranges)
networks:
  default:
    name: my-default-network
    external: true
I can see this is the way to reference an existing network
networks:
  proxy_network:
    external: true
I’ve seen that in Docker generally a network can be created using
docker network create new-network-name
and one tutorial doing this in TrueNAS, but others describe it as hacky
I don’t know if there is any way to achieve the same in the TrueNAS networks screen?