First, some background: I run Pi-hole and Caddy as Docker containers, so I have to create a macvlan interface with a post-init script like the following:
#!/usr/bin/bash
set -x
# create an mvlan0 interface
ip link add mvlan0 link bond0 type macvlan mode bridge
# add address for mvlan0
ip addr add 10.27.0.198/32 dev mvlan0
# bring it up
ip link set mvlan0 up
# add a route to the macvlan subnet via the mvlan0 interface
ip route add 10.27.0.192/29 dev mvlan0
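As a quick sanity check that the /32 host alias actually falls inside the /29 the route covers, a few lines of plain bash arithmetic (just an illustration, not part of the post-init script):

```shell
#!/usr/bin/env bash
# Verify that the host alias 10.27.0.198 is inside the routed 10.27.0.192/29.
# Plain bash arithmetic, nothing TrueNAS- or Docker-specific.
ip2int() { local IFS=.; set -- $1; echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 )); }
net=$(ip2int 10.27.0.192)
host=$(ip2int 10.27.0.198)
mask=$(( (0xFFFFFFFF << (32 - 29)) & 0xFFFFFFFF ))
if (( (host & mask) == net )); then
    echo "10.27.0.198 is inside 10.27.0.192/29"
fi
```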
And in order to use the macvlan created above, I have to create a Docker network with the following command:
docker network create \
-d macvlan \
-o parent=bond0 \
--subnet 10.27.0.0/24 \
--gateway 10.27.0.1 \
--ip-range 10.27.0.192/29 \
--aux-address 'host=10.27.0.198' \
poker
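For context on the numbers above: the --ip-range /29 gives Docker a block of 8 addresses (10.27.0.192 through .199) to assign containers from, and --aux-address reserves .198 so Docker will not hand it out, since the host uses it on mvlan0. A throwaway bash snippet makes the block size explicit:

```shell
# A /29 leaves 32 - 29 = 3 host bits, i.e. 2^3 = 8 addresses (.192 through .199).
prefix=29
count=$(( 1 << (32 - prefix) ))
echo "a /$prefix block contains $count addresses"
```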
Now, every time the TrueNAS machine reboots, Docker loses all stacks, containers, and networks, as the following commands demonstrate:
$ sudo docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
$ sudo docker network ls
NETWORK ID NAME DRIVER SCOPE
8787328c7df2 bridge bridge local
3d0cef1ad81a host host local
9443873ae1bf none null local
They come back after performing Unset Pool and then Choose Pool under Apps → Configuration in the WebUI.
$ sudo docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
76033d60199b linuxserver/homeassistant:latest "/init" 29 minutes ago Exited (137) 20 minutes ago homeassistant
f4195f995b2a plexinc/pms-docker:beta "/init" 30 minutes ago Exited (128) 20 minutes ago plex
8b376a093a3d linuxserver/tautulli:latest "/init" 30 minutes ago Exited (128) 21 minutes ago tautulli
dcf632195074 teddysun/v2ray:latest "/usr/bin/v2ray run …" 30 minutes ago Exited (128) 21 minutes ago v2ray
4ee01cc5818e beatkind/watchtower:latest "/watchtower" 31 minutes ago Exited (1) 21 minutes ago watchtower
fb23ecbb9210 portainer/agent:latest "./agent" 31 minutes ago Exited (2) 21 minutes ago portainer_agent
29f26eb9e905 nicolargo/glances:latest "/config/custom-entr…" 31 minutes ago Up 2 minutes (healthy) glances
a5e957de188b ghcr.io/blakeblackshear/frigate:stable "/init" 35 minutes ago Exited (137) 20 minutes ago frigate
674c3f34fa49 caddy:latest "caddy run --config …" 37 minutes ago Exited (128) 21 minutes ago caddy
b02d867e9e0c pihole/pihole:latest "start.sh" 37 minutes ago Exited (128) 20 minutes ago pihole
a6302c85de31 portainer/portainer-ce:latest "/portainer" 39 minutes ago Exited (2) 21 minutes ago portainer
600b5f3c453e baneofserenity/dockge:latest "/usr/bin/dumb-init …" 41 minutes ago Exited (128) 21 minutes ago dockge
However, the containers still fail to start, because the recreated network's ID differs from the one they were composed with.
I have to run docker compose down and docker compose up for each stack to bring them back up.
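For reference, that manual recovery amounts to roughly the loop below (STACKS_ROOT is a placeholder; substitute wherever the compose projects actually live):

```shell
#!/usr/bin/env bash
# Rough sketch of the manual recovery round after a reboot.
# STACKS_ROOT is hypothetical; point it at the directory holding the stacks.
STACKS_ROOT="${STACKS_ROOT:-/mnt/tank/stacks}"

recreate_stacks() {
    local dir
    for dir in "$STACKS_ROOT"/*/; do
        # Only touch directories that actually contain a compose file.
        [ -f "$dir/compose.yaml" ] || [ -f "$dir/docker-compose.yml" ] || continue
        # down drops the stale network reference; up -d recreates the stack.
        ( cd "$dir" && docker compose down && docker compose up -d )
    done
}
```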
What is the proper way to bring everything back up after a reboot?
Environment:
- TrueNAS version: 25.04.2.4
- CPU: Intel Core i3 1340P
docker.service journal:
$ sudo journalctl -xeu docker.service
░░ Subject: A stop job for unit docker.service has begun execution
░░ Defined-By: systemd
░░ Support: https://www.debian.org/support
░░
░░ A stop job for unit docker.service has begun execution.
░░
░░ The job identifier is 1662.
Oct 15 22:16:51 nuc13 dockerd[9340]: time="2025-10-15T22:16:51.486804920+01:00" level=info msg="Processing signal 'terminated'"
Oct 15 22:16:51 nuc13 dockerd[9340]: time="2025-10-15T22:16:51.523812752+01:00" level=info msg="ignoring event" container=29f26eb9e905e1dd585e3265b49582ddc0bc8>
Oct 15 22:16:51 nuc13 dockerd[9340]: time="2025-10-15T22:16:51.539956409+01:00" level=warning msg="ShouldRestart failed, container will not be restarted" conta>
Oct 15 22:16:51 nuc13 dockerd[9340]: time="2025-10-15T22:16:51.792860525+01:00" level=info msg="stopping event stream following graceful shutdown" error="<nil>>
Oct 15 22:16:51 nuc13 dockerd[9340]: time="2025-10-15T22:16:51.793636459+01:00" level=info msg="Daemon shutdown complete"
Oct 15 22:16:51 nuc13 dockerd[9340]: time="2025-10-15T22:16:51.793731790+01:00" level=info msg="stopping event stream following graceful shutdown" error="conte>
Oct 15 22:16:56 nuc13 dockerd[9340]: time="2025-10-15T22:16:56.794239761+01:00" level=error msg="Error shutting down http server" error="context canceled"
Oct 15 22:16:56 nuc13 systemd[1]: docker.service: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://www.debian.org/support
░░
░░ The unit docker.service has successfully entered the 'dead' state.
Oct 15 22:16:56 nuc13 systemd[1]: Stopped docker.service - Docker Application Container Engine.
░░ Subject: A stop job for unit docker.service has finished
░░ Defined-By: systemd
░░ Support: https://www.debian.org/support
░░
░░ A stop job for unit docker.service has finished.
░░
░░ The job identifier is 1662 and the job result is done.
Oct 15 22:16:56 nuc13 systemd[1]: docker.service: Consumed 3.765s CPU time.
░░ Subject: Resources consumed by unit runtime
░░ Defined-By: systemd
░░ Support: https://www.debian.org/support
░░
░░ The unit docker.service completed and consumed the indicated resources.
Oct 15 22:16:56 nuc13 systemd[1]: docker.service: Start request repeated too quickly.
Oct 15 22:16:56 nuc13 systemd[1]: docker.service: Failed with result 'start-limit-hit'.
░░ Subject: Unit failed
░░ Defined-By: systemd
░░ Support: https://www.debian.org/support
░░
░░ The unit docker.service has entered the 'failed' state with result 'start-limit-hit'.
Oct 15 22:16:56 nuc13 systemd[1]: Failed to start docker.service - Docker Application Container Engine.
░░ Subject: A start job for unit docker.service has failed
░░ Defined-By: systemd
░░ Support: https://www.debian.org/support
░░
░░ A start job for unit docker.service has finished with a failure.
░░
░░ The job identifier is 1662 and the job result is failed.