I recently upgraded to SCALE Electric Eel so I could try installing a few Docker containers, specifically Pi-hole and Unbound. I based my YAML file on the Pi-hole and Unbound config YAML by James Turland [JamesTurland/JimsGarage/Unbound on GitHub], with a few changes for TrueNAS (see below). Whenever I try to install it, it spits out an error about an overlapping pool, and I have no clue why this is happening. It is worth noting that I am using a dataset called “Docker”, and the respective folders referenced in the YAML file are already created. This dataset is owned by the admin user and group for the necessary permissions.
Here is the exact error from /var/log/app_lifecycle.log:
```
[2024/12/29 15:16:32] (ERROR) app_lifecycle.compose_action():56 - Failed 'up' action for 'pihole-unbound' app: Network ix-pihole-unbound_dns_net Creating
Network ix-pihole-unbound_dns_net Error
failed to create network ix-pihole-unbound_dns_net: Error response from daemon: invalid pool request: Pool overlaps with other one on this address space
```
For the network config, I ran `ip address` in the TrueNAS shell and saw that the default Docker bridge uses the 172.16.0.0/24 subnet. I am not using a proxy on my network (although I have plans to install NGINX sometime in the future).
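For anyone wanting to double-check the same thing on their own system, commands along these lines will show which subnets Docker already has in use (the inspect format string is just one way to pull the subnet out):

```
# subnet assigned to the default docker0 bridge
ip address show docker0

# list Docker networks, then print the default bridge's subnet
docker network ls
docker network inspect bridge --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'
```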
I am still relatively new at this, so any help is appreciated. Thanks in advance.
```
admin@truenas[~]$ sudo systemctl status systemd-resolved.service
[sudo] password for admin:
Unit systemd-resolved.service could not be found.
admin@truenas[~]$ sudo systemctl disable systemd-resolved.service
Failed to disable unit: Unit file systemd-resolved.service does not exist.
admin@truenas[~]$ sudo systemctl stop systemd-resolved
Failed to stop systemd-resolved.service: Unit systemd-resolved.service not loaded.
admin@truenas[~]$ systemctl --type=service --state=running
UNIT                      LOAD   ACTIVE SUB     DESCRIPTION
avahi-daemon.service      loaded active running Avahi mDNS/DNS-SD Stack
certmonger.service        loaded active running Certificate monitoring and PKI enrollment
chrony.service            loaded active running chrony, an NTP client/server
containerd.service        loaded active running containerd container runtime
cron.service              loaded active running Regular background program processing daemon
dbus.service              loaded active running D-Bus System Message Bus
docker.service            loaded active running Docker Application Container Engine
getty@tty1.service        loaded active running Getty on tty1
gssproxy.service          loaded active running GSSAPI Proxy Daemon
libvirtd.service          loaded active running Virtualization daemon
middlewared.service       loaded active running TrueNAS Middleware
netdata.service           loaded active running netdata - Real-time performance monitoring
nfs-blkmap.service        loaded active running pNFS block layout mapping daemon
nginx.service             loaded active running A high performance web server and a reverse proxy server
nmbd.service              loaded active running Samba NMB Daemon
nscd.service              loaded active running Name Service Cache Daemon
rpcbind.service           loaded active running RPC bind portmap service
smartmontools.service     loaded active running Self Monitoring and Reporting Technology (SMART) Daemon
smbd.service              loaded active running Samba SMB Daemon
syslog-ng.service         loaded active running System Logger Daemon
systemd-journald.service  loaded active running Journal Service
systemd-logind.service    loaded active running User Login Management
systemd-machined.service  loaded active running Virtual Machine and Container Registration Service
systemd-udevd.service     loaded active running Rule-based Manager for Device Events and Files
user@950.service          loaded active running User Manager for UID 950
virtlogd.service          loaded active running Virtual machine log manager
winbind.service           loaded active running Samba Winbind Daemon
wsdd.service              loaded active running Web Services Dynamic Discovery host daemon
zfs-zed.service           loaded active running ZFS Event Daemon (zed)
LOAD = Reflects whether the unit definition was properly loaded.
ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
SUB = The low-level unit activation state, values depend on unit type.
29 loaded units listed.
admin@truenas[~]$
```
I also ran `systemctl --type=service --state=running` afterward so you can see which services are running.
I have some doubts about whether Unbound will work within TrueNAS 24.10, because I believe it doesn't work in 24.04.2 and I could not make it work in the earlier Cobia release. I suggest you install Pi-hole first and make certain it's functioning properly before attempting to add Unbound; after all, the Pi-hole instructions for installing Unbound assume there is already a working Pi-hole install.
I removed all the parts of the YAML file referencing Unbound, and I get the same error. The strange part is that when I remove the top-level networks: section from the file, the container installs and deploys without errors; however, I then get a “403 Forbidden” error in my browser. I think this has something to do with the Docker network driver, but when I change the default bridge driver to host, I get this error:
```
Network ix-pihole_dns_net Error
failed to create network ix-pihole_dns_net: Error response from daemon: only one instance of "host" network is allowed
```
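From what I've read since, that second error is expected: Docker only ever has one host network, so you can't define another network with driver: host. In compose, host networking is apparently opted into per service instead, something like this (untested sketch, service trimmed down for brevity):

```yaml
services:
  pihole:
    image: pihole/pihole:latest
    network_mode: host   # join the host's network stack; ports: mappings don't apply
    restart: unless-stopped
```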
Any ideas? I'd really like to stick to Docker for this instead of relying on the iX Applications catalog or another VM if possible.
I did forget about that part, and I tried it when I got the 403 Forbidden message; that did get me to the Pi-hole admin interface. However, remember that in order to get to the 403 Forbidden message at all, I had to completely delete the network section of the YAML. Here is the exact YAML I used:
```yaml
services:
  pihole:
    container_name: pihole
    hostname: pihole
    image: pihole/pihole:latest # remember to change this if you're using rpi
    ports:
      - "53:53/tcp"
      - "53:53/udp"
      - "8000:80/tcp"
    environment:
      TZ: 'America/New_York'
      WEBPASSWORD: 'password'
      PIHOLE_DNS_: '10.10.1.1'
    volumes:
      - '/mnt/activeDirectory/Docker/pihole/etc-pihole/:/etc/pihole/'
      - '/mnt/activeDirectory/Docker/pihole/etc-dnsmasq.d/:/etc/dnsmasq.d/'
    restart: unless-stopped
```
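For anyone following along, I'm installing this through the Apps screen (hence the ix- prefix on the network names in the errors). If you'd rather test outside the GUI, the rough shell equivalent should be something like the following, assuming the compose plugin that the Apps service itself relies on (the file path here is just an example):

```
sudo docker compose -f /mnt/activeDirectory/Docker/docker-compose.yml up -d
```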
This gives me the impression that something in the networking part of the YAML is causing the issue. Here is the networking section I deleted from the script above that worked; it pinned a dns_net network to 172.16.0.0/24, roughly like this (same shape as the JimsGarage original, with each service also carrying a networks: entry pointing at dns_net):
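```yaml
networks:
  dns_net:
    driver: bridge
    ipam:
      config:
        - subnet: 172.16.0.0/24   # same subnet I saw on docker0
```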
I don’t get any of the pool errors after I delete this.
My paths are in the YAML file above, and I'm fairly sure the ACLs are configured properly, since everything works once the network section is removed.
Upon doing more research, I think the subnet of the Docker network may have something to do with this. When I run `ip address` I get `inet 172.16.0.1/24 brd 172.16.0.255 scope global docker0` for the internal Docker bridge, hence the reason I used 172.16.0.0/24 in the YAML. I checked /etc/docker/daemon.json out of curiosity, and the entry for address pools has a base of 172.17.0.0/16.
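It looks something like this (the size value below is how these entries typically read, not necessarily a verbatim copy of mine):

```json
{
  "default-address-pools": [
    { "base": "172.17.0.0/16", "size": 24 }
  ]
}
```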
Could this be causing my issues? If so, how can I fix it? I don’t want to mess anything up here, so any second opinions are greatly appreciated. And thanks for the help thus far.
If you can't find a suitable network solution and the Pi-hole GUI is functional without the network part of the script, then you might try leaving that part out and instead inserting this into the environment section:
INTERFACE: 'the name of your truenas network interface'
For this to work, “Permit All Origins” must be selected on the Pi-hole GUI's DNS settings page.
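In the compose file above, that would slot into the existing environment block like so (enp1s0 is just a placeholder; use whatever name `ip address` reports for your TrueNAS interface):

```yaml
    environment:
      TZ: 'America/New_York'
      WEBPASSWORD: 'password'
      PIHOLE_DNS_: '10.10.1.1'
      INTERFACE: 'enp1s0'   # placeholder; substitute your actual interface name
```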
I tried experimenting based on the evidence I had, and it turns out I was on the right track. In the YAML file, I had to change the network subnet to subnet: 172.17.0.0/16 to match the line in /etc/docker/daemon.json. After changing this, the containers installed fine (both Pi-hole on its own and the YAML config with both Pi-hole and Unbound), without any of the errors I previously encountered. I should have tried this sooner, but oh well. Thanks for the help, as y'all made the troubleshooting process a lot easier for me.
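For reference, the networks section that works for me now is the same dns_net definition as before with just the subnet swapped (shape assumed from the JimsGarage original):

```yaml
networks:
  dns_net:
    driver: bridge
    ipam:
      config:
        - subnet: 172.17.0.0/16   # matches the base pool in /etc/docker/daemon.json
```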
For future readers: set the subnet in your YAML file to match the address pool listed in /etc/docker/daemon.json in the TrueNAS CLI (rather than the docker0 subnet). That should fix this issue like it did for me.
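A quick way to double-check before editing anything:

```
# see Docker's configured address pools and the subnet already taken by docker0
cat /etc/docker/daemon.json
ip address show docker0
```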