Issues installing PiHole and Unbound in Portainer app on 25.10.1

Portainer is my sole native app on my TN server; I’m hoping to keep all other apps in there so I can troubleshoot and update more easily with YAML files (although I’m losing steam on this after days of headaches across multiple apps, lol). I’m trying to do what @gm1925 did last year on an earlier TN version. At first my problem seemed similar, but things have shifted.

I was able to deploy their modified Pihole + Unbound stack in Portainer, applying their lesson learned about the daemon.json default subnet, using the YAML below:

networks:
  dns_net:
    driver: bridge
    ipam:
      config:
        - subnet: 172.17.0.0/16
  proxy:
    external: false

services:
  pihole:
    container_name: pi-hole
    hostname: pihole
    image: pihole/pihole:latest
    networks:
      dns_net:
        ipv4_address: 172.17.0.5
      proxy:
    ports:
      - "53:53/tcp"
      - "53:53/udp"
      - "20080:80/tcp"
    environment:
      TZ: 'America/Chicago'
      WEBPASSWORD: 'password'
      PIHOLE_DNS_: '172.17.0.5#5053'
    volumes:
      - '/mnt/Shallow/AppData/Pi-Hole/:/etc/pihole/'
      - '/mnt/Shallow/AppData/Pi-Hole/dnsmasq/:/etc/dnsmasq.d/'
    restart: unless-stopped
  unbound:
    container_name: unbound
    image: mvance/unbound:latest
    networks:
      dns_net:
        ipv4_address: 172.17.0.6
    volumes:
      - '/mnt/Shallow/AppData/Unbound:/opt/unbound/etc/unbound'
    ports:
      - "5053:53/tcp"
      - "5053:53/udp"
    healthcheck:
      test: ["NONE"]
    restart: unless-stopped

I’m very new to all this, but from what I can tell you just need to give pihole and unbound local addresses within that “dns_net” subnet, point pihole at unbound’s port 5053 (I see 5335 used elsewhere, hopefully the exact number isn’t important?), and then make datasets with the “apps” preset to hold each one’s files. Seems simple enough, but I’m still not sure I have the datasets working properly…

Right now, when I deploy that YAML as a Portainer stack, I see odd behavior when refreshing the container status: unbound’s IP address appears and disappears periodically (it looks like a restart loop), and the unbound logs show major issues which I do not yet understand how to fix:

[1767312968] unbound[1:0] fatal error: Could not read config file: /opt/unbound/etc/unbound/unbound.conf. Maybe try unbound -dd, it stays on the commandline to see more errors, or unbound-checkconf
/opt/unbound/etc/unbound/unbound.conf:339: error: cannot open include file '/opt/unbound/etc/unbound/a-records.conf': No such file or directory
/opt/unbound/etc/unbound/unbound.conf:340: error: cannot open include file '/opt/unbound/etc/unbound/srv-records.conf': No such file or directory
/opt/unbound/etc/unbound/unbound.conf:346: error: cannot open include file '/opt/unbound/etc/unbound/forward-records.conf': No such file or directory
read /opt/unbound/etc/unbound/unbound.conf failed: 3 errors in configuration file

I can see in the shell that unbound.conf is where the YAML maps it, but I’m not sure what’s keeping the rest of those .conf files from being created. Could my dozen or so attempts at deploying this have muddled the directories?? I can still see old, deleted datasets in the shell; not sure if that’s contributing to this issue.

Any help, advice, admonishment appreciated!!

Ah, in addition to the more glaring Unbound issues, I also cannot log in to the PiHole web GUI with the WEBPASSWORD value set in my YAML… Never thought that would be an issue. The logs look benign to my novice eyes, maybe I’m missing something obvious?

2026-01-01 17:57:48.136 CST [71M] WARNING: Insufficient permissions to set system time (CAP_SYS_TIME required), NTP client not available
2026-01-01 17:57:48.136 CST [71/T72] INFO: NTP server listening on 0.0.0.0:123 (IPv4)
2026-01-01 17:57:48.136 CST [71/T73] INFO: NTP server listening on :::123 (IPv6)
2026-01-01 17:57:48.136 CST [71M] INFO: FTL is running as user pihole (UID 1000)
2026-01-01 17:57:48.136 CST [71M] INFO: Reading certificate from /etc/pihole/tls.pem ...
2026-01-01 17:57:48.137 CST [71M] INFO: Using SSL/TLS certificate file /etc/pihole/tls.pem
2026-01-01 17:57:48.137 CST [71M] INFO: Web server ports:
2026-01-01 17:57:48.137 CST [71M] INFO:   - 0.0.0.0:80 (HTTP, IPv4, optional, OK)
2026-01-01 17:57:48.137 CST [71M] INFO:   - 0.0.0.0:443 (HTTPS, IPv4, optional, OK)
2026-01-01 17:57:48.137 CST [71M] INFO:   - [::]:80 (HTTP, IPv6, optional, OK)
2026-01-01 17:57:48.137 CST [71M] INFO:   - [::]:443 (HTTPS, IPv6, optional, OK)
2026-01-01 17:57:48.137 CST [71M] INFO: Restored 0 API sessions from the database
2026-01-01 17:57:48.139 CST [71M] INFO: Blocking status is enabled
2026-01-01 17:57:48.237 CST [71/T74] INFO: Compiled 0 allow and 0 deny regex for 0 client in 0.0 msec

I’ve deployed PiHole alone at least a dozen times with various YAML files, and never had trouble with the password until trying this combo stack.

Ah, I forgot to check for new syntax since a year has passed. Using a different password variable fixed the password issue: it’s ‘FTLCONF_webserver_api_password’ now, not ‘WEBPASSWORD’.

I’ll go looking for a newer Unbound setup next; hopefully it’s something similarly simple breaking things?

Found a pull request describing my exact issue with Unbound, but after creating the three missing files, the error logs now show various permission issues reading files I’m not at all familiar with, hmmm.
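For reference, since it stumped me for a bit: “creating the three missing files” is just a matter of touch-ing empty files in the mounted dataset. Sketched here against a scratch directory so it’s safe to paste; the real target for my stack is /mnt/Shallow/AppData/Unbound:

```shell
# Scratch-directory stand-in for the mounted Unbound dataset.
UNBOUND_DIR=$(mktemp -d)
# Create the three include files the config references, empty is fine.
touch "$UNBOUND_DIR/a-records.conf" \
      "$UNBOUND_DIR/srv-records.conf" \
      "$UNBOUND_DIR/forward-records.conf"
ls "$UNBOUND_DIR"
```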

2026-01-02T01:51:29.440213107Z [1767318689] unbound[1:0] error: Could not open logfile /dev/null: Permission denied
2026-01-02T01:51:29.440450266Z [1767318689] unbound[1:0] warning: subnetcache: serve-expired is set but not working for data originating from the subnet module cache.
2026-01-02T01:51:29.440460896Z [1767318689] unbound[1:0] warning: subnetcache: prefetch is set but not working for data originating from the subnet module cache.
2026-01-02T01:51:29.440736486Z [1767318689] unbound[1:0] error: unable to open var/root.key for reading: Permission denied
2026-01-02T01:51:29.440760170Z [1767318689] unbound[1:0] error: error reading auto-trust-anchor-file: var/root.key
2026-01-02T01:51:29.440772924Z [1767318689] unbound[1:0] error: validator: error in trustanchors config
2026-01-02T01:51:29.440796117Z [1767318689] unbound[1:0] error: validator: could not apply configuration settings.
2026-01-02T01:51:29.440806917Z [1767318689] unbound[1:0] error: module init for module validator failed
2026-01-02T01:51:29.440817286Z [1767318689] unbound[1:0] fatal error: failed to init modules

Checked that the named files are indeed there. /dev/null gives the error “this is a device file” when I try to open it with nano, so the source of that error is probably that unbound is treating a device file like a log file? I have even less of a clue what’s going on with var/root.key; the file looks reasonable to me in nano.
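If the root.key problem really is just read permission on the mounted copy (with chroot set to /opt/unbound/etc/unbound, the relative “var/root.key” should map to the var/ subfolder of the mounted dataset), then a blunt chmod might be enough. Sketched against a scratch directory; the real path for my stack would be /mnt/Shallow/AppData/Unbound/var/root.key:

```shell
# Scratch-directory stand-in for the mounted Unbound dataset.
DATASET=$(mktemp -d)
mkdir -p "$DATASET/var"
touch "$DATASET/var/root.key"
# Make the trust anchor world-readable so the unprivileged _unbound user
# inside the container can open it.
chmod 644 "$DATASET/var/root.key"
ls -l "$DATASET/var/root.key"
```

(Unbound may also want to update root.key itself via RFC 5011 probes, which would need write access too; read access should at least get it past the startup error.)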

Digging through the unbound.conf file, somehow the listening port (the interface: line) was set to 53, not 5053 or 5335, so I changed that and I think Unbound is now running?

Edit: forgot that’s port 53 from inside the container, so it should stay as 53! The real fatal error was from trying and failing to read the root.key file. I should probably figure out how to give the container permission for that, but that feels daunting right now; maybe later, or if someone who knows users/groups better than I do can point me at a quick chown fix. For now, just commenting out the root.key line gets it running, although unlike the log files it sounds pretty important…?

2026-01-02T02:35:10.543752962Z [1767321310] unbound[1:0] info: start of service (unbound 1.22.0).

Now my major issue is getting my home router to play nice. I’ve tried setting the primary DNS to the IP Portainer lists for my PiHole container, 172.17.0.5, but the router immediately throws a fit and I lose internet. Switching back to standard DNS IPs immediately brings back my internet connection. Am I using an IP that isn’t visible to my router somehow? I thought container IP addresses were broadly visible, or is that only within the subnet? My router can clearly see my TrueNAS PC, but how do I expose the containers within it to my router?

Edit: I also used an AI-suggested shell command to look up all IP addresses tied to that docker container; it came up with 172.16.2.2 in addition to the IP listed in Portainer. I tried both, but the internet went down in both cases.

It seems like Pi-Hole and Unbound are not communicating properly. When I limit the upstream DNS servers in the Pi-hole settings to only 172.17.0.6#5335 (unbound’s published DNS port), I get persistent timeouts:

2026-01-01 22:14:55.404 CST [70/T74] ERROR: Cannot receive UDP DNS reply: Timeout - no response from upstream DNS server

And when I bring back the 8.8.8.8 and 8.8.4.4 backup DNS servers, the errors stop. Hopefully this weekend I can figure out how to give a container TrueNAS file permissions so I can uncomment that important-sounding “root.key” line in the config file; might that be the solution?
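Something else I want to test when I’m back at it: ‘172.17.0.6#5335’ mixes Unbound’s container IP with the host-published port. Inside dns_net, Unbound listens on its container port 53; 5335 only exists on the host side of the port mapping (and my very first stack had the same flavor of mix-up, pointing PIHOLE_DNS_ at 172.17.0.5, which is Pi-hole’s own address, not unbound’s 172.17.0.6). If I’m right, the upstream line should be one of these instead (the LAN IP below is a placeholder, not my real address):

```yaml
    environment:
      # container-to-container across dns_net: use Unbound's container port
      FTLCONF_dns_upstreams: '172.17.0.6#53'
      # ...or go through the host's published port instead, via the NAS's
      # LAN IP (192.168.1.50 here is a made-up example)
      # FTLCONF_dns_upstreams: '192.168.1.50#5335'
```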

Current YAML in stack:

networks:
  dns_net:
    driver: bridge
    ipam:
      config:
        - subnet: 172.17.0.0/16
  proxy:
    external: false

services:
  pihole:
    container_name: pi-hole
    hostname: pihole
    image: pihole/pihole:latest
    networks:
      dns_net:
        ipv4_address: 172.17.0.5
      proxy:
    ports:
      - "53:53/tcp"
      - "53:53/udp"
      # Only needed if you are using Pi-hole as your DHCP server
      - "67:67/udp"
      # Only needed if you are using Pi-hole as your NTP server
      - "123:123/udp"
      # Web UI http port
      - "20080:80/tcp"
      # Web UI https port
      - "20443:443/tcp"
    environment:
      TZ: 'America/Chicago'
      FTLCONF_webserver_api_password: 'password'
      FTLCONF_dns_listeningMode: 'ALL'
      FTLCONF_dns_upstreams: '172.17.0.6#5335'
    volumes:
      - '/mnt/Shallow/AppData/Pi-Hole/:/etc/pihole/'
      - '/mnt/Shallow/AppData/Pi-Hole/dnsmasq/:/etc/dnsmasq.d/'
    cap_add:
      # Required if you are using Pi-hole as your DHCP server, else not needed
      - NET_ADMIN
      # Required if you are using Pi-hole as your NTP client to be able to set the host's system time
      - SYS_TIME
      # Optional, if Pi-hole should get some more processing time
      - SYS_NICE
    restart: unless-stopped
  unbound:
    container_name: unbound
    image: mvance/unbound:latest
    networks:
      dns_net:
        ipv4_address: 172.17.0.6
    volumes:
      - '/mnt/Shallow/AppData/Unbound/:/opt/unbound/etc/unbound'
    ports:
      - "5335:53/tcp"
      - "5335:53/udp"
    healthcheck:
      test: ["NONE"]
    restart: unless-stopped

Current unbound.conf file:

server:
    ###########################################################################
    # BASIC SETTINGS
    ###########################################################################
    # Time to live maximum for RRsets and messages in the cache. If the maximum
    # kicks in, responses to clients still get decrementing TTLs based on the
    # original (larger) values. When the internal TTL expires, the cache item
    # has expired. Can be set lower to force the resolver to query for data
    # often, and not trust (very large) TTL values.
    cache-max-ttl: 86400

    # Time to live minimum for RRsets and messages in the cache. If the minimum
    # kicks in, the data is cached for longer than the domain owner intended,
    # and thus less queries are made to look up the data. Zero makes sure the
    # data in the cache is as the domain owner intended, higher values,
    # especially more than an hour or so, can lead to trouble as the data in
    # the cache does not match up with the actual data any more.
    cache-min-ttl: 300

    # Set the working directory for the program.
    directory: "/opt/unbound/etc/unbound"

    # If enabled, Unbound will respond with Extended DNS Error codes (RFC 8914).
    # These EDEs attach informative error messages to a response for various
    # errors.
    # When the val-log-level: option is also set to 2, responses with Extended
    # DNS Errors concerning DNSSEC failures that are not served from cache, will
    # also contain a descriptive text message about the reason for the failure.
    ede: yes

    # If enabled, Unbound will attach an Extended DNS Error (RFC 8914)
    # Code 3 - Stale Answer as EDNS0 option to the expired response.
    # This will not attach the EDE code without setting ede: yes as well.
    ede-serve-expired: yes

    # RFC 6891. Number  of bytes size to advertise as the EDNS reassembly buffer
    # size. This is the value put into  datagrams over UDP towards peers.
    # The actual buffer size is determined by msg-buffer-size (both for TCP and
    # UDP). Do not set higher than that value.
    # Default  is  1232 which is the DNS Flag Day 2020 recommendation.
    # Setting to 512 bypasses even the most stringent path MTU problems, but
    # is seen as extreme, since the amount of TCP fallback generated is
    # excessive (probably also for this resolver, consider tuning the outgoing
    # tcp number).
    edns-buffer-size: 1232

    # Listen for queries from clients, and answer from this network
    # interface and port.
    interface: 0.0.0.0@53

    # Rotates RRSet order in response (the pseudo-random number is taken from
    # the query ID, for speed and thread safety).
    rrset-roundrobin: yes

    # Drop user  privileges after  binding the port.
    username: "_unbound"

    ###########################################################################
    # LOGGING
    ###########################################################################

    # Do not print log lines to inform about local zone actions
    log-local-actions: no

    # Do not print one line per query to the log
    log-queries: no

    # Do not print one line per reply to the log
    log-replies: no

    # Do not print log lines that say why queries return SERVFAIL to clients
    log-servfail: no

    # If you want to log to a file, use:
    # logfile: /opt/unbound/etc/unbound/unbound.log
    # Set log location (using /dev/null further limits logging)
    logfile: unbound.log

    # Set logging level
    # Level 0: No verbosity, only errors.
    # Level 1: Gives operational information.
    # Level 2: Gives detailed operational information including short information per query.
    # Level 3: Gives query level information, output per query.
    # Level 4:  Gives algorithm level information.
    # Level 5: Logs client identification for cache misses.
    verbosity: 0

    ###########################################################################
    # PRIVACY SETTINGS
    ###########################################################################

    # RFC 8198. Use the DNSSEC NSEC chain to synthesize NXDOMAIN and other
    # denials, using information from previous NXDOMAIN answers. In other
    # words, use cached NSEC records to generate negative answers within a
    # range and positive answers from wildcards. This increases performance,
    # decreases latency and resource utilization on both authoritative and
    # recursive servers, and increases privacy. Also, it may help increase
    # resilience to certain DoS attacks in some circumstances.
    aggressive-nsec: yes

    # Extra delay for timeouted UDP ports before they are closed, in msec.
    # This prevents very delayed answer packets from the upstream (recursive)
    # servers from bouncing against closed ports and setting off all sort of
    # close-port counters, with eg. 1500 msec. When timeouts happen you need
    # extra sockets, it checks the ID and remote IP of packets, and unwanted
    # packets are added to the unwanted packet counter.
    delay-close: 10000

    # Prevent the unbound server from forking into the background as a daemon
    do-daemonize: no

    # Add localhost to the do-not-query-address list.
    do-not-query-localhost: no

    # Number  of  bytes size of the aggressive negative cache.
    neg-cache-size: 4M

    # Send minimum amount of information to upstream servers to enhance
    # privacy (best privacy).
    qname-minimisation: yes

    ###########################################################################
    # SECURITY SETTINGS
    ###########################################################################
    # Only give access to recursion clients from LAN IPs
    access-control: 127.0.0.1/32 allow
    access-control: 192.168.0.0/16 allow
    access-control: 172.16.0.0/16 allow
    access-control: 10.0.0.0/8 allow
    # access-control: fc00::/7 allow
    # access-control: ::1/128 allow

    # File with trust anchor for  one  zone, which is tracked with RFC5011
    # probes.
    # auto-trust-anchor-file: "var/root.key"

    # Enable chroot (i.e, change apparent root directory for the current
    # running process and its children)
    chroot: "/opt/unbound/etc/unbound"

    # Deny queries of type ANY with an empty response.
    deny-any: yes

    # Harden against algorithm downgrade when multiple algorithms are
    # advertised in the DS record.
    harden-algo-downgrade: yes

    # Harden against unknown records in the authority section and additional
    # section. If no, such records are copied from the upstream and presented
    # to the client together with the answer. If yes, it could hamper future
    # protocol developments that want to add records.
    harden-unknown-additional: yes

    # RFC 8020. returns nxdomain to queries for a name below another name that
    # is already known to be nxdomain.
    harden-below-nxdomain: yes

    # Require DNSSEC data for trust-anchored zones, if such data is absent, the
    # zone becomes bogus. If turned off you run the risk of a downgrade attack
    # that disables security for a zone.
    harden-dnssec-stripped: yes

    # Only trust glue if it is within the servers authority.
    harden-glue: yes

    # Ignore very large queries.
    harden-large-queries: yes

    # Perform additional queries for infrastructure data to harden the referral
    # path. Validates the replies if trust anchors are configured and the zones
    # are signed. This enforces DNSSEC validation on nameserver NS sets and the
    # nameserver addresses that are encountered on the referral path to the
    # answer. Experimental option.
    harden-referral-path: no

    # Ignore very small EDNS buffer sizes from queries.
    harden-short-bufsize: yes

    # If enabled the HTTP header User-Agent is not set. Use with caution
    # as some webserver configurations may reject HTTP requests lacking
    # this header. If needed, it is better to explicitly set the
    # the http-user-agent.
    hide-http-user-agent: no

    # Refuse id.server and hostname.bind queries
    hide-identity: yes

    # Refuse version.server and version.bind queries
    hide-version: yes

    # Set the HTTP User-Agent header for outgoing HTTP requests. If
    # set to "", the default, then the package name and version are
    # used.
    http-user-agent: "DNS"

    # Report this identity rather than the hostname of the server.
    identity: "DNS"

    # These private network addresses are not allowed to be returned for public
    # internet names. Any  occurrence of such addresses are removed from DNS
    # answers. Additionally, the DNSSEC validator may mark the  answers  bogus.
    # This  protects  against DNS  Rebinding
    private-address: 10.0.0.0/8
    private-address: 172.16.0.0/12
    private-address: 192.168.0.0/16
    private-address: 169.254.0.0/16
    # private-address: fd00::/8
    # private-address: fe80::/10
    # private-address: ::ffff:0:0/96

    # Enable ratelimiting of queries (per second) sent to nameserver for
    # performing recursion. More queries are turned away with an error
    # (servfail). This stops recursive floods (e.g., random query names), but
    # not spoofed reflection floods. Cached responses are not rate limited by
    # this setting. Experimental option.
    ratelimit: 1000

    # Use this certificate bundle for authenticating connections made to
    # outside peers (e.g., auth-zone urls, DNS over TLS connections).
    tls-cert-bundle: /etc/ssl/certs/ca-certificates.crt

    # Set the total number of unwanted replies to keep track of in every thread.
    # When it reaches the threshold, a defensive action of clearing the rrset
    # and message caches is taken, hopefully flushing away any poison.
    # Unbound suggests a value of 10 million.
    unwanted-reply-threshold: 10000

    # Use 0x20-encoded random bits in the query to foil spoof attempts. This
    # perturbs the lowercase and uppercase of query names sent to authority
    # servers and checks if the reply still has the correct casing.
    # This feature is an experimental implementation of draft dns-0x20.
    # Experimental option.
    use-caps-for-id: yes

    # Help protect users that rely on this validator for authentication from
    # potentially bad data in the additional section. Instruct the validator to
    # remove data from the additional section of secure messages that are not
    # signed properly. Messages that are insecure, bogus, indeterminate or
    # unchecked are not affected.
    val-clean-additional: yes

    ###########################################################################
    # PERFORMANCE SETTINGS
    ###########################################################################
    # https://nlnetlabs.nl/documentation/unbound/howto-optimise/
    # https://nlnetlabs.nl/news/2019/Feb/05/unbound-1.9.0-released/

    # Number of slabs in the infrastructure cache. Slabs reduce lock contention
    # by threads. Must be set to a power of 2.
    infra-cache-slabs: 16

    # Number of incoming TCP buffers to allocate per thread. Default
    # is 10. If set to 0, or if do-tcp is "no", no  TCP  queries  from
    # clients  are  accepted. For larger installations increasing this
    # value is a good idea.
    incoming-num-tcp: 10

    # Number of slabs in the key cache. Slabs reduce lock contention by
    # threads. Must be set to a power of 2. Setting (close) to the number
    # of cpus is a reasonable guess.
    key-cache-slabs: 16

    # Number  of  bytes  size  of  the  message  cache.
    # Unbound recommendation is to Use roughly twice as much rrset cache memory
    # as you use msg cache memory.
    msg-cache-size: 462536704

    # Number of slabs in the message cache. Slabs reduce lock contention by
    # threads. Must be set to a power of 2. Setting (close) to the number of
    # cpus is a reasonable guess.
    msg-cache-slabs: 16

    # The number of queries that every thread will service simultaneously. If
    # more queries arrive that need servicing, and no queries can be jostled
    # out (see jostle-timeout), then the queries are dropped.
    # This is best set at half the number of the outgoing-range.
    # This Unbound instance was compiled with libevent so it can efficiently
    # use more than 1024 file descriptors.
    num-queries-per-thread: 4096

    # The number of threads to create to serve clients.
    # This is set dynamically at run time to effectively use available CPUs
    # resources
    num-threads: 11

    # Number of ports to open. This number of file descriptors can be opened
    # per thread.
    # This Unbound instance was compiled with libevent so it can efficiently
    # use more than 1024 file descriptors.
    outgoing-range: 8192

    # Number of bytes size of the RRset cache.
    # Use roughly twice as much rrset cache memory as msg cache memory
    rrset-cache-size: 925073408

    # Number of slabs in the RRset cache. Slabs reduce lock contention by
    # threads. Must be set to a power of 2.
    rrset-cache-slabs: 16

    # Do no insert authority/additional sections into response messages when
    # those sections are not required. This reduces response size
    # significantly, and may avoid TCP fallback for some responses. This may
    # cause a slight speedup.
    minimal-responses: yes

    # Fetch message cache elements before they expire, to keep the cache up
    # to date. This lowers the latency of requests at the expense of a
    # little more CPU usage and traffic.
    prefetch: yes

    # Fetch the DNSKEYs earlier in the validation process, when a DS record is
    # encountered. This lowers the latency of requests at the expense of little
    # more CPU usage.
    prefetch-key: yes

    # Have unbound attempt to serve old responses from cache with a TTL of 0 in
    # the response without waiting for the actual resolution to finish. The
    # actual resolution answer ends up in the cache later on.
    serve-expired: yes

    # UDP queries that have waited in the socket buffer for a long time can be
    # dropped. The time is set in seconds, 3 could be a good value to ignore old
    # queries that likely the client does not need a reply for any more. This 
    # could happen if the host has not been able to service the queries for a 
    # while, i.e. Unbound is not running, and then is enabled again. It uses 
    # timestamp socket options.
    sock-queue-timeout: 3

    # Open dedicated listening sockets for incoming queries for each thread and
    # try to set the SO_REUSEPORT socket option on each socket. May distribute
    # incoming queries to threads more evenly.
    so-reuseport: yes

    ###########################################################################
    # LOCAL ZONE
    ###########################################################################

    # Include file for local-data and local-data-ptr
    include: /opt/unbound/etc/unbound/a-records.conf
    include: /opt/unbound/etc/unbound/srv-records.conf

    ###########################################################################
    # FORWARD ZONE
    ###########################################################################

    include: /opt/unbound/etc/unbound/forward-records.conf


remote-control:
    control-enable: no

Boy, it’s echo-y in here :sweat_smile: Sorry for all the edits and the maybe-confusing back and forth; I’ve been spending all my time on this for days now and have developed a (perhaps stress-related, haha) fever/cold.

Anyway, I may have actually solved my main issue. I don’t know why my predecessor used a custom bridge network to get PiHole and Unbound to play together when they can simply use ports tied to my NAS IP to communicate with each other and with my home router… so I left all the network bits out of the compose YAML, pointed PiHole at 127.0.0.1#5335 for DNS, and bam, it appears to be working: no more UDP timeouts in the PiHole logs. I’m not experienced enough to know if there’s a silent issue, but as of now 80% of the “upstream servers” in the PiHole UI is my Unbound port, so I assume that’s a good sign?

On the downside, I still can’t figure out how to set file permissions for containers that don’t offer UID/GID env variables… but hopefully I’m only missing out on DNSSEC by ignoring that root.key file, which I’ve read is not crucial to a secure and private home network… and unbound.log is probably worthless to a noob like me until I’ve gotten better at this. For the record, these errors remain unsolved if anyone has thoughts; I may make a new post later on this separate issue, as I’m a perfectionist.

unbound[1:0] error: Could not open logfile /dev/null: Permission denied
unbound[1:0] warning: subnetcache: serve-expired is set but not working for data originating from the subnet module cache.
unbound[1:0] warning: subnetcache: prefetch is set but not working for data originating from the subnet module cache.
unbound[1:0] error: unable to open var/root.key for reading: Permission denied
unbound[1:0] error: error reading auto-trust-anchor-file: var/root.key
unbound[1:0] error: validator: error in trustanchors config
unbound[1:0] error: validator: could not apply configuration settings.
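(One theory on the /dev/null error that I haven’t verified: with ‘chroot: "/opt/unbound/etc/unbound"’ enabled, unbound interprets file paths inside the chroot, so “/dev/null” gets looked up as a file under the mounted dataset rather than as the real device. Disabling the chroot, or logging to a regular file that lives inside the config dir, should sidestep it:)

```text
server:
    # either disable the chroot entirely...
    chroot: ""
    # ...or log to a plain file inside the (chrooted) config directory
    logfile: "unbound.log"
```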

Any advice on how to determine that my recursive PiHole (never thought I’d write those two words together) is indeed running as intended is most welcome. I’m only at 0.3% queries blocked with the vanilla blocklist, but I haven’t exactly been perusing ad-heavy sites lately, so maybe that’s normal.

SO! After much ranting and raving, I’ll call this one self-solved! Time to clean up my pile of browser tabs and move on to 30 other difficult learning curves lol

Current working YAML for PiHole-Unbound stack:

services:
  pihole:
    container_name: pi-hole
    hostname: pihole
    image: pihole/pihole:latest
    ports:
      - "53:53/tcp"
      - "53:53/udp"
      # Uncomment the line below if you are using Pi-hole as your DHCP server
      # - "67:67/udp"
      # Only needed if you are using Pi-hole as your NTP server
      - "123:123/udp"
      # Web UI http port
      - "20080:80/tcp"
      # Web UI https port
      - "20443:443/tcp"
    environment:
      TZ: 'America/Chicago'
      FTLCONF_webserver_api_password: 'kYVk_oN6wa9j8eoHVmD-Uzzjm9'
      FTLCONF_dns_listeningMode: 'ALL'
      FTLCONF_dns_upstreams: '127.0.0.1#5335'
    volumes:
      - '/mnt/Shallow/AppData/Pi-Hole/:/etc/pihole/'
      - '/mnt/Shallow/AppData/Pi-Hole/dnsmasq/:/etc/dnsmasq.d/'
    cap_add:
      # Required if you are using Pi-hole as your DHCP server, else not needed
      - NET_ADMIN
      # Required if you are using Pi-hole as your NTP client to be able to set the host's system time
      - SYS_TIME
      # Optional, if Pi-hole should get some more processing time
      - SYS_NICE
    restart: unless-stopped
  unbound:
    container_name: unbound
    image: mvance/unbound:latest
    volumes:
      - '/mnt/Shallow/AppData/Unbound/:/opt/unbound/etc/unbound'
    ports:
      - "5335:53/tcp"
      - "5335:53/udp"
    healthcheck:
      test: ["NONE"]
    restart: unless-stopped

I’m guessing @gm1925 felt the need to build a bridge network in order to enable PiHole’s DHCP functionality, as that’s the only thing I think I’d need a custom network for at this point. If you google around there are some decent articles on how to do this with a bridge or macvlan network, but my router happens to have a decent address-reservation system, so I don’t see the benefit of that headache at this point.

The saga continues! Now I’m getting this occasional error in PiHole’s diagnostics, and I lose internet when I take away the Google backup DNS IPs in PiHole’s UI settings:

127.0.0.1#5335 - TCP connection failed (Connection refused)

I read about the root.hints file in the official Unbound git, which with a package manager is automatically updated to keep an accurate list of root DNS servers so recursive resolution can begin. Weirdly, I don’t see this file in either the official Unbound GitHub repo or the mvance/unbound repo I’m using here… I wonder if root.key is what holds these root servers, and thus whether skipping it is breaking my DNS functionality; maybe when I thought it was working it was really falling through to Google the whole time? I’m going to wipe the slate clean and try again to see if some setting got misconfigured along the way… maybe even buy a more prosumer router.
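(For the record, from the Unbound docs as I understand them: if ‘root-hints:’ is not set, Unbound falls back to a built-in list of root servers, so a missing root.hints file by itself shouldn’t stop recursion; and root.key is the DNSSEC trust anchor, not the root-server list. Pointing at an explicit hints file would look like this, if anyone wants to rule it out:)

```text
server:
    # optional: explicit root hints; without this line, compiled-in
    # defaults are used
    root-hints: "/opt/unbound/etc/unbound/root.hints"
```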

I should circle back and close this out in case anyone else is having similar issues. Unfortunately, after a ton of changes, the “solution” is more of a “quit and start over from scratch” kind of answer… :confused:

I started with my frustration trying to give containers macvlan IPs. My simple router didn’t have many DNS settings available, so I shopped around and found the wonderful world of Ubiquiti. After getting a new UniFi Express 7 gateway up and running, I found it much easier to navigate my home network settings in their awesome Network software UI! Highly recommend. It still took a while and a lot of troubleshooting to get any containers to register on the UniFi “client list”. I set up a VLAN just for “internal services” containers, created a VLAN object with a matching VLAN number in my TrueNAS network settings, owned by my sole working physical interface, and spun up a docker network manually so I could separately attach each container to the same “external” network I had created.
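(A sketch of what that manual setup looks like in compose terms; the names, VLAN number, and addresses here are examples, not my exact values. The macvlan network is created once outside compose, then referenced as external in each stack:)

```yaml
# created once beforehand, e.g.:
#   docker network create -d macvlan \
#     --subnet=192.168.30.0/24 --gateway=192.168.30.1 \
#     -o parent=enp1s0.30 services_vlan
networks:
  services_vlan:
    external: true

services:
  adguard:
    image: adguard/adguardhome:latest
    networks:
      services_vlan:
        ipv4_address: 192.168.30.53
    restart: unless-stopped
```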

While getting the new router up and running, I also tried replacing problematic container apps to see if I, my server, or my new router just somehow wasn’t “gelling” with PiHole + Unbound on Portainer. I grew frustrated with Portainer’s network settings not working with my router to establish containers with unique MAC addresses; they would only sometimes show up, then quickly disconnect and appear as “ghost” clients to the router… so I moved over to Dockge and found it MUCH more intuitive, and I didn’t really miss the extra bells and whistles of Portainer’s more confusing interface. Containers on macvlan started showing up with solid connections, and along the way I fell in love with docker compose for the sheer visibility of everything in one place.

THEN I returned to PiHole, and in my searching discovered AdGuard Home. Once I saw that it handles secure DNS protocols without my having to tweak and optimize a vanilla install, I just went for it, and it worked much more smoothly for me!

Separately, I was growing frustrated with weird errors (unreported anywhere else) that my mvance/unbound instance kept throwing at me, e.g. a file-permission error reading the config file despite all my attempts at assigning the “correct” chown, etc… so I discovered klutchell/unbound, and it worked much, much more easily with a few corrections to the stock config file.

SO the answer is “change everything you are doing” LOL

Now I have AdGuard Home as my DNS filter and cache, visible to my router on a static macvlan IP address, using recursive klutchell/unbound as the sole upstream DNS service on its own macvlan IP address. Whoo! Good luck to anyone trying to do this.

Perhaps another super important note: I downgraded my TrueNAS to 25.04 Fangtooth out of fear of not being able to get NVIDIA drivers working properly for my Tesla P4. Plex and Tdarr both seem properly connected for transcoding, and Immich had no problem detecting it for machine-learning ops. I’ll probably skip over 25.10 entirely once they finally fold a decent driver for my 10GbE interface into a later release :stuck_out_tongue: