Nextcloud + Caddy + Pihole on Jailmaker / Docker Compose

I recently made the move to Jailmaker on Truenas Dragonfish and for the most part everything is going swimmingly!

The migration (for tt-rss at least) wasn’t too painful, and I am definitely enjoying how capable and user friendly most of the setup with jailmaker and docker is. Adding caddy-docker-proxy was surprisingly straightforward, and as an added bonus it let me extricate myself from cloudflare’s clammy clutches.

All alliteration aside, getting split-DNS working with nextcloud has been a pain, so I’m documenting what I’ve tried, with the hope that someone knows the right way to do things.

Let's start by finding out what it takes to get the existing tt-rss docker instance working with caddy. It was very nearly as easy as following the caddy-docker-proxy GitHub README verbatim. I began by getting a shell on the docker host (./jlmkr.py shell docker) and creating the caddy network with docker network create caddy, just as the readme describes.
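For reference, those two commands are just:

./jlmkr.py shell docker
docker network create caddy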

Then I copied the provided example compose file and made a few changes:

# Removed the version tag, because docker says it's deprecated now.
services:
  caddy:
    image: lucaslorentz/caddy-docker-proxy:2.9.1-alpine
    ports:
      - 80:80
      - 443:443
    environment:
      - CADDY_INGRESS_NETWORKS=caddy
      - CADDY_DOCKER_CADDYFILE_PATH=/etc/caddy/Caddyfile
    networks:
      - caddy
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - caddy_data:/data
      - /mnt/data/caddy/caddyfile/Caddyfile:/etc/caddy/Caddyfile
    restart: unless-stopped
networks:
  caddy:
    external: true
volumes:
  caddy_data: {}

It’s mostly the same. The big differences are pinning the image to a specific release instead of riding the bleeding edge, and mounting a Caddyfile so I can write manual server blocks for services that aren’t docker containers. caddy-docker-proxy will automatically merge that base Caddyfile with the label-based rules from your other containers.

With that compose file copied into dockge, hit “Deploy” and let caddy boot up. Once it’s running, edit the tt-rss compose.yml in dockge:

...

  web-nginx:
    image: cthulhoo/ttrss-web-nginx:latest
    restart: unless-stopped
    env_file:
      - .env
    ports:
      - 8280:80
      #- ${HTTP_PORT}:80
    volumes:
      - app:/var/www/html:ro
    depends_on:
      - app
    networks:
      - caddy
      - default
    labels:
      caddy: rss.my.domain
      caddy.reverse_proxy: "{{upstreams 80}}"
volumes:
  db: null
  app: null
  #  backups: null
  pgadmin-data: null
networks:
  caddy:
    external: true

N.B. You also need a CNAME or A record at your DNS provider that points to your public IP, and port 443 forwarded to the IP of the docker host.
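For example, an A record might look like this (203.0.113.10 is a placeholder from the documentation range, substitute your own public IP):

rss.my.domain.    300    IN    A    203.0.113.10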

Restart the container and, like magic, caddy will use those compose labels to identify the service to proxy and the domain to serve it from, then grab TLS certificates and start using them.

That was easy!

Adding authentication to proxied services

Authelia never worked quite right when I tried setting it up with the previous batch of apps, so for now I’m just going to get caddy’s basic_auth up and running.

First, we need a hash of the desired password. I’m going to use the caddy command inside the container, but any bcrypt generator will do. From the docker host, I’m going to generate a hash for “password”:

N.B. The caddy container doesn’t have bash, use sh instead.
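Assuming the container is named caddy-caddy-1 (Dockge’s default for a stack called caddy, as you’ll see later), it’s a one-liner from the docker host, no interactive shell required:

root@docker:~# docker exec -it caddy-caddy-1 caddy hash-password --plaintext password
$2a$14$sN3oUy8P2gGfuFFoVM.OMOF1x0dxi53kDEsovh1XrN9QGb/AMXbF2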

Copy the hash, and go to edit the service you want to put behind auth. Here’s an example with a throwaway container and a user of “user”:

N.B. The caddy-docker-proxy documentation has an important note: Double up the dollar signs in the hash, or the string won’t work properly in your compose file.
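Here’s a sketch using traefik/whoami as the throwaway container (the image, hostname, and hash are just examples, swap in your own):

services:
  whoami:
    image: traefik/whoami
    networks:
      - caddy
    labels:
      caddy: whoami.my.domain
      caddy.reverse_proxy: "{{upstreams 80}}"
      # Dollar signs doubled so compose doesn't try to interpolate them
      caddy.basic_auth.user: $$2a$$14$$sN3oUy8P2gGfuFFoVM.OMOF1x0dxi53kDEsovh1XrN9QGb/AMXbF2
networks:
  caddy:
    external: true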

Using snippets

If you don’t want to maintain the same hash across multiple services, are OK with sharing one password between them, and have that externally mounted Caddyfile set up, you can create a snippet for the basic_auth directive.

In the mounted Caddyfile:

root@bns-citadel:/mnt/application/docker/data/caddy/caddyfile# cat Caddyfile 
(user_auth) {
        basic_auth {
                # https://caddy.community/t/using-caddyfiles-basic-auth-with-environment-variables-and-docker/19918/2
                user $2a$14$sN3oUy8P2gGfuFFoVM.OMOF1x0dxi53kDEsovh1XrN9QGb/AMXbF2
        }
}

And then back in the compose file for the container, you just import the snippet by label.
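Something like this, with domain.name standing in for the real hostname; it lines up with the generated output shown in the next section:

    labels:
      caddy: domain.name
      caddy.import: user_auth
      caddy.reverse_proxy: "{{upstreams 80}}"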

Viewing the generated Caddyfile

Apropos of nothing, if you want to see what caddy-docker-proxy generates from all those compose labels, you can! The file is stored at /config/caddy/Caddyfile.autosave in the container.

root@docker:~# docker exec -it caddy-caddy-1 cat /config/caddy/Caddyfile.autosave
(user_auth) {
        basic_auth {
                user $2a$14$sN3oUy8P2gGfuFFoVM.OMOF1x0dxi53kDEsovh1XrN9QGb/AMXbF2
        }
}
domain.name {
        import user_auth
        reverse_proxy 172.28.0.9:80
}

Setting up Pihole

Now that apps are proxied by caddy, I want to be able to access them via the same URLs from inside my LAN. Unfortunately that doesn’t “just work”, and my ISP’s router doesn’t have an option for NAT loopback. I really wasn’t sure how to proceed here, since searching online for caddy-specific recommendations didn’t yield much insight. I settled on Pihole because it looked user friendly and has a lot of setup documentation.

After creating the datasets in Truenas, I made a new container in dockge, and added this compose file:

# More info at https://github.com/pi-hole/docker-pi-hole/ and https://docs.pi-hole.net/
services:
  pihole:
    container_name: pihole
    image: pihole/pihole:latest
    ports:
      - 53:53/tcp
      - 53:53/udp
      - 8060:80/tcp
    environment:
      TZ: America/Chicago
      WEBPASSWORD: mypassword
    # Volumes store your data between container upgrades
    volumes:
      - /mnt/data/pi-hole/etc:/etc/pihole
      - /mnt/data/pi-hole/dnsmasq-d:/etc/dnsmasq.d
    networks:
      caddy:
        ipv4_address: 172.28.0.4
      default: null
    labels:
      caddy: pihole.address
      caddy.reverse_proxy: "{{upstreams 80}}"
      caddy.redir: / /admin/
      caddy.import: user_auth # Just an added layer of auth
    restart: unless-stopped
networks:
  caddy:
    external: true

Along with merging in caddy support so that I can tweak things while away from home, I also added a redirect from / to /admin/, because if you browse to the Pihole root it looks as though things are broken!

I left the default network in too, though I’m not sure why. I also pinned the container’s IP on the caddy network, for reasons that will crop up when installing nextcloud.

From the Pihole admin page, go to Local DNS → DNS Records and add your root domain names pointing at the docker host’s IP. Then navigate to Local DNS → CNAME Records and point the subdomains at their root domain.
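Under the hood (on Pihole v5, at least) those UI entries land in the two config files mounted earlier, which is handy if you’d rather edit them by hand. Assuming a docker host at 192.168.1.10 (substitute your own) and the hostnames from earlier:

# /etc/pihole/custom.list  (Local DNS → DNS Records)
192.168.1.10 my.domain
192.168.1.10 pihole.address

# /etc/dnsmasq.d/05-pihole-custom-cname.conf  (Local DNS → CNAME Records)
cname=rss.my.domain,my.domain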

Don’t forget to update your router’s DNS settings to point at the docker host’s IP. If you want a safety net, add a regular public DNS server as the secondary.