N00b questions about deploying (private) custom apps

I’m trying to run a BIND server as a custom app, to serve my vanity domains. I’ve got a Docker image on my dev box that works… how do I get it onto the TrueNAS box?

First main question: Is there an expected path to do this? It’s not a package that should be public, just because, and I’d rather not involve any cloud services in what should be a local workflow.

Specifically, when creating a custom app, one of the requirements is the URL to fetch it from. I don’t have a URL, I have an image (I can export) on my dev box. What’s the expected path here?

I tried installing “distribution” as an app, and while I can push my image to that, I can’t seem to get the NAS to pull it back down.

  • If I don’t use Host network, then it doesn’t bind to 127.0.0.1, and the local resolver appears to do weird things where it prefers localhost over what the DNS server says.
  • If I do, then it either fails because of SSL errors (bad cert) or fails because there’s no SSL at all, depending on whether I have the distribution app use a cert or not.

On my dev box, I can configure podman to allow the repository to be insecure. Is there a way to do that on the NAS? Alternatively, is there a preferred/suggested way of getting valid certs? A Let’s Encrypt integration?
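For reference, on the dev box that’s just an entry in /etc/containers/registries.conf, roughly like this (the host/port here is my registry’s):

# /etc/containers/registries.conf (or a drop-in under registries.conf.d/)
[[registry]]
location = "nas.internal.oppositelock.org:30095"
insecure = true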

If you have a registry of your own, which is easy to set up with Docker, then on Scale you pull the image from your repo. For example, I have a private registry, so I go to Apps → Settings (Manage Container Images) → Pull Image and enter the repo info. If it’s not password protected, I’m not sure offhand whether you have to use Pull Image, but you can. There may be an image URL format for an unprotected registry that works without Pull Image.

But I don’t have a registry of my own… my only “docker server” is the NAS. The “Distribution” application appears to be a registry, so that’s what I’m trying to use here. But then I run into the cert/SSL issues above.

Thanks for the pointer to that settings location. That at least told me where the password would go, if I could get it to allow insecure repositories.

My image names look like this: url-or-ip-of-registry-machine:5000/emby/fatula

I don’t know the first thing about the distribution app, sorry. But where mine has “emby/fatula”, yours should be your image name in whatever registry you have. You do need a registry; whether or not “distribution” is it, no idea.

If you can do docker commands in your distribution app, try docker image list to see the names.

For me, I have docker set up on my desktop, and I build my own images there and host the registry there. My “desktop” is actually a Raspberry Pi 5; it builds images for the x86_64 architecture with ease. I don’t deploy plain Docker Hub images, as I make changes to just about every one of those containers.

Hah, I just read Distribution is the new name for docker registry, didn’t know that. But I still can’t help with that app as I do not use it.

In my docker compose file for my registry, I have added:
REGISTRY_HTTP_TLS_CERTIFICATE: "/certs/fatula-wildcard.crt"
REGISTRY_HTTP_TLS_KEY: "/certs/fatula-wildcard.key"

to set up my cert, if that helps at all. My cert is a wildcard cert I have. Whether or not that has changed, no idea. And I suppose you could use Let’s Encrypt or any other cert you want. I get my certs in an automated manner myself, using the Pi as well (from Let’s Encrypt), and have a job to upload them to the NAS so all apps that need them can use them.
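For what it’s worth, here’s a minimal compose sketch around those variables (the image tag, port, and host paths are examples; adjust to your setup):

services:
  registry:
    image: registry:2                  # the "distribution" registry image
    ports:
      - "5000:5000"
    environment:
      REGISTRY_HTTP_TLS_CERTIFICATE: /certs/fatula-wildcard.crt
      REGISTRY_HTTP_TLS_KEY: /certs/fatula-wildcard.key
    volumes:
      - ./certs:/certs:ro              # cert + key, mounted read-only
      - ./data:/var/lib/registry       # image storage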

Hope at least something there might help. Maybe someone else has used that app.

If you use docker compose you can have the compose file build your image just in time from a docker file in the same directory.

If you use sandboxes + jailmaker you can use docker compose.

I’ve never bothered pushing an image anywhere and instead just use the above.

Fairly certain docker compose up builds the image if it isn’t already present (and --build forces a rebuild).
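A minimal sketch of what that compose file could look like (the service, image name, and ports are made up for a DNS container):

services:
  bind:
    build: .                         # built from ./Dockerfile on first "docker compose up" (--build to force a rebuild)
    image: local/bind-custom:latest
    ports:
      - "53:53/udp"
      - "53:53/tcp"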

This is where the N00b-ness comes in. (I’m a long-time unix/linux admin type, but not experienced with containers/docker/k8s/etc., nor with TrueNAS.)

Yes, on my dev VM, I can run the image. Yes, the directories that contain the Dockerfile are on the NAS. Yes, I suspect I could SSH into the NAS, build the image, and likely stumble through deploying it. But then I’d assume it wouldn’t be manageable via the UI, and wouldn’t show up there, right?

I mean, I could also just run the app “natively” on a VM on the device. That seems a waste, but it’s quite possible.

Temporary hackaround (-k skips TLS certificate verification):

# k3s ctr image pull -k nas.internal.oppositelock.org:30095/domains:latest

and then configure the “Custom App” to never actually refresh the image. This is a terrible hackaround.
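A variant of the same hack that skips the registry entirely would be to export the image on the dev box and import it straight into k3s’s containerd over SSH; just a sketch, with made-up paths:

# on the dev box (podman save works the same way)
docker save -o domains.tar domains:latest
scp domains.tar root@nas.internal.oppositelock.org:/mnt/tank/images/

# on the NAS
k3s ctr image import /mnt/tank/images/domains.tar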

I feel like there’s a suggestion somewhere here for better self-signed-cert management on the device. Like, auto-generate self-signed certs for the right domain, and configure the device to trust them locally, so that using SSL to itself will always work. Where would I put that suggestion?

Or a Sandbox, which is just a form of process isolation.

You can indeed do that, but I like having a library of versions available in case of trouble, so I do build the images. Also, in some cases, for certain apps, I test them first on a separate system; using an image I’ve already tested means I can’t mess it up. Many ways to skin a cat, I guess.

For those less complex apps where I don’t have to install an extra 50 packages, etc. (the way Nextcloud does), I’ll likely do that in the compose file on Eel. But for the more complicated apps, I’ll still likely build them.


You can create a daemon.json file on your dev box and afterwards restart (or, preferably, reload) docker; the contents would look like:

{ "insecure-registries":["nas.internal.oppositelock.org::30095"] }

Where the file goes depends on which client you are running (or trying to run) your docker push commands from. On Linux, it goes in /etc/docker/daemon.json.

This would allow you to push without SSL. Working for me on Scale.
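On a systemd-based Linux dev box, picking up the change is typically just:

sudo systemctl restart docker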

So really this is a docker question, not a Scale question, since the problem is on the client, your dev box. But we’ll likely see a lot of these on Electric Eel.

I can already push well enough to the repository (the “distribution” app, also running on the NAS), but then setting up a custom application doesn’t PULL from it. So … yes, the issue I’m hitting is on the NAS side.

I guess I totally misunderstood your issue then. For some reason I read it as you can’t push, but re-reading says you can, and clearly at that! Sorry about that.

I don’t believe there is a way to access a non-SSL repo in the custom app. You’ll need a valid SSL certificate set up in the distribution app. This can be done via Let’s Encrypt and other providers. To make it more automatic and avoid touching the distribution setup, you should first deploy caddy, and it will self-manage all of that.

I am in the process of doing exactly that, if you can bear with me. The next version of Scale will be using docker, so I want to do as it does; that means I need a proxy, and caddy is automagic, so once I have it working (i.e., once I have the time) I’ll try to find this thread again. Your request will go to caddy via SSL, and caddy will forward it on to your distribution app via plain HTTP.

I do have it working with caddy, but I need to clean up my config file and then I’ll post details. So, Scale can pull from the distribution registry located on Scale itself, running as a custom app. No SSL in the distribution app; caddy handles SSL automatically.

So, with one limitation, which will almost certainly go away in Electric Eel since it’s moving to docker and that’s what I will use (not VMs or jails)… Here is my Dockerfile to build caddy:

FROM caddy:2.8-builder AS builder

RUN xcaddy build \
    --with github.com/caddy-dns/porkbun

FROM caddy:2.8

ENV TZ=America/Chicago

EXPOSE 80 443

COPY --from=builder /usr/bin/caddy /usr/bin/caddy
COPY Caddyfile /etc/caddy/Caddyfile
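To build it, and export a copy for the one-time manual transfer mentioned below, something like this (the tag is made up):

# in the directory with the Dockerfile and Caddyfile
docker build -t my-caddy:2.8 .
docker save -o my-caddy.tar my-caddy:2.8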

Here is my Caddyfile, which gets copied in by the Dockerfile:

{
	email redacted
}

import /hosts/*

Here is the hosts file (/hosts directory mapped via hostpath) for the distribution registry:

registry.mydomain.com {
	encode zstd gzip
	reverse_proxy /v2/* registry-ix-chart.ix-registry.svc.cluster.local:5000

	header {
		Strict-Transport-Security max-age=31536000;
	}

	tls {
		dns porkbun {
			api_key redacted
			api_secret_key redacted
		}
	}
}

If you want to add more hosts with different DNS names (I’ll have around 15 of them), you simply add a file for each one to the /hosts directory in the container. All of your hosts behind caddy will get their own certs automatically, renewed automatically, with nothing for you to do. The limitation is that caddy itself cannot pull its image from the registry; you’ll have to copy that one over manually once and just leave it alone, as I don’t see much need for updates to caddy. Well, it can pull from the registry, but if you ever mess something up with caddy and need a fresh image (say caddy isn’t running), you have to manually move one in. So I just plan to leave it alone, other than adding hosts files (that’s what the import is for).

As far as the custom app config goes, you need hostpaths for:

/data (where Let’s Encrypt data, such as your certificates, is stored)
/config (caddy config stuff you do not mess with)
/hosts (where you store your reverse proxy configs, one per app)

I use a bridge interface for the host interface field, set a static IP and a 0.0.0.0/0 route to my router, change the DNS policy to “kubernetes first” and specify my router as the nameserver, and that’s it. This, for example, checks what’s in my registry:

curl -X GET https://registry.mydomain.com/v2/_catalog

DNS for each domain points to caddy’s static IP. In the “hosts” file for distribution, you can see I have a kubernetes DNS name, registry-ix-chart.ix-registry.svc.cluster.local, which I obtain from heavyscript. There’s a kubernetes way to get them too; I don’t recall it offhand, as I simply use heavyscript to obtain them all (see the sketch below). Note that the distribution registry MUST have a port forward, or else its name doesn’t show up in the internal DNS list. None of your reverse-proxied apps need a static IP; only caddy does, and the port is irrelevant as caddy translates those.
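If you do want the generic kubernetes route instead of heavyscript, a sketch (the example name is the registry one from above):

sudo k3s kubectl get svc -A
# internal name pattern: <service>.<namespace>.svc.cluster.local
# e.g. registry-ix-chart.ix-registry.svc.cluster.local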

Hope that helps and finally answers your question.

I’m on the road right now, but I’ll certainly give that a try as soon as I get home (… In a couple of weeks? Ugh)
