Helm charts for TrueNAS Scale

I am installing some apps (elasticsearch, redis, postgresql) through the applications catalog and realized postgresql is not among the available apps. Even mysql is not available.

Now I am curious whether I can somehow add my own postgresql chart to get it working (a single postgres instance; no primary/replica or HA setup).
Is there anything I should take care of (storage class, etc.)?

Thank you in advance

I think both are available from TrueCharts.

Most of those more basic apps people will deploy as a Docker container. There is a Launch Docker button in the UI which lets you run a simple mysql or postgres container. Many of the apps bundle those databases into their own chart, but each app gets its own instance.

Maybe I am missing something, but I have installed the TrueNAS catalog (GitHub: truenas/charts, TrueNAS SCALE Apps Catalogs & Charts).
I just refreshed it and looked for postgres or mysql without luck.

I have a bunch of k8s clusters and I want to reuse most of what I already have.
It would be great to install postgres and use persistent storage instead of running an ephemeral container.
By the way, if there is any way to use persistent storage with Docker containers in TrueNAS, I would like to learn how; you never know when it can be handy.

You need the TrueCharts repository.



I’d recommend installing a docker container for those specific apps, just to keep the complexity down:




You’re missing that I said TrueCharts, not TrueNAS.


Oh, so sorry Dan, you are right. I didn’t realize that.
I just added the repo, refreshed, and voilà! pg is available.

Which complexity do you mean?
I am not sure how it is supposed to be installed using docker-compose alone, because of the persistent volumes.
Using helm/kubernetes, the storage volumes are (at least I hope) provisioned from the available space in the dataset, in the form of another volume/share/whatever TrueNAS calls its storage units. That protects the database from losing its data if the container/service has an outage, or if the container is simply removed.

I am not sure how to achieve that with plain docker-compose in TrueNAS SCALE.
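By "provisioned" I mean something like the PersistentVolumeClaim below; a minimal sketch with made-up names and sizes, and it only applies the manifest if a cluster actually answers:

```shell
# Write a minimal PersistentVolumeClaim manifest (names/sizes made up).
cat > /tmp/pg-pvc.yaml <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pg-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
EOF

# Apply only if kubectl exists and a cluster is reachable.
if command -v kubectl >/dev/null 2>&1 && kubectl version >/dev/null 2>&1; then
  kubectl apply -f /tmp/pg-pvc.yaml
fi
```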

That’s what ix-volumes and host volumes provide for Apps / Docker… They are your persistent storage locations in case you remove the app, restart it, etc. They are stored on your ZFS pool by default and can be backed up or otherwise interfaced with.
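You can see that behaviour from a shell with a quick sketch like this (paths are placeholders; on SCALE the host path would be a dataset such as /mnt/tank/apps/demo):

```shell
# Demonstrate that a host-path bind mount outlives the container.
# DATA_DIR is a placeholder; on SCALE it would point at a dataset.
DATA_DIR="${DATA_DIR:-/tmp/pg-demo-data}"
mkdir -p "$DATA_DIR"

# Run only when a working Docker daemon is present.
if command -v docker >/dev/null 2>&1 && docker info >/dev/null 2>&1; then
  # Write a file from inside a throwaway container...
  docker run --rm -v "$DATA_DIR":/data busybox sh -c 'echo hello > /data/keepme'
  # ...the container is gone, but the file persists on the host dataset.
  cat "$DATA_DIR/keepme"
fi
```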


Oh. I am a bit confused.
I have tried to install the CloudNativePG postgres chart and it fails, so I am thinking of simplifying as you advised.
Would you mind sharing a docker-compose file for pg, or any persisted workload, so I can see how the persistence is handled?

Is it possible in TrueNAS to stop/delete a container and still keep the persisted data as an accessible volume? How? (I have seen that DOCKER_HOST=tcp:// is not working.)


Sure! I’ll see if I can explain how I would do this using the official docker postgres image. Hopefully this is helpful for others as well.

This is what it may look like when I click “Custom App” in TrueNAS, starting with naming the App and defining which repo to pull from:

Next we set our super secret password:


Lastly, we’d mount our persistent storage for the DB. I had created the flash/apps/pg/data dataset beforehand.


You may want to create a forward for the default ports as well:
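For reference, the same App roughly corresponds to this plain docker invocation; a sketch where the path, password, and image tag are placeholders, and the UI remains the supported way to do this on SCALE:

```shell
# Rough CLI equivalent of the Custom App form above.
# PGDATA_HOST is a placeholder; on SCALE it would be a dataset path
# such as /mnt/flash/apps/pg/data.
PGDATA_HOST="${PGDATA_HOST:-/tmp/pg-data}"
mkdir -p "$PGDATA_HOST"

# Run only when a working Docker daemon is present.
if command -v docker >/dev/null 2>&1 && docker info >/dev/null 2>&1; then
  docker run -d --name pg \
    -e POSTGRES_PASSWORD=changeme \
    -v "$PGDATA_HOST":/var/lib/postgresql/data \
    -p 5432:5432 \
    postgres:16
fi
```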


Lots of good information here as well:


Oh, just a simple hostPath… :slight_smile:
Naively, I expected something more sophisticated, haha.

So it is just a matter of creating the folder in the dataset and providing the path to the container.

In order to run a full application with all of its containers (nginx, redis, pg, for example), is there any chance to use docker-compose files?

Thanks for your replies Kris.

Keep it simple right? :slight_smile:

Docker Compose is very much on our road-map, and we’re currently working hard on it internally. Expect us to make some noise soon about when it will land in the product. In the meantime, Sandboxes/Jails can be used if you want to run a native Docker / Compose environment.
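Until then, a compose file like the following could be run inside such a sandbox; a sketch with illustrative service names, image tags, and a relative data path, brought up only if the compose plugin is available:

```shell
# Write a minimal compose file for an nginx + redis + postgres stack.
# Service names, image tags, and the ./pgdata path are illustrative.
cat > /tmp/docker-compose.yml <<'EOF'
services:
  web:
    image: nginx:1.27
    ports:
      - "8080:80"
  cache:
    image: redis:7
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: changeme
    volumes:
      - ./pgdata:/var/lib/postgresql/data
EOF

# Bring the stack up only if docker and the compose plugin are available.
if command -v docker >/dev/null 2>&1 && docker compose version >/dev/null 2>&1; then
  docker compose -f /tmp/docker-compose.yml up -d
fi
```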


Absolutely. KISS should always be the way, but keeping portability is a principle that is sometimes hard to follow.

I just ran the first container for fun and realized you ship daemon.json with bridge=none, and a “none” network with the null driver.
I guess that is on purpose, but then how are containers expected to reach the external world, whether for outgoing or incoming traffic?

I don’t want to mess with the config manually without knowing whether there is a good reason for it, or a specific way it should be done.

Not sure where you ran into that; by default all containers have full outbound internet access. They get inbound access via port forwarding, or you can click buttons in the UI to enable host-based networking. Maybe you are looking at an old docker config? It uses containerd in the background by default, but you should be using the UI to drive it.

Oh… I ran it from the shell: docker run -ti busybox:latest /bin/sh

But also, running ‘ctr c ls’ gives me an error:

ctr: failed to dial “/run/containerd/containerd.sock”: context deadline exceeded connection error: desc = “transport: error while dialing: dial unix:///run/containerd/containerd.sock: timeout”

Again, we don’t normally support running those things from the shell. If you want to administer it like you would a generic Linux box, you’ll want to spin up an nspawn container; just deploy a minimal Debian instance to mess about in.


I didn’t know about nspawn; it is a nice new trick to have in the toolbox. But I think I will go towards having a light VM and setting it up as a k8s node of my external cluster (the other way around from what I wanted to do, but that will be OK too).

Thank you, Kris, for all the information; it has been a fun afternoon messing with this side of TrueNAS.

Looking forward to your docker-compose manager :slight_smile:

