I’m pondering whether I can (I definitely shouldn’t, but that’s another discussion) run a complex compose app in Incus on Fangtooth.
Hear me out here … the compose app itself uses .env and COMPOSE_FILE, it has a bash wrapper for configuration, it uses two clients out of a list of 12 (mix and match), and it benefits from NVMe speeds. Oh, and it uses volumes, not bind mounts.
I am thinking the easiest way to do this may actually be to create an LXC via Incus, point its root at an NVMe pool, install docker-ce in there, and go to town.
The pool would need a low recordsize and possibly directio, but all those parameters are handled on the TrueNAS side; internally it’ll look like ext4 or xfs, I assume.
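For reference, the dataset tuning described here could be expressed like this from the TrueNAS shell (the GUI sets the same properties; pool and dataset names below are made up). Collected as a string so it can be reviewed before running:

```shell
# Sketch of the dataset tuning discussed above. "fast" and "incus-root"
# are hypothetical pool/dataset names. Echoed rather than executed so
# it can be inspected first; pipe to sh to actually run it.
# (A directio knob would be the OpenZFS "direct" property, where available.)
ZFS_CREATE="zfs create -o recordsize=4K -o compression=lz4 -o atime=off fast/incus-root"
echo "$ZFS_CREATE"
```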
Reasonable? Or insanity?
If compose on the TrueNAS command line works, this could also be handled by setting the Docker data-root to somewhere on that NVMe pool and then running the compose app from the command line.
You can run complex compose apps on EE directly, but volumes will go to the selected apps pool.
If you want you can overload the compose file to redirect the volumes to host mounts etc.
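That overload could look something like the following sketch: an override file appended to COMPOSE_FILE (it’s colon-separated) that redefines a named volume as a bind to a host path. The volume name `appdata` and the host path are placeholders, not anything from the actual app.

```yaml
# docker-compose.override.yml -- hypothetical; "appdata" and the host
# path stand in for whatever the app's compose file actually defines.
volumes:
  appdata:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /mnt/tank/appdata   # pre-existing host directory
```

This keeps the original compose file untouched while still landing the data where you want it.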
A clean way is to use Docker in Incus; in fact, you can delegate a ZFS dataset to the Incus container. This is not directly supported by the GUI, but would perhaps work well.
With Incus you can set the virtualization pool separately from the apps pool, but if you choose a secondary storage pool, it will not persist across TrueNAS upgrades.
Or not. I wonder whether what I did is a terrible idea.
Create a Debian bookworm Incus container instance with a macvlan NIC. Also add a Disk from /mnt/&lt;pool&gt;/&lt;dataset&gt; to /mnt/docker in the container; this is a fresh dataset just for this container. Run chown 2147000001:2147000001 /mnt/&lt;pool&gt;/&lt;dataset&gt; on the TrueNAS CLI to map it to root inside the LXC.
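The steps above, collected as a dry-run sketch. The instance name, host NIC, and dataset path are assumptions; the commands are echoed for review rather than executed:

```shell
#!/bin/sh
# Dry-run sketch of the container setup described above. "dockerbox",
# "eno1" and /mnt/fast/docker are hypothetical names.
CMD_LAUNCH="incus launch images:debian/12 dockerbox"
CMD_NIC="incus config device add dockerbox eth0 nic nictype=macvlan parent=eno1"
CMD_DISK="incus config device add dockerbox dockerdata disk source=/mnt/fast/docker path=/mnt/docker"
# Run on the TrueNAS host: maps the dataset to root inside the container.
CMD_CHOWN="chown 2147000001:2147000001 /mnt/fast/docker"
for c in "$CMD_LAUNCH" "$CMD_NIC" "$CMD_DISK" "$CMD_CHOWN"; do
  echo "$c"   # replace echo with eval to actually execute
done
```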
Once inside the LXC, mkdir -p /etc/docker and create /etc/docker/daemon.json with:
{
"data-root": "/mnt/docker"
}
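Put together as a small rehearsal script: ROOT defaults to a temp directory so it can be tried anywhere; inside the container you would run it as root with ROOT set empty so it writes the real /etc/docker/daemon.json.

```shell
# Rehearsal of the daemon.json step above. Defaults to a throwaway temp
# dir; run with ROOT= (empty) as root inside the container for real.
ROOT="${ROOT-$(mktemp -d)}"
mkdir -p "$ROOT/etc/docker" "$ROOT/mnt/docker"
cat > "$ROOT/etc/docker/daemon.json" <<'EOF'
{
  "data-root": "/mnt/docker"
}
EOF
```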
And then install Docker-CE. Everything just works; docker volumes get created. This is what docker info claims about the storage driver:
incus config set &lt;instance&gt; security.nesting=true
incus config set &lt;instance&gt; security.syscalls.intercept.mknod=true
incus config set &lt;instance&gt; security.syscalls.intercept.setxattr=true
incus config set &lt;instance&gt; zfs.delegate=true
I didn’t do any of that, and Docker works, volumes get created.
Did I not-so-subtly shoot myself in the foot by using a Disk as the data root for Docker, though?
So far so good. My complex compose works, the wrapper script around it works, its install routine for Docker works.
Let’s call that a full success as far as deployment method goes, but as expected ZFS is a bad fit for my app: it needs to read and write, randomly, every 12 s. Mean storage latency is 120 ms, which is acceptable; max is 900 ms, which is not. Spikes to 500 and 600 ms are common; spikes to 900 ms happen every 30 minutes to 2 hours.
This is on a P3600 NVMe, 4K recordsize, lz4 compression, atime off, sync=standard.
Tried it with sync disabled as well, which made no difference.
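For anyone wanting to poke at the same workload, a fio run roughly approximating small random reads/writes with fsync’d writes against the Docker data root might look like this. The sizes and flags are guesses, not the app’s actual I/O profile, so it’s echoed for review rather than executed:

```shell
# Hypothetical fio invocation approximating the pattern described above
# (fio must be installed; directory and sizes are assumptions).
FIO_CMD="fio --name=applike --directory=/mnt/docker --rw=randrw --bs=4k"
FIO_CMD="$FIO_CMD --size=256m --runtime=60 --time_based --ioengine=psync"
FIO_CMD="$FIO_CMD --fsync=1 --lat_percentiles=1"
echo "$FIO_CMD"   # pipe to sh to actually run
```

The --lat_percentiles output is what would show whether the 500–900 ms spikes reproduce under a synthetic load.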
I could start over with recordsize 8k and then 16k to get more data - but I think I’ll leave it be for now.
Hanging loose. Let’s see whether directio makes it into 25.10, and how it behaves then.