Linux Jails (containers/VMs) with Incus

Well, for one, backups will be easier. In this PoC, I created a custom dataset under /mnt/pool called jails, and you can set up rolling snapshots of that dataset. You can set up custom networks pretty easily in Incus as well. Each machine created is a new dataset under the root dataset /mnt/pool/jails, similar to how jailmaker handles its datasets. Incus is tightly integrated with the ZFS driver, and snapshots are very simple in Incus as well. Systems could also be quickly cloned from images for near-instant clones.
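For reference, a rough sketch of that setup (dataset and image names are placeholders, not the exact PoC commands):

# create the dataset, then point an Incus ZFS storage pool at it
zfs create pool/jails
incus storage create jails zfs source=pool/jails
# instances launched onto that pool each get their own child dataset
incus launch images:debian/12 docker1 --storage jails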

As far as networking goes, it supports network zones (DNS, DHCP), ACLs, BGP (if you wanted it), easier-to-use networks and bridges for VLANs, etc. I know you can do these in Docker as well, but you’d be looking at multiple layers versus just using LXC with OCI containers running in LXC. It supports Open vSwitch as well if you want to get fancy.
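A rough sketch of that side of things (all names made up, and only scratching the surface):

# a managed bridge with NAT, plus a simple network ACL
incus network create demo-br ipv4.address=10.10.10.1/24 ipv4.nat=true
incus network acl create web-only
incus network acl rule add web-only ingress action=allow protocol=tcp destination_port=443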

Flexibility would come from spinning up machines more easily, for instance with cloud configs if you needed a cluster. But flexibility extends to most parts of Incus. Building Incus clusters is pretty easy as well, although I’m not sure how TNS will leverage that in the WebUI. I’m sure that’s part of why they are moving over to Incus; they lost that a while ago, and this will bring some of it back.

I’m still learning about Incus and its features, but removing one abstraction layer is a plus. So far, in my limited experience, it’s far less obtuse than Docker. Its CLI is very easy to work with compared to other solutions.

TNS Docker support at this point is very limited and more difficult to deal with when it comes to running standalone containers, custom NICs, IPs, etc. Sure, you can run Dockge, etc. It’s mostly preference at this point… but Incus will bring some power to the people (that want it)…


Yeah, I saw that. As long as it’s not compatibility-breaking, they backport those into LTS.

From the link, these are Incus snapshots. How do these relate to ZFS snapshots?

Good point—but not a particularly high bar to clear, I’d say.
Though I understand that LXC can run Docker containers but not docker-compose, which appears to be the favourite interface for everybody and his dog. So there will still be two containerisation systems in Fangtooth: docker-compose for application containers, and Incus for system containers and jail-like behaviour.

Snapshots should leverage the FS driver, in this case ZFS, so snapshots could be scheduled in Incus and managed there.
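As a sketch of what that could look like (instance name from earlier in the thread, values arbitrary):

# daily snapshots, kept for two weeks, taken as ZFS snapshots under the hood
incus config set docker1 snapshots.schedule "@daily"
incus config set docker1 snapshots.expiry "2w"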

Yeah, cloud-config should be able to replace docker-compose in theory. We would just create custom cloud-config.yaml files that set up the OCI-compliant LXC containers, similar to what I did in the OP of this thread.
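As a rough illustration (not the actual docker-init.yaml from the OP; package names and keys are assumptions), a profile like this could do the Docker bootstrap via cloud-init:

# create the profile, then feed it YAML on stdin
incus profile create docker
cat <<'EOF' | incus profile edit docker
config:
  security.nesting: "true"
  cloud-init.user-data: |
    #cloud-config
    packages:
      - docker.io
    runcmd:
      - systemctl enable --now docker
description: Docker Profile
EOF

Note that cloud-init user-data only kicks in with the cloud image variants, like the images:debian/bookworm/cloud image used later in this thread.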

These cloud configs can cover various areas as well, such as storage, network, gold images, etc.

I guess a repo with these configs could be hosted on GH and be available to grab and spin up containers for users’ favorite apps…


When using ZFS for Incus Storage, they are ZFS snapshots.
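An easy way to see the relationship (names assumed from earlier in the thread):

# take a snapshot through Incus...
incus snapshot create docker1 before-upgrade
# ...and it shows up as a plain ZFS snapshot on the backing dataset
zfs list -t snapshot -r pool/jails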


Fiddling with VMs a little today…

incus ls                                                      
+---------+---------+------------------------------+------+-----------------+-----------+
|  NAME   |  STATE  |             IPV4             | IPV6 |      TYPE       | SNAPSHOTS |
+---------+---------+------------------------------+------+-----------------+-----------+
| docker1 | RUNNING | 192.168.0.31 (eth0)          |      | CONTAINER       | 0         |
|         |         | 172.18.0.1 (br-ffaf5468f2be) |      |                 |           |
|         |         | 172.17.0.1 (docker0)         |      |                 |           |
+---------+---------+------------------------------+------+-----------------+-----------+
| test    | RUNNING | 192.168.0.230 (enp5s0)       |      | VIRTUAL-MACHINE | 0         |
+---------+---------+------------------------------+------+-----------------+-----------+

VM:

incus exec test -- bash
root@test:~# uname -r
6.1.0-27-amd64

Container:

incus exec docker1 -- bash
root@docker1:~# uname -r
6.12.0-production+truenas

A VM can be made by simply adding the --vm flag to the incus launch command. So, something like:

incus launch images:debian/bookworm/cloud -p default test --vm

I’ve been running into an issue when upgrading between nightlies: the custom pool goes missing. I’ll need to file a bug report against that. But to recover it and get back up and running, you need to do the following.

Import the profile again:

incus profile create docker < docker-init.yaml

Then recover:

incus admin recover                           
This server currently has the following storage pools:
 - default (backend="zfs", source="pool/.ix-virt")
Would you like to recover another storage pool? (yes/no) [default=no]: yes
Name of the storage pool: jails
Name of the storage backend (zfs, dir): zfs
Source of the storage pool (block device, volume group, dataset, path, ... as applicable): pool/jails
Additional storage pool configuration property (KEY=VALUE, empty when done): 
Would you like to recover another storage pool? (yes/no) [default=no]: 
The recovery process will be scanning the following storage pools:
 - EXISTING: "default" (backend="zfs", source="pool/.ix-virt")
 - NEW: "jails" (backend="zfs", source="pool/jails")
Would you like to continue with scanning for lost volumes? (yes/no) [default=yes]: 
Scanning for unknown volumes...
The following unknown storage pools have been found:
 - Storage pool "jails" of type "zfs"
The following unknown volumes have been found:
 - Container "docker1" on pool "jails" in project "default" (includes 0 snapshots)
Would you like those to be recovered? (yes/no) [default=no]: yes
Starting recovery...

Once that is completed, things will be back to normal:

incus storage ls   
+---------+--------+-------------+---------+---------+
|  NAME   | DRIVER | DESCRIPTION | USED BY |  STATE  |
+---------+--------+-------------+---------+---------+
| default | zfs    |             | 2       | CREATED |
+---------+--------+-------------+---------+---------+
| jails   | zfs    |             | 2       | CREATED |
+---------+--------+-------------+---------+---------+

incus profile ls
+---------+-------------------------+---------+
|  NAME   |       DESCRIPTION       | USED BY |
+---------+-------------------------+---------+
| default | Default TrueNAS profile | 1       |
+---------+-------------------------+---------+
| docker  | Docker Profile          | 1       |
+---------+-------------------------+---------+

incus ls
+---------+---------+------------------------------+------+-----------------+-----------+
|  NAME   |  STATE  |             IPV4             | IPV6 |      TYPE       | SNAPSHOTS |
+---------+---------+------------------------------+------+-----------------+-----------+
| docker1 | RUNNING | 192.168.0.31 (eth0)          |      | CONTAINER       | 0         |
|         |         | 172.18.0.1 (br-ffaf5468f2be) |      |                 |           |
|         |         | 172.17.0.1 (docker0)         |      |                 |           |
+---------+---------+------------------------------+------+-----------------+-----------+
| test    | RUNNING | 192.168.0.230 (enp5s0)       |      | VIRTUAL-MACHINE | 0         |
+---------+---------+------------------------------+------+-----------------+-----------+

Hello,

Great to read that there is a PoC in progress to integrate Incus as a first-class citizen.

I’ve been running Incus on TrueNAS for about a year now and have run into a few pitfalls during that time. One of them is, of course, the upgrade issue. After a few trials I settled on creating a dataset for Incus itself (for all the stuff located at /var/lib/incus) and pointing an ENV variable at it in /etc/default/incus:

INCUS_DIR="/mnt/<pool>/<dataset>"

With this I only need to reinstall/upgrade TrueNAS, modify /etc/default/incus, start the process, and I’m up and running again.
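For anyone wanting to try it, the whole thing is roughly this (pool/dataset names made up, and untested on your setup):

zfs create pool/incus-data
echo 'INCUS_DIR="/mnt/pool/incus-data"' >> /etc/default/incus
# assumed Debian unit names; the TrueNAS middleware may manage the service differently
systemctl restart incus.socket incus.service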

Placing the Incus data dir on a separate dataset also has the benefit of letting you use snapshots to recover faster, transfer it to another server, etc.

It is actually a pretty cool combo to have Incus combined with TrueNAS. Getting full support to administer it through the UI will be a huge benefit.


Just thought I’d throw this here, as the Incus “Containers” stuff is looking very cool.

When you choose your “pool” for Incus, it creates a hidden dataset called <pool>/.ix-virt, which is used as the default Incus storage pool.

I haven’t had an issue with Incus coming back after a system update on the Fangtooth nightlies… but it’s very early days :)


When you use the default pool (the dataset you listed), it comes back. But the custom dataset doesn’t.

You can inspect it with (off the top of my head, AFK right now):

incus storage show default

It kind of depends on how Incus is installed and bootstrapped. As long as all the install/bootstrap values are the same, it will always come back.
I think the issue is more that any custom changes applied to the defaults will be gone: things like extra storage, networks, profiles, etc.
Give my suggestion a try: place /var/lib/incus on a separate dataset and see if that keeps all configurations.

One wonders if perhaps the Incus database shouldn’t actually be on a pool, rather than the boot device…

and yes, I know that’s your suggestion, but what I mean is as part of the integration into TrueNAS.


Based on my experience, if you update TrueNAS it creates a completely new dataset structure for each new version you install:

boot-pool/ROOT/24.04.0                                                                                3.19G  5.56G   164M  legacy
...
boot-pool/ROOT/24.04.0/var                                                                             339M  5.56G   331M  /var
boot-pool/ROOT/24.04.0/var/ca-certificates                                                             108K  5.56G   108K  /var/local/ca-certificates
boot-pool/ROOT/24.04.0/var/log                                                                        2.32M  5.56G  63.0M  /var/log
boot-pool/ROOT/24.04.2                                                                                6.04G  5.56G   166M  legacy
...
boot-pool/ROOT/24.04.2/var                                                                             519M  5.56G   377M  /var
boot-pool/ROOT/24.04.2/var/ca-certificates                                                             108K  5.56G   108K  /var/local/ca-certificates
boot-pool/ROOT/24.04.2/var/log                                                                         136M  5.56G  76.7M  /var/log

This is how it looks on one of my installations. This means everything you place on those datasets is only available in the version you are running, and the new version is more or less empty. That’s why there is a warning to perform a config backup before any upgrade, which you can restore afterwards.
What you could do is include Incus in the configuration backup, restore it after the upgrade, and everything is back. However, I’m not an expert in what is required to back up from Incus for a successful restore later. There are a lot of moving parts under /var/lib/incus…
Hence I’ve decided to use my own dataset for /var/lib/incus and all is good. Probably something to ask on the Incus forum, if there is a recommendation?

Nice, OCI support is there now…

incus version
Client version: 6.0.2
Server version: 6.0.2

EDIT: maybe not… maybe Debian didn’t backport the actual Incus changes in 6.0.2?

incus remote add docker https://docker.io --protocol=oci
Error: Invalid protocol: oci


Some changes were backported to 6.0.2, but it looks like they didn’t actually enable OCI until 6.0.3.

I submitted a suggestion in the feedback section to allow for custom Incus configurations. Not sure if I should’ve filed it as a bug or not…

Heh, me too recently.
Incus can do a lot of things, but only a small subset will be available through the TrueNAS UI or CLI. It will take time to integrate Incus features into the TrueNAS middleware.
So I think custom Incus config would allow advanced users to really use all the Incus features they need.

I know we can basically configure Incus directly with incus config edit, but I am not sure if this would survive a TrueNAS upgrade.
From what I see, Incus stores its config in a database located in /var/lib/incus/database/global.
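If you wanted to preserve it manually across an upgrade, something like this might work (untested, unit names assumed, and the middleware may fight you on it):

systemctl stop incus.service incus.socket
tar -C /var/lib/incus -czf /mnt/pool/incus-database-backup.tgz database
systemctl start incus.socket incus.service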

It doesn’t; that’s why you have to run incus admin recover between updates.

I think I might’ve just thought of a workaround. I’ll try it this afternoon…

I tried setting INCUS_DIR during boot and it didn’t keep the custom install; I still have to recover it. The only way I see this working is if iX provides a way to move the database to a pool or dataset so it persists.

I may be able to one-line incus admin recover, but I need to look into that further.
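A sketch of what that one-liner might look like, just feeding the answers from the transcript above into the prompts (untested):

printf 'yes\njails\nzfs\npool/jails\n\n\n\nyes\n' | incus admin recover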


So it looks like Incus is right up my alley. I was previously using Jailmaker to run an unprivileged container for Docker. I don’t want to use Electric Eel’s built-in Docker system because I don’t want my containers to run as root. I use several, like Nextcloud-AIO and Immich, that require root and can’t be run as a non-root user since they need to spawn containers, but they can be run as ‘root’ (i.e. fake root) inside an unprivileged container.
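Presumably in Incus that maps to something like this (just my guess from reading around; security.nesting seems to be the switch a nested Docker needs):

# unprivileged container allowed to run its own container runtime
incus launch images:debian/12 docker -c security.nesting=true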

Are there any particular concerns or problems with running Docker in an Incus container?

I’m coming from Proxmox land, and it gets repeatedly drilled into us by the devs and other Proxmox veterans not to run Docker in LXC, since it’s an unsupported configuration and they can’t guarantee that system updates won’t break the containers. I’ve personally experienced a container causing a kernel panic which unfortunately brought down the host with it, due to the shared kernel.