Well, for one, backups will be easier. In this PoC, I created a custom dataset under your /mnt/pool called jails, and you can set up rolling snapshots of that dataset. You can set up custom networks pretty easily in Incus as well. Each machine created is a new dataset under the root dataset /mnt/pool/jails, similar to how jailmaker handles its datasets. Incus is tightly integrated with the ZFS driver, so snapshots are very simple in Incus too, and systems can be cloned from these images almost instantly.
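As a rough sketch of what that looks like on the CLI (the dataset name pool/jails comes from this PoC; the image alias and instance names are just examples):

# point an Incus storage pool at an existing ZFS dataset
incus storage create jails zfs source=pool/jails

# each instance becomes its own dataset under that pool
incus launch images:debian/12 docker1 --storage jails

# per-instance snapshot and a near-instant ZFS-backed clone
incus snapshot create docker1 before-update
incus copy docker1 docker2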
As far as networking goes, it supports network zones (DNS/DHCP), ACLs, BGP (if you wanted it), easier-to-use networks and bridges for VLANs, etc. I know you can do them in docker as well, but you’d be looking at multiple layers vs just running OCI containers directly in LXC. It supports Open vSwitch as well if you want to get fancy.
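A hedged sketch of the kind of objects involved (the bridge, zone, and ACL names below are placeholders, not from the PoC):

# a managed bridge with Incus-provided DHCP/DNS
incus network create incusbr1

# a forward DNS zone served for that bridge
incus network zone create internal.example
incus network set incusbr1 dns.zone.forward=internal.example

# an ACL with a single allow rule
incus network acl create web-only
incus network acl rule add web-only ingress action=allow protocol=tcp destination_port=443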
Flexibility would come in spinning up machines more easily with cloud configs, for instance if you needed a cluster. But flexibility would extend to most parts of Incus. Building Incus clusters is pretty easy as well, although I’m not sure how TNS will leverage that in the WebUI. I’m sure that’s part of why they are moving over to Incus. They lost clustering a while ago and this will bring some of it back.
I’m still learning about Incus and its features, but removing one abstraction layer is a plus. So far in my limited experience, it’s far less obtuse than docker. Its CLI is very easy to work with compared to other solutions.
TNS docker support at this point is very limited and more difficult to deal with when it comes to running standalone containers, custom NICs, IPs, etc. Sure, you can run Dockge, etc. It’s mostly preference at this point… but Incus will bring some power to the people (that want it)…
From the link, these are Incus snapshots. How do these relate to ZFS snapshots?
Good point—but not a particularly high bar to clear, I’d say.
Though as I understand it, LXC can run docker containers but not docker-compose, which appears to be the favourite interface for everybody and his dog. So there will still be two containerisation systems in Fangtooth: docker-compose for application containers, and Incus for system containers and jail-like behaviour.
Snapshots should leverage the FS driver, in this case ZFS, so an Incus snapshot of an instance is a ZFS snapshot of that instance’s dataset under the hood. Snapshots can therefore be scheduled in Incus and managed there.
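A rough illustration, assuming the pool layout from the PoC (pool/jails) and the docker1 container from earlier in the thread; the exact dataset path may differ slightly:

# take a snapshot and schedule dailies from the Incus side
incus snapshot create docker1 nightly0
incus config set docker1 snapshots.schedule="@daily" snapshots.expiry=7d

# the same snapshots show up as ordinary ZFS snapshots on the instance dataset
zfs list -t snapshot -r pool/jails/containers/docker1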
Yeah, cloud-config should be able to replace docker-compose in theory. We would just create custom cloud-config.yaml files that set up the OCI-compliant LXC containers, similar to what I did in the OP of this thread.
These cloud configs can span various areas as well, such as storage, network, gold images, etc.
I guess a repo with these configs could be hosted on GH and made available to grab and spin up containers for users’ favorite apps…
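A hedged sketch of what one of those files might look like, loosely modelled on the docker profile used later in this thread; the YAML contents, image alias, and package names are illustrative, not taken from the OP:

# hypothetical docker-init.yaml: a profile whose cloud-init payload installs Docker inside the container
cat > docker-init.yaml <<'EOF'
name: docker
description: Docker Profile
config:
  security.nesting: "true"
  cloud-init.user-data: |
    #cloud-config
    packages:
      - docker.io
    runcmd:
      - systemctl enable --now docker
devices: {}
EOF

incus profile create docker < docker-init.yaml
incus launch images:debian/12 docker1 --profile docker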
I’ve been running into an issue when upgrading between nightlies: the custom pool goes missing. I’ll need to file a bug report against that. But to recover it, you need to do the following to get back up and running.
Import the profile again:
incus profile create docker < docker-init.yaml
Then recover:
incus admin recover
This server currently has the following storage pools:
- default (backend="zfs", source="pool/.ix-virt")
Would you like to recover another storage pool? (yes/no) [default=no]: yes
Name of the storage pool: jails
Name of the storage backend (zfs, dir): zfs
Source of the storage pool (block device, volume group, dataset, path, ... as applicable): pool/jails
Additional storage pool configuration property (KEY=VALUE, empty when done):
Would you like to recover another storage pool? (yes/no) [default=no]:
The recovery process will be scanning the following storage pools:
- EXISTING: "default" (backend="zfs", source="pool/.ix-virt")
- NEW: "jails" (backend="zfs", source="pool/jails")
Would you like to continue with scanning for lost volumes? (yes/no) [default=yes]:
Scanning for unknown volumes...
The following unknown storage pools have been found:
- Storage pool "jails" of type "zfs"
The following unknown volumes have been found:
- Container "docker1" on pool "jails" in project "default" (includes 0 snapshots)
Would you like those to be recovered? (yes/no) [default=no]: yes
Starting recovery...
Once that is completed, things will be back to normal:
incus storage ls
+---------+--------+-------------+---------+---------+
| NAME | DRIVER | DESCRIPTION | USED BY | STATE |
+---------+--------+-------------+---------+---------+
| default | zfs | | 2 | CREATED |
+---------+--------+-------------+---------+---------+
| jails | zfs | | 2 | CREATED |
+---------+--------+-------------+---------+---------+
incus profile ls
+---------+-------------------------+---------+
| NAME | DESCRIPTION | USED BY |
+---------+-------------------------+---------+
| default | Default TrueNAS profile | 1 |
+---------+-------------------------+---------+
| docker | Docker Profile | 1 |
+---------+-------------------------+---------+
incus ls
+---------+---------+------------------------------+------+-----------------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+---------+---------+------------------------------+------+-----------------+-----------+
| docker1 | RUNNING | 192.168.0.31 (eth0) | | CONTAINER | 0 |
| | | 172.18.0.1 (br-ffaf5468f2be) | | | |
| | | 172.17.0.1 (docker0) | | | |
+---------+---------+------------------------------+------+-----------------+-----------+
| test | RUNNING | 192.168.0.230 (enp5s0) | | VIRTUAL-MACHINE | 0 |
+---------+---------+------------------------------+------+-----------------+-----------+
Great to read that there is a PoC in progress to integrate Incus as a first-class citizen.
I’ve been running Incus on TrueNAS for about a year now and ran into a few pitfalls during that time. One of them is of course the upgrade issue. After a few trials I settled on creating a dataset for Incus itself (for all the stuff that lives at /var/lib/incus) and using an ENV variable located in /etc/default/incus:
INCUS_DIR="/mnt/<pool>/<dataset>"
With this I only need to reinstall/upgrade TrueNAS, modify /etc/default/incus, start the service, and I’m up and running again.
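Roughly like this, as a sketch (the dataset name, pool name, and service name are assumptions; adjust for your setup):

# one-time: dedicated dataset for the Incus data directory
zfs create pool/incus-data

# point the daemon at it and restart
echo 'INCUS_DIR="/mnt/pool/incus-data"' >> /etc/default/incus
systemctl restart incus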
Placing the Incus data dir on a separate dataset also has the benefit of letting you use snapshots to recover faster, transfer it to another server, etc.
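For example (a sketch, assuming the dataset name above and a hypothetical backup-host with a backup/incus-data dataset):

# local rollback point before an upgrade
zfs snapshot -r pool/incus-data@pre-upgrade

# replicate the whole Incus state to another box
zfs send -R pool/incus-data@pre-upgrade | ssh backup-host zfs recv -u backup/incus-data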
It is actually a pretty cool combo, Incus together with TrueNAS. Getting full support to administer it through the UI will be a huge benefit.
It kind of depends how Incus is installed and bootstrapped. As long as all the install bootstrap values are the same, it will always come back.
I think the issue is more that any custom changes applied to the defaults will be gone: things like extra storage, networks, profiles, etc.
Give my suggestion a try: place /var/lib/incus on a separate dataset and see if that keeps all configurations.
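If the worry is specifically those custom objects, a preseed file is another way to put them back after a wipe (a sketch; incus-preseed.yaml is a hypothetical file that would mirror your own storage, network, and profile customisations):

# re-apply saved storage pools, networks, and profiles in one shot
incus admin init --preseed < incus-preseed.yaml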
This is how it looks on one of my installations. It means everything you place on those datasets is only available in the version you are running, and the new version is more or less empty. That’s why there is a warning to perform a config backup before any upgrade, so you can restore it afterwards.
What you could do is include Incus in the configuration backup that you restore after the upgrade, and everything is back. However, I’m not an expert on what needs to be backed up from Incus for a successful restore later. There are a lot of moving parts under /var/lib/incus…
Hence I’ve decided to use my own dataset for /var/lib/incus and all is good. Probably something to ask on the Incus forum, to see if there is a recommendation?