Linux Jails (containers/VMs) with Incus

INTRODUCTION


For those who have used or currently use Jailmaker, we know that the project is EOL and has no maintainer (Thank you for all your effort @Jip-Hop!). It provided an easy solution for creating jails using systemd-nspawn. This was useful for creating highly customizable environments that are not as heavy as full virtual machines.

IX is moving in a different direction in TrueNAS SCALE 25.04 with the inclusion of Incus. With that in mind, Jailmaker isn’t really needed in its current form.

This thread is getting a head start on the new release to have something in place for easy, templated, yet customizable jail creation with flexibility that possibly won’t be available in the SCALE UI.

How far I take this remains to be seen, as Fangtooth is in active development. Currently, this is just creating simple profiles to launch instances with Incus.

IMPORTANT: This guide is intended for TrueNAS SCALE 25.04+. If you want to try it on your own distro, feel free to, but don’t reply here for support questions. Take those to the Incus discussion boards.

NOTE: If you’re running this in a VM, you may run into nesting issues. You need to configure your VM to nest from the host and enable the proper CPU virtualization extensions in your BIOS; then it “should” work fine…
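A quick sanity check from a shell before chasing nesting problems (standard Linux checks, not specific to this guide): a nonzero count from the first command means the vmx/svm CPU extensions are visible to the guest, and the second reports which hypervisor, if any, you are running under.

grep -cE 'vmx|svm' /proc/cpuinfo
systemd-detect-virt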

Config options added to docker-init.yaml and docker-init-nvidia.yaml will load the needed kernel modules, set the required sysctl values, and apply the Incus instance configs.

NOTE: This guide assumes you already have your TrueNAS SCALE networking, bridge(s), and ZFS pools configured. I’m assuming you are using br0 for the bridge and pool for your ZFS pool name. If not, go ahead and adjust the appropriate configs below with your specific interface(s) and pool(s).
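To double-check those names before editing anything below (both are stock commands; substitute your own bridge and pool names):

ip link show br0
zpool list pool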


DISCLAIMER


I am not responsible for your actions. If you attempt these and mess up your system, that’s on you.


Getting started

NOTE: If you don’t su to the root user, you’ll need to prepend sudo to the commands listed below.

  1. Download one or both of the following YAML configs to your TrueNAS 25.04+ host and modify as needed.
    a. Configure your <pool-name> for the root drive. In our example, it would be called pool.
    b. Configure your appropriate mount points on your TrueNAS host where you will be hosting your app data.
    c. Set your timezone.
    d. Feel free to modify anything else you might need, like adding additional packages you would like in your base image, additional datasets passed through, modifying your GPU PCI address, etc.

NOTE: When you modify this config, it will launch the same configuration on all instances created from the config moving forward. Don’t add anything to the config/profile that you don’t want on EVERY instance created from the config. If you need separate settings, create multiple configs.
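For example, one quick way to stamp your pool name into the downloaded configs (assuming your pool really is named pool, per the note above):

sed -i 's|<pool-name>|pool|g' docker-init.yaml docker-init-nvidia.yaml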

Config docker-init.yaml:

description: Docker
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: br0
    type: nic
  root:
    path: /
    pool: <pool-name>
    type: disk
  data:
    path: /mnt/data
    source: /mnt/pool/data/apps
    recursive: true
    type: disk
  stacks:
    path: /opt/stacks
    source: /mnt/pool/data/stacks
    recursive: true
    type: disk
config:
  # Start instances on boot
  boot.autostart: "true"
 
  # Load needed kernel modules
  linux.kernel_modules: br_netfilter

  # Enable required security settings
  security.nesting: "true"
  security.syscalls.intercept.mknod: "true"
  security.syscalls.intercept.setxattr: "true"

  #cloud-init.network-config: |
  #  #cloud-config

  #  network:
  #    version: 2
  #    ethernets:
  #      eth0:
  #        addresses:
  #          - 192.168.1.100/24
  #        gateway4: 192.168.1.1
  #        nameservers:
  #          addresses: [192.168.1.1]
  #          search:
  #            - domain.lan

  cloud-init.user-data: |
    #cloud-config

    # Enable docker sysctl values
    write_files:
      - path: /etc/sysctl.d/20-docker.conf
        content: |
          net.ipv4.conf.all.forwarding=1
          net.bridge.bridge-nf-call-iptables=1
          net.bridge.bridge-nf-call-ip6tables=1

    # Set timezone
    timezone: US/Eastern

    # apt update and apt upgrade
    package_update: true
    package_upgrade: true

    # Install apt repos and packages needed for docker
    apt:
      preserve_sources_list: true
      sources:
        docker.list:
          source: deb [arch=amd64] https://download.docker.com/linux/debian $RELEASE stable
          keyid: 9DC858229FC7DD38854AE2D88D81803C0EBFCD88
          filename: docker.list
    packages:
      - apt-transport-https
      - apt-utils
      - ca-certificates
      - curl
      - gpg
      - docker-ce
      - docker-ce-cli
      - containerd.io
      - docker-buildx-plugin
      - docker-compose-plugin

    # create groups
    groups:
      - docker
      - apps: [root]

    # create users
    users:
      - default
      - name: apps
        primary_group: apps
        uid: 568
        groups: docker
        lock_passwd: true

    # Add default auto created user to docker group
    system_info:
      default_user:
        groups: [docker,apps]

    # additional configuration
    runcmd:
      - 'echo "-----------------------------"'
      - 'echo "Configuring system uid/gid..."'
      - 'echo "-----------------------------"'
      - 'groupmod -g 568 apps'
      - 'groupmod -g 500 docker'
      - 'usermod -u 1000 debian'
      - 'groupmod -g 1000 debian'
      - 'echo "-----------------------------"'
      - 'echo "    Installing dockge...     "'
      - 'echo "-----------------------------"'
      - 'mkdir -p /opt/dockge'
      - 'cd /opt/dockge'
      - 'curl https://raw.githubusercontent.com/louislam/dockge/master/compose.yaml --output compose.yaml'
      - 'docker compose up -d'

Config docker-init-nvidia.yaml. Read the FAQ for info on additional steps that may be needed for GPUs:

description: Docker Nvidia
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: br0
    type: nic
  gpu0:
    gputype: physical
    pci: 0000:2b:00.0
    type: gpu
  root:
    path: /
    pool: <pool-name>
    type: disk
  data:
    path: /mnt/data
    source: /mnt/pool/data/apps
    recursive: true
    type: disk
  stacks:
    path: /opt/stacks
    source: /mnt/pool/data/stacks
    recursive: true
    type: disk
config:
  # Start instances on boot
  boot.autostart: "true"
 
  # Load needed kernel modules
  linux.kernel_modules: br_netfilter,nvidia_uvm

  # Enable required security settings
  security.nesting: "true"
  security.syscalls.intercept.mknod: "true"
  security.syscalls.intercept.setxattr: "true"

  # Nvidia configs
  nvidia.driver.capabilities: compute,graphics,utility,video
  nvidia.runtime: "true"

  #cloud-init.network-config: |
  #  #cloud-config

  #  network:
  #    version: 2
  #    ethernets:
  #      eth0:
  #        addresses:
  #          - 192.168.1.100/24
  #        gateway4: 192.168.1.1
  #        nameservers:
  #          addresses: [192.168.1.1]
  #          search:
  #            - domain.lan

  cloud-init.user-data: |
    #cloud-config

    # Enable docker sysctl values
    write_files:
      - path: /etc/sysctl.d/20-docker.conf
        content: |
          net.ipv4.conf.all.forwarding=1
          net.bridge.bridge-nf-call-iptables=1
          net.bridge.bridge-nf-call-ip6tables=1
      - path: /etc/systemd/system/fix-gpu-pass.service
        owner: root:root
        permissions: '0755'
        content: |
          [Unit]
          Description=Symlink for LXC/Nvidia to Docker passthrough
          Before=docker.service

          [Service]
          User=root
          Group=root
          ExecStart=/bin/bash -c 'mkdir -p /proc/driver/nvidia/gpus && ln -s /dev/nvidia0 /proc/driver/nvidia/gpus/0000:2b:00.0'
          Type=oneshot

          [Install]
          WantedBy=multi-user.target

    # Set timezone
    timezone: US/Eastern

    # apt update and apt upgrade
    package_update: true
    package_upgrade: true

    # Install apt repos and packages needed for docker
    apt:
      preserve_sources_list: true
      sources:
        docker.list:
          source: deb [arch=amd64] https://download.docker.com/linux/debian $RELEASE stable
          keyid: 9DC858229FC7DD38854AE2D88D81803C0EBFCD88
          filename: docker.list
        nvidia-container-toolkit.list:
          source: deb https://nvidia.github.io/libnvidia-container/stable/deb/$(ARCH) /
          keyid: C95B321B61E88C1809C4F759DDCAE044F796ECB0
          filename: nvidia-container-toolkit.list
    packages:
      - apt-transport-https
      - apt-utils
      - ca-certificates
      - curl
      - gpg
      - docker-ce
      - docker-ce-cli
      - containerd.io
      - docker-buildx-plugin
      - docker-compose-plugin
      - nvidia-container-toolkit

    # create groups
    groups:
      - docker
      - apps: [root]

    # create users
    users:
      - default
      - name: apps
        primary_group: apps
        uid: 568
        groups: docker
        lock_passwd: true

    # Add default auto created user to docker group
    system_info:
      default_user:
        groups: [docker,apps]

    # additional configuration
    runcmd:
      - 'echo "-----------------------------"'
      - 'echo "Configuring system uid/gid..."'
      - 'echo "-----------------------------"'
      - 'groupmod -g 568 apps'
      - 'groupmod -g 500 docker'
      - 'usermod -u 1000 debian'
      - 'groupmod -g 1000 debian'
      - 'echo "-----------------------------"'
      - 'echo " Configuring fix-gpu-pass... "'
      - 'echo "-----------------------------"'
      - 'systemctl daemon-reload'
      - 'systemctl enable fix-gpu-pass'
      - 'systemctl start fix-gpu-pass'
      - 'echo "-----------------------------"'
      - 'echo "    Configuring nvidia...    "'
      - 'echo "-----------------------------"'
      - 'nvidia-ctk runtime configure --runtime=docker'
      - 'nvidia-ctk config --set nvidia-container-cli.no-cgroups -i'
      - 'echo "-----------------------------"'
      - 'echo "    Restarting docker...     "'
      - 'echo "-----------------------------"'
      - 'systemctl restart docker'
      - 'echo "    Installing dockge...     "'
      - 'echo "-----------------------------"'
      - 'mkdir -p /opt/dockge'
      - 'cd /opt/dockge'
      - 'curl https://raw.githubusercontent.com/louislam/dockge/master/compose.yaml --output compose.yaml'
      - 'docker compose up -d'

  2. Build a Docker instance. You can build as many as you like. docker1 is the instance name, which will show up when you list the Incus instances by running incus ls.

With docker-init.yaml:

incus launch images:debian/bookworm/cloud docker1 < docker-init.yaml

or docker-init-nvidia.yaml:

incus launch images:debian/bookworm/cloud docker1 < docker-init-nvidia.yaml
  3. Access the new Incus instance shell.
incus exec docker1 bash
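
Cloud-init may still be running when you first get a shell. To wait for it to finish and skim its log from inside the instance (standard cloud-init commands, nothing specific to this guide):

cloud-init status --wait
tail -n 50 /var/log/cloud-init-output.log
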
  4. Configure a static IP and DNS resolver. NOTE: If you configured cloud-init.network-config, you can skip this step, as it’s done automatically for you by the cloud-init scripts. Once the instance is built, you should configure a static IP address and point to your DNS server. Edit the following files with your favorite editor.

NOTE: You can also use a static DHCP lease and leave the instance set to DHCP. Refer to your DHCP server’s manual on how to do this. There are advanced managed networking features in Incus that can be leveraged as well, but those are outside the scope of this guide.

/etc/systemd/network/10-cloud-init-eth0.network

Output below. Modify your Address and Gateway.

[Match]
Name=eth0

[Network]
#DHCP=ipv4
DHCP=false
Address=192.168.0.30/24
Gateway=192.168.0.1
LinkLocalAddressing=no
LLDP=yes
EmitLLDP=customer-bridge

/etc/systemd/resolved.conf

Output below. Modify DNS to point to your DNS server and Domains to your search domain if needed. If you don’t need the search domains, just comment out the Domains line.

#  This file is part of systemd.
#
#  systemd is free software; you can redistribute it and/or modify it under the
#  terms of the GNU Lesser General Public License as published by the Free
#  Software Foundation; either version 2.1 of the License, or (at your option)
#  any later version.
#
# Entries in this file show the compile time defaults. Local configuration
# should be created by either modifying this file, or by creating "drop-ins" in
# the resolved.conf.d/ subdirectory. The latter is generally recommended.
# Defaults can be restored by simply deleting this file and all drop-ins.
#
# Use 'systemd-analyze cat-config systemd/resolved.conf' to display the full config.
#
# See resolved.conf(5) for details.

[Resolve]
# Some examples of DNS servers which may be used for DNS= and FallbackDNS=:
# Cloudflare: 1.1.1.1#cloudflare-dns.com 1.0.0.1#cloudflare-dns.com 2606:4700:4700::1111#cloudflare-dns.com 2606:4700:4700::1001#cloudflare-dns.com
# Google:     8.8.8.8#dns.google 8.8.4.4#dns.google 2001:4860:4860::8888#dns.google 2001:4860:4860::8844#dns.google
# Quad9:      9.9.9.9#dns.quad9.net 149.112.112.112#dns.quad9.net 2620:fe::fe#dns.quad9.net 2620:fe::9#dns.quad9.net
DNS=192.168.0.8
#FallbackDNS=
Domains=domain.lan
#DNSSEC=no
#DNSOverTLS=no
#MulticastDNS=yes
#LLMNR=yes
#Cache=yes
#CacheFromLocalhost=no
#DNSStubListener=yes
#DNSStubListenerExtra=
#ReadEtcHosts=yes
#ResolveUnicastSingleLabel=no
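
Once the instance comes back up after the restart in the next step, you can confirm the settings took effect with standard systemd tooling inside the instance (nothing specific to this guide):

networkctl status eth0
resolvectl status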
  5. Restart the container.
exit
midclt call virt.instance.restart docker1 -j
  6. Verify your new instance looks good. You should see output similar to the below when everything is up and running. Notice that you will have an eth0 interface, which is the instance’s bridge to the TrueNAS host’s br0 interface. You will also have docker0 for Docker’s interface to eth0. Finally, you should have a br-* interface that Dockge is using.
incus ls                      
+-------------+---------+------------------------------+------+-----------+-----------+
|    NAME     |  STATE  |             IPV4             | IPV6 |   TYPE    | SNAPSHOTS |
+-------------+---------+------------------------------+------+-----------+-----------+
| docker1     | RUNNING | 192.168.0.30 (eth0)          |      | CONTAINER | 0         |
|             |         | 172.18.0.1 (br-7e7ee82b01bf) |      |           |           |
|             |         | 172.17.0.1 (docker0)         |      |           |           |
+-------------+---------+------------------------------+------+-----------+-----------+
  7. Set up your userns_idmaps to configure permissions for local host passthrough storage (see the sketch after this list). See here for the docs.
  8. (OPTIONAL, but recommended): Look at the production setup configuration guide and network tuning guide for your server and tweak as needed.
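
For step 7, a rough sketch of what the idmap accomplishes, in plain Incus syntax (on SCALE, prefer the supported userns_idmap option, now exposed in the UI, since middleware manages the instance config; raw.idmap is shown only for illustration):

incus config set docker1 raw.idmap "both 568 568"
midclt call virt.instance.restart docker1 -j

This maps UID/GID 568 (apps) in the instance straight through to 568 on the host, which is what lets container workloads write to host datasets owned by apps:apps.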

Known Issues


  • Privileged containers are not currently supported. That is coming in a future release.
  • Custom profiles, pools, networks, and storage DO NOT persist currently between TNS upgrades. See here for how to recover after upgrades if you are using anything custom outside the default profile.
  • Passing datasets through with shift: true is considered a potential security risk. This has been addressed in this post by using the userns_idmap feature being implemented by IX. This will get better over time and is planned to be integrated into the Web UI in a future update.

TODO


1. Integrate custom ZFS datasets/pools into the jail creation process.
2. Add GPU attachment to instances.
3. Investigate using block mode for Docker instances. EDIT: It can be done, see here for more info and gotchas.
4. Investigate incorporating Incus Web UI. EDIT: Not possible in its current form because of how it installs, unless users want to do their own install of Incus in VMs, which defeats the purpose on TNS.
5. Investigate using separate profiles, datasets, networks, etc. that are managed outside the purview of middlewared. EDIT: Not possible unless IX allows persistence of custom profiles, datasets, etc. outside of middlewared.
6. Investigate setting idmap via cloud-init scripts to move away from disk: shift: true. See this post on how to use userns_idmaps.
7. Possibly create script to manage custom instance profiles in Incus, if applicable.


Changelog


11.08.2024:

  • Add ZFS jails dataset to profile and instructions on adding the dataset to Incus.
  • Add nvidia instance configs to docker-init.yaml profile.

02.24.2025:

  • Add Nvidia docker-init-nvidia.yaml with full configuration and installation of nvidia-container-toolkit.
  • Removed Nvidia configs from docker-init.yaml.
  • Removed creating with a profile.
  • Removed creating a separate jails dataset and reconfigured docker-init.yaml and docker-init-nvidia.yaml to use the default storage pool in order to survive reboots.
  • Update GPU FAQ.
  • Added additional formatting for log readability for /var/log/cloud-init-output.log.

02.26.2025:

  • Updated docker-init.yaml and docker-init-nvidia.yaml with additional recommended security configs for Docker:
 # Enable required security settings
 security.nesting: "true"
 security.syscalls.intercept.mknod: "true"
 security.syscalls.intercept.setxattr: "true"

02.28.2025:

  • Added cloud-init.network-config to docker-init.yaml and docker-init-nvidia.yaml to allow for creating preconfigured network settings, i.e. IP, DNS, search domains, etc. NOTE: They are commented out, so uncomment them if you want to use them.

03.12.2025:

  • Remove disks: shift: true from docker-init.yaml and docker-init-nvidia.yaml.
  • Remove shift verbiage from FAQ.
  • Added Step 7 on userns_idmap.
  • Added apps user/group creation to docker-init.yaml and docker-init-nvidia.yaml. This is done because IX currently passes through the apps user from the local host to containers by default. Creating a new container and a new dataset with apps as the owner should result in a working container immediately.
  • Updated reboot command to use midclt so userns_idmap takes place properly on container/vm.
midclt call virt.instance.restart docker1 -j

03.13.2025:

  • Add root to the apps group, since root can’t be passed through using userns_idmap. Also added the default cloud-init user to the apps group for a similar purpose. This will allow Docker containers that require root to write to datasets with apps:apps permissions. Make sure to set your dirs/files to 775/664 respectively. Example for your host dataset/folder (this can be accomplished in the TNS Web UI as well under: Datasets > click YOUR apps dataset > Permissions > Edit):
chown apps:apps dir
chown apps:apps file
chmod 775 dir
chmod 664 file

03.16.2025:

  • Fixed Nvidia GPU double passthrough with a workaround. This creates a systemd unit file that runs before Docker starts and puts a symlink in place so Docker containers can access the GPU. Please modify the PCI address to match your card in the following line in docker-init-nvidia.yaml.
mkdir -p /proc/driver/nvidia/gpus && ln -s /dev/nvidia0 /proc/driver/nvidia/gpus/0000:2b:00.0
  • Modify /etc/nvidia-container-runtime/config.toml to disable cgroups.
no-cgroups = true

03.17.2025:

  • Added additional nvidia config to docker-init-nvidia.yaml to load needed modules into Incus containers.
nvidia.driver.capabilities: compute,graphics,utility,video
  • Updated runcmd formatting.

03.18.2025:

  • Added nvidia_uvm to docker-init-nvidia.yaml config to allow for proper startup on boot.
  • Updated GPU FAQ with instructions on creating a PREINIT command to run /usr/bin/nvidia-smi -f /dev/null to allow Nvidia-based containers to boot properly on reboot.

03.25.2025:

  • Update to cloud-init.network-config v2. More info here.
 cloud-init.network-config: |
   #cloud-config

   network:
     version: 2
     ethernets:
       eth0:
         addresses:
           - 192.168.0.21/24
         gateway4: 192.168.0.1
         nameservers:
           addresses: [192.168.0.8]
           search:
             - domain.lan

03.27.2025:

  • Add step 8 with links to the Incus tuning guides.

04.17.2025:

  • Add apt-utils package.
  • Updated Step 7 for userns_idmap since it’s now supported in the UI. See here for more info.
  • Update Known Issues that Privileged Containers are not supported currently.

04.23.2025:

  • Added note to read the FAQ for GPUs.
  • Removed beta verbiage.

FAQ


Does this work?

Yes! This will install Docker and run Docker images. Mounts can write data through to your host and to the datasets you assign to your containers, which are configured under devices: with type: disk and recursive: true enabled.
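
As a quick smoke test from the TrueNAS host, assuming your instance is named docker1 as in the examples above:

incus exec docker1 -- docker run --rm hello-world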


Do GPUs work?

Yes! Just use the docker-init-nvidia.yaml cloud-init config. This will configure the Nvidia container and GPU options in Incus, install the nvidia-container-toolkit, and update the Docker runtime.

In order for Nvidia-based containers to boot, you will need to initialize the drivers on boot using a PREINIT command. Run the command below to work around this issue:

/usr/bin/nvidia-smi -f /dev/null

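Once the instance is up, a quick way to confirm the GPU is usable both by the instance and by a Docker workload inside it (the nvidia-container-toolkit injects nvidia-smi into containers started with --gpus all; docker1 is assumed as the instance name):

incus exec docker1 -- nvidia-smi
incus exec docker1 -- docker run --rm --gpus all ubuntu nvidia-smi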

You will need to manually configure your GPU PCI address for the time being. To get a list of GPUs, run the following command:

incus info --resources | grep "GPUs:" -A 20

Grab the PCI address for your card and substitute it in the docker-init-nvidia.yaml.

PCI address: 0000:2b:00.0

One liner that “should” work:

incus info --resources | awk '/^[^[:space:]]/ { g = (/GPUs?:/) } /Vendor:/ { v = (tolower($0) ~ /amd|intel|nvidia/) } g && v && sub(/.*PCI address: /,"")'

For AMD and Intel GPUs, it should just be a matter of passing through /dev/dri, and you should be off to the races.
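
A minimal sketch of what that might look like as an extra entry under devices: in the YAML (an untested assumption on my part; a physical gpu device with no pci filter hands the instance all of the host’s GPUs, including the /dev/dri nodes):

  gpu0:
    gputype: physical
    type: gpu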


Do I have to install Dockge?

No. You can comment out, remove, or replace those lines in the config with something you want to use. I added Dockge to the config because I use it and I see it used by other users in the forums. How you accomplish that is up to you.
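
For example, to swap Dockge for Portainer, you could replace the Dockge lines in runcmd with Portainer’s documented run command (a sketch; ports and volumes per Portainer’s install docs):

- 'docker run -d --name portainer --restart=always -p 9443:9443 -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer-ce:latest'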


wow… i thought it would be a quick fix. but seems like it’s turning into a bigger project than i thought. feeling a bit guilty now :sweat_smile:

ty dan, really appreciate the work you are doing. i’m sure many will benefit from this.

right now i’m decommissioning my qnap and moving to a new diy build that uses the amd igpu on the ryzen 7600. no graphics card, just a low-power igpu.

not sure if i still need to add gpu passthrough for that to work with jailmaker docker for jellyfin or not :thinking:

This isn’t fixing EE. EE works fine with the existing jails and I added the workarounds for that on the old thread. It’s not worth fixing because Incus is coming in the next release.

What this is providing is a new solution in TNS 25.04. Jails will be vastly different in 25.04. All I have left now is to add GPU support and I think we’re squared away and ready to go.

I’m not sure I’m ready to commit my bare metal server to 25.04 quite yet… :rofl:


(Image omitted)

(Probably not a good idea)


I would at least wait for alpha. It’s scheduled for December 18, 2024. That should be a nice Christmas present :slight_smile:


@kris I’m working on this and was wondering: could an option be added in 25.04 for Incus to choose between stable and LTS? Reason being, it looks like stable supports importing OCI-compatible repositories, i.e. Docker Hub, ghcr, etc., whereas LTS does not.

incus remote add docker https://docker.io --protocol=oci

This would allow for creating native LXC containers without needing to set up Docker via nested containers. We could then stand up native OCI instances, etc.
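
For example, once such a remote exists, standing up an app container straight from Docker Hub would look something like this (a sketch, assuming the installed Incus has OCI support):

incus launch docker:nginx nginx1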

Let me just provide some context.
TrueNAS gets the Incus package from the official Debian repo.
Debian only packages the LTS version: Debian -- Details of package incus in bookworm-backports
I don’t think this will change because Debian is known for its conservative attitude.

Stable versions are available in 2 ways:

  1. Self-compile
  2. Zabbly repo - packages made by the main Incus developer

Right, IX would just need to use Zabbly’s repos over Debian and then have an option to select LTS or stable. I’m asking because I’d rather not modify my appliance to accomplish that. I definitely don’t want to direct users that way either. I don’t want to add additional overhead by running VMs either. As bare metal as possible. :joy:

Both “could” be installed, using update-alternatives to switch between them. That would likely require compiling, though.

Important reading: Support - Incus documentation

Stable versions are only supported until the next version is released. I don’t think they are good for a production system that needs long-term stability.

Basically the same reason why TrueNAS uses only the LTS Linux kernel.

Also, I don’t think it’s good to depend on a third-party developer’s repo for a production system. Debian discourages that.

Either way, having two versions of the same package installed is a bad idea.

For desktop use, stable versions are OK, but for a server or NAS, LTS is the way to go.


Yeah, I get it. Doesn’t hurt to ask though. I don’t know when the OCI protocol will be dropped into LTS either.

Yeah, I get it. I just wouldn’t get my hopes up.
The OCI support is nice, but I still consider it “under development”.

6.0 LTS should get only bug and security fixes and some minor additions.
For OCI support, you would likely need to wait for 7.0 LTS, which should be released after 2 years.
Since 6.0 was released in April 2024, I would expect 7.0 LTS in April 2026.

Either way, let’s wait and see what @kris says.

It’s in LTS 6.0.2… Bookworm is 6.0.1… :neutral_face:

* client: Add basic OCI registry client
* shared/cliconfig: Add OCI remote support

OCI support is in 6.0.2? I guess you are lucky :slight_smile: Well, I myself don’t know which features the main dev will backport to LTS and which not.

But 6.0.2 should be available as a Bookworm backport. Maybe it’s just not yet updated in the TrueNAS nightly.

Hopefully! I feel it would be a worthy addition.

It’s nice, I just don’t really understand the benefit of running Docker (OCI) containers under LXC (Incus) instead of the native Docker runtime. But I will gladly learn :slight_smile:


Less overhead. You wouldn’t need to install Docker, and apps could be run from a single cloud config.

Something like:

incus create docker:linuxserver/plex:latest Plex -c environment.TZ=America/Chicago -c environment.VERSION=docker -c environment.net=host -c environment.PUID=1000 -c environment.PGID=1000 -c boot.autostart=true -c boot.autorestart=true --network=LAN

All of that can be dumped into a simple cloud config. It would replace docker compose from what I can tell.

Dockge wouldn’t be of much use anymore though. I would probably experiment with the Incus GUI in a container to see what that would look like as far as managing the environment.

This is theory right now, but it sounds good on paper.


For now, my thinking still follows this: About containers and VMs - Incus documentation

So Incus for VMs and system containers and Docker for application containers.

Great that Incus can do OCI. But for production I would still use Docker because its safer and everyone tests their OCI images/apps for Docker runtime.

But it’s nice to have choice.

Also added flexibility, custom network adapters, easier backup management, etc.

Looks like I wasn’t really correct about what goes into LTS.
According to this, LTS will get all the good stuff from regular releases except things that change default behavior or the DB schema, or break API compatibility.


Could you expand a bit more on these?
