Linux Jails (Containers/VMs) with Incus (PoC)

INTRODUCTION


For those who have used or currently use Jailmaker, we know the project is EOL and has no maintainer (thank you for all your effort, @Jip-Hop!). It provided an easy solution for creating jails using systemd-nspawn, which was useful for building highly customizable environments that are not as heavy as full virtual machines.

IX is moving in a different direction in TrueNAS SCALE 25.04 with the inclusion of Incus. With that in mind, Jailmaker isn’t really needed in its current form.

This thread is getting a head start on the new release to have something in place for easy, templated, yet customizable jail creation with flexibility that possibly won’t be available in the SCALE UI.

How far I take this remains to be seen, as Fangtooth is in active development. Currently, this is just creating simple profiles to launch instances with Incus.

IMPORTANT: This guide is intended for TrueNAS SCALE 25.04+. If you want to try it on your own distro, feel free to, but don’t reply here for support questions. Take those to the Incus discussion boards.

NOTE: If you’re running this in a VM, you may run into nesting issues. You need to configure your VM for nested virtualization on the host and enable the proper CPU virtualization extensions in your BIOS; then it “should” work fine…

Config options added to the docker-init.yaml will load the needed kernel modules, sysctl values, and Incus instance configs when instances attached to the docker profile are powered on.

NOTE: This guide assumes you already have your TrueNAS SCALE networking, bridge(s), and ZFS pools configured, including adding a dataset called jails. I’m assuming you are using br0 for the bridge and pool for your ZFS pool name. If not, go ahead and adjust the appropriate configs below with your specific interface(s) and pool(s). However, I will walk through creating a generic jails dataset to make things easier.


DISCLAIMER


I am not responsible for your actions. If you attempt these and mess up your system, that’s on you. This is entirely in a PoC (Proof of Concept) state. This is purely for testing and learning how interaction with containers/VMs will work moving forward in TrueNAS SCALE 25.04+.


Getting started

NOTE: If you don’t su to the root user, you’ll need to prefix the commands listed below with sudo.
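
If you haven’t created the jails dataset yet, you can add it from the SCALE UI (generally the preferred way on TrueNAS), or, as a minimal sketch assuming your pool is named pool, from the shell:

zfs create pool/jails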

  1. Create a new ZFS dataset and storage pool in Incus called jails from the TrueNAS SCALE 25.04+ shell.
incus storage create jails zfs source=pool/jails

You can verify it was created with:

zfs list|grep ^pool/jails
pool/jails                                                                          1.16G  46.8G    96K  legacy

You can verify the new pool is available by running:

incus storage ls                                  
+---------+--------+-------------+---------+---------+
|  NAME   | DRIVER | DESCRIPTION | USED BY |  STATE  |
+---------+--------+-------------+---------+---------+
| default | zfs    |             | 4       | CREATED |
+---------+--------+-------------+---------+---------+
| jails   | zfs    |             | 3       | CREATED |
+---------+--------+-------------+---------+---------+
  2. Download the following yaml config to your TrueNAS 25.04+ host and modify as needed.
    a. Configure your appropriate mount points on your TrueNAS host where you will be hosting your app data.
    b. Set your timezone.
    c. Feel free to modify anything else you might need, like adding additional packages you would like in your base image, additional datasets passed through, etc.

NOTE: When you modify this config, it will apply the same configuration to all instances created from the config/profile moving forward. Don’t add anything to the config/profile that you don’t want on EVERY instance created from it. If you need separate settings, create multiple configs/profiles. You can attach and detach profiles to/from instances, or simply recreate instances (see the example below).
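
For example, attaching the docker profile (created below) to an existing instance and detaching it again looks like this, assuming an instance named docker1:

incus profile add docker1 docker
incus profile remove docker1 docker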

Profile docker-init.yaml:

description: Docker Profile
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: br0
    type: nic
  root:
    path: /
    pool: jails
    type: disk
  data:
    path: /mnt/data
    source: /mnt/pool/data/apps
    shift: true
    type: disk
  stacks:
    path: /opt/stacks
    source: /mnt/pool/data/stacks
    shift: true
    type: disk
config:
  # Start instances on boot
  boot.autostart: "true"
 
  # Load needed kernel modules
  linux.kernel_modules: br_netfilter

  # Enable nesting
  security.nesting: "true"

  # Nvidia configs
  #nvidia.driver.capabilities: "all"
  #nvidia.runtime: "true"

  cloud-init.user-data: |
    #cloud-config

    # Enable docker sysctl values
    write_files:
      - path: /etc/sysctl.d/20-docker.conf
        content: |
          net.ipv4.conf.all.forwarding=1
          net.bridge.bridge-nf-call-iptables=1
          net.bridge.bridge-nf-call-ip6tables=1

    # Set timezone
    timezone: US/Eastern

    # apt update and apt upgrade
    package_update: true
    package_upgrade: true

    # Install apt repos and packages needed for docker
    apt:
      preserve_sources_list: true
      sources:
        docker.list:
          source: deb [arch=amd64] https://download.docker.com/linux/debian $RELEASE stable
          keyid: 9DC858229FC7DD38854AE2D88D81803C0EBFCD88
          filename: docker.list
    packages:
      - apt-transport-https
      - ca-certificates
      - curl
      - gpg
      - docker-ce
      - docker-ce-cli
      - containerd.io
      - docker-buildx-plugin
      - docker-compose-plugin

    # create the docker group
    groups:
      - docker

    # Add default auto created user to docker group
    system_info:
      default_user:
        groups: [docker]

    # Install dockge
    runcmd:
      - [ mkdir, -p, /opt/dockge ]
      - [ cd, /opt/dockge ]
      - [ curl, https://raw.githubusercontent.com/louislam/dockge/master/compose.yaml, --output, compose.yaml ]
      - [ docker, compose, up, -d ]
  3. Import the profile into Incus. Any future docker instances can use this profile moving forward. [Optional] Alternatively, you can skip the profile and redirect the config directly into the incus launch command; see the variants in the next step.
incus profile create docker < docker-init.yaml
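
You can confirm the profile imported as expected with:

incus profile show docker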
  4. Build a docker instance. You can build as many as you like. docker1 is the instance name, which will show up when you list the Incus instances by running incus ls.

With profile:

incus launch images:debian/bookworm/cloud -p docker docker1

With docker-init.yaml config file:

incus launch images:debian/bookworm/cloud docker1 < docker-init.yaml
  5. Access the new Incus instance shell.
incus exec docker1 -- bash
  6. Configure a static IP and DNS resolver. Once the instance is built, you should configure a static IP address and point the instance at your DNS server. Edit the following files with your favorite editor.

NOTE: You can also use a static DHCP lease and leave the instance set to DHCP. Refer to your DHCP server’s manual on how to do this. There are advanced managed networking features in Incus that can be leveraged as well, but those are outside the scope of this guide.

/etc/systemd/network/10-cloud-init-eth0.network

Output below. Modify your Address and Gateway.

[Match]
Name=eth0

[Network]
#DHCP=ipv4
DHCP=false
Address=192.168.0.30/24
Gateway=192.168.0.1
LinkLocalAddressing=no
LLDP=yes
EmitLLDP=customer-bridge

/etc/systemd/resolved.conf

Output below. Modify DNS to point to your DNS server and Domains to your search domain if needed. If you don’t need the search domains, just comment out the Domains line.

#  This file is part of systemd.
#
#  systemd is free software; you can redistribute it and/or modify it under the
#  terms of the GNU Lesser General Public License as published by the Free
#  Software Foundation; either version 2.1 of the License, or (at your option)
#  any later version.
#
# Entries in this file show the compile time defaults. Local configuration
# should be created by either modifying this file, or by creating "drop-ins" in
# the resolved.conf.d/ subdirectory. The latter is generally recommended.
# Defaults can be restored by simply deleting this file and all drop-ins.
#
# Use 'systemd-analyze cat-config systemd/resolved.conf' to display the full config.
#
# See resolved.conf(5) for details.

[Resolve]
# Some examples of DNS servers which may be used for DNS= and FallbackDNS=:
# Cloudflare: 1.1.1.1#cloudflare-dns.com 1.0.0.1#cloudflare-dns.com 2606:4700:4700::1111#cloudflare-dns.com 2606:4700:4700::1001#cloudflare-dns.com
# Google:     8.8.8.8#dns.google 8.8.4.4#dns.google 2001:4860:4860::8888#dns.google 2001:4860:4860::8844#dns.google
# Quad9:      9.9.9.9#dns.quad9.net 149.112.112.112#dns.quad9.net 2620:fe::fe#dns.quad9.net 2620:fe::9#dns.quad9.net
DNS=192.168.0.8
#FallbackDNS=
Domains=lan.domain.co
#DNSSEC=no
#DNSOverTLS=no
#MulticastDNS=yes
#LLMNR=yes
#Cache=yes
#CacheFromLocalhost=no
#DNSStubListener=yes
#DNSStubListenerExtra=
#ReadEtcHosts=yes
#ResolveUnicastSingleLabel=no
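
As an alternative to restarting the whole instance in the next step, you should be able to apply these changes by restarting the relevant services from inside the instance (an untested sketch):

systemctl restart systemd-networkd systemd-resolved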
  7. Restart the container.
exit
incus restart docker1
  8. Verify your new instance looks good. You should see output similar to the example below when everything is up and running. Notice that you will have an eth0 interface, which is the instance’s bridge to the TrueNAS host’s br0 interface. You will also have docker0, Docker’s interface to eth0. Finally, you should have a br-* interface that Dockge is using.
incus ls                      
+-------------+---------+------------------------------+------+-----------+-----------+
|    NAME     |  STATE  |             IPV4             | IPV6 |   TYPE    | SNAPSHOTS |
+-------------+---------+------------------------------+------+-----------+-----------+
| docker1     | RUNNING | 192.168.0.30 (eth0)          |      | CONTAINER | 0         |
|             |         | 172.18.0.1 (br-7e7ee82b01bf) |      |           |           |
|             |         | 172.17.0.1 (docker0)         |      |           |           |
+-------------+---------+------------------------------+------+-----------+-----------+
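
You can also confirm from the host that Docker came up inside the instance and that the Dockge container is running; Dockge’s stock compose.yaml publishes port 5001, so the UI should be reachable at http://<instance-ip>:5001.

incus exec docker1 -- docker ps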

Known Issues


  • Custom profiles, pools, networks, and storage currently DO NOT persist between TNS upgrades. Pretty much everything done in this thread does not persist. See here for how to recover after upgrades; a rough recovery sketch is also shown below.
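
Until persistence is sorted out, a rough sketch of post-upgrade recovery (see the linked post for the full procedure) is to re-import the existing storage with Incus’s interactive recovery tool and re-create the profile from your saved docker-init.yaml:

incus admin recover
incus profile create docker < docker-init.yaml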

TODO


1. Integrate custom ZFS datasets/pools into the jail creation process.
2. Add GPU attachment to instances.
3. Possibly create script to manage custom instance profiles in Incus, if applicable.


Changelog


11.08.2024:

  • Add the ZFS jails dataset to the profile and instructions for adding the dataset to Incus.
  • Add nvidia instance configs to the docker-init.yaml profile.

FAQ


Does this work?

Yes! Standard containers work, Docker installs, and Docker images can be pulled and run. Mounts write data through to your host and to the datasets you assign to your containers, which are configured under devices: as type: disk with shift: true enabled.
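
A quick way to sanity-check the write-through (a sketch using the paths from the profile above; the file name is just an example) is to touch a file from inside the instance and look for it on the host:

incus exec docker1 -- touch /mnt/data/write-test
ls -l /mnt/pool/data/apps/write-test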


Do GPUs work?

Probably. I’m currently working in a virtual environment for the time being and will pick this up when things get closer to stable. There are Nvidia container options and GPU configuration options in Incus that will need to be incorporated into the Incus profiles; the nvidia-container-toolkit will also need to be installed and the Docker runtime updated.
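
If you want to experiment ahead of that, a rough, untested sketch would be to set the Nvidia keys on the profile (the same ones commented out in docker-init.yaml) and pass a GPU device through to an instance:

incus profile set docker nvidia.runtime=true
incus profile set docker nvidia.driver.capabilities=all
incus config device add docker1 gpu0 gpu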


Is this available now?

Not in stable. It’s only available in the nightlies currently.


Should I run nightlies?

In short, no. However, if you are an expert systems user/administrator, or you want to help test, learn about the system, report bugs, and make things better, then feel free to jump in. Understand that things will get rocky at times during a development cycle.


Do I have to install Dockge?

No. You can comment out, remove, or replace those lines in the config with something you want to use. I added Dockge to the config because I use it and I see it used by other users on the forums. How you accomplish that is up to you.

7 Likes

wow… i thought it would be a quick fix. but seems like it’s turning into a bigger project than i thought. feeling a bit guilty now :sweat_smile:

ty dan, really appreciate the work you are doing. i’m sure many will benefit from this.

right now i’m decommissioning my qnap and moving to a new diy build that uses an amd igpu on the ryzen 7600. no graphics card, just a low-power igpu.

not sure if i still need to add gpu passthrough for that to work with jailmaker docker for jellyfin or not :thinking:

This isn’t fixing EE. EE works fine with the existing jails and I added the workarounds for that on the old thread. It’s not worth fixing because Incus is coming in the next release.

What this is providing is a new solution in TNS 25.04. Jails will be vastly different in 25.04. All I have left now is to add GPU support and I think we’re squared away and ready to go.

I’m not sure I’m ready to commit my bare metal server to 25.04 quite yet… :rofl:

3 Likes

[image: IMG_2110]

(Probably not a good idea)

2 Likes

I would at least wait for alpha. It’s scheduled for December 18, 2024. That should be a nice Christmas present :slight_smile:

1 Like

@kris I’m working on this and was wondering: could an option be added in 25.04 for Incus to choose between stable and LTS? The reason being, it looks like stable supports importing OCI-compatible registries (e.g. Docker Hub, GHCR, etc.), whereas LTS does not.

incus remote add docker https://docker.io --protocol=oci

This would allow for creating native containers without needing to set up Docker inside nested containers. We could then stand up native OCI instances, etc.
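
For example (purely illustrative; the image and instance names are placeholders), once the OCI remote is added you could launch an application container directly from Docker Hub:

incus launch docker:nginx:latest web1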

Let me just provide some context.
TrueNAS gets the Incus package from the official Debian repo.
Debian only packages the LTS version: Debian -- Details of package incus in bookworm-backports
I don’t think this will change because Debian is known for its conservative attitude.

Stable versions are available in 2 ways:

  1. Self-compile
  2. Zabbly repo - packages made by the main Incus developer
1 Like

Right, IX would just need to use Zabbly’s repos over Debian’s and then offer an option to select LTS or stable. I’m asking because I’d rather not modify my appliance to accomplish that, and I definitely don’t want to direct users that way either. I don’t want to add extra overhead by running VMs either. As bare metal as possible. :joy:

Both “could” be installed, with update-alternatives used to switch between them, though that would likely require compiling.

Important reading: Support - Incus documentation

Stable versions are only supported until the next version is released. I don’t think they are a good fit for a production system that needs long-term stability.

Basically the same reason why TrueNAS uses only LTS Linux kernel.

Also, I don’t think it’s a good idea to depend on a third-party developer’s repo for a production system. Debian discourages that.

Either way, having two versions of the same package installed is a bad idea.

For a desktop, stable versions are OK, but for a server or NAS, LTS is the way to go.

3 Likes

Yeah, I get it. Doesn’t hurt to ask though. I don’t know when the OCI protocol will land in LTS either.

Yeah, I get it. I just wouldn’t get my hopes up.
The OCI support is nice, but I still consider it under development.

6.0 LTS should get only bug and security fixes and some minor additions.
For OCI support you would likely need to wait for 7.0 LTS, which should be released after two years.
Since 6.0 was released in April 2024, I would expect 7.0 LTS in April 2026.

Either way, let’s wait and see what @kris says.

It’s in LTS 6.0.2… Bookworm is on 6.0.1… :neutral_face:

* client: Add basic OCI registry client
* shared/cliconfig: Add OCI remote support

OCI support is in 6.0.2? I guess you are lucky :slight_smile: Well, I myself don’t know which features the main dev will backport to LTS and which he won’t.

But 6.0.2 should be available as a Bookworm backport. Maybe it’s just not yet updated in the TrueNAS nightly.

Hopefully! I feel it would be a worthy addition.

It’s nice, I just don’t really understand the benefit of running Docker (OCI) containers under LXC (Incus) instead of the native Docker runtime. But I will gladly learn :slight_smile:

2 Likes

Less overhead. You wouldn’t need to install Docker, and apps could be run from a single cloud config.

Something like:

incus create docker:linuxserver/plex:latest Plex -c environment.TZ=America/Chicago -c environment.VERSION=docker -c environment.net=host -c environment.PUID=1000 -c environment.PGID=1000 -c boot.autostart=true -c boot.autorestart=true --network=LAN

All of that can be dumped into a simple cloud config. It would replace docker compose from what I can tell.

Dockge wouldn’t be of much use anymore though. I would probably experiment with the Incus GUI in a container to see what that would look like as far as managing the environment.

This is theory right now, but it sounds good on paper.

2 Likes

For now I still go by this: About containers and VMs - Incus documentation

So: Incus for VMs and system containers, Docker for application containers.

Great that Incus can do OCI. But for production I would still use Docker because it’s safer and everyone tests their OCI images/apps against the Docker runtime.

But it’s nice to have choice.

Also added flexibility, custom network adapters, easier backup management, etc.

Looks like I wasn’t really correct about what goes into LTS.
According to this, LTS will get all the good stuff from regular releases except things that change default behavior, change the DB schema, or break API compatibility.

1 Like

Could you expand a bit more on these?

1 Like