INTRODUCTION
For those who have used or currently use Jailmaker, we know that the project is EOL and has no maintainer (thank you for all your effort @Jip-Hop!). It provided an easy solution for creating jails using systemd-nspawn, which was useful for building highly customizable environments that are not as heavy as full virtual machines. iX is moving in a different direction in TrueNAS SCALE 25.04 with the inclusion of Incus, so Jailmaker isn’t really needed in its current form.
This thread is getting a head start on the new release to have something in place for easy, templated, yet customizable jail creation with flexibility that possibly won’t be available in the SCALE UI.
How far I take this remains to be seen, as Fangtooth is in active development. Currently, this is just creating simple profiles to launch instances with Incus.
IMPORTANT: This guide is intended for TrueNAS SCALE 25.04+. If you want to try it on your own distro, feel free to, but don’t reply here for support questions. Take those to the Incus discussion boards.
NOTE: If you’re running this in a VM, you may run into nesting issues. You need to configure your VM for nested virtualization on the host and enable the proper CPU virtualization extensions in your BIOS; then it “should” work fine…
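A quick sanity check from the VM’s shell (a generic sketch, not TrueNAS-specific) is to look for the CPU virtualization flags:

# Count CPU flags for Intel VT-x (vmx) or AMD-V (svm); a result of 0 means
# virtualization extensions were not passed through to the VM
grep -Ec '(vmx|svm)' /proc/cpuinfo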
Config options added to the docker-init.yaml will load the needed kernel modules, sysctl values, and Incus instance configs when instances attached to the docker profile are powered on.
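Once an instance attached to the profile is running, you can verify those settings took effect. This assumes the docker1 instance name used later in this guide:

# The br_netfilter module should be loaded on the TrueNAS host
lsmod | grep br_netfilter
# The sysctl values written by cloud-init should be active inside the instance
incus exec docker1 -- sysctl net.ipv4.conf.all.forwarding net.bridge.bridge-nf-call-iptables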
NOTE: This guide assumes you already have your TrueNAS SCALE networking, bridge(s), and ZFS pools configured, including adding a dataset called jails. I’m assuming you are using br0 for the bridge and pool for your ZFS pool name. If not, go ahead and adjust the appropriate configs below with your specific interface(s) and pool(s). However, I will walk through creating a generic jails dataset to make things easier.
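If you still need that dataset, you can add it from the TrueNAS UI or, as a minimal CLI sketch assuming your pool is literally named pool:

# Create the dataset Incus will use for its storage pool
zfs create pool/jails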
DISCLAIMER
I am not responsible for your actions. If you attempt these steps and mess up your system, that’s on you. This is entirely in a PoC (Proof of Concept) state, purely for testing and learning about how interaction with containers/VMs will work moving forward in TrueNAS SCALE 25.04+.
Getting started
NOTE: If you don’t su to the root user, you’ll need to prepend sudo to the commands listed below.
- Create a new ZFS dataset and storage pool in Incus called jails from the TrueNAS SCALE 25.04+ shell.
incus storage create jails zfs source=pool/jails
You can verify it was created with:
zfs list|grep ^pool/jails
pool/jails 1.16G 46.8G 96K legacy
You can verify the new pool is available by running:
incus storage ls
+---------+--------+-------------+---------+---------+
| NAME | DRIVER | DESCRIPTION | USED BY | STATE |
+---------+--------+-------------+---------+---------+
| default | zfs | | 4 | CREATED |
+---------+--------+-------------+---------+---------+
| jails | zfs | | 3 | CREATED |
+---------+--------+-------------+---------+---------+
- Download the following YAML config to your TrueNAS 25.04+ host and modify as needed.
a. Configure the appropriate mount points on your TrueNAS host where you will be hosting your app data.
b. Set your timezone.
c. Feel free to modify anything else you might need, like adding additional packages you would like in your base image, additional datasets passed through, etc.
NOTE: When you modify this config, it will apply the same configuration to all instances created from the config/profile moving forward. Don’t add anything to the config/profile that you don’t want on EVERY instance created from it. If you need separate settings, create multiple configs/profiles. You can attach and detach profiles to/from instances (see the example below) or easily recreate instances quickly.
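For reference, attaching and detaching a profile is a one-liner each; docker and docker1 are the profile and instance names used later in this guide:

# Attach the docker profile to an existing instance
incus profile add docker1 docker
# Detach it again
incus profile remove docker1 docker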
Profile docker-init.yaml:
description: Docker Profile
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: br0
    type: nic
  root:
    path: /
    pool: jails
    type: disk
  data:
    path: /mnt/data
    source: /mnt/pool/data/apps
    shift: "true"
    type: disk
  stacks:
    path: /opt/stacks
    source: /mnt/pool/data/stacks
    shift: "true"
    type: disk
config:
  # Start instances on boot
  boot.autostart: "true"
  # Load needed kernel modules
  linux.kernel_modules: br_netfilter
  # Enable nesting
  security.nesting: "true"
  # Nvidia configs
  #nvidia.driver.capabilities: "all"
  #nvidia.runtime: "true"
  cloud-init.user-data: |
    #cloud-config
    # Enable docker sysctl values
    write_files:
      - path: /etc/sysctl.d/20-docker.conf
        content: |
          net.ipv4.conf.all.forwarding=1
          net.bridge.bridge-nf-call-iptables=1
          net.bridge.bridge-nf-call-ip6tables=1
    # Set timezone
    timezone: US/Eastern
    # apt update and apt upgrade
    package_update: true
    package_upgrade: true
    # Install apt repos and packages needed for docker
    apt:
      preserve_sources_list: true
      sources:
        docker.list:
          source: deb [arch=amd64] https://download.docker.com/linux/debian $RELEASE stable
          keyid: 9DC858229FC7DD38854AE2D88D81803C0EBFCD88
          filename: docker.list
    packages:
      - apt-transport-https
      - ca-certificates
      - curl
      - gpg
      - docker-ce
      - docker-ce-cli
      - containerd.io
      - docker-buildx-plugin
      - docker-compose-plugin
    # create the docker group
    groups:
      - docker
    # Add default auto created user to docker group
    system_info:
      default_user:
        groups: [docker]
    # Install dockge
    runcmd:
      - [ mkdir, -p, /opt/dockge ]
      - [ cd, /opt/dockge ]
      - [ curl, https://raw.githubusercontent.com/louislam/dockge/master/compose.yaml, --output, compose.yaml ]
      - [ docker, compose, up, -d ]
- [Optional] You can create instances by simply redirecting the config into the incus launch images:… command below (see the variants in the next step). Otherwise, import the profile into Incus; any future docker instances can use this profile to create new instances moving forward.
incus profile create docker < docker-init.yaml
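If you change docker-init.yaml later, you can push the updated config into the existing profile rather than recreating it (this only affects settings applied the next time instances start, and new instances going forward):

# Overwrite the existing profile with the updated YAML
incus profile edit docker < docker-init.yaml
# Review the result
incus profile show docker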
- Build a docker instance. You can build as many as you like. docker1 is the instance name, which will show up when you list Incus instances by running incus ls.
With profile:
incus launch images:debian/bookworm/cloud -p docker docker1
With the docker-init.yaml config file:
incus launch images:debian/bookworm/cloud docker1 < docker-init.yaml
- Access the new Incus instance shell.
incus exec docker1 -- bash
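The cloud-init run (package installs plus the Dockge runcmd) can take a few minutes on first boot. Before configuring anything inside the instance, it’s worth waiting for it to finish:

# Inside the instance: block until cloud-init completes and print its status
cloud-init status --wait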
- Configure static IP and DNS resolver. Once the instance is built, you should configure a static IP address and point to your DNS server. Edit the following files with your favorite editor.
NOTE: You can also use a static DHCP lease and leave the instance set to DHCP. Refer to your DHCP server’s manual on how to do this. There are advanced managed networking features in Incus that can be leveraged as well, but those are outside the scope of this guide.
/etc/systemd/network/10-cloud-init-eth0.network
Output below. Modify your Address and Gateway.
[Match]
Name=eth0
[Network]
#DHCP=ipv4
DHCP=false
Address=192.168.0.30/24
Gateway=192.168.0.1
LinkLocalAddressing=no
LLDP=yes
EmitLLDP=customer-bridge
/etc/systemd/resolved.conf
Output below. Modify DNS to point to your DNS server and Domains to your search domain if needed. If you don’t need the search domains, just comment out the Domains line.
# This file is part of systemd.
#
# systemd is free software; you can redistribute it and/or modify it under the
# terms of the GNU Lesser General Public License as published by the Free
# Software Foundation; either version 2.1 of the License, or (at your option)
# any later version.
#
# Entries in this file show the compile time defaults. Local configuration
# should be created by either modifying this file, or by creating "drop-ins" in
# the resolved.conf.d/ subdirectory. The latter is generally recommended.
# Defaults can be restored by simply deleting this file and all drop-ins.
#
# Use 'systemd-analyze cat-config systemd/resolved.conf' to display the full config.
#
# See resolved.conf(5) for details.
[Resolve]
# Some examples of DNS servers which may be used for DNS= and FallbackDNS=:
# Cloudflare: 1.1.1.1#cloudflare-dns.com 1.0.0.1#cloudflare-dns.com 2606:4700:4700::1111#cloudflare-dns.com 2606:4700:4700::1001#cloudflare-dns.com
# Google: 8.8.8.8#dns.google 8.8.4.4#dns.google 2001:4860:4860::8888#dns.google 2001:4860:4860::8844#dns.google
# Quad9: 9.9.9.9#dns.quad9.net 149.112.112.112#dns.quad9.net 2620:fe::fe#dns.quad9.net 2620:fe::9#dns.quad9.net
DNS=192.168.0.8
#FallbackDNS=
Domains=lan.domain.co
#DNSSEC=no
#DNSOverTLS=no
#MulticastDNS=yes
#LLMNR=yes
#Cache=yes
#CacheFromLocalhost=no
#DNSStubListener=yes
#DNSStubListenerExtra=
#ReadEtcHosts=yes
#ResolveUnicastSingleLabel=no
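Restarting the container (next step) is the simplest way to apply these changes. If you would rather not restart, reloading the two services inside the instance should also pick up the new settings:

# Apply the static IP and resolver changes in place
systemctl restart systemd-networkd systemd-resolved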
- Restart container
exit
incus restart docker1
- Verify your new instance looks good. You should see output similar to below when everything is up and running. Notice that you will have an eth0 interface, which is the instance’s bridge to the TrueNAS host’s br0 interface. You will also have docker0 for Docker’s interface to eth0. Finally, you should have a br-* interface that Dockge is using.
incus ls
+-------------+---------+------------------------------+------+-----------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+-------------+---------+------------------------------+------+-----------+-----------+
| docker1 | RUNNING | 192.168.0.30 (eth0) | | CONTAINER | 0 |
| | | 172.18.0.1 (br-7e7ee82b01bf) | | | |
| | | 172.17.0.1 (docker0) | | | |
+-------------+---------+------------------------------+------+-----------+-----------+
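You can also confirm that Docker itself is up and that the Dockge stack from the profile’s runcmd is running:

# Dockge should appear here if the cloud-init runcmd completed successfully
incus exec docker1 -- docker ps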
Known Issues
- Custom profiles, pools, networks, and storage currently DO NOT persist between TrueNAS SCALE upgrades. Pretty much everything done in this thread does not persist. See here for how to recover after upgrades.
TODO
1. Integrate custom ZFS datasets/pools into the jail creation process.
2. Add GPU attachment to instances.
3. Possibly create script to manage custom instance profiles in Incus, if applicable.
Changelog
11.08.2024:
- Add ZFS jails dataset to profile and instructions on adding the dataset to Incus.
- Add nvidia instance configs to the docker-init.yaml profile.
FAQ
Does this work?
Yes! Standard containers work, you can install Docker, and Docker images can be pulled and run. Mounts write data through to your host for the datasets you assign to your containers, which are configured under devices: as type: disk with shift: true enabled.
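A quick way to convince yourself the shifted mounts are writable end-to-end, using the example paths from the profile above:

# Write a test file through the instance's data mount...
incus exec docker1 -- touch /mnt/data/write-test
# ...and confirm it appears on the TrueNAS host
ls -l /mnt/pool/data/apps/write-test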
Do GPU’s work?
Probably. I’m working on this in a virtual environment for the time being and will pick it up as things get closer to stable. There are Nvidia container and GPU configuration options in Incus that will need to be incorporated into the Incus profiles; the nvidia-container-toolkit will also need to be installed and the Docker runtime updated.
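As a rough, untested sketch of where that is headed, Incus can pass a GPU through with a gpu device (the device name gpu0 here is arbitrary), on top of the nvidia.runtime options already commented out in the profile:

# Attach the host GPU(s) to the example instance
incus config device add docker1 gpu0 gpu
# Or attach at the profile level so every docker instance gets it
incus profile device add docker gpu0 gpu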
Is this available now?
Not in stable. It’s only available in the nightlies currently.
Should I run nightlies?
In short, no. However, if you are an expert systems user/administrator, or you want to help test, learn about the system, report bugs, and make things better, then feel free to jump in. Understand that things will get rocky at times during a development cycle.
Do I have to install Dockge?
No. You can comment out, remove, or replace those lines in the config with something you want to use instead. I added Dockge to the config because I use it and I see it used by other users in the forums. How you accomplish that is up to you.