Linux Jails (sandboxes / containers) with Jailmaker

First, again a BIG thank you to Jip-Hop for creating jailmaker. It made the transition to Scale soooo much more enjoyable than without - I use literally no plugins/apps/whatever but exclusively run additional software in jails. They simply work, and do so superbly. TY.

Question: I created my jails a while back before jlmkr.py dealt with creating zfs datasets per new jail automatically. Can I just create the “jails” dataset and the nested per-jail datasets manually or is there other magic happening when jlmkr.py creates them?
Thanks in advance!

Not much magic, except the jailmaker directory needs to be a dataset.

To transition, rename your jails directory, then create a generic "jails" dataset with per-jail sub-datasets, then rsync the contents into them (a standard copy is not really enough to preserve everything), although mv may work.

Take a snapshot first of your jailmaker dataset :wink:

Cool, thank you - was hoping as much.

Will a piped tar do it also?

… yes it will …
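For anyone following along, the piped-tar copy mentioned above looks roughly like this. The `/tmp/jails_old` and `/tmp/jails_new` paths here are stand-ins for demonstration; on a real system the source would be your renamed jails directory and the destination the mountpoint of the new per-jail dataset.

```shell
# Stand-in directories for the old jail contents and the new dataset mountpoint
mkdir -p /tmp/jails_old/myjail /tmp/jails_new/myjail
echo "data" > /tmp/jails_old/myjail/file.txt

# Piped tar copy: -C changes into the directory so archive paths stay
# relative, -p preserves permissions on extraction
tar -C /tmp/jails_old/myjail -cf - . | tar -C /tmp/jails_new/myjail -xpf -

ls /tmp/jails_new/myjail
```

With real jails you would run this as root so ownership is preserved, and take a snapshot first, as suggested above.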

Sorry to answer my own question but I decided to just try it and within a couple of minutes all was well and happy in new zfs datasets.

Jailmaker rocks.

K.


I added the bind for /dev/fuse but I’m getting modprobe: FATAL: Module fuse not found in directory /lib/modules/6.6.32-production+truenas when running sudo modprobe fuse in my jail. Can anyone verify my config please? Thanks for any insight!

gpu_passthrough_intel=1
gpu_passthrough_nvidia=0
# Turning off seccomp filtering improves performance at the expense of security
seccomp=1

# Network and mount configuration for the container
systemd_nspawn_user_args=--network-bridge=br0
        --resolv-conf=bind-host
        --system-call-filter='add_key keyctl bpf perf_event_open'
        --capability=CAP_IPC_LOCK
        --bind='/dev/fuse:/dev/fuse'
        --bind='/dev/bus/usb:/dev/bus/usb'
        --bind='/mnt/nvme/docker:/mnt/docker'

# Script to run on the HOST before starting the jail
pre_start_hook=#!/usr/bin/bash
        set -euo pipefail
        echo 'PRE_START_HOOK'
        echo 1 > /proc/sys/net/ipv4/ip_forward
        modprobe br_netfilter
        echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
        echo 1 > /proc/sys/net/bridge/bridge-nf-call-ip6tables

# Only used while creating the jail
distro=ubuntu
release=noble

# Install docker inside the jail and other initial setup including environment settings
initial_setup=#!/usr/bin/bash
        set -euo pipefail

        # Perform system updates and install essential packages
        apt-get update
        apt-get -y install ca-certificates curl nano screen wget openssh-server lsof rsync dnsutils python3 pciutils usbutils gnupg

        # Docker installation
        install -m 0755 -d /etc/apt/keyrings
        curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
        chmod a+r /etc/apt/keyrings/docker.asc

        echo \
        "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
        $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
        tee /etc/apt/sources.list.d/docker.list > /dev/null
        apt-get update
        apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

        # Remove default Ubuntu user to free up UID 1000
        userdel -r ubuntu

        # Define the user and home directory
        USER_NAME="username"
        HOME_DIR="/home/${USER_NAME}"

        # User setup
        useradd -u 1000 -m -s /bin/bash ${USER_NAME}
        if getent group docker > /dev/null; then
        usermod -aG docker ${USER_NAME}
        fi
        usermod -aG sudo ${USER_NAME}

        # Sudoers setup
        echo "${USER_NAME} ALL=(ALL) NOPASSWD: ALL" > /etc/sudoers.d/${USER_NAME}
        chmod 0440 /etc/sudoers.d/${USER_NAME}

        # Docker shortcut
        ln -s /mnt/docker ${HOME_DIR}/docker

# systemd-nspawn specific default arguments
systemd_run_default_args=--property=KillMode=mixed
        --property=Type=notify
        --property=RestartForceExitStatus=133
        --property=SuccessExitStatus=133
        --property=Delegate=yes
        --property=TasksMax=infinity
        --collect
        --setenv=SYSTEMD_NSPAWN_LOCK=0

systemd_nspawn_default_args=--keep-unit
        --quiet
        --boot
        --bind-ro=/sys/module
        --inaccessible=/sys/module/apparmor
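One thing worth noting about the modprobe error (general Linux behavior, not something confirmed in this thread): a jail shares the host's kernel and has no module tree of its own, so kernel modules cannot be loaded from inside it. `modprobe fuse` would need to run on the host, e.g. in the pre_start_hook, so that /dev/fuse exists before the bind mount. A hedged sketch of what that could look like (the `modprobe fuse` line is my addition to the hook shown above):

```
# Script to run on the HOST before starting the jail
pre_start_hook=#!/usr/bin/bash
        set -euo pipefail
        echo 'PRE_START_HOOK'
        # Load fuse on the host so /dev/fuse exists before the jail binds it
        modprobe fuse
        echo 1 > /proc/sys/net/ipv4/ip_forward
        modprobe br_netfilter
```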

Anyone have issues with bind mounts not working on 24.04.2.2? I created a new jail because I wanted to do some testing, and none of the binds are working. Everything else in the config works, though. Still trying to debug. I have the latest version of jlmkr installed and the latest config. Below is the config I'm using; it has been slightly modified from the repo version.

startup=1
gpu_passthrough_intel=0
gpu_passthrough_nvidia=1
# Turning off seccomp filtering improves performance at the expense of security
seccomp=1

# Use macvlan networking to provide an isolated network namespace,
# so docker can manage firewall rules
# Alternatively use --network-macvlan=eno1 instead of --network-bridge
# Ensure to change eno1/br1 to the interface name you want to use
# You may want to add additional options here, e.g. bind mounts
#systemd_nspawn_user_args=--network-macvlan=vlan3 #VLAN Support
systemd_nspawn_user_args=--network-bridge=br0 #management network
        --system-call-filter='add_key keyctl bpf'
        --bind='/mnt/tank/containers/:/mnt/containers'
        --bind='/mnt/tank/data/apps/:/mnt/data'
        --bind='/mnt/tank/data/db/:/mnt/db'
        --bind='/mnt/tank/media/:/mnt/media'
        --bind='/mnt/tank/data/stacks:/opt/stacks'

# Script to run on the HOST before starting the jail
# Load kernel module and config kernel settings required for docker
pre_start_hook=#!/usr/bin/bash
        set -euo pipefail
        echo 'PRE_START_HOOK'
        echo 1 > /proc/sys/net/ipv4/ip_forward
        modprobe br_netfilter
        echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
        echo 1 > /proc/sys/net/bridge/bridge-nf-call-ip6tables

# Only used while creating the jail
distro=debian
release=bookworm

# Install docker inside the jail:
# https://docs.docker.com/engine/install/debian/#install-using-the-repository
# Will also install the NVIDIA Container Toolkit if gpu_passthrough_nvidia=1 during initial setup
# https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html
initial_setup=#!/usr/bin/bash
        set -euo pipefail

        apt-get update && apt-get -y install ca-certificates curl host
        install -m 0755 -d /etc/apt/keyrings
        curl -fsSL https://download.docker.com/linux/debian/gpg -o /etc/apt/keyrings/docker.asc
        chmod a+r /etc/apt/keyrings/docker.asc

        echo \
        "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/debian \
        $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
        tee /etc/apt/sources.list.d/docker.list > /dev/null

        apt-get update
        apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

        # The /usr/bin/nvidia-smi will be present when gpu_passthrough_nvidia=1
        if [ -f /usr/bin/nvidia-smi ]; then
        curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey -o /etc/apt/keyrings/nvidia.asc
        chmod a+r /etc/apt/keyrings/nvidia.asc
        curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
        sed 's#deb https://#deb [signed-by=/etc/apt/keyrings/nvidia.asc] https://#g' | \
        tee /etc/apt/sources.list.d/nvidia-container-toolkit.list

        apt-get update
        apt-get install -y nvidia-container-toolkit

        nvidia-ctk runtime configure --runtime=docker
        systemctl restart docker
        fi

        docker info

# You generally will not need to change the options below
systemd_run_default_args=--property=KillMode=mixed
        --property=Type=notify
        --property=RestartForceExitStatus=133
        --property=SuccessExitStatus=133
        --property=Delegate=yes
        --property=TasksMax=infinity
        --collect
        --setenv=SYSTEMD_NSPAWN_LOCK=0

systemd_nspawn_default_args=--keep-unit
        --quiet
        --boot
        --bind-ro=/sys/module
        --inaccessible=/sys/module/apparmor

This is the startup logs on the test jail:

Sep 26 13:18:24 jupiter systemd[1]: Started jlmkr-dns.service - My nspawn jail dns [created with jailmaker].
Sep 26 13:18:24 jupiter systemd-nspawn[3548850]: systemd 252.30-1~deb12u2 running in system mode (+PAM +AUDIT +SELINUX +APPARMOR +IMA +SMAC>
Sep 26 13:18:24 jupiter systemd-nspawn[3548850]: Detected virtualization systemd-nspawn.
Sep 26 13:18:24 jupiter systemd-nspawn[3548850]: Detected architecture x86-64.
Sep 26 13:18:24 jupiter systemd-nspawn[3548850]: 
Sep 26 13:18:24 jupiter systemd-nspawn[3548850]: Welcome to Debian GNU/Linux 12 (bookworm)!
Sep 26 13:18:24 jupiter systemd-nspawn[3548850]: 
Sep 26 13:18:25 jupiter systemd-nspawn[3548850]: Queued start job for default target graphical.target.
Sep 26 13:18:25 jupiter systemd-nspawn[3548850]: [  OK  ] Created slice system-getty.slice - Slice /system/getty.
Sep 26 13:18:25 jupiter systemd-nspawn[3548850]: [  OK  ] Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 26 13:18:25 jupiter systemd-nspawn[3548850]: [  OK  ] Created slice user.slice - User and Session Slice.
Sep 26 13:18:25 jupiter systemd-nspawn[3548850]: [  OK  ] Started systemd-ask-password-consol…quests to Console Directory Watch.
Sep 26 13:18:25 jupiter systemd-nspawn[3548850]: [  OK  ] Started systemd-ask-password-wall.p… Requests to Wall Directory Watch.
Sep 26 13:18:25 jupiter systemd-nspawn[3548850]: [  OK  ] Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 26 13:18:25 jupiter systemd-nspawn[3548850]: [  OK  ] Reached target integritysetup.targe…Local Integrity Protected Volumes.
Sep 26 13:18:25 jupiter systemd-nspawn[3548850]: [  OK  ] Reached target paths.target - Path Units.
Sep 26 13:18:25 jupiter systemd-nspawn[3548850]: [  OK  ] Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 26 13:18:25 jupiter systemd-nspawn[3548850]: [  OK  ] Reached target remote-fs.target - Remote File Systems.
Sep 26 13:18:25 jupiter systemd-nspawn[3548850]: [  OK  ] Reached target remote-veritysetup.t…- Remote Verity Protected Volumes.
Sep 26 13:18:25 jupiter systemd-nspawn[3548850]: [  OK  ] Reached target slices.target - Slice Units.
Sep 26 13:18:25 jupiter systemd-nspawn[3548850]: [  OK  ] Reached target swap.target - Swaps.
Sep 26 13:18:25 jupiter systemd-nspawn[3548850]: [  OK  ] Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 26 13:18:25 jupiter systemd-nspawn[3548850]: [  OK  ] Listening on systemd-initctl.socket… initctl Compatibility Named Pipe.
Sep 26 13:18:25 jupiter systemd-nspawn[3548850]: [  OK  ] Listening on systemd-journald-dev-l…ocket - Journal Socket (/dev/log).
Sep 26 13:18:25 jupiter systemd-nspawn[3548850]: [  OK  ] Listening on systemd-journald.socket - Journal Socket.
Sep 26 13:18:25 jupiter systemd-nspawn[3548850]: [  OK  ] Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 26 13:18:25 jupiter systemd-nspawn[3548850]:          Mounting dev-hugepages.mount - Huge Pages File System...
Sep 26 13:18:25 jupiter systemd-nspawn[3548850]:          Starting systemd-journald.service - Journal Service...
Sep 26 13:18:25 jupiter systemd-nspawn[3548850]:          Starting systemd-network-generator.… units from Kernel command line...
Sep 26 13:18:25 jupiter systemd-nspawn[3548850]:          Starting systemd-remount-fs.service…nt Root and Kernel File Systems...
Sep 26 13:18:25 jupiter systemd-nspawn[3548850]:          Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 26 13:18:25 jupiter systemd-nspawn[3548850]: [  OK  ] Mounted dev-hugepages.mount - Huge Pages File System.
Sep 26 13:18:25 jupiter systemd-nspawn[3548850]: [  OK  ] Finished systemd-network-generator.…rk units from Kernel command line.
Sep 26 13:18:25 jupiter systemd-nspawn[3548850]: [  OK  ] Reached target network-pre.target - Preparation for Network.
Sep 26 13:18:25 jupiter systemd-nspawn[3548850]: [  OK  ] Finished systemd-remount-fs.service…ount Root and Kernel File Systems.
Sep 26 13:18:25 jupiter systemd-nspawn[3548850]:          Starting systemd-sysusers.service - Create System Users...
Sep 26 13:18:25 jupiter systemd-nspawn[3548850]: [  OK  ] Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 26 13:18:25 jupiter systemd-nspawn[3548850]: [  OK  ] Finished systemd-sysusers.service - Create System Users.
Sep 26 13:18:25 jupiter systemd-nspawn[3548850]:          Starting systemd-tmpfiles-setup-dev…ate Static Device Nodes in /dev...
Sep 26 13:18:25 jupiter systemd-nspawn[3548850]: [  OK  ] Started systemd-journald.service - Journal Service.
Sep 26 13:18:25 jupiter systemd-nspawn[3548850]:          Starting systemd-journal-flush.serv…h Journal to Persistent Storage...
Sep 26 13:18:25 jupiter systemd-nspawn[3548850]: [  OK  ] Finished systemd-tmpfiles-setup-dev…reate Static Device Nodes in /dev.
Sep 26 13:18:25 jupiter systemd-nspawn[3548850]: [  OK  ] Reached target local-fs-pre.target …reparation for Local File Systems.
Sep 26 13:18:25 jupiter systemd-nspawn[3548850]: [  OK  ] Reached target local-fs.target - Local File Systems.
Sep 26 13:18:25 jupiter systemd-nspawn[3548850]:          Starting systemd-networkd.service - Network Configuration...
Sep 26 13:18:25 jupiter systemd-nspawn[3548850]: [  OK  ] Started systemd-networkd.service - Network Configuration.
Sep 26 13:18:25 jupiter systemd-nspawn[3548850]:          Starting systemd-networkd-wait-onli…it for Network to be Configured...
Sep 26 13:18:25 jupiter systemd-nspawn[3548850]: [  OK  ] Finished systemd-journal-flush.serv…ush Journal to Persistent Storage.
Sep 26 13:18:25 jupiter systemd-nspawn[3548850]:          Starting systemd-tmpfiles-setup.ser…te System Files and Directories...
Sep 26 13:18:25 jupiter systemd-nspawn[3548850]: [  OK  ] Finished systemd-tmpfiles-setup.ser…eate System Files and Directories.
Sep 26 13:18:25 jupiter systemd-nspawn[3548850]:          Starting systemd-resolved.service - Network Name Resolution...
Sep 26 13:18:25 jupiter systemd-nspawn[3548850]:          Starting systemd-update-utmp.servic…rd System Boot/Shutdown in UTMP...
Sep 26 13:18:25 jupiter systemd-nspawn[3548850]: [  OK  ] Finished systemd-networkd-wait-onli…Wait for Network to be Configured.
Sep 26 13:18:25 jupiter systemd-nspawn[3548850]: [  OK  ] Finished systemd-update-utmp.servic…cord System Boot/Shutdown in UTMP.
Sep 26 13:18:25 jupiter systemd-nspawn[3548850]: [  OK  ] Started systemd-resolved.service - Network Name Resolution.
Sep 26 13:18:25 jupiter systemd-nspawn[3548850]: [  OK  ] Reached target network.target - Network.
Sep 26 13:18:25 jupiter systemd-nspawn[3548850]: [  OK  ] Reached target network-online.target - Network is Online.
Sep 26 13:18:25 jupiter systemd-nspawn[3548850]: [  OK  ] Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 26 13:18:25 jupiter systemd-nspawn[3548850]: [  OK  ] Reached target sysinit.target - System Initialization.
Sep 26 13:18:25 jupiter systemd-nspawn[3548850]: [  OK  ] Started apt-daily.timer - Daily apt download activities.
Sep 26 13:18:25 jupiter systemd-nspawn[3548850]: [  OK  ] Started apt-daily-upgrade.timer - D… apt upgrade and clean activities.
Sep 26 13:18:25 jupiter systemd-nspawn[3548850]: [  OK  ] Started dpkg-db-backup.timer - Daily dpkg database backup timer.
Sep 26 13:18:25 jupiter systemd-nspawn[3548850]: [  OK  ] Started e2scrub_all.timer - Periodi…etadata Check for All Filesystems.
Sep 26 13:18:25 jupiter systemd-nspawn[3548850]: [  OK  ] Started systemd-tmpfiles-clean.time… Cleanup of Temporary Directories.
Sep 26 13:18:25 jupiter systemd-nspawn[3548850]: [  OK  ] Reached target timers.target - Timer Units.
Sep 26 13:18:25 jupiter systemd-nspawn[3548850]: [  OK  ] Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 26 13:18:25 jupiter systemd-nspawn[3548850]:          Starting docker.socket - Docker Socket for the API...
Sep 26 13:18:25 jupiter systemd-nspawn[3548850]: [  OK  ] Listening on docker.socket - Docker Socket for the API.
Sep 26 13:18:25 jupiter systemd-nspawn[3548850]: [  OK  ] Reached target sockets.target - Socket Units.
Sep 26 13:18:25 jupiter systemd-nspawn[3548850]: [  OK  ] Reached target basic.target - Basic System.
Sep 26 13:18:25 jupiter systemd-nspawn[3548850]:          Starting containerd.service - containerd container runtime...
Sep 26 13:18:25 jupiter systemd-nspawn[3548850]:          Starting dbus.service - D-Bus System Message Bus...
Sep 26 13:18:25 jupiter systemd-nspawn[3548850]:          Starting systemd-logind.service - User Login Management...
Sep 26 13:18:25 jupiter systemd-nspawn[3548850]:          Starting systemd-user-sessions.service - Permit User Sessions...
Sep 26 13:18:25 jupiter systemd-nspawn[3548850]: [  OK  ] Started dbus.service - D-Bus System Message Bus.
Sep 26 13:18:25 jupiter systemd-nspawn[3548850]: [  OK  ] Finished systemd-user-sessions.service - Permit User Sessions.
Sep 26 13:18:25 jupiter systemd-nspawn[3548850]: [  OK  ] Started console-getty.service - Console Getty.
Sep 26 13:18:25 jupiter systemd-nspawn[3548850]: [  OK  ] Reached target getty.target - Login Prompts.
Sep 26 13:18:25 jupiter systemd-nspawn[3548850]: [  OK  ] Started systemd-logind.service - User Login Management.
Sep 26 13:18:25 jupiter systemd-nspawn[3548850]: [  OK  ] Started containerd.service - containerd container runtime.
Sep 26 13:18:25 jupiter systemd-nspawn[3548850]:          Starting docker.service - Docker Application Container Engine...
Sep 26 13:18:31 jupiter systemd-nspawn[3548850]: 
Sep 26 13:18:31 jupiter systemd-nspawn[3548850]: Debian GNU/Linux 12 dns pts/0
Sep 26 13:18:31 jupiter systemd-nspawn[3548850]:

I will keep playing with this, but I think I figured it out. I'm building out a new jail to confirm. I believe it's the inline comment after the bridge network option; it was being interpreted as separate arguments. Below is from the logs when jlmkr start was run.

--network-bridge=br0 '#management' network

Yup… :expressionless:
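So the fix the poster describes is simply keeping comments on their own lines rather than inline after a value, roughly like this (same options as the config above, comment moved):

```
# Management network. Keep comments on their own line: an inline
# comment after the value is passed through as extra nspawn arguments.
systemd_nspawn_user_args=--network-bridge=br0
        --system-call-filter='add_key keyctl bpf'
```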


Just updated to the latest Dragonfish version (Dragonfish-24.04.2.2).

The jlmkr.py commands do not work at all after this. Maybe I missed this in an earlier post, but with every update do you have to delete / reinstall jailmaker from scratch?

For example, what I would previously type:

"/jlmkr.py", "jlmkr.py", "jlmkr", or any combination that worked before now keeps getting "zsh: command not found" errors. I think the latest update broke my shell, although I never changed it from the default. sudo, su, etc. do nothing to solve this.

The strange thing is that the docker start command "/jlmkr.py" in the startup INIT config file works on boot, so this has something to do with the shell within TrueNAS and this jailmaker/docker install.

Any idea what is wrong?

SOLVED: Reboot fixed this. No idea what happened. Please ignore and disregard.

I'm not sure, but is it possible to run HAOS (Home Assistant OS) in a jail?

Anyone having issues with transcoding not working?

Not sure if it's a Plex issue or not, but I'm only getting software transcoding now. Transcoding in Immich doesn't seem to be working either.

compose:

services:
  plex:
    image: plexinc/pms-docker
    container_name: plex
    restart: unless-stopped
    network_mode: host
    volumes:
      - ${CONFIG_PATH}:/config
      - ${TRANSCODE_PATH}:/transcode
      - ${MEDIA_PATH}:/data
    runtime: nvidia
    deploy:
      resources:
        reservations:
          devices:
            - capabilities:
                - gpu
    env_file:
      - .env

.env

# App
TZ=US/Eastern
PLEX_CLAIM=********nice-try************
CONFIG_PATH=/mnt/data/plex/configs
TRANSCODE_PATH=/mnt/data/plex/transcode
MEDIA_PATH=/mnt/media/plex

# GPU
NVIDIA_VISIBLE_DEVICES=all
NVIDIA_DRIVER_CAPABILITIES=compute,video,utility

Resolved by using the newer format for deploying gpu resources. Once I did this, it fired right up. Posting the change in case others run into this issue.

    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities:
                - gpu

Bit the Electric Eel upgrade bullet today.

All went well, but my docker jail wouldn't start up without putting the following bind mount into my docker jail's config file, using jlmkr edit docker:

--bind-ro=/usr/lib/x86_64-linux-gnu/

At this stage, I’m keeping everything in the docker jail and using standalone dockge as we’ve all done prior to EE. It’s simple & working - what are the benefits of migrating to native EE apps and Dockge?

(and thanks in advance for your EE jail migration video @Stux, in case I do wind up migrating).


Major benefits: you avoid the double bind mount (into the jail and then into the container), since the container can bind directly to the pool's data, and you no longer have to maintain a separate Linux distro.

The biggest con is that you can't just host the docker daemon on a separate IP, like you can with a jail. You can use macvlan networking to get a separate IP per container, but not for the whole daemon.
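For reference, per-container macvlan in compose looks roughly like this. The interface name (eno1), subnet, gateway, and address are placeholders to adapt to your network, and the whoami image is just an example service:

```
services:
  whoami:
    image: traefik/whoami
    networks:
      macnet:
        ipv4_address: 192.168.1.50   # container gets its own LAN IP

networks:
  macnet:
    driver: macvlan
    driver_opts:
      parent: eno1                   # host NIC the macvlan attaches to
    ipam:
      config:
        - subnet: 192.168.1.0/24
          gateway: 192.168.1.1
```

Note that with macvlan the host itself generally cannot reach the container's IP directly; traffic has to come from another machine on the LAN.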


Another benefit of going with native EE is that the jailmaker project has been abandoned by its developer now that TrueNAS SCALE has switched to Docker.

Righto - all migrated into native EE dockge, as per your vid, thanks stux.

I've left the docker jail in place for now, and indeed needed it. When I "stopped and inactivated" the old stacks in the new dockge, with the jail's dockge as the imported agent, it wiped out my stacks from the new, native dockge as well, so I had to start fresh. That's what backups are for!

Had a bit of a faff getting Nextcloud settled in again (permissions, mount points, config.php and .htaccess… you know Nextcloud), but managed that eventually. I've changed my arr mount points and the IP addresses in HAProxy (to the parent IP address of my TrueNAS box now). Yes, I appreciate the direct host mount, rather than going through the jail config: one less thing to mess up!

Anyway, she’s all working 100% again, now in native. Woohoo!


Are you running an Nvidia GPU, perchance, with gpu_passthrough_nvidia=1 set? There is an nvidia directory down that path.

See dragonfish output:

mount|grep "x86_64-linux-gnu"
boot-pool/ROOT/24.04.2.3/usr on /usr/lib/x86_64-linux-gnu/nvidia/current type zfs (ro,relatime,xattr,noacl,casesensitive)

Make sure you are on the latest version of the config/script from Jip-Hop's repo. It will mount that path as part of startup if you're on the latest version.

I'm late to the party… just realizing one can run docker commands in the shell. Is there any risk to the overall system (security/stability/etc.) in using the command line to set up your docker stacks? This goes against the GUI way in EE, but the command line is what I learned initially and what I prefer. I'm considering leaving the sandbox, but I don't want to break the overall system.

What has to be done to remove jailmaker from TrueNAS?
Stop and remove the jails, remove the init script, and delete the jailmaker and docker datasets?

Pretty much. You don't necessarily need to remove the docker dataset.

I go over it in my video on migrating from Sandboxes to Docker in Electric Eel.
