Linux Jails (sandboxes / containers) with Jailmaker

Did NVIDIA graphics stop working when moving to Electric Eel?

The previously working NVIDIA graphics entries in my jailmaker config no longer worked.

So I had to remove the NVIDIA entries from the jailmaker config in order to get jailmaker to work on Eel.

Next, I had to remove the graphics entries from my docker compose files for my docker containers to be able to deploy successfully.

So my question is: has anybody else experienced this, and is there a fix? Not sure, because jailmaker is no longer actively maintained, yes? Any solution?

If I must, I'm willing to just not use the graphics card if it's not possible, but it would be a shame.
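For reference, the entries in question (quoted in full later in this thread) look roughly like this; a sketch of what had to be disabled or commented out:

# in the jail's jailmaker config: GPU passthrough turned off to get the jail starting
gpu_passthrough_nvidia=0
# and the Dragonfish-era driver bind commented out:
# --bind-ro=/usr/lib/x86_64-linux-gnu/nvidia/current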

Also, I had some networking issues reaching my docker containers, regardless of whether I used the domain URL or the 192.168… address. It was wonky: it worked sometimes, but most often it didn't.

How I may have fixed that was to delete the jail and create it from scratch, then install Dockge using my existing docker compose, then use Dockge to deploy the rest. Seems OK for now… will continue monitoring.

So if anyone is wondering whether you can use jailmaker on Eel coming from Dragonfish: you can.

I'm just not sure about the NVIDIA graphics card setup, though. For me it broke.

NVIDIA drivers are now installed on demand… or when triggered.

Is this the correct script?

When I changed gpu_passthrough_nvidia=0 to 1, it would not work; there were errors preventing jailmaker from starting. Only with 0 would jailmaker start.

But are you referring to running the native docker setup? Because for that, yes, it would definitely just work.

But I am using jailmaker. I could not start the jail unless I disabled the NVIDIA entries in the config.

And for docker, it would not load with the NVIDIA entries that worked before, so I had to remove them.

I will do more testing x-x; but really not sure what to do xd…

Jailmaker works by passing through the drivers from the host. There need to be drivers on the host.

Well, in TrueNAS under Apps there is a toggle for NVIDIA drivers. I'm assuming that needs to be ticked for what you're saying to work?

There isn't any YouTube video explaining any of this (the steps pertaining to graphics) ^^; oh well.

All I know is this is my result in docker compose: in Dragonfish it worked, in Eel it didn't :cry:

Error response from daemon: could not select device driver "nvidia" with capabilities: [[gpu]]

In EE, you need to go to Configure > Settings and install the NVIDIA drivers. Also, make sure you have the latest jlmkr.py downloaded on your server.

Starting in 24.10, TrueNAS does not include a default NVIDIA GPU driver and instead provides a simple NVIDIA driver download option in the web interface. This allows for driver updates between TrueNAS release versions.

Users can enable driver installation from the Installed applications screen. Click Configure > Settings and select Install NVIDIA Drivers. This option is only available for users with a compatible NVIDIA GPU and no drivers installed or for users who have previously enabled the setting.
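A quick way to confirm the drivers actually landed after the reboot (standard commands, my addition rather than part of the docs quote above):

# on the TrueNAS host shell
nvidia-smi              # should report the GPU and driver version
lsmod | grep nvidia     # the nvidia kernel modules should be loaded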


I'll try to check all that, thank you. Hopefully that resolves it :cry:

Let us know how it goes!


Will do.

I managed to figure out and solve my networking issue. A bit embarrassing xd

Anyway, now I can focus on the graphics card stuff and figure that out.


I use UniFi and it DOES let you assign static DHCP IPs outside the range. It's out of spec, but it works. It's their own thing and I haven't seen that anywhere else.


Ooh, that's weird. I never knew that.

I learned how to set up DHCP from the Lawrence Systems YouTube channel.

Basically, he sets the DHCP pool to something like 40-100,

leaving 1-39 and 101-254 free for static assignments :blush:

For now, I disabled the graphics stuff.

There were 2 apps I used it for: Jellyfin and Immich.

I commented out the graphics settings, and then it booted up. Then in the Jellyfin settings I toggled hardware acceleration to disabled. Then it works without graphics.

Immich machine learning doesn't work well without the graphics card, but that aside it's usable for browsing pics.

I'll wait for more people to comment on setting it up, then do a search and try to follow the steps.

I'm sure I'm not the only one trying to figure out the new way to get it to work >_<;

Note: for those still using jailmaker.

Other than the graphics issue, the dockers are all working fine on Electric Eel using jailmaker (for now).

I tested most of the apps I use, and they seem to be working:

  • navidrome
  • jellyfin
  • authentik
  • immich
  • dockge
  • etc.

I upgraded to Electric Eel and things are working so far. I have a few issues to sort out, however. GPU is working fine, BUT only after I installed the NVIDIA drivers and rebooted the host. This is Immich running; Plex is working as well.

nvidia-smi 
Thu Oct 31 10:58:15 2024       
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 550.127.05             Driver Version: 550.127.05     CUDA Version: 12.4     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA GeForce GTX 1080 Ti     Off |   00000000:2B:00.0 Off |                  N/A |
|  0%   45C    P2             58W /  280W |     967MiB /  11264MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
                                                                                         
+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI        PID   Type   Process name                              GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|    0   N/A  N/A     46293      C   /opt/venv/bin/python                          964MiB |
+-----------------------------------------------------------------------------------------+

The issues I have seen so far are:

  1. nvidia-uvm kernel module taint. Drivers are still loading; this may just be an issue with this NVIDIA driver and kernel 6.6.44:
[   57.441372] nvidia_uvm: module uses symbols nvUvmInterfaceDisableAccessCntr from proprietary module nvidia, inheriting taint.
[   57.487366] nvidia-uvm: Loaded the UVM driver, major device number 237.
  2. Error while loading shared libraries in a jail: host: error while loading shared libraries: libisc-9.18.28-1~deb12u2-Debian.so. It causes commands like host to break.
    a. This is happening because of the NVIDIA GPU: on those jails the following gets mounted, which overrides the correct lib in /lib/x86_64-linux-gnu:
mount|grep x86_64-linux-gnu
boot-pool/ROOT/24.10.0/usr on /usr/lib/x86_64-linux-gnu type zfs (ro,relatime,xattr,noacl,casesensitive)
  3. nvidia-container-toolkit is broken in jails: you can't install or upgrade it… This is a deal breaker… If you already have the toolkit installed, you should be fine, but if you're creating new jails, you're going to have a bad day. You could mount /usr/lib/x86_64-linux-gnu/ with --bind=/usr/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu to allow writing to the directory (see the config sketch after this list), but do you really want to modify the host machine?!?
debconf: delaying package configuration, since apt-utils is not installed
(Reading database ... 17037 files and directories currently installed.)
Preparing to unpack .../libnvidia-container1_1.17.0-1_amd64.deb ...
Unpacking libnvidia-container1:amd64 (1.17.0-1) over (1.16.2-1) ...
dpkg: error processing archive /var/cache/apt/archives/libnvidia-container1_1.17.0-1_amd64.deb (--unpack):
 unable to create '/usr/lib/x86_64-linux-gnu/libnvidia-container-go.so.1.17.0.dpkg-new' (while processing './usr/lib/x86_64-linux-gnu/libnvi
dia-container-go.so.1.17.0'): Read-only file system
dpkg: error while cleaning up:
 unable to remove newly-extracted version of '/usr/lib/x86_64-linux-gnu/libnvidia-container-go.so.1.17.0': Read-only file system
Preparing to unpack .../libnvidia-container-tools_1.17.0-1_amd64.deb ...
Unpacking libnvidia-container-tools (1.17.0-1) over (1.16.2-1) ...
Preparing to unpack .../nvidia-container-toolkit_1.17.0-1_amd64.deb ...
Unpacking nvidia-container-toolkit (1.17.0-1) over (1.16.2-1) ...
Preparing to unpack .../nvidia-container-toolkit-base_1.17.0-1_amd64.deb ...
Unpacking nvidia-container-toolkit-base (1.17.0-1) over (1.16.2-1) ...
Errors were encountered while processing:
 /var/cache/apt/archives/libnvidia-container1_1.17.0-1_amd64.deb
E: Sub-process /usr/bin/dpkg returned an error code (1)
  4. DNS resolution was reset to TrueNAS's DNS server in the jails; I had to point those back to my DNS server. It broke several of my apps, which were not able to talk to my postgres container.
  5. Had a couple of unhealthy containers that had to be rebuilt after the upgrade.
  6. My nvidia-patch script is broken for the Lovelace NVIDIA patch; I need to rework how that works now… I will update my other thread for that. It works manually; it probably needs to be run later in the boot process now that the drivers are installed differently. Will investigate this issue.
  7. The new dashboard didn't pull everything I had configured from the old dashboard. Not a big deal, but I will reconfigure that later. The pool widget doesn't load currently.
  8. CPU is running about 2°C hotter; I'm guessing the new dashboard is busier this time around. Going to close it out and see what it looks like after an hour.
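For item 3, the workaround bind would sit in the jail's config next to the other nspawn args; a sketch only, and as noted it lets the jail write to a host directory:

systemd_nspawn_user_args=--network-bridge=br0
        --bind='/usr/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu'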

I will update here as I resolve these issues…

I’m not going to upgrade the ZFS pool yet in case I need to roll back for some unforeseen reason.


I was wondering about that: whether we are expected to install the NVIDIA toolkit in jailmaker like before, OR somehow link to the NVIDIA driver that TrueNAS can now install easily via a toggle in the Apps settings.

There is also the question of what to do in the jailmaker config. Before, we had to set the value for the graphics card to 1, e.g.

gpu_passthrough_nvidia=0

set to 1 from 0. But if you tried doing that, it would error and refuse to launch with that config, so there was no choice but to set it to 0.

The other line in the config I removed was this:

--bind-ro=/usr/lib/x86_64-linux-gnu/nvidia/current

The step after that was normally to install the NVIDIA toolkit within jlmkr shell docker (*docker is the name of my jail).
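Those Dragonfish-era steps followed NVIDIA's Container Toolkit install guide; roughly this, run inside the jail (check NVIDIA's current docs, since repo URLs and keyring paths can change):

curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | \
    gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg
curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
    sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
    tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
apt-get update && apt-get install -y nvidia-container-toolkit
nvidia-ctk runtime configure --runtime=docker
systemctl restart docker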

Next is to modify your docker compose to include the entries related to the graphics card. Each docker container has its own requirements, but if you go to its GitHub page it will explain what to do, e.g. which lines to include in docker compose for users wishing to add NVIDIA graphics card support; a sketch of the common pattern is below.
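The usual NVIDIA entries follow the Compose spec's device-reservation syntax; a sketch, where the service name and image are placeholders and each app's docs may ask for slightly different entries:

services:
  jellyfin:
    image: jellyfin/jellyfin    # placeholder; use the image the app's docs specify
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]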

I tested on apps, e.g. Immich and Jellyfin.

Anyway, in my case something was wrong with graphics, so when I tried to boot up those 2 docker compose files as-is (which worked prior in Dragonfish, before the problem), they would error and not boot up. Something about the daemon not being able to find the device.

The solution for me was to temporarily comment out the graphics card entries in docker compose; then the docker containers for these 2 apps ran fine. For Jellyfin, I went inside its UI settings and set hardware acceleration to disabled. This allowed Jellyfin playback to work (minus using the graphics card :sweat:). In Immich, when I tried to do a search using the keyword "cat", it failed to do the search (which worked before in Dragonfish) :sob:

Then lastly, in jlmkr shell docker, you are supposed to do a final check using:

nvidia-smi

This, I think, checks whether it detects your NVIDIA graphics card or not. If it doesn't, it means you missed a step in configuring the graphics card to work with jailmaker.

But since Electric Eel, graphics broke, or at least the required steps changed. Anyway, that's all I know :cry:

Oh no… I already upgraded the ZFS pool :scream:

That said, Eel seems to work… other than the graphics issue. It sucks that I couldn't figure out how to get it to work, but it's not too crucial for me. Everything else in Eel seems to be working so far.

The previous release was an RC, so this recent one is more stable than that and came with the vdev (RAIDZ) expansion feature.

Looking into the NVIDIA Container Toolkit: TrueNAS installs it now…

ll /usr/lib/x86_64-linux-gnu/libnvidia-container*
lrwxrwxrwx 1 root      32 Oct 31 05:18 /usr/lib/x86_64-linux-gnu/libnvidia-container-go.so.1 -> libnvidia-container-go.so.1.17.0
-rw-r--r-- 1 root 2959448 Oct 31 05:18 /usr/lib/x86_64-linux-gnu/libnvidia-container-go.so.1.17.0
lrwxrwxrwx 1 root      29 Oct 31 05:18 /usr/lib/x86_64-linux-gnu/libnvidia-container.so.1 -> libnvidia-container.so.1.17.0*
-rwxr-xr-x 1 root  199952 Oct 31 05:18 /usr/lib/x86_64-linux-gnu/libnvidia-container.so.1.17.0
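Another quick check (a standard dpkg query, my addition) for what TrueNAS put on the host:

dpkg -l | grep -i nvidia-container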

/var/log/apt/history.log:
Start-Date: 2024-10-31 10:36:37
Commandline: apt -y install gcc make pkg-config libvulkan1 nvidia-container-toolkit vulkan-validationlayers
Install: libvulkan1:amd64 (1.3.239.0-1), libxcb-present0:amd64 (1.15-1, automatic), vulkan-validationlayers:amd64 (1.3.239.0-2), manpages-dev:amd64 (6.03-2, automatic), gcc-12:amd64 (12.2.0-14, automatic), libtsan2:amd64 (12.2.0-14, automatic), cpp:amd64 (4:12.2.0-3, automatic), gcc:amd64 (4:12.2.0-3), libx11-xcb1:amd64 (2:1.8.4-2+deb12u2, automatic), libxshmfence1:amd64 (1.3-1, automatic), libnvidia-container1:amd64 (1.17.0-1, automatic), libaom3:amd64 (3.6.0-1, automatic), libheif1:amd64 (1.15.1-1, automatic), libx265-199:amd64 (3.5-2+b1, automatic), libxcb-dri3-0:amd64 (1.15-1, automatic), libdav1d6:amd64 (1.0.0-2+deb12u1, automatic), libllvm15:amd64 (1:15.0.6-4+b1, automatic), libsvtav1enc1:amd64 (1.4.1+dfsg-1, automatic), pkgconf:amd64 (1.8.1-1, automatic), libcc1-0:amd64 (12.2.0-14, automatic), libmpc3:amd64 (1.3.1-1, automatic), libxpm4:amd64 (1:3.5.12-1.1+deb12u1, automatic), libgav1-1:amd64 (0.18.0-1+b1, automatic), libasan8:amd64 (12.2.0-14, automatic), libnsl-dev:amd64 (1.3.0-2, automatic), rpcsvc-proto:amd64 (1.4.3-1, automatic), make:amd64 (4.3-4.1), mesa-vulkan-drivers:amd64 (22.3.6-1+deb12u1, automatic), libyuv0:amd64 (0.0~git20230123.b2528b0-1, automatic), libcrypt-dev:amd64 (1:4.4.33-2, automatic), pkg-config:amd64 (1.8.1-1), cpp-12:amd64 (12.2.0-14, automatic), libabsl20220623:amd64 (20220623.1-1, automatic), libitm1:amd64 (12.2.0-14, automatic), libnvidia-container-tools:amd64 (1.17.0-1, automatic), libavif15:amd64 (0.11.1-1, automatic), libc-dev-bin:amd64 (2.36-9+deb12u7, automatic), libxcb-randr0:amd64 (1.15-1, automatic), libc-devtools:amd64 (2.36-9+deb12u7, automatic), nvidia-container-toolkit:amd64 (1.17.0-1), libisl23:amd64 (0.25-1.1, automatic), libc6-dev:amd64 (2.36-9+deb12u7, automatic), pkgconf-bin:amd64 (1.8.1-1, automatic), nvidia-container-toolkit-base:amd64 (1.17.0-1, automatic), libubsan1:amd64 (12.2.0-14, automatic), liblsan0:amd64 (12.2.0-14, automatic), libpkgconf3:amd64 (1.8.1-1, automatic), libgd3:amd64 (2.3.3-9, automatic), libwayland-client0:amd64 (1.21.0-1, automatic), libde265-0:amd64 (1.0.11-1+deb12u2, automatic), libxcb-sync1:amd64 (1.15-1, automatic), libtirpc-dev:amd64 (1.3.3+ds-1, automatic), librav1e0:amd64 (0.5.1-6, automatic), libatomic1:amd64 (12.2.0-14, automatic), libgcc-12-dev:amd64 (12.2.0-14, automatic), libxcb-xfixes0:amd64 (1.15-1, automatic)
End-Date: 2024-10-31 10:36:47
The package is failing to install those exact libs:

debconf: delaying package configuration, since apt-utils is not installed
(Reading database ... 17037 files and directories currently installed.)
Preparing to unpack .../libnvidia-container1_1.17.0-1_amd64.deb ...
Unpacking libnvidia-container1:amd64 (1.17.0-1) over (1.16.2-1) ...
dpkg: error processing archive /var/cache/apt/archives/libnvidia-container1_1.17.0-1_amd64.deb (--unpack):
 unable to create '/usr/lib/x86_64-linux-gnu/libnvidia-container-go.so.1.17.0.dpkg-new' (while processing './usr/lib/x86_64-linux-gnu/libnvi
dia-container-go.so.1.17.0'): Read-only file system
dpkg: error while cleaning up:
 unable to remove newly-extracted version of '/usr/lib/x86_64-linux-gnu/libnvidia-container-go.so.1.17.0': Read-only file system
Errors were encountered while processing:
 /var/cache/apt/archives/libnvidia-container1_1.17.0-1_amd64.deb

I'm thinking we might not need to install those packages anymore. I will test with a new jail and modify the jlmkr.py setup to not install the repos for the nvidia-container-toolkit.

EDIT: This is going to get really messy, really quickly, for GPU users… You can't install docker; passing through /usr/bin gives the jail access to docker, etc., but now I have to create custom systemd units and a docker group to start it from within the jail… more testing…


SUCCESS! I will post a custom config once I narrow everything down…

root@test:~# docker info
Client: Docker Engine - Community
 Version:    27.1.1
 Context:    default
 Debug Mode: false
 Plugins:
  buildx: Docker Buildx (Docker Inc.)
    Version:  v0.16.1
    Path:     /usr/libexec/docker/cli-plugins/docker-buildx
  compose: Docker Compose (Docker Inc.)
    Version:  v2.29.1
    Path:     /usr/libexec/docker/cli-plugins/docker-compose

Server:
 Containers: 0
  Running: 0
  Paused: 0
  Stopped: 0
 Images: 0
 Server Version: 27.1.1
 Storage Driver: overlay2
  Backing Filesystem: zfs
  Supports d_type: true
  Using metacopy: false
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: systemd
 Cgroup Version: 2
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local splunk syslog
 Swarm: inactive
 Runtimes: runc io.containerd.runc.v2 nvidia
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41
 runc version: v1.1.13-0-g58aa920
 init version: de40ad0
 Security Options:
  seccomp
   Profile: builtin
  cgroupns
 Kernel Version: 6.6.44-production+truenas
 Operating System: Debian GNU/Linux 12 (bookworm)
 OSType: linux
 Architecture: x86_64
 CPUs: 16
 Total Memory: 62.73GiB
 Name: test
 ID: 759f5770-bced-48ed-a960-8512b9f9f4f3
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false

I had to add --bind-ro='/usr/lib/x86_64-linux-gnu/'


Glad our smart comrades figured it out. Thank you, everyone, for your efforts :blush:

NOTICE: Modified post and steps below for the fully automated config-ee file. All you need to do now is jlmkr create --start -c config-ee <jail-name> and it will be started and ready to go with docker and nvidia gpu support.

ALERT: Keep in mind, this is ONLY for Nvidia GPU based jails. This doesn’t apply for standard jails. For standard jails, you can continue to use the template config. This is ONLY for Electric Eel!

DISCLAIMER: With the GPU you will lose the ability to install packages inside the jail… To patch or update the jail, you'll have to disable the GPU and comment out all the extra mounts in the config file at this point. I also have not tested this with TrueNAS's Apps running, so I have no idea at this point whether they will conflict; it shouldn't, but you never know.

REQUIRED: Make sure you are on the latest version of the jlmkr.py script, otherwise this likely won’t work.

  1. Install the Nvidia drivers in EE. It can be done by going to: Apps > Configuration > Settings > Tick Install NVIDIA Drivers. Click Save and reboot your TrueNAS server when it’s finished installing the drivers.
  2. Create and start jail with custom config. Copy, paste, edit, and save the config code below, then run the following command: jlmkr create --start -c config-ee <jail-name>

Custom config: config-ee:

NOTE: Make sure to set your network interface and data mounts to the datasets on your hosts that you use for your applications… I left in some example ones you can modify. If you don’t modify these to match your system, the jail will fail to start. TAKE HEED!
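To find the interface name to put on the --network-bridge line, a standard iproute2 query works (my addition):

ip -br link show    # pick your bridge (e.g. br0) or NIC (e.g. eno1)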

startup=1
gpu_passthrough_intel=0
gpu_passthrough_nvidia=1
# Turning off seccomp filtering improves performance at the expense of security
seccomp=1

# Use macvlan networking to provide an isolated network namespace,
# so docker can manage firewall rules
# Alternatively use --network-macvlan=eno1 instead of --network-bridge
# Ensure to change eno1/br1 to the interface name you want to use
# You may want to add additional options here, e.g. bind mounts
#
# When using a different network adapter, update the --network-bridge line
# above the following line:
#     --system-call-filter='add_key keyctl bpf'
# otherwise there will be weird docker, dns, interface, and storage issues.
#
systemd_nspawn_user_args=--network-bridge=br0
        --system-call-filter='add_key keyctl bpf'
        --bind-ro='/dev/dri:/dev/dri'
        --bind-ro='/etc/ssl:/etc/ssl'
        --bind-ro='/usr/bin/containerd:/usr/bin/containerd'
        --bind-ro='/usr/bin/containerd-shim:/usr/bin/containerd-shim'
        --bind-ro='/usr/bin/containerd-shim-runc-v1:/usr/bin/containerd-shim-runc-v1'
        --bind-ro='/usr/bin/containerd-shim-runc-v2:/usr/bin/containerd-shim-runc-v2'
        --bind-ro='/usr/bin/curl:/usr/bin/curl'
        --bind-ro='/usr/bin/docker:/usr/bin/docker'
        --bind-ro='/usr/bin/docker-proxy:/usr/bin/docker-proxy'
        --bind-ro='/usr/bin/dockerd:/usr/bin/dockerd'
        --bind-ro='/usr/bin/dockerd-rootless-setuptool.sh:/usr/bin/dockerd-rootless-setuptool.sh'
        --bind-ro='/usr/bin/dockerd-rootless.sh:/usr/bin/dockerd-rootless.sh'
        --bind-ro='/usr/bin/nvidia-container-cli:/usr/bin/nvidia-container-cli'
        --bind-ro='/usr/bin/nvidia-container-runtime:/usr/bin/nvidia-container-runtime'
        --bind-ro='/usr/bin/nvidia-container-runtime-hook:/usr/bin/nvidia-container-runtime-hook'
        --bind-ro='/usr/bin/nvidia-container-toolkit:/usr/bin/nvidia-container-toolkit'
        --bind-ro='/usr/bin/nvidia-ctk:/usr/bin/nvidia-ctk'
        --bind-ro='/usr/bin/runc:/usr/bin/runc'
        --bind-ro='/usr/sbin/iptables:/usr/sbin/iptables'
        --bind-ro='/usr/libexec/docker:/usr/libexec/docker'
        --bind='/mnt/tank/media/:/mnt/media'
        --bind='/mnt/tank/stacks:/opt/stacks'

# Script to run on the HOST before starting the jail
# Load kernel module and config kernel settings required for docker
pre_start_hook=#!/usr/bin/bash
        set -euo pipefail
        echo 'PRE_START_HOOK'
        echo 1 > /proc/sys/net/ipv4/ip_forward
        modprobe br_netfilter
        echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
        echo 1 > /proc/sys/net/bridge/bridge-nf-call-ip6tables

# Only used while creating the jail
distro=debian
release=bookworm

# Configure docker inside the jail:
# https://docs.docker.com/engine/install/debian/#install-using-the-repository
# Will also configure the NVIDIA Container Toolkit if gpu_passthrough_nvidia=1 during initial setup
# https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html
initial_setup=#!/usr/bin/bash
    set -euo pipefail

    # The /usr/bin/nvidia-smi will be present when gpu_passthrough_nvidia=1
    if [ -f /usr/bin/nvidia-smi ]; then

        # Create docker group
        groupadd docker

        # Create docker.service unit file
        cat <<EOF >> /lib/systemd/system/docker.service
        [Unit]
        Description=Docker Application Container Engine
        Documentation=https://docs.docker.com
        After=network-online.target docker.socket firewalld.service containerd.service time-set.target
        Wants=network-online.target containerd.service
        Requires=docker.socket

        [Service]
        Type=notify
        # the default is not to use systemd for cgroups because the delegate issues still
        # exists and systemd currently does not support the cgroup feature set required
        # for containers run by docker
        ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
        ExecReload=/bin/kill -s HUP \$MAINPID
        TimeoutStartSec=0
        RestartSec=2
        Restart=always

        # Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
        # Both the old, and new location are accepted by systemd 229 and up, so using the old location
        # to make them work for either version of systemd.
        StartLimitBurst=3

        # Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
        # Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
        # this option work for either version of systemd.
        StartLimitInterval=60s

        # Having non-zero Limit*s causes performance problems due to accounting overhead
        # in the kernel. We recommend using cgroups to do container-local accounting.
        LimitNPROC=infinity
        LimitCORE=infinity

        # Comment TasksMax if your systemd version does not support it.
        # Only systemd 226 and above support this option.
        TasksMax=infinity

        # set delegate yes so that systemd does not reset the cgroups of docker containers
        Delegate=yes

        # kill only the docker process, not all processes in the cgroup
        KillMode=process
        OOMScoreAdjust=-500

        [Install]
        WantedBy=multi-user.target
        EOF

        # Create docker.socket unit file
        cat <<EOF >> /lib/systemd/system/docker.socket
        [Unit]
        Description=Docker Socket for the API

        [Socket]
        # If /var/run is not implemented as a symlink to /run, you may need to
        # specify ListenStream=/var/run/docker.sock instead.
        ListenStream=/run/docker.sock
        SocketMode=0660
        SocketUser=root
        SocketGroup=docker

        [Install]
        WantedBy=sockets.target
        EOF

        # Create containerd.service unit
        cat <<EOF >> /lib/systemd/system/containerd.service
        # Copyright The containerd Authors.
        #
        # Licensed under the Apache License, Version 2.0 (the "License");
        # you may not use this file except in compliance with the License.
        # You may obtain a copy of the License at
        #
        #     http://www.apache.org/licenses/LICENSE-2.0
        #
        # Unless required by applicable law or agreed to in writing, software
        # distributed under the License is distributed on an "AS IS" BASIS,
        # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
        # See the License for the specific language governing permissions and
        # limitations under the License.

        [Unit]
        Description=containerd container runtime
        Documentation=https://containerd.io
        After=network.target local-fs.target

        [Service]
        #uncomment to enable the experimental sbservice (sandboxed) version of containerd/cri integration
        #Environment="ENABLE_CRI_SANDBOXES=sandboxed"
        ExecStartPre=-/sbin/modprobe overlay
        ExecStart=/usr/bin/containerd

        Type=notify
        Delegate=yes
        KillMode=process
        Restart=always
        RestartSec=5
        # Having non-zero Limit*s causes performance problems due to accounting overhead
        # in the kernel. We recommend using cgroups to do container-local accounting.
        LimitNPROC=infinity
        LimitCORE=infinity
        LimitNOFILE=infinity
        # Comment TasksMax if your systemd version does not support it.
        # Only systemd 226 and above support this option.
        TasksMax=infinity
        OOMScoreAdjust=-999

        [Install]
        WantedBy=multi-user.target
        EOF

        # Enable Docker
        systemctl enable docker.service
        systemctl enable docker.socket
        systemctl enable containerd.service

        # Configure nvidia runtime
        nvidia-ctk runtime configure --runtime=docker
        systemctl restart docker
    fi

    docker info

# You generally will not need to change the options below
systemd_run_default_args=--property=KillMode=mixed
        --property=Type=notify
        --property=RestartForceExitStatus=133
        --property=SuccessExitStatus=133
        --property=Delegate=yes
        --property=TasksMax=infinity
        --collect
        --setenv=SYSTEMD_NSPAWN_LOCK=0

systemd_nspawn_default_args=--keep-unit
        --quiet
        --boot
        --bind-ro=/sys/module
        --inaccessible=/sys/module/apparmor
  3. Profit? You should be able to install Dockge, Plex, Jellyfin, etc. at this point.
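As a quick smoke test once the jail is up (my addition, mirroring NVIDIA's standard check; the image is just a placeholder, since --gpus makes the nvidia runtime inject the driver tools into the container):

# inside the jail, e.g. via: jlmkr shell <jail-name>
docker run --rm --gpus all ubuntu nvidia-smi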

Output of Plex working in new jail:

nvidia-smi 
Thu Oct 31 16:04:22 2024       
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 550.127.05             Driver Version: 550.127.05     CUDA Version: 12.4     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA GeForce GTX 1080 Ti     Off |   00000000:2B:00.0 Off |                  N/A |
|  0%   56C    P2             61W /  280W |     191MiB /  11264MiB |      1%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
                                                                                         
+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI        PID   Type   Process name                              GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|    0   N/A  N/A    603809      C   ...lib/plexmediaserver/Plex Transcoder        188MiB |
+-----------------------------------------------------------------------------------------+