Proxmox in TrueNAS sandbox

Hello. I have managed to run Proxmox in a TrueNAS nspawn sandbox with full system privileges, so the Proxmox hypervisor runs on bare metal using the TrueNAS Linux kernel. This way we get all the advantages of TrueNAS and Proxmox on the same machine with close to zero overhead. Proxmox clustering for both VMs and LXC containers works very well.

The instructions are here. If you know a better way, please feel free to comment or (even better) make PRs.


Many thanks for posting these details. A minor point: in setup1.sh, mtu is set to 9000; some may want mtu 1500. At line 77 in setup1.sh, would apt-get install --download-only ifupdown2 be a better choice?

TBH, I don’t understand the dbus stuff (nor the js changes), but my main concern is the list of systemd_nspawn_user_arg options, which gives the jail such wide access to the host system, and the possibility of conflicts between jail and host.

Thanks for sharing! Looks like it was not an easy feat to get this working. When I have time I’ll give some more feedback, but for now I recommend not bind mounting the full /dev directory. This is probably causing the issues with the shell and the console.

I’ve tested this setup on my ageing Linux desktop, which is configured for nested virtualisation: TN SCALE is a VM running a Proxmox jail. After a few minor tweaks of the config file, everything works as per the instructions. It certainly stands the idea of running TN SCALE as a VM inside Proxmox on its head.

But I’d feel far more confident simply running Proxmox on bare metal and configuring and administering ZFS at the CLI as needed. NAS-type functionality can then be provided by running services on Proxmox, using bind mounts into containers.

That’s true, it would be better to make it configurable.

Not sure it will work: that way it only downloads the package but does not install its files into the system, and we need them installed on the next reboot.
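To illustrate the point: --download-only only stages the .deb files, so a second install step would still be needed on the next boot. A hedged sketch (the package name is from this thread; the cache path is the apt default, and this would need root):

```shell
# Sketch only: download now, install later.
# Stage the package without installing it (files land in /var/cache/apt/archives):
apt-get install --download-only -y ifupdown2
# ...then, in a script that runs on the next boot, actually install the files:
dpkg -i /var/cache/apt/archives/ifupdown2_*.deb
```

So --download-only alone would not replace the current install step; it would only split it in two.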

We need dbus to install Proxmox, but for some reason it is not initialized properly when booting normally. So I use the same filesystem to boot the temporary container. I expect it creates some files, but I was not able to find which ones. If those files were known, we could just create them from a script and avoid this step.

This is to suppress the missing-license message, but it does not work in the pre-init script. @Jip-Hop, what should the path to the container disk contents be when accessed from a pre-init script?

I don’t like it either. But I did not find a better way to deal with ZFS. The issue is that when a ZFS volume is created, a new device node is created as well. But nspawn cannot expose new devices by name pattern once a container is already launched (at least, I am not aware of a way). So we need to give full /dev access so that Proxmox can see the new volume once a VM or LXC container is created.
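For context, a sketch of the kind of config this refers to (the exact key name should be checked against your jailmaker config; the narrower variant is an untested assumption):

```shell
# Illustrative jailmaker config fragment (key name may differ in your version):
systemd_nspawn_user_args=--bind=/dev
# A narrower (untested) alternative would be binding only the zvol directory,
# but device nodes created after the container launches may still not appear:
# systemd_nspawn_user_args=--bind=/dev/zvol
```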

Just added an mtu argument.

Fixed

Perhaps you could bind mount /dev from the host at /host-dev, then mount the /dev/tty etc. devices from the jail onto /host-dev, and then bind mount /host-dev back at /dev. If you do this in the jail (each time the jail starts), then you’d preserve the essential jail devices and complete them with the host devices. Or look into overlay mounts with nspawn instead of bind mounts. You may be able to sink the /dev from the host under the /dev from the jail.
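A rough, untested sketch of that shuffle; it prints the mount commands rather than running them (they would need root inside the jail, and the device list here is only an example):

```shell
# Dry-run sketch of the /dev merge idea: overlay the jail's essential device
# nodes onto the host's /dev copy (assumed bind mounted at /host-dev), then
# put the merged tree back at /dev. Run the printed commands as root in the jail.
for dev in tty console ptmx null zero random urandom; do
    echo mount --bind "/dev/$dev" "/host-dev/$dev"
done
echo mount --bind /host-dev /dev
```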

Not sure I understand what you’re asking, but see the pre_start_hook here for an example of how you can modify the rootfs before starting the jail for the first time: jailmaker/templates/nixos/config at main · Jip-Hop/jailmaker · GitHub

I was wondering how to detect the guest root directory from the host system, but I do not need it anymore. It turns out it is the current directory when the pre-start script runs.

I am not sure how to do it; help is appreciated.

Sounds promising, I should do some experiments to check if it works as expected.

Trying this out. I have run into a couple of things so far.

This line in create.sh gives me a bad substitution error.

export GATEWAY_IP="${ip route | grep default | awk '{print $3}'}"

changing it to

export GATEWAY_IP="$(ip route | grep default | awk '{print $3}')"

fixes it for me.
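For anyone hitting the same error: `${...}` is parameter expansion (so a pipeline inside it is a bad substitution), while `$(...)` is command substitution, which runs the command and substitutes its output. A minimal, self-contained illustration (the route line is made up):

```shell
# $() runs a command and substitutes its output; ${} only expands a variable.
# Parse a made-up route line the same way the fixed create.sh line does:
route_line="default via 192.168.1.1 dev eth0"
GATEWAY_IP="$(echo "$route_line" | grep default | awk '{print $3}')"
echo "$GATEWAY_IP"   # prints 192.168.1.1
```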

Also in create.sh is a hard coded path

systemd-nspawn --boot -D /mnt/data/jailmaker/jails/$HOSTNAME/rootfs -M $HOSTNAME.tmp

Should it use?

$jlmkrPath/jails/$HOSTNAME/rootfs

I just ended up hard coding it to my path.

Got it running after those changes.

Silly bug from my side, thanks!

I will also check how to detect the actual jailmaker directory location.
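One common approach is to resolve the script's own location instead of hard coding the path; whether create.sh can rely on this depends on how jailmaker invokes it, so treat this as a sketch (variable names are illustrative):

```shell
# Sketch: derive the base directory from the script's own resolved path,
# instead of hard coding /mnt/data/jailmaker.
jlmkrPath="$(cd "$(dirname "$(readlink -f "$0")")" && pwd)"
echo "$jlmkrPath"
# The rootfs path would then become: $jlmkrPath/jails/$HOSTNAME/rootfs
```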