Create Incus VM host path mounts in Instances UI

Problem/Justification

Incus instances support many different kinds of disk devices. For VMs, the only type currently exposed in the UI is storage volumes (made available to the guest as block devices); however, Incus also supports host path mounts: “Virtual machines share host-side mounts or directories through 9p or virtiofs (if available).”

I would like to request that this existing functionality (specifically, setting the host source and guest path, which is evidently used by incus-agent inside the guest to set up the appropriate mount) be exposed in the TrueNAS Instances UI.
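
For reference, this is roughly what the equivalent operation looks like via the Incus CLI today (the instance name and paths here are placeholders):

# Attach a host directory to a VM as a disk device; inside the guest,
# incus-agent mounts it at the given path via 9p or virtiofs.
$ incus config device add myvm shared disk source=/mnt/tank/shared path=/mnt/shared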

Impact

Benefits

Users gain another option for sharing host data to VMs. Use cases that currently require NFS (tricky for auth) or SMB (tricky for permissions) between guest and host would now be able to use host path mounts instead. This would make it easy to ensure files used by the VM are properly snapshotted and benefit from all the nice characteristics that ZFS datasets offer.

This also brings more uniformity to the various container/virtualization mechanisms TrueNAS currently offers (native Docker, Incus containers, Incus VMs) by enabling host path mounts to work for all of them.

Drawbacks

I don’t know how the I/O performance of a host path mount compares to something like ext4 inside a zvol. Presumably 9p and virtiofs have their own overhead, but I’m not sure how it stacks up against running e.g. ext4 inside a zvol, or against a networked protocol like NFS or SMB.

Additionally, testing would be needed to see how ACL enforcement works in this flow. Do NFSv4 ACLs behave sensibly when a ZFS filesystem is exposed to the guest? Unclear to me.

Both these things should be testable by manually manipulating the VMs via incus CLI. (Obviously a no-no for production, but reasonable purely for testing.)
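
A rough sketch of what that manual testing could look like (device name and paths are hypothetical; the in-guest ACL check assumes the acl package is installed for getfacl):

# On the host: attach a host path backed by a ZFS dataset to the VM.
$ sudo incus config device add test-vm acltest disk source=/mnt/tank/acltest path=/mnt/acltest

# Inside the guest: crude sequential-write comparison against the
# zvol-backed root filesystem.
$ dd if=/dev/zero of=/mnt/acltest/bench bs=1M count=1024 conv=fsync
$ dd if=/dev/zero of=/root/bench bs=1M count=1024 conv=fsync

# Inside the guest: see what permissions/ACLs look like through the mount.
$ getfacl /mnt/acltest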

User Story

A TrueNAS user is running an Incus VM and wants to…

  • … quickly transfer some files to or from the host. They could add a host path “disk” through the Instances UI without needing to enable NFS or SMB, configure IP allowlists and/or user authentication, etc.
  • … ensure files written by the VM are backed up using normal ZFS snapshot and replication, without duplicating those files both inside and outside the VM’s zvol(s).
  • … be able to easily migrate between containers and VMs without needing to move files in and out of zvols constantly.

All these user journeys could be better addressed by exposing the existing Incus host path functionality in the UI.

This would also help with use cases that need the same dataset shared over multiple protocols (NFS and SMB), which is no longer possible from Fangtooth onward if you use Time Machine for backups.

I use an SMB share for user access to files, but I also mount this dataset as a read-only NFS share in a VM for remote backups using Borg. The NFS share is required due to SMB permissions.

This same server has some SMB-only shares used for Time Machine backups that will stop working when I upgrade to a future release.

Being able to mount the dataset as a host path in the VM would address this issue: I could use a host path instead of an NFS share and keep each dataset shared by a single method.

Adding to this: since there is currently no way to back up an Incus VM, the ability to easily copy data to an external location (a dataset on the host) is critical for data security, and would grant all the benefits TrueNAS has to offer, such as snapshots, replication, etc.

@GJSchaller Don’t forget to vote on the feature as well. :slight_smile:

Testing this out. I created a new Incus VM instance called test-vm (Debian Bookworm) and a dataset on the host with POSIX ACLs at /mnt/apps-pool/scratch/temp-posix. Tried to add the host path per the Incus docs:

$ sudo incus config device add test-vm posix disk source=/mnt/apps-pool/scratch/temp-posix path=/mnt/posix

But this command just hung, and apparently borked the Incus service somehow such that it didn’t respond to future CLI or Web UI interactions. Had to do sudo systemctl restart incus (which took a long time, so I think the VM didn’t exit cleanly) to get it back.

And that seems to have left an extra ZFS mount around that causes problems:

$ sudo incus config device add test-vm posix disk source=/mnt/apps-pool/scratch/temp-posix path=/mnt/posix
Error: Failed to start device "posix": remove /var/lib/incus/devices/test-vm/disk.posix.mnt-posix: device or resource busy

$ mount | grep mnt-posix
apps-pool/scratch/temp-posix on /var/lib/incus/devices/test-vm/disk.posix.mnt-posix type zfs (rw,noatime,xattr,posixacl,casesensitive)

I’m not sure why it’s mounting the underlying ZFS dataset again on the host…
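
In case anyone else hits this, the stale mount can be cleaned up manually (assuming nothing else is holding it open):

$ sudo umount /var/lib/incus/devices/test-vm/disk.posix.mnt-posix
$ sudo incus config device remove test-vm posix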

The same hang happens even if the host mount lives in /tmp, so at least it doesn’t seem like a ZFS-related issue:

$ mkdir /tmp/test
$ echo hello >/tmp/test/foo
$ sudo incus config device add test-vm test disk source=/tmp/test path=/mnt/test

Still hangs…

Also, it is not just an issue with Debian Bookworm being outdated. Trying the same steps with Ubuntu Plucky (released just a few days ago) results in the same hang of Incus on the host when trying to pass through a host filesystem to the guest.

I can see why iX didn’t enable this (yet). :slight_smile: But I am struggling to find anything in logs (dmesg, journalctl -u incus, etc.) that indicates what’s actually causing the hang… Perhaps someone more familiar with Incus might have better luck than me.
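
For reference, these are roughly the places I looked (exact flags may vary by Incus version):

$ sudo dmesg | tail -n 50
$ sudo journalctl -u incus -b --no-pager | tail -n 100
$ sudo incus info test-vm --show-log    # per-instance log
$ sudo incus monitor --pretty           # watch events while reproducing the hang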

I tried to leave dmesg -w running inside the guest and watch it over VNC, but the VNC connection cuts out as well when I try to add the host path mount. Same with SSHing into the guest. qemu is still running, but AFAICT the guest is totally stuck at this point.

Tried the same thing on Ubuntu.
I shut down the VM first and created the volume, but when trying to boot it hangs forever.
The Incus documentation mentions this should work for both containers and VMs (even in hotplug mode).

I guess I’ll have to stick to the good old NFS share, but this feature would be really good to have!

Okay, somehow I managed to find a workaround, though not a very clean one, unfortunately.
Disk devices use virtiofs to propagate the mount to the VM, and TrueNAS doesn’t have the virtiofsd package installed.
In the Incus documentation you can read that only the Rust rewrite is currently supported (Requirements page, under QEMU).

So I compiled the binary on my TrueNAS server and copied it into /usr/bin.
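
In case it helps anyone, the build was along these lines (assumes a Rust toolchain is available; the install path matches where I put it above):

# Build the Rust virtiofsd from upstream and install it on the host.
$ git clone https://gitlab.com/virtio-fs/virtiofsd.git
$ cd virtiofsd
$ cargo build --release
$ sudo cp target/release/virtiofsd /usr/bin/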

Now if I run

incus --debug config device add test home disk source=/mnt/TANK/media path=/mnt/test

It works, and I can see the mounted path inside my VM.
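
A quick way to confirm from inside the guest (mount point as in the command above):

$ findmnt /mnt/test
$ grep virtiofs /proc/mounts
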
Maybe the virtiofsd package hasn’t been integrated into the TrueNAS packages yet…

Very interesting, thanks for figuring this out!

Do you know if the Rust rewrite is already packaged for Debian, or do they still ship the old version?

Things are super busy right now, and I’m not sure when I’ll get to it, but I can make a testing install of Fangtooth, enable apt, and install the Debian virtiofsd package, and see if it works out of the box. (I suspect it’s an easier sell for TrueNAS to enable this if they can just install the normal Debian package vs. having to maintain a custom build of virtiofsd.)

:smiley: Do you have a rough sense yet of how well it performs?

Unfortunately, the Rust rewrite doesn’t seem to be packaged for Debian right now; you have to compile it manually. I didn’t test with the virtiofsd package; it would be great if Incus supports it.
It would surely be much easier for TrueNAS to just ship that package inside their update images instead of a custom build of the Rust rewrite.
We’ll see in the next releases, I guess.

No problems for the moment; I can reboot the VM and do reads on the disk devices.
I use it as a Kubernetes storage path for my PVCs, and so far it’s alright.
But each time I upgrade the TrueNAS release, I’ll have to re-install the package. Not a big deal; I just set up a POST INIT script for that.
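
Something along these lines (the source path is a placeholder for wherever you keep the binary on a pool dataset):

#!/bin/sh
# Post-init: restore the custom virtiofsd after a TrueNAS upgrade
# replaces the boot environment.
cp /mnt/TANK/scripts/virtiofsd /usr/bin/virtiofsd
chmod 0755 /usr/bin/virtiofsd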

Ah, that’s too bad! At least it looks like there’s a package built from the Rust version in Debian unstable (sid) now: Debian -- Details of source package rust-virtiofsd in sid

Not sure what Debian release TrueNAS is currently pulling from, but if they were able to incorporate this into the 25.10 release, that would be super cool. :slight_smile:

Okay, I took another look at this. Installed Fangtooth on a test system and enabled apt (sudo install-dev-tools). Unfortunately, the deb package from trixie won’t install, even when I download it manually and try to install it via dpkg. It conflicts with qemu-system-common, which ships the older, non-Rust version of virtiofsd in bookworm (the Debian version that Fangtooth seems to be roughly based on).
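
For anyone who wants to double-check on their own system, this is roughly how I confirmed the base release and the conflict:

$ cat /etc/os-release                              # shows the Debian base release
$ dpkg -L qemu-system-common | grep -i virtiofsd   # the old C virtiofsd ships in this package
$ sudo dpkg -i virtiofsd_*.deb                     # the downloaded trixie package; fails on the conflict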

I wonder if the TrueNAS folks will update the base system to Trixie in 25.10, or otherwise do something to pull in newer virtiofsd (and possibly other supporting packages for Incus). They already ship a newer kernel, so maybe this would be possible as well?