Update 2025-08-10: Given the recent decision to unwind the experimental Incus support and revert to libvirt, I’ve updated this feature request to cover libvirt’s own virtiofs integration rather than the Incus equivalent. (It’s the same functionality, so I think keeping the FR open with the same votes is reasonable.)
Problem/Justification
VMs managed via libvirt support host path mounts via virtiofs, which allow directories on the host filesystem to be mounted inside the VM. This enables quick and easy host/VM file sharing without requiring NFS (or SMB) to be configured and secured, and it sidesteps any complexity around guest–host networking (e.g., bridge network setup).
Since TrueNAS discourages direct reconfiguration of libvirt VMs, this FR requests that TrueNAS allow creating and managing virtiofs mounts in the Virtual Machines UI.
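For reference, a minimal sketch of the relevant libvirt domain XML, per libvirt’s virtiofs documentation (the host path and mount tag here are hypothetical, and virtiofs additionally requires shared memory backing for the guest):

```xml
<!-- Inside the <domain> definition: guest memory must be shared with the
     virtiofsd daemon for virtiofs to work. -->
<memoryBacking>
  <source type='memfd'/>
  <access mode='shared'/>
</memoryBacking>

<!-- Inside <devices>: expose host directory /mnt/tank/shared under the
     mount tag "shared". -->
<filesystem type='mount' accessmode='passthrough'>
  <driver type='virtiofs'/>
  <source dir='/mnt/tank/shared'/>
  <target dir='shared'/>
</filesystem>
```

The guest then mounts the share by its tag, e.g. `mount -t virtiofs shared /mnt/shared`.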
Older Incus version of this request
Incus instances support many different kinds of disk devices. For VMs, the only kind currently exposed in the UI is the storage volume (made available to the guest as a block device); however, Incus also supports host path mounts: “Virtual machines share host-side mounts or directories through 9p or virtiofs (if available).”
I would like to request that this existing functionality (specifically, setting a host source and guest path, which incus-agent evidently uses inside the guest to set up the appropriate mount) be exposed in the TrueNAS Instances UI.
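For context, under Incus this was a one-line CLI operation (the instance name and paths here are hypothetical):

```sh
# Attach host directory /mnt/tank/shared at /mnt/shared inside the instance
incus config device add myvm shared disk source=/mnt/tank/shared path=/mnt/shared
```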
Impact
Benefits
Users gain another option for sharing host data with VMs. Use cases that currently require NFS (tricky for auth) or SMB (tricky for permissions) between guest and host could use host path mounts instead. This would make it easy to ensure that files used by the VM are properly snapshotted and benefit from all the other nice characteristics that ZFS datasets offer.
This also brings more uniformity to the various container/virtualization mechanisms TrueNAS currently offers (native Docker, KVM VMs, LXC containers) by enabling host path mounts to work for all of them.
Drawbacks
I don’t know how the I/O performance of a virtiofs host path mount compares to something like ext4 inside a zvol; presumably virtiofs has its own overhead. Likewise, I’m not sure how it compares to the overhead of a networked protocol like NFS or SMB.
Additionally, testing would be needed to see how ACL enforcement works in this flow: do NFSv4 ACLs behave sensibly when a ZFS filesystem is exposed to the guest via virtiofs? That’s unclear to me.
Both of these should be testable by manually manipulating the VM configuration (obviously a no-no for production, but reasonable purely for testing); a rough sketch follows.
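Assuming the domain XML fragment from earlier and a guest kernel with virtiofs support, a manual test might look like this (VM name and paths hypothetical; on TrueNAS, virsh may need an explicit connection URI since the libvirt socket lives in a non-default location):

```sh
# Host: add the <filesystem> device (and shared memory backing) to the
# domain XML, then restart the VM so a virtiofsd instance is started for it.
virsh edit myvm

# Guest: mount the share by its target tag.
mount -t virtiofs shared /mnt/shared

# Guest: write files, then inspect ownership/ACL behavior from the host side
# and get a crude sense of throughput, to answer the two questions above.
touch /mnt/shared/acl-test
dd if=/dev/zero of=/mnt/shared/io-test bs=1M count=1024 conv=fsync
```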
User Story
A TrueNAS user is running a VM and wants to…
- … quickly transfer some files to or from the host. They could add a host path mount through the Virtual Machines UI without needing to enable NFS or SMB, configure IP allowlists and/or user authentication, etc.
- … ensure files written by the VM are backed up using normal ZFS snapshots and replication, without duplicating those files both inside and outside the VM’s zvol(s).
- … easily migrate between containers and VMs without constantly moving files in and out of zvols.
All these user journeys could be better addressed by exposing the existing virtiofs functionality in the UI.