Problems with VMs on 25.04.2

I have been running two TrueNAS servers for quite some time, currently on 25.04.2, and they work great for data storage, which was the original premise.

I tried to use VMs back in the original jails days, but didn't really like the way they worked; e.g., I couldn't get Windows and some other things to work the way I wanted, so I used another server/system for that. I only need a few VMs at a time, so I was trying to avoid Proxmox or XCP-ng, as I just didn't need that level of complexity.

But when TrueNAS transitioned to Incus, I tried again, and loved it. Everything worked for me, including VMs within VMs, which I used for debugging/testing various things.

So now they have transitioned back to libvirt, and I don't like it. The old 'container' VMs that were created under Incus continue to work, so that's good, but I am having issues with some, not all, installs on the new Virtual Machines tab.

For example, I can't install TrueNAS CORE; it stalls at the same place on two different servers. I CAN install FreeBSD, so that isn't it. But both installed fine on Incus.
I also have some issues with the SPICE display system: it keeps timing out, and if it loses focus you have to refresh/relog into the page. A quick search tells me that is fairly common. The only compatible SPICE viewer for Windows I could find was 'Remote Viewer', and it is terrible; it doesn't even remember the last connections on my system (even though it says it has that feature). And some versions of Linux won't install properly, which I think is display related, but I'm not sure yet. I had a couple of minor issues on Incus, but nothing like this.

So I'm disappointed. I still like TrueNAS for storage, and was very encouraged by Incus, despite its problems. But this seems like a (necessary?) step back due to the enterprise issue of no upgrade path, so what to do?

Could all this be operator error, or just limitations in libvirt or the way it was implemented?
Can they add some display device/connector for the VNC protocol?
Any chance this will get better, or should I just leave these alone and spin up a Proxmox/XCP-ng server separately for the VMs?

If the answer is "wait until the fall", I can do that. It will be painful, but most of what I need is running in the Incus containers; the only thing I'm lacking is the VM-within-VM setup for debugging things. And maybe some things do work, just not the ones I have tried so far.

Don't get me wrong, I'm not abandoning TrueNAS for storage. But I thought it could do a lot more for me regarding VMs.

Last thing: the containers and apps work great, so there's that :slight_smile:

My understanding is that both libvirt and Incus use the same underlying system functionality, QEMU/KVM, but choose different setup parameters for the VM. The libvirt approach uses settings from some time ago and hasn't really been updated as time went on (with the exception of the recent addition of SecureBoot/TPM options). Incus was an attempt to update that system, but all of the improvements meant that users had to either update or rebuild their VMs – and, as we all know, complaining about that was preferable to many over doing the work for a jump into the future.

But all the new functionality is there under the hood now; it's supposedly just a matter of TrueNAS exposing it via the UI, slowly and/or with enough options that legacy VM users don't have a cow.

QEMU/KVM isn't much my area of expertise, but running a Classic Virtualization VM and an Incus one on my 25.04.02 system shows some of the differences: comparing the output of `ps aux | grep qemu` with the Incus VM's config file at `/run/incus/*/qemu.conf` turns up a bunch of different settings.
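As a rough illustration, the interesting flags can be pulled out of those long command lines with a tiny helper (a sketch; the `qemu_flags` function name is mine, not anything shipped with TrueNAS or QEMU):

```shell
#!/bin/sh
# qemu_flags: print the values following -machine and -cpu in a QEMU
# command line, so the Classic and Incus VM settings can be compared
# side by side. Feed it one full command line as a single string.
qemu_flags() {
    printf '%s\n' "$1" | tr ' ' '\n' | grep -A1 -E '^-(machine|cpu)$'
}

# Example against the kind of line `ps aux | grep qemu` returns:
qemu_flags '/usr/bin/qemu-system-x86_64 -machine pc-i440fx-7.2,usb=off -accel kvm -cpu Haswell-noTSX-IBRS,vme=on'
```

Running each VM's command line through this makes the machine-type and CPU-model differences discussed below easy to spot.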

Piping these into AI and asking for a summary of the differences, I get the output below, which seems reasonable from what I experienced playing with the two different VM approaches, especially the need for the virtio driver disk for Windows. Of course, some of the differences are because my Classic Virtualization VM was built on BIOS rather than UEFI.

I'd expect things like the update to the Q35 machine type and a virtio device stack to be the most important updates – but, of course, those are exactly the updates (hardware changes) that would make old VM installs not boot on a new system, or make new Windows installs require extra driver support, so navigating some user inertia will always be a challenge.

What the bot told me

Your Incus VM is configured with:

  • Modern Q35 PCIe machine model
  • KVM acceleration
  • No VGA, but SPICE and virtio-vga are ready
  • UEFI firmware boot using OVMF
  • Secure sandboxing, run as low-priv user
  • Virtio-optimized devices: rng, keyboard, tablet, scsi, balloon
  • Host–guest communication via vsock and serial ports
  • Shared config drive using 9p
  • Extensible PCIe layout with multiple root ports

This is a clean, modern, highly extensible VM layout — tuned for headless operation, automation, and performance.

The TrueNAS libvirt VM:

  • Uses older virtualization architecture (pc-i440fx-7.2)
  • Emulates more legacy hardware (IDE, ISA, QXL, USB tablet)
  • Has a more monolithic PCI layout
  • Uses a manually specified CPU model (Haswell)
  • Lacks modularity and modern features like PCIe topology or paravirtualized config sharing

Key Differences and Observations

  1. :classical_building: Machine Type: i440fx vs Q35
     • i440fx is a legacy machine type using older PCI (not PCIe), with IDE/SATA controllers and ISA bridges.
     • Q35 supports modern PCIe devices, better hotplugging, and aligns more closely with real-world physical hardware.
     :red_triangle_pointed_down: Verdict: i440fx is outdated for most modern OSes; Q35 is preferred.
  2. :dna: CPU Model: Fixed Haswell vs Host Passthrough
     • The TrueNAS libvirt VM explicitly specifies a Haswell-class CPU with added flags: conservative and less flexible.
     • Incus passes the host CPU model directly, enabling better performance, modern instructions, and alignment with actual hardware capabilities.
  3. :puzzle_piece: Legacy Devices Used
     The TrueNAS libvirt VM uses:
     :cross_mark: IDE disk: Incus uses virtio-scsi, which is faster, more efficient, and optimized for virtualization.
     :cross_mark: ISA serial: Incus uses virtio-serial, which is modern and flexible.
     :cross_mark: QXL graphics: virtio-vga is more efficient and recommended with SPICE for modern guests.
     :cross_mark: USB tablet: required with legacy QXL video + PS/2 mouse for an absolute pointer; unnecessary with newer input models.
     :red_triangle_pointed_down: Verdict: the TrueNAS VM uses older emulated hardware (ISA, IDE, QXL); Incus uses a fully modern virtio stack.
  4. :electric_plug: PCI Layout: Flat vs Modular PCIe Tree
     The TrueNAS VM places all devices on pci.0. Incus creates modular PCIe root ports (pcie.0 → pcieX), giving:
     • Hotplug support
     • Clear device segmentation
     • Realistic topology
     :red_triangle_pointed_down: Verdict: the Incus config is modular, hotplug-capable, and scalable; the TrueNAS layout is flat, older-style.
  5. :locked_with_key: Security & Sandboxing
     Both VMs enable QEMU sandboxing. TrueNAS sets elevateprivileges=deny,spawn=deny, while Incus sets elevateprivileges=allow,spawn=allow. This means TrueNAS is slightly more restrictive, though both are secure.
     :small_orange_diamond: Verdict: minor difference, but TrueNAS is slightly more sandboxed.
  6. :compass: Storage Backend
     TrueNAS uses "-blockdev host_device … -device ide-hd" on a ZVOL. Incus uses virtio-scsi-pci, which is:
     • Faster
     • Paravirtualized
     • More flexible for multiple disks
     :red_triangle_pointed_down: Verdict: Incus wins with virtio-scsi; TrueNAS still uses IDE + AHCI emulation.
  7. :puzzle_piece: SPICE & Display
     Both use SPICE, but TrueNAS exposes SPICE over TCP (less secure), while Incus uses Unix sockets, which are more secure by design.
     :red_triangle_pointed_down: Verdict: Incus uses a safer default SPICE configuration.
  8. :brain: Missing Modern Features in TrueNAS VM
     Compared to Incus, this VM is missing:
     :white_check_mark: virtio-rng (random number generator)
     :white_check_mark: virtio-input (keyboard, tablet)
     :white_check_mark: vsock support (guest–host communication)
     :white_check_mark: config drive via 9p (for cloud-init or provisioning)
     :white_check_mark: UEFI boot (OVMF); this VM is likely BIOS-based unless it was OVMF-enabled implicitly

:white_check_mark: Overall Verdict

| Feature | TrueNAS VM | Incus VM |
| --- | --- | --- |
| Machine type | pc-i440fx :cross_mark: | q35 :white_check_mark: |
| CPU model | Static (Haswell) :cross_mark: | Host passthrough :white_check_mark: |
| Storage bus | IDE/SATA :cross_mark: | Virtio-SCSI :white_check_mark: |
| PCI topology | Flat :cross_mark: | PCIe-root-port modular :white_check_mark: |
| Firmware | Likely BIOS :cross_mark: | UEFI with OVMF :white_check_mark: |
| RNG, vsock, virtio input | Not present :cross_mark: | Present :white_check_mark: |
| SPICE access | TCP :cross_mark: | Unix socket :white_check_mark: |
| Display | QXL :cross_mark: | virtio-vga :white_check_mark: |
| Config drive (9p) | Not present :cross_mark: | Present :white_check_mark: |

:magnifying_glass_tilted_left: Conclusion
The TrueNAS libvirt VM is functional but dated in architecture:
  • It's based on older machine types and device models
  • It lacks support for many modern, high-performance virtualization features
  • It reflects libvirt's legacy, compatibility-first defaults
The Incus VM is clearly more modern, minimal, modular, and optimized, especially for cloud-style workloads, automation, and extensibility.


BTW, Incus is still in the system, and – if you are willing to take your chances on what an unsupported approach may mean in the future, and not blame anyone about it – you can still create an Incus-style VM from the CLI (use your appropriate pool name for the -s parameter; mine is TANK_NVME):

root@TrueNAS02[~]# incus init vm-test --empty --vm -s TANK_NVME -d root,size=10GiB
Creating vm-test

…which will then show up in the Containers list:


…and from there you can add/configure devices like:
  • VNC
  • Disks, including an uploaded-ISO-backed, low-priority boot device
  • etc.
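From the CLI, those follow-up steps might look something like this (a sketch under assumptions: the ISO path, the `install` device name, and the priority value are all examples, not anything TrueNAS prescribes):

```shell
# Attach an uploaded ISO to the VM created above as an extra disk,
# with boot.priority controlling its place in the boot order:
incus config device add vm-test install disk source=/mnt/TANK_NVME/isos/install.iso boot.priority=10

# Start the VM and open a graphical console to it:
incus start vm-test
incus console vm-test --type=vga
```

Once the OS is installed, the ISO device can be removed again with `incus config device remove vm-test install`.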


@Jeverett Some of the information you posted re: TN 25.04.02 & libvirt VMs is incorrect.

The q35 machine type is used by so-called "classic virtualization", as shown in this log extract from /var/log/libvirt/qemu/:

-machine pc-q35-6.2,usb=off,smm=on,dump-guest-core=off,memory-backend=pc.ram,pflash0=libvirt-pflash0-format,pflash1=libvirt-pflash1-format \

virtio-serial is also used, but graphics is qxl-vga, and virtio disks use virtio-blk-pci.

And host passthrough also works for libvirt VMs. You are not locked to a static CPU as @Jeverett stated.

What you say is true; what I say is also true:

My output:

root@TrueNAS02[~]# ps aux | grep qemu
libvirt+    8278  3.6  6.4 5988052 4235840 ?     Sl   Aug04 159:49 /usr/bin/qemu-system-x86_64 -name guest=1_pfsense,debug-threads=on -S -object {"qom-type":"secret","id":"masterKey0","format":"raw","file":"/var/lib/libvirt/qemu/domain-1-1_pfsense/master-key.aes"} -machine pc-i440fx-7.2,usb=off,dump-guest-core=off,memory-backend=pc.ram -accel kvm -cpu Haswell-noTSX-IBRS,vme=on,ss=on,vmx=on,pdcm=on,f16c=on,rdrand=on,hypervisor=on,arat=on,tsc-adjust=on,umip=on,md-clear=on,stibp=on,arch-capabilities=on,ssbd=on,xsaveopt=on,pdpe1gb=on,abm=on,ibpb=on,ibrs=on,amd-stibp=on,amd-ssbd=on,skip-l1dfl-vmentry=on,pschange-mc-no=on...

@Jeverett As you're running pfSense as a VM, what did you select for the "Guest Operating System"? IIRC, any FreeBSD-based OS will default to machine type pc-i440fx. That doesn't necessarily mean it can't run with machine type q35. My example was a "linux VM".
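One way to check which machine type a given libvirt VM actually got is to grep its domain XML. A minimal sketch, assuming `virsh` is available and using a hypothetical helper name (`machine_type`); the domain name in the usage comment is just an example:

```shell
#!/bin/sh
# machine_type: pull the machine='...' attribute out of libvirt
# domain XML read on stdin (hypothetical helper, not a virsh feature).
machine_type() {
    grep -o "machine='[^']*'" | head -n1
}

# Usage against a real domain (name is an example):
#   virsh dumpxml 1_pfsense | machine_type
```

On a q35 guest this should report something like machine='pc-q35-6.2', matching the log extract above.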