Hi,
This is a note for everyone on TrueNAS SCALE Fangtooth 25.04+.
I have a few different VMs configured in TrueNAS, and via VNC they all work fine. In fact, the new VNC method is imho much better than the previous implementation - cleaner, albeit a bit less secure - and it works well. However, when I did GPU passthrough I could not get past the boot screens. I tried several different Linux distributions.
I thought it might be my setup: the HDMI runs to my AV receiver, which then connects to my TV, so I suspected the signal was being degraded somewhere along the way. I tried different HDMI cables and ports and a few other things. I gave up after a while, thinking there was some hardware issue, but also that it might be related to Incus.
With the recent update to 25.04.1 I thought I would try again to see if I could get any further (also having contracted some variant of covid). With both VNC and the discrete GPU output enabled, both would work. I first tried this in an Arch Linux VM: I logged in via VNC, turned on the TV display, and amazingly everything worked. I then tried the Debian VM. On the TV it was stuck at the same point in boot, unable to show GRUB, but when I enabled the VNC session, sure enough I could see the GRUB interface there and it would get to the login screen. Once I logged in, the external display came alive and I could get the VNC session to mirror it (the same is true in Arch - I had just happened to log in via VNC first, without realising the issue). Fantastic, everything working - at least you would think.
What I noticed is that two GPUs are presented to the guest. One is a Red Hat virtual display whose driver is llvmpipe (this is for the VNC console); the other is my passed-through AMD Radeon. However, even after disabling VNC, removing the incus-agent, removing the qemu-guest-agent, etc., the llvmpipe GPU would always remain the primary one, which meant no hardware acceleration and the CPU being used for rendering.
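If you want to check this in your own guest, a quick way (assuming the pciutils and mesa-utils packages are installed - an assumption on my part about your distro) is:

$ lspci -nn | grep -i vga
$ glxinfo -B | grep -i renderer

The first command lists the display adapters the VM actually sees; the second shows which driver is doing the OpenGL rendering - if it says "llvmpipe", you are on CPU software rendering, not the passed-through GPU.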
At this point I figured this must be Incus related. I searched around and came across a discussion on the Linux Containers forum (I can't add the link to my post).
I decided to try it out by editing my config file in the TrueNAS terminal. My instance in this case is called DebTv:
$ sudo incus config edit DebTv
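(If you want to see what Incus has generated before touching anything, you can dump the current config first:

$ sudo incus config show DebTv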
The actual lines to add are the following (note the indentation under the YAML block scalar):

raw.qemu.conf: |-
  [device "qemu_gpu"]
This should be added under the config section, so the result looks as follows:

architecture: x86_64
config:
  raw.qemu.conf: '[device "qemu_gpu"]'
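As I understand it from the Incus documentation, raw.qemu.conf patches the QEMU config file that Incus generates, and declaring a section with no keys removes that section entirely - "qemu_gpu" appears to be the name Incus uses for its emulated display device, so the VM boots with only the passed-through card. If you prefer a one-liner over editing the YAML, something like this should be equivalent (untested on my side, but incus config set is the standard way to set instance keys):

$ sudo incus config set DebTv raw.qemu.conf='[device "qemu_gpu"]'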
I restarted the VM and GPU passthrough is now working properly: I can see the GRUB boot menu, the login screen, everything. The only GPU presented now is the AMD Radeon, with full hardware acceleration working. This is great for me - for now I have a solution - though I suspect the VNC session will no longer work.
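If you do need the VNC console back, reverting should just be a matter of removing the key again and restarting the VM (incus config unset is standard Incus, though I haven't needed to do this yet myself):

$ sudo incus config unset DebTv raw.qemu.conf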
Ideally, TrueNAS should be aware of this and provide more control. There should be a way of forcing the llvmpipe GPU to always be the second GPU when an external GPU is passed through, and if the VNC session is disabled, the llvmpipe GPU should either be disabled automatically or left as a secondary GPU. As it stands, the new virtualisation really does not play well with GPU passthrough.
I thought I would post this here in case anyone else is running into a similar issue. I could not work it out initially, and it has taken me a while to get to the bottom of exactly what is going on.