Incus VM can't add GPU via UI

When adding a GPU via the UI I get this error:

middlewared.service_exception.ValidationErrors: [EINVAL] 
device.GPU.gpu_type: Field required

Using sudo incus config device add frigate gpu0 gpu pci=0000:0e:00.0 does add the device, but it then breaks the UI with more validation errors, leaving the device section showing nothing. I guess that's because the CLI command doesn't set gputype: physical?
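In case it helps anyone, here's a sketch of adding the device with the gputype property set explicitly (the container name frigate, device name gpu0, and PCI address are from my setup above; whether this keeps the UI's validation happy is something I haven't verified):

```shell
# Add the GPU with gputype set explicitly, which the UI validation appears to expect
sudo incus config device add frigate gpu0 gpu gputype=physical pci=0000:0e:00.0

# Show the device config Incus actually stored, to compare against what the UI writes
sudo incus config device show frigate
```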

It also breaks the Datasets view if you add the GPU by hand to the Incus container. Ouch.

I saw that you are using Frigate. I use it here too, but in Docker rather than in a VM; I just passed the correct settings for my GPU in the docker compose file and it is working.
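For reference, a minimal sketch of that GPU passthrough as a plain docker run command (the image tag and /dev/dri render node are the usual Frigate defaults for Intel/AMD VAAPI; the config path is a placeholder, and an NVIDIA card would need --gpus all with the nvidia-container-toolkit instead):

```shell
# Pass the host's render node into the container for hardware video decode
# (Intel/AMD via VAAPI; NVIDIA users use --gpus all + nvidia-container-toolkit)
docker run -d \
  --name frigate \
  --device /dev/dri/renderD128 \
  --shm-size=256m \
  -v /path/to/config:/config \
  -p 8971:8971 \
  ghcr.io/blakeblackshear/frigate:stable
```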

Thanks. While this has nothing to do with the OP, I got Frigate working with the GPU by using Docker, as nested virtualization was fraught at best. What I actually want to use is my Hailo-8 AI card, but there is no way to load its drivers on TrueNAS, which is why I was trying a nested VM (my TrueNAS runs on Proxmox).

But nested virtualization seems a no-go with PCIe passthrough: when I finally got the Hailo driver to load in the nested VM, it reset the whole physical server :slight_smile: I quickly tried the GPU in the nested VM; that didn't reset the server, but the NVIDIA drivers wouldn't load.

So for now it's GPU passthrough from Proxmox to TrueNAS, with Docker on top. I wouldn't be messing with any of this if TrueNAS provided a way to load drivers with DKMS, e.g. a way to build sysext packages so the base OS isn't touched.

tl;dr: I'd prefer not to use this GPU for this, but I needed it working ASAP for reasons, and (not unexpectedly) nested virt is a bust.

I wouldn't bother with Incus for now. I moved from 24.10 to 25.04 and NOTHING is working regarding VMs: you can't install Windows easily, and you have to fiddle a lot to get other things working. It's absurd. Like you, I tried GPU passthrough and the GPU isn't detected or used in the Incus container at all.


I agree it definitely wasn't ready for prime time; it seems like they should have kept two branches, with Incus staying in beta.

FYI, I eventually traced the resets to an issue with the card and how it likes to do bus resets, which caused my server to reboot with a PCI SERR. I found the solution: a specific QEMU config setting to disable hotplug within the level-1 TrueNAS VM. I haven't had time to retry, as I currently have corrupted BMC firmware on my server causing lots of issues :frowning: