Hello Guys,
So, I have a Windows VM set up in TrueNAS SCALE. How do I pass through the GPU ROM like in KVM, ESXi, or Proxmox? Is there any way I can add/load the GPU ROM, aka the VBIOS?
Thanks
You should be able to add PCI passthrough when you build the VM. I was able to add a GPU, but couldn’t add that and PCI passthrough at the same time.
On a semi-related question, are you able to see your VM via VNC? If so, what are your settings? Are you using an SSD drive as your storage? I am following in your footsteps.
You might want to add GPU ROM to your thread title.
Passthrough works, but shutting down or restarting the VM causes TrueNAS itself to hang. So I guess adding the ROM should fix it.
Never tried it. But hopefully, that should work!
Yes, WD SN850X SSD.
Oh yeah. I fixed that now.
@Captain_Morgan I tried to edit the XML, but the changes are not persistent. It’s quite difficult, and I’m not sure how I should be editing the XML.
Also, I ran virt-host-validate and see the following output:
QEMU: Checking for hardware virtualization : PASS
QEMU: Checking if device /dev/kvm exists : PASS
QEMU: Checking if device /dev/kvm is accessible : PASS
QEMU: Checking if device /dev/vhost-net exists : PASS
QEMU: Checking if device /dev/net/tun exists : PASS
QEMU: Checking for cgroup 'cpu' controller support : PASS
QEMU: Checking for cgroup 'cpuacct' controller support : PASS
QEMU: Checking for cgroup 'cpuset' controller support : PASS
QEMU: Checking for cgroup 'memory' controller support : PASS
QEMU: Checking for cgroup 'devices' controller support : WARN (Enable 'devices' in kernel Kconfig file or mount/enable cgroup controller in your system)
QEMU: Checking for cgroup 'blkio' controller support : PASS
QEMU: Checking for device assignment IOMMU support : PASS
QEMU: Checking if IOMMU is enabled by kernel : PASS
QEMU: Checking for secure guest support : WARN (AMD Secure Encrypted Virtualization appears to be disabled in firmware.)
LXC: Checking for Linux >= 2.6.26 : PASS
LXC: Checking for namespace ipc : PASS
LXC: Checking for namespace mnt : PASS
LXC: Checking for namespace pid : PASS
LXC: Checking for namespace uts : PASS
LXC: Checking for namespace net : PASS
LXC: Checking for namespace user : PASS
LXC: Checking for cgroup 'cpu' controller support : PASS
LXC: Checking for cgroup 'cpuacct' controller support : PASS
LXC: Checking for cgroup 'cpuset' controller support : PASS
LXC: Checking for cgroup 'memory' controller support : PASS
LXC: Checking for cgroup 'devices' controller support : FAIL (Enable 'devices' in kernel Kconfig file or mount/enable cgroup controller in your system)
LXC: Checking for cgroup 'freezer' controller support : FAIL (Enable 'freezer' in kernel Kconfig file or mount/enable cgroup controller in your system)
LXC: Checking for cgroup 'blkio' controller support : PASS
LXC: Checking if device /sys/fs/fuse/connections exists : PASS
Any idea how I can fix these errors?
It’s better to explain the specific problem you are having,
and the specific software and hardware you are running on.
Is anyone else seeing the issue, or is it working for them… then try to troubleshoot. Is it your config or a software issue?
Well, I don’t see an option to add the GPU ROM via the GUI. So I went ahead and edited the XML, and it works. But when the NAS is restarted or something is changed via the GUI, the changes made to the XML files are lost.
I guess the problem depends on the GPU hardware. Can you specify?
If no-one else knows of a solution, I’d recommend a Feature Request.
The GPU is an RX 5700 XT.
In the GitHub guide you linked above, see the Attaching the GPU section. When I use virsh edit windows, I get the following errors:
error: failed to connect to the hypervisor
error: Failed to connect socket to ‘/var/run/libvirt/libvirt-sock’: No such file or directory
And when I try to directly edit the XML inside , the changes are not persistent across reboots or when something is changed via the GUI.
Just to mention, I’ve done this successfully on Ubuntu KVM, Proxmox, Unraid, and ESXi. The same thing works there and the changes are permanent. The bad thing is that here there is no way to specify the ROM via the GUI.
Additionally, on a side note, even when the GPU is blacklisted so that it can use the vfio driver, the XML doesn’t have such a configuration: the ( ) line is missing from the GPU’s entry when configuring passthrough via the GUI.
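For reference, a sketch of one way to reach the hypervisor when the default /var/run/libvirt socket is missing, as in the error above: locate the socket TrueNAS SCALE actually uses and pass it to virsh explicitly. The socket path below is only a guess for my setup, so find yours first.
# find where the libvirt socket actually lives on this system
find /run /var/run -name 'libvirt-sock' 2>/dev/null
# then point virsh at that socket explicitly (example path is a guess; use whatever find returns)
virsh -c 'qemu+unix:///system?socket=/run/truenas_libvirt/libvirt-sock' list --all
virsh -c 'qemu+unix:///system?socket=/run/truenas_libvirt/libvirt-sock' edit windows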
@Captain_Morgan So, I managed to edit using virsh edit. But even after I used the define and validate commands, the changes are persistent after reboot (I rebooted via the shell).
That sounds like success??
Sorry for the typo. I meant to say that the changes are not persistent across reboots.
At this stage, I think a Feature Request is probably the best approach… let’s see how many other people are running into the issue or whether they have a solution.
Hmm. How can I do that?
I’ve used KVM/QEMU on Ubuntu and have tried ESXi, Proxmox, and Unraid, and they all support two major things: specifying the GPU ROM and manual editing of the XML. Adding these two features to TrueNAS SCALE would encourage more users to choose TrueNAS SCALE over other virtualization platforms. If manual editing of the XML isn’t feasible because of how TrueNAS is designed, an option for adding the GPU ROM alone would still be really worthwhile.
It’s not too hard to install a GPU in a computer system. Even my friend’s son, who is only 10 years old, does that!
Thanks. Will request one there.
Seems like you forgot to read the thread title. xD
You’re right, passing through the GPU ROM (VBIOS) is often necessary for successful GPU passthrough in virtual machines, especially for certain graphics cards or when encountering issues like the VM hanging on shutdown/restart.
TrueNAS SCALE, being based on Linux and utilizing KVM, does offer ways to achieve this, although the specific method might not be as directly exposed in the GUI as in Proxmox, for example.
Here’s how you can typically add or load the GPU ROM (VBIOS) in a Windows VM setup on TrueNAS SCALE:
Method 1: Using the VM Configuration (if available)
While the TrueNAS SCALE GUI might not have a dedicated field for the VBIOS ROM path like some other platforms, you should first check the VM’s device configuration after you’ve added the PCI passthrough for your GPU.
How to Obtain the GPU ROM (VBIOS) File:
Using vbetool or similar: in a Linux environment where the GPU is recognized, you might be able to dump the VBIOS using command-line tools.
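As an example, here is a rough sketch of dumping the VBIOS through the kernel’s sysfs interface rather than vbetool; the PCI address 0000:01:00.0 and the output path are placeholders (check lspci for your GPU’s actual address, run as root, and make sure the card is not in use by a guest while dumping):
# temporarily expose the ROM in sysfs, copy it out, then hide it again
echo 1 > /sys/bus/pci/devices/0000:01:00.0/rom
cat /sys/bus/pci/devices/0000:01:00.0/rom > /root/gpu.rom
echo 0 > /sys/bus/pci/devices/0000:01:00.0/rom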
Method 2: Editing the VM XML Configuration (Advanced)
If the GUI doesn’t provide a direct way to specify the ROM path, you might need to edit the underlying XML configuration of the KVM virtual machine. This is a more advanced method and requires familiarity with the command line and KVM configuration.
1. Locate the VM’s XML file. Libvirt domain definitions are typically stored in /etc/libvirt/qemu/. The filename will usually match your VM’s name.
2. Use a text editor (nano or vi) to open the XML file for your VM.
3. Go to the <devices> section and find the <hostdev> entry for your GPU. It will look something like this:
<hostdev mode='subsystem' type='pci' managed='yes'>
<source>
<address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
</source>
</hostdev>
You may also see a second <hostdev> entry for the GPU’s audio device.
4. Add a <rom> element within the <hostdev> section for the GPU:
<hostdev mode='subsystem' type='pci' managed='yes'>
<source>
<address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
</source>
<rom file='/path/to/your/gpu.rom'/>
</hostdev>
5. Replace /path/to/your/gpu.rom with the actual path to the VBIOS file on your TrueNAS SCALE system. Ensure the file has the correct permissions for the libvirt-qemu user to access it.
6. Restart the libvirt-bin or libvirtd service for the changes to take effect. The exact command might vary depending on your TrueNAS SCALE version. You can try:
sudo systemctl restart libvirtd.service
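A note on persistence: libvirt only picks up direct edits to files under /etc/libvirt/qemu/ once the domain is redefined, so the usual workflow is sketched below. The VM name windows is taken from earlier in this thread, and keep in mind that TrueNAS SCALE’s middleware can still regenerate the XML and overwrite manual edits, as reported above.
# check the edited XML against the libvirt schema, then redefine the domain so libvirt reloads it
virt-xml-validate /etc/libvirt/qemu/windows.xml
virsh define /etc/libvirt/qemu/windows.xml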