Hi guys,
I think I found a solution to the VT-d problem. If you add the boot option `pci=nommconf` to GRUB, you can boot every OS, including TrueNAS Scale.
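For anyone unsure where the option goes: on a typical Debian-based install using GRUB, you append it to the kernel command line in `/etc/default/grub` and regenerate the config. A sketch (paths and the existing `quiet` option are assumptions, adapt to your setup):

```shell
# In /etc/default/grub, add pci=nommconf to the default kernel command line:
GRUB_CMDLINE_LINUX_DEFAULT="quiet pci=nommconf"

# Regenerate the GRUB configuration and reboot:
sudo update-grub
sudo reboot

# After rebooting, confirm the option is active:
cat /proc/cmdline
```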
I tried it with TrueNAS, Ubuntu, Debian, and Proxmox.
I was not sure if it was stable, so I have been testing it for about three weeks now, and I can report that so far everything works perfectly.
I am now running Proxmox with a TrueNAS VM on top. The NVMe drives are handed over to TrueNAS via PCIe passthrough (VT-d/IOMMU), so TrueNAS has access to the whole disks and not just virtual disks. That also makes SMART checks possible.
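For reference, handing an NVMe controller over to a VM on the Proxmox host looks roughly like this (the PCI address `0000:01:00.0` and VM ID `100` are placeholders, yours will differ):

```shell
# Confirm the IOMMU is active (VT-d enabled in BIOS and on the kernel cmdline):
dmesg | grep -e DMAR -e IOMMU

# Find the PCI addresses of the NVMe controllers:
lspci -nn | grep -i nvme

# Attach one to the TrueNAS VM (VM ID and address are examples):
qm set 100 --hostpci0 0000:01:00.0,pcie=1
```

The same can be done through the web UI under the VM's Hardware tab (Add → PCI Device).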
By the way, this solution is not limited to 4x M.2 NVMe drives; I run it fully populated with 8.
This really opened up the device for me to use as the home server/NAS I had planned it for. I was even able to pass the GPU through to Jellyfin, Immich, and Nextcloud for transcoding and basic AI workloads.
It took me a long time to find and solve this issue, and I hope it helps some of you!
I have an F8 SSD and followed this helpful thread to get TrueNAS Scale installed. But then the drives started to overheat and I could not figure out fan control. Has anyone solved this on the F8? If so, could you share the steps? Thanks.
Interested to know how you made it work!
I have Proxmox 8.3 installed. With the `pci=nommconf` option it crashes the whole system, and without it I get an error when trying to start the TrueNAS VM:
kvm: ../hw/pci/pci.c:1654: pci_irq_handler: Assertion `0 <= irq_num && irq_num < PCI_NUM_PINS' failed.
Could you please add more info on the VM config you have on Proxmox, the version of TrueNAS you are using, and your Proxmox installation (ZFS, ext4, etc.)? Thanks!
Interesting. If I install Proxmox on ZFS and enable `pci=nommconf`, it doesn't crash, but I get a similar error:
kvm: ../hw/pci/pci.c:1654: pci_irq_handler: Assertion `0 <= irq_num && irq_num < PCI_NUM_PINS' failed.
Attaching my VM config
PCI devices