The ACS Override Patch does not work on TrueNAS SCALE.
Therefore your options are to try the GPU in another PCIe slot (sometimes that moves it into its own IOMMU group), to get server-grade hardware where this usually is not a problem, or to switch to Proxmox for your VMs.
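If you want to check the grouping after reseating the card, a small sketch like this lists every IOMMU group and the devices in it (standard Linux sysfs paths; the output is empty if IOMMU is disabled in the BIOS):

```shell
#!/bin/sh
# Walk /sys/kernel/iommu_groups and print the PCI devices in each group.
for group in /sys/kernel/iommu_groups/*; do
    [ -d "$group" ] || continue   # skips cleanly when IOMMU is off
    echo "IOMMU group ${group##*/}:"
    for dev in "$group"/devices/*; do
        [ -e "$dev" ] || continue
        # lspci -nns prints vendor/device IDs alongside the description
        echo "  $(lspci -nns "${dev##*/}")"
    done
done
```

If the GPU (and its HDMI audio function) are the only entries in their group, passthrough should be possible on SCALE.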
IOMMU grouping is a matter of implementation by the manufacturer.
Server/workstation-class hardware usually gets it right. Consumer-grade hardware might not.
Thank you for your input. Do you have any server/workstation-class hardware recommendations compatible with my CPU? I don't have the means to buy a fresh setup. It is a homemade NAS server.
This nice little board should support your CPU (according to ASRock Rack).
However, you can never be sure about the IOMMU groups until you try it.
They do separate nicely on my ASRock Rack EPC621D8A board, but that's no guarantee.
I successfully isolated the GPU, but now I get the following error when passing it to the VM:
[EINVAL] gpu_settings.isolated_gpu_pci_ids: pci_0000_00_01_0, pci_0000_01_00_1, pci_0000_02_00_0, pci_0000_01_00_0, pci_0000_00_01_1 GPU pci slot(s) are not available or a GPU is not configured.
That looks like a hardware-level inability to isolate the full group. You'll need a more server/workstation-oriented board, as suggested by some of the other users. The Supermicro X12SCA seems to be the cheapest of the suggestions here, but I've never used one personally and can't guarantee its ability to split IOMMU groups.
Careful - your Ethernet controller seems to be in the same group. Booting that VM may take your TrueNAS system offline. Disable autostart on the VM before trying.
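You can verify whether the NIC really shares the GPU's group with a quick sketch like this (0000:01:00.0 is a placeholder address; substitute your GPU's address from `lspci`):

```shell
#!/bin/sh
# Show which IOMMU group a given PCI device belongs to, and list every
# other device that would be pulled into the VM along with it.
DEV=0000:01:00.0   # placeholder: replace with your GPU's PCI address
group=$(readlink "/sys/bus/pci/devices/$DEV/iommu_group" 2>/dev/null)
if [ -n "$group" ]; then
    echo "Device $DEV is in IOMMU group ${group##*/}, together with:"
    ls "/sys/bus/pci/devices/$DEV/iommu_group/devices"
else
    echo "Device $DEV not found or IOMMU not enabled"
fi
```

If the Ethernet controller shows up in that listing, isolating the GPU will detach the NIC as well, which is exactly how a passthrough attempt can knock the host off the network.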
You could also ask Gigabyte technical support for an updated BIOS with better IOMMU grouping.
Gigabyte recently did exactly that for the MC12-LE0 board, after users asked to be able to pass through the iGPU of their Ryzen APUs. But that was the server arm of Gigabyte; don't put your hopes too high on the consumer arm.