Hey, I really need some help here, as I have no idea how to solve this permanently.
It's quite a hassle to swap the NIC, so I'd like to have a strategy for avoiding this issue before doing so.
AI bots say I could use a systemd unit for this, but it didn't work.
Besides, it sounds like something that should be configurable from TN itself.
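For reference, the kind of systemd unit the bots suggested looked roughly like this. It's only a sketch: `0000:03:00.0` is a placeholder for the HBA's PCI address, and I'm not claiming it's correct, since it didn't work for me.

```ini
# /etc/systemd/system/rebind-hba.service (sketch; PCI address is a placeholder)
[Unit]
Description=Rebind HBA from vfio-pci back to mpt3sas at boot
After=multi-user.target

[Service]
Type=oneshot
ExecStart=/bin/sh -c 'echo 0000:03:00.0 > /sys/bus/pci/devices/0000:03:00.0/driver/unbind; echo 0000:03:00.0 > /sys/bus/pci/drivers/mpt3sas/bind'

[Install]
WantedBy=multi-user.target
```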
Isolated GPU Device(s) is a new feature offered from 25.10 onward I think.
It allows you to select a specific PCI device and isolate it from the TrueNAS host (using IOMMU groups, if I'm not mistaken) so that it can be allocated to a VM.
I had previously made an attempt at adding an Isolated GPU Device, and it failed (using an NVIDIA Tesla T4 with TrueNAS VMs on a Dell R730xd).
Apparently it wasn't properly removed either, or at least a device was still listed under Isolated GPU.
And that seems to have messed up TN in an unexpected way:
When I tried to install a new NIC, I guess the system enumerated the PCI bus addresses differently and my HBA ended up with the address of the originally isolated GPU. At least that's how I understand it.
That would explain why the wrong driver was bound to it (vfio-pci instead of mpt3sas).
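This is how I spotted the wrong binding, in case it helps anyone reproduce the check. Run on the TrueNAS host; the grep pattern is just what matched my hardware, adjust as needed.

```shell
# List PCI devices with their kernel driver; -nn adds vendor/device IDs
# and -k shows the "Kernel driver in use" line for each device.
lspci -nnk | grep -A 3 -Ei 'SAS|LSI|Ethernet'
# On my box, the hijacked HBA showed "Kernel driver in use: vfio-pci"
# instead of the expected "Kernel driver in use: mpt3sas".
```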
It wasn't possible to deal with the leftover Isolated GPU from the UI, but using a very specific midclt command given by an AI chat, I was able to unregister it.
(I realize now that using midclt call datastore.update is undocumented and reserved for internal use.)
Anyway, it worked: the Isolated GPU list is now empty.
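To double-check the leftover entry was really gone, I looked at the advanced config over the API. I'm assuming system.advanced.config is the right endpoint and isolated_gpu_pci_ids the right field here; someone please correct me if that's wrong.

```shell
# Dump the advanced settings and pull out the isolated-GPU list
# (jq ships with SCALE, as far as I can tell).
midclt call system.advanced.config | jq '.isolated_gpu_pci_ids'
# An empty list ([]) should mean no device is held for isolation anymore.
```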
But even after that and a reboot, installing the new NIC would still scramble the existing PCI bus addresses in the same way (the HBA getting the vfio-pci driver).
I had to give up on that new 10Gb NIC and revert to the existing one.
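In case it helps others hitting this, a temporary workaround I considered (not a permanent fix) is rebinding the HBA by hand through sysfs. Sketch only: 0000:03:00.0 is a placeholder, find the real address with `lspci -D` first.

```shell
# Detach the device from whatever driver currently holds it (vfio-pci here)
echo 0000:03:00.0 > /sys/bus/pci/devices/0000:03:00.0/driver/unbind
# Force the next probe to use mpt3sas for this device
echo mpt3sas > /sys/bus/pci/devices/0000:03:00.0/driver_override
# Ask the kernel to re-probe unbound devices on the PCI bus
echo 0000:03:00.0 > /sys/bus/pci/drivers_probe
```

Of course this doesn't survive whatever TN does at boot, which is why I'd really like a supported way to pin this from TN itself.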