I’ve already spent quite a bit of time searching online and browsing through the forum archives, but haven’t found any useful information for my specific issue. That’s why I’m reaching out here.
My build:
Board: MC12-LE0
CPU: Ryzen 5 PRO 5650G
RAM: 2x32GB DDR4 ECC
I’m currently using a PCIe bifurcation card (x8x4x4) to run two NVMe SSDs. Both drives are recognized in the BIOS, but only one appears in TrueNAS. After some research, I ran lspci -v and discovered that both NVMe drives are indeed listed, but one of them is bound to a different driver, which I believe is causing the issue.
Below is the lspci -v output:
08:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller S4LV008[Pascal] (prog-if 02 [NVM Express])
        Subsystem: Samsung Electronics Co Ltd NVMe SSD Controller S4LV008[Pascal]
        Flags: bus master, fast devsel, latency 0, IRQ 255, IOMMU group 10
        Memory at fcc00000 (64-bit, non-prefetchable) [size=16K]
        Capabilities: <access denied>
        Kernel driver in use: vfio-pci
        Kernel modules: nvme

09:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller S4LV008[Pascal] (prog-if 02 [NVM Express])
        Subsystem: Samsung Electronics Co Ltd NVMe SSD Controller S4LV008[Pascal]
        Flags: bus master, fast devsel, latency 0, IRQ 43, IOMMU group 11
        Memory at fcb00000 (64-bit, non-prefetchable) [size=16K]
        Capabilities: <access denied>
        Kernel driver in use: nvme
        Kernel modules: nvme
Has anyone encountered a similar situation or have any ideas on how to resolve this so both NVMe SSDs are recognized by TrueNAS under the correct driver? I’ve also been considering manually specifying the driver in Linux, but I’m unsure if this is possible in TrueNAS—or how to approach it. Any insights or suggestions would be greatly appreciated.
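In case it helps to see what I meant by manually specifying the driver: this is a minimal sketch of what I had in mind, using the standard sysfs interface (the 0000:08:00.0 address is taken from the output above, and I don’t know whether TrueNAS would keep such a change across reboots):

# run as root; release the device from vfio-pci if it is currently bound there
echo 0000:08:00.0 > /sys/bus/pci/devices/0000:08:00.0/driver/unbind
# clear any driver_override so the kernel is free to pick the normal driver
echo > /sys/bus/pci/devices/0000:08:00.0/driver_override
# ask the nvme driver to take the device
echo 0000:08:00.0 > /sys/bus/pci/drivers/nvme/bind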
vfio-pci is the driver used for PCIe device isolation. I assume you haven’t isolated one of your SSDs for a VM or something.
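If you want a quick sanity check of what vfio-pci currently has claimed, something like this should work from a shell (the directory only exists while the vfio-pci module is loaded):

ls /sys/bus/pci/drivers/vfio-pci/
# bound devices show up as symlinks named after their PCI address, e.g. 0000:08:00.0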
I don’t know if it’s still applicable to the 5000-series APUs, but previous generations would limit the PCIe x16 slot (and the permitted bifurcation modes) to x8 width when an APU was installed and its iGPU was active, as the iGPU consumes 8 of the available lanes, leaving only 8 for the topmost PCIe slot.
I believe that as of the 5000-series, it’s internally connected over the same “AMD Infinity Fabric” for the CCD/CCX so that shouldn’t be the cause here - but all the same, I would check for a BIOS update for your motherboard.
Not that I’m aware of. It’s just a generic card from China that I picked up cheaply on eBay. I actually had the same card before, but that unit didn’t work at all. Here’s the picture (there are many different sellers).
I went with this card because I couldn’t find many other options that have an x8 slot on top for future expansion (like an HBA or a 10Gb NIC).
No, I just plugged in the card with the two SSDs, powered on, and both NVMe drives showed up in IPMI and in the BIOS, so I assumed it would work out of the box. Turns out it’s not that simple. (I had already made the BIOS settings beforehand.)
By the way, I noticed a few warnings/errors in the boot log, which I’m not sure how to address:
can't derive routing for PCI INT A
PCI INT A: not connected
...and the same for INT C
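Those came straight from the kernel log; something along these lines should pull them up again, though I have no idea whether they’re actually related to the NVMe problem:

dmesg | grep -i "PCI INT"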
I also see an entry in the advanced settings under “Isolated GPU PCI IDs” with an option for “Unknown ‘0000:08:00.0’ slot”, which should be the missing NVMe.
You can query that with lspci | grep 08:00 to see if it matches - if so, then remove that item under the Isolated GPU PCI IDs menu and reboot.
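A slightly more verbose variant that also shows which driver is currently bound (assuming the same 08:00.0 address):

lspci -nnk -s 08:00.0
# "Kernel driver in use" should read nvme rather than vfio-pci once the isolation entry is removed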
Did you set the matching x8x4x4 option in your BIOS for the PCIe slot? The manual for the Gigabyte board seems to be a little off in that section, as it indicates this option only for the PCIe x4 slot.
Yes, it turned out to be the same device. After double-checking, I saw it was indeed selected under the Isolated GPU PCI IDs, even though I didn’t select it (it didn’t show in the list; only after clicking Configure did it appear in the extra window). Removing it and rebooting solved the issue. Thanks for pointing that out!
I still find it odd that TrueNAS automatically selected the device without me explicitly adding it. Once I unchecked it and saved, the entry disappeared entirely, and after the reboot, both NVMe drives now show up in the GUI with no issues. Hopefully it stays that way!
No, I’ve had the system only since mid-November and haven’t set up any VMs yet since I wanted to get the two NVMe drives first. The only thing I did was adjust the necessary BIOS settings so I could pass through the iGPU in the future if I decide to set up a Windows VM.
So, I’d be really surprised if this was related because the “Isolated GPU Devices” list has always been empty, and I definitely don’t remember selecting the iGPU—especially since I haven’t set up a VM yet. The only thing I did was check if the iGPU was selectable to confirm my BIOS settings were correct. I never had a dedicated GPU in the system either, which makes this even stranger. If I had previously installed a GPU in the same PCIe slot, I wouldn’t have been so surprised.
The board appears to have an ASPEED BMC, so it theoretically would have been possible to isolate the iGPU (which was probably your logic behind buying this board in the first place) but I agree it’s unusual that it would have decided on its own to isolate a device.
Do you have a debug file that you might be able to attach to a DM to me?
Try going into your BIOS and doing the following. This is what I ended up doing on my device to get all of them to show up (your manual may be different, but it’s worth a try).