Lenovo 9400 Series 430-16i not detected in BIOS

Hello all,

I’m trying to get started with TrueNAS for the first time and may have gotten in a bit over my head. I bought a Lenovo 9400 Series 430-16i from ServerPartsDeals and I’m attempting to use it, but I’m not finding a lot of “friendly” information about it. I’ve tried booting a USB stick with storcli.efi to confirm anything at all about the card, but I can’t get it to run. Other EFI-based tools run fine, like the BIOS updater I used prior to attempting this. The BIOS sees something plugged in and running at x8, but can’t identify it.
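For reference, this is roughly what I’ve been trying from the UEFI shell (with storcli.efi sitting in the root of the USB stick; the syntax is my reading of Broadcom’s StorCLI docs, so I may have it wrong):
```
map -r                      # rescan and list the filesystems the shell can see
fs0:                        # switch to the USB stick (fs0: is just an example mapping)
storcli.efi show ctrlcount  # how many Broadcom/LSI controllers are detected
storcli.efi /c0 show        # basic info for controller 0: model, firmware, current mode
```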

On top of that even trying to find the firmware needed for this card seems impossible. The serial number pulls up nothing and I don’t know which of the many servers I don’t own would potentially have this card so I can at least get the firmware files. Is there an easier way to do this?

I feel like an idiot and have been stuck for days on this new build. Any help, advice, etc would be much appreciated.

Parts installed so far-
Motherboard: Gigabyte Z790 (BIOS just updated to the latest version)
CPU: Intel Core i5-12600K
RAM: G.SKILL Ripjaws S5 Series 64GB (2 x 32GB) 288-Pin PC RAM DDR5 6000 (PC5 48000)
HBA: Lenovo 9400 Series 430-16i 4x Internal SAS/SATA/NVME Ports PCIe x8 9400-16i Full Height/Low Profile HBA

Can it be flashed to IT mode? Has it been? Running those cards in RAID or JBOD mode is discouraged, and the results are in some ways undefined.
What specific motherboard do you have? Gigabyte has 30+ boards at least partially matching the name “Gigabyte z790”.
Have you verified that the PCIe slot fully supports 8x?
Have you made sure it’s fully seated?
Have you tested the card in a different system?
Is there any difference if you use legacy vs EFI mode for the PCIe slots? What about running vs not running the OPROM?

Hindsight is 20/20, but this is why you use tried-and-tested hardware, in this case 9200- or 9300-series cards that can be crossflashed into plain HBAs with basic LSI IT firmware.
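For reference, on a 9300-series card the crossflash itself is only a couple of commands from a UEFI shell with Broadcom’s sas3flash.efi on the stick; a rough sketch, and the firmware filename below is a placeholder for whatever IT image matches the exact card:
```
sas3flash.efi -list                      # show the adapter plus its current firmware/BIOS versions
sas3flash.efi -o -f SAS9300_16i_IT.bin   # placeholder filename: write the matching IT-mode firmware
```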

Hello!

I can’t get storcli.efi to run, which I believe would tell me what mode the card is set to and what firmware is loaded on it.
The exact motherboard is the Gigabyte Z790 UD AC.
As for the PCIe slots, the motherboard does see the card running at x8 in two of the three x16 slots I tried (the top one and the second from the bottom). I made sure it was fully seated all three times I moved the card between slots.
I haven’t tested it in another system; I only have one other system I could put it in, and that would require some effort and disassembly.
I wasn’t able to find any “legacy” boot mode in the BIOS, though it’s not a menu system I’m familiar with.
I’m not sure what OPROM is… I’m not seeing it in the boot menu, only the USB drive I was using earlier, and I’m not finding anything on the Gigabyte site about it either.

Thanks!

Thank you.
Unfortunately you don’t have three x16 slots; you have one. The other two are physically x16, but not x16 electrically. So when you say you tested the top one, which one is that?

Did you try it in the slot named PCIEX16?

Ah ok, my understanding of what the motherboard supports was flawed. I did try it in the GPU slot at the top, the metal-reinforced slot that should be the full PCIe 5.0 x16 slot.

Also, further research is telling me to enable CSM on my motherboard, but I seem to have hit another roadblock: CSM won’t stay enabled with an Intel CPU using the iGPU. I wasn’t aware of this restriction and bought the Intel specifically so the iGPU could be used by Plex/Jellyfin. It looks like to proceed I now have to buy a graphics card, but the HBA is sitting in the GPU slot on the motherboard.

Questions:
Is CSM necessary to run the HBA?
If so, is there any way to get the HBA working with the iGPU enabled post TrueNAS install?

I really should have looked for the hardware suggestions on these forums prior to my attempts but I thought “how hard could it be!”. Oh the hubris. :smile:

Thank you for your help!

No, CSM is a legacy setting that one should avoid enabling if possible. UEFI is the future.

I must admit I glossed over your PCIe 5.0 x16 slot entirely, the one linked directly to the CPU; my apologies. So you have two slots it should work in, not the single one I mentioned.


Still plugging away, but I actually got an EFI shell created with Rufus to boot. Am I understanding correctly that this shows my USB drive and all the SATA connections the HBA could make off the one SAS breakout cable I have attached? I also have one HDD (4 TB) connected on P1, if that makes more sense with the screenshot. Not sure if this means it’s working correctly or not. I don’t see anything during boot-up that would indicate the card is detected and being loaded at all.
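For anyone following along, I believe these are the standard UEFI shell commands for checking whether the card itself shows up (I may well be misreading the output):
```
pci        # list every PCI device the firmware enumerated, with vendor/device IDs
devices    # list the device handles the shell knows about, including disks behind the HBA
```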

Also attempted to boot the TrueNAS ISO from USB via Rufus again, but got some odd GRUB errors and couldn’t continue.

Does this pic I took mean I have hope or is this still a lost cause?

If you boot into TrueNAS with the card in a 16x slot and open a shell, what does a simple lspci return?

Please post the full output in backticks ` like so:
```
Text here
```

Also, what about this, all on one line:
```
lsblk -bo NAME,MODEL,ROTA,PTTYPE,TYPE,START,SIZE,PARTTYPENAME,PARTUUID
```
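If the full lspci output is long, something like this should narrow it down to the HBA, assuming it identifies itself as a Broadcom/LSI SAS device:
```
# -nn adds vendor:device IDs, -k shows which kernel driver (if any) claimed the card
lspci -nnk | grep -iEA3 'lsi|broadcom|sas'
```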

Unless they posted the wrong info, the Lenovo 9400 Series 430-16i has a PCIe x8 interface, so it should work in a slot other than the graphics card slot and still provide the 12 Gb/s SAS speed. It is possible that the graphics card slot, depending on the motherboard/BIOS, may not actually support a card other than a graphics card. Some PCIe slots also share lanes with other parts of the motherboard (like networking) and won’t run at full speed, or even be available, if an M.2 slot is in use, so you may need to look at a block diagram or experiment with which slot the card is in.

It looks like it’s working! I tweaked a few settings in Rufus, got TrueNAS to boot, and it sees the attached HDD as an installation destination. Screenshots of those commands are attached as requested.

I saw that it was reading the card in a PCIe 4.0 x4 slot but running at PCIe 3.0 x4. I wasn’t sure if that was fast enough to run all 4 SAS ports, so I put it in the GPU slot and it’s now showing a PCIe 4.0 x8 link running at PCIe 3.0 x8. If I’m wrong and x4 would have been enough, that’d be great, since it would free the slot up for a potential future dGPU if needed. I finally got things running and I’m transferring over the 60 TB of data I have on the old Synology. This software is going to be a learning curve for sure, but I’m hoping I’ll grow into this 15-bay chassis with time. Eyeballing a 24-bay hot-swap chassis from Alibaba really hard though. :smiley:
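For the record, I believe the negotiated link can be double-checked from the TrueNAS shell with something like this (01:00.0 is a placeholder bus address; the real one comes from lspci):
```
# LnkCap = what the card/slot can do, LnkSta = what was actually negotiated
lspci -vv -s 01:00.0 | grep -E 'LnkCap:|LnkSta:'
```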

For the 12 Gb/s 16i card, Lenovo shows a PCIe 3.0 x8 host interface, so the fastest it will operate at is PCIe 3.0 even if it is put in a 4.0 slot; putting it in a 4.0 slot just negotiates that slot down to 3.0 speed. Since it is x8, I would think each connector (4 connectors) effectively gets 2 PCIe lanes’ worth of bandwidth at 3.0 speed. This goes back to what else a slot shares PCIe lanes with, which determines the actual throughput and speed you can get from it. That is usually only shown on a block diagram of the motherboard, or just casually mentioned somewhere else in the manual.
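As a back-of-the-envelope sanity check, assuming roughly 985 MB/s of usable bandwidth per PCIe 3.0 lane after 128b/130b encoding and ignoring other protocol overhead:
```
# rough host-link totals; dividing by connectors is a simplification, since the
# controller actually pools the link bandwidth across all four ports
echo "x8 link: $((985 * 8)) MB/s total, ~$((985 * 8 / 4)) MB/s per connector"
echo "x4 link: $((985 * 4)) MB/s total, ~$((985 * 4 / 4)) MB/s per connector"
```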

It looks like a good card with an LSI SAS3416 chip, as long as you can directly cool the heatsink with plenty of airflow across it; otherwise it may (likely will) overheat during use.

Ah ok, so I’m assuming the other slot with only x4 would not have enough lanes to run the card at full speed. Bummer.

As for the card, it’s in a Rosewill 4U 15-drive case and currently the contactless thermal reader is seeing it at 108°F. Is that a good temp, or should I be panicking and looking to get a PCIe cooling fan? I seem to remember those being a thing for sale somewhere.

Thanks for all the great info!

That’s about 42°C, which I wouldn’t want to go much over.
Operating temperature: 10°C to 55°C (50°F to 131°F). Putting a load on the card, such as transferring lots of data or resilvering, will quickly cause the card’s temperature to rise, especially since it is a 4-port card. If it goes over 55°C and overheats, poof goes your data into possible corruption and loss. If it stays under about 52°C while under load, it should be fine as is.
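If you want the temperature the controller itself reports rather than the heatsink surface, Broadcom’s storcli utility can usually read it, assuming you have storcli/storcli64 installed and the card’s firmware exposes the sensor:
```
# report the ROC (RAID-on-chip) temperature for controller 0
storcli64 /c0 show temperature
```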

This is not meant to scare you. These types of cards and heatsinks are designed specifically for server chassis, which have tight, sometimes low-profile quarters with high airflow and high static-pressure requirements (it’s why they are loud), and even then the card, depending on its location, may need supplemental airflow. Just drop an appropriately sized fan in next to the card, blowing on the heatsink, if you want or think it could use extra cooling.