I have been reading a lot about adding extra SATA ports to my motherboard, and it seems that everyone suggests I should only use HBA cards for this purpose.
From what I understand, I have 1 PCIe 5.0 x16 slot (physical and electrical) and 3 PCIe 4.0 x1 electrical slots (physically they are x16 slots).
Since I have other uses for my PCIe 5.0 x16 slot, I believe I will be using one of the PCIe 4.0 x1 electrical slots for the HBA card.
After a bit of searching, I found that the LSI 9300-8i card is the one that suits my needs and seems to be well regarded by most people.
I believe the LSI 9300-8i is a PCIe 3.0 x8 card. If I run this card in a PCIe 4.0 x1 slot, does that mean I am limited to PCIe 3.0 x1 speed?
If so, I believe the max speed of PCIe 3.0 x1 is about 1 GB/s in each direction (the 2 GB/s figure counts both directions). The max speed of one spinning hard drive is typically about 250 MB/s. Does that mean that if I plug all 8 SATA spinning hard drives into the HBA card, the link would only bottleneck them when most of the drives are doing full-speed sequential transfers at once (250 MB/s * 8 = 2 GB/s, about twice what a PCIe 3.0 x1 link can carry one way)?
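(For reference, here's a rough back-of-the-envelope check of that math in Python - just the per-lane rates from the PCIe spec with line-encoding overhead, ignoring protocol overhead, so real numbers will come in a bit lower:)

```python
# Rough PCIe bandwidth check, ignoring protocol/queueing overhead.
# Per-lane transfer rates from the PCIe spec: gen3 = 8 GT/s, gen4 = 16 GT/s,
# both using 128b/130b line encoding.

GT_PER_S = {"3.0": 8.0, "4.0": 16.0}
ENCODING = 128 / 130  # usable fraction after 128b/130b encoding

def link_gbps(gen: str, lanes: int) -> float:
    """Usable bandwidth in GB/s, one direction."""
    return GT_PER_S[gen] * ENCODING / 8 * lanes

link = link_gbps("3.0", lanes=1)  # gen3 card in an electrically x1 slot
drives = 8 * 0.250                # eight HDDs at ~250 MB/s sequential each

print(f"link: {link:.2f} GB/s, drives: {drives:.2f} GB/s")
# -> link: 0.98 GB/s, drives: 2.00 GB/s
```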
Even in the scenarios where the bandwidth should theoretically suffice, will it work properly when fully loaded? I doubt it.
I’d personally investigate turning the x16 into x8x8 instead, and using some risers.
If you really want, you can check out my signature for ideas on using the 4.0 x1 for a 10-gig NIC; it is functional, and your motherboard wouldn’t even require butchering a riser to make it work, since your slots are physically x16 but electrically x1.
So basically what you are saying is that using the x1 slot should theoretically work at PCIe 3.0 x1 speed. However, the caveat is that I may not be able to get the advertised max speed when I try to fully load it.
Your suggestion of x16 > x8 + x8 is interesting, but I may be using that slot for x16 > x4+x4+x4+x4 for NVMe drives, so I may deal with just using the PCIe 4.0 x1 slot (I may get two HBAs and plug them into two x1 slots if I am truly desperate).
The OWC 10G NIC that you linked in your profile is something I have been eyeing. Since the NIC supports PCIe 4.0 x1 mode, I think you are probably right that it’s plug-and-play for me. Just curious: it seems that you had to get a riser AND dremel it? Wouldn’t a riser (like x1 to x4) be sufficient? Why did you still need to dremel it? (Sorry for my ignorance; I have never used a riser cable before.)
All good - the card itself CAN work at PCIe 4.0 x1, but it is physically an x4 because it can also run at PCIe 3.0 x4; my riser was physically x1 but had a closed back, so nothing bigger than x1 would fit… without a bit of Dremel work.
I’m assuming there are some gotchas - will the HBA even negotiate to x1? Maybe, maybe not. It is kinda hard to find any past experience on forums, but I did at least see one quote stating:
So it might not just be bandwidth constraints, but whether the device even wants to negotiate down to x1.
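(If the card does train at all, one way to see what it actually negotiated - on Linux, at least - is to read the link attributes out of sysfs; the PCI address below is just a placeholder for wherever the HBA lands:)

```python
# Read negotiated vs. maximum PCIe link parameters from sysfs (Linux).
# 0000:01:00.0 is a placeholder address - find the HBA's real one with lspci.
from pathlib import Path

dev = Path("/sys/bus/pci/devices/0000:01:00.0")

for attr in ("current_link_speed", "max_link_speed",
             "current_link_width", "max_link_width"):
    print(attr, "=", (dev / attr).read_text().strip())

# current_link_width = 1 alongside max_link_width = 8 would mean the HBA
# did negotiate down to x1 rather than refusing to link up.
```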
This is where you may want to explore what is available in your market for used server parts. Sometimes there are good deals on systems with more fully populated x16 and x8 slots than you’d know how to fill, CPU & RAM included.
IPMI is also always nice.
That being said, since you already have the motherboard, I can understand why you’re trying to make it work… just be aware it might not.
What card are you using for that? Because I’ll be honest, I was reading through the manual for this motherboard & I’m very confused about its level of bifurcation support & whether you’ll be able to run x4x4x4x4 successfully on it or not…
I’m assuming there are some gotchas - will the HBA even negotiate to x1? Maybe, maybe not.
I did a bit of digging. I found a reddit thread saying the 9300-16i would work at x1 speed, so there is hope that the 9300-8i would also work.
What card are you using for that? Because I’ll be honest, I was reading through the manual for this motherboard & I’m very confused about its level of bifurcation support & whether you’ll be able to run x4x4x4x4 successfully on it or not…
So according to an ASUS FAQ (search for AMD B650), my motherboard will support x4 x4 x4 x4 with my CPU (Ryzen 7600X), at least using the ASUS Hyper M.2 card (I think this one).
Fair enough man - that ASUS card just electrically splits the x16 into four x4 slots & relies entirely on the motherboard for bifurcation. This isn’t a bad thing at all, and frankly I have the same card & I enjoy it… but if the motherboard for whatever reason doesn’t bifurcate, then you’re SOL.
Only reason I brought it up is because the motherboard BIOS manual on ASUS’ website did not outright state that x4x4x4x4 bifurcation is supported as an option, but instead called it something stupid like ‘SATA RAID’, which to me was a red flag.
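(If you do go that route, here’s a quick sanity check - again Linux-only, and just a sketch using the standard sysfs paths - to confirm all four drives on the card actually enumerated after enabling bifurcation:)

```python
# List NVMe controllers and their PCI addresses via sysfs (Linux).
from pathlib import Path

for ctrl in sorted(Path("/sys/class/nvme").iterdir()):
    addr = (ctrl / "address").read_text().strip()   # PCI address of the controller
    model = (ctrl / "model").read_text().strip()
    print(ctrl.name, addr, model)

# If only one of the four drives on the Hyper M.2 card shows up here,
# the slot most likely did not bifurcate to x4x4x4x4.
```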
Everything you’re planning to do could work just fine; in fact, there is likely a good chance that it will… but I feel compelled to mention that we’ve compounded enough possible points of failure that, while I’d be fine with it, more conservative folks would suggest looking into more ‘recommended’ hardware choices to ensure a better guarantee of functionality.
Not that weird, honestly…
The main purpose of that is actually to provide access to more NVMe storage through a single x16 slot.
(x8+x8 can be used for GPUs, but x4x4x4x4 is really specific to NVMe)