So, I’m planning to get an X12SPA-TF from Supermicro, but I have some confusion and questions regarding the lane configuration.
I’m trying to pair it with a 32-core 3rd Gen Xeon Platinum, which offers 64 PCIe lanes and is supported by this motherboard.
If we look at the datasheet, it states:
Four PCIe 4.0 x16 slots (CPU SLOT1/3/5/7)
Three PCIe 4.0 x8 (IN x16) slots (CPU SLOT2/4/6)
A total of seven slots. But looking at the motherboard block diagram, the 64 CPU lanes are all distributed among the PCIe slots, and the rest of the components are tied to the PCH (C621A) lanes.
Analysing the diagram, I can see a total of four PCIe Gen4 x16 links coming off the CPU, which equals 64 lanes. Based on the diagram, here’s my understanding so far:
The first Gen4 x16 link either runs as a full x16 or splits into 2×x8 with the help of a switch, which then feeds either an x16 PCIe slot or 4×x4 M.2 sockets.
The second, third, and fourth Gen4 x16 links can each be used as an individual x16 PCIe slot or split into 2×x8 across two slots.
I’m not sure whether I can really utilize x4 lanes on each of the four M.2 sockets and also the full x16 on PCIe SLOT1.
Or is it really the case that I can utilize x4 lanes on each M.2 socket along with the following:
What the diagram shows is that slots 3, 5, and 7 can have the full x16 lanes…an x16 card plugged into those slots will get the full x16 lanes.
Slots 2, 4, or 6 are x8 slots…an x8 card plugged into one of them will get full bandwidth. But, plugging a card into one of these 3 slots will reduce the bandwidth to the corresponding slot (3, 5, or 7) to x8.
So, if you fill up all 6 of these slots, they will all run at x8.
Slot 1 shares bandwidth with the M.2 sockets in a strange way: you can use two of the M.2 sockets (C03/C04) with slot 1 at x8, or three to four of the M.2 sockets and no slot 1.
All of the above is clearly stated in the motherboard manual:
PCIe 4.0 x16 Slots
SLOT1 will be disabled when either M.2-C01 or M.2-C02 is in use.
SLOT1 will change to PCIe x8 when M.2-C03 or/and M.2-C04 are in use.
SLOT3/5/7 will change to PCIe x8 when SLOT2/4/6 is in use respectively
Relatively unrelated to this is the possibility that the motherboard supports bifurcation on the slots. This means that the x16 or x8 slot might be able to be split into x4x4x4x4, x8x8, x4x4, etc.
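Going back to those three manual notes for a second, here is a rough sketch of how I read them, just to make the sharing behaviour concrete. The slot/socket names come from the manual; the function itself is only my own illustration, not anything from Supermicro.

```python
# Rough model of the X12SPA-TF sharing rules quoted above (my reading of the manual).

def effective_widths(populated):
    """populated: set of names such as {"SLOT1", "SLOT2", "M.2-C03"}."""
    widths = {}
    # SLOT1 shares its x16 with the four M.2 sockets.
    if "SLOT1" in populated:
        if populated & {"M.2-C01", "M.2-C02"}:
            widths["SLOT1"] = 0    # SLOT1 is disabled outright
        elif populated & {"M.2-C03", "M.2-C04"}:
            widths["SLOT1"] = 8    # SLOT1 drops to x8
        else:
            widths["SLOT1"] = 16
    # Each x16 slot (3/5/7) drops to x8 when its paired x8 slot (2/4/6) is in use.
    for x16, x8 in (("SLOT3", "SLOT2"), ("SLOT5", "SLOT4"), ("SLOT7", "SLOT6")):
        if x16 in populated:
            widths[x16] = 8 if x8 in populated else 16
        if x8 in populated:
            widths[x8] = 8
    # Populated M.2 sockets always run at x4.
    for m2 in ("M.2-C01", "M.2-C02", "M.2-C03", "M.2-C04"):
        if m2 in populated:
            widths[m2] = 4
    return widths

print(effective_widths({"SLOT1", "M.2-C03", "M.2-C04"}))
# {'SLOT1': 8, 'M.2-C03': 4, 'M.2-C04': 4}
```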
OMG. Thank you so much for pointing it out. I probably missed it, my bad.
BTW, when using the x16 and x8 slots together, will both operate at full speed, i.e. one at x16 and the other at x8? Would the same apply to SLOT1 and the M.2 sockets? In short, will they run at their full rated speeds, or will it fluctuate?
Also, I see now that there are multiplexer switches installed to steer the PCIe lanes between the designated slots. Would that add any extra latency for devices installed in those slots? Secondly, would high-bandwidth devices such as a 100GbE NIC or an HBA card (for SATA SSDs) underperform because of this?
For example, I plan to install an HBA card in SLOT1 and also use M.2-C03 and M.2-C04. In this scenario, would my HBA card or either of the NVMe drives run at reduced speed?
Well, while I was writing, I found it in the manual:
PCIe 4.0 x4 M.2 M-key Sockets (Support RAID 0, 1, 5, and 10)
*Populating M.2-C03 or/and M.2-C04 sockets might have a performance impact on CPU SLOT1.
*Small form factor devices and other portable devices for high speed NVMe SSDs
But it only says it might have a performance impact. So will it really bottleneck?
Another question: on the Supermicro website I see that the motherboard has NVMe heatsinks, but the product page doesn’t mention them. Can anyone confirm whether Supermicro includes an NVMe heatsink, unlike with its previous X11SPA-TF model?
There is a lot of funky stuff going on with this motherboard. The lanes are split all over the place.
The user manual states:
IOU0 (IIO PCIe Port 1) / IOU1 (IIO PCIe Port 2) / IOU2 (IIO PCIe Port 3) / IOU3 (IIO PCIe Port 4) / IOU4 (IIO PCIe Port 5)
This feature configures the PCIe port Bifurcation setting for a PCIe port specified by
the user. The options are Auto, x4x4x4x4, x4x4x8, x8x4x4, x8x8, and x16.
This tiny statement does not say which slots can be manipulated, so I would not assume it applies to all of them. It would be great if it did, but you could send Supermicro an email and ask; they generally answer fairly quickly in my experience. The BIOS options themselves will likely be a bit clearer to understand and easier to identify by slot.
The fishy part of the diagram is how the lanes are all split up. For most of those slots I doubt you will get full speed (do you need Gen4 speed? Maybe on the M.2 interface). Take one small section, the upper-left corner with the M.2 sockets: you have 16 lanes feeding a multiplexer that splits them across the four M.2 sockets and PCIe Slot 1. Every slot has to compete with another slot, since each pair of slots shares 16 lanes. I don’t know whether Gen4 speed makes any difference, but you may not get full speed when two devices share the same PCIe lanes.
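For some rough context on what those shared lanes can actually move, here are my own back-of-envelope numbers (not measurements):

```python
# Approximate per-direction PCIe 4.0 throughput: 16 GT/s per lane with 128b/130b
# encoding, so roughly 1.97 GB/s per lane before protocol overhead.
GEN4_GBPS_PER_LANE = 16e9 * (128 / 130) / 8 / 1e9   # ~1.97 GB/s

for lanes in (4, 8, 16):
    print(f"PCIe 4.0 x{lanes}: ~{lanes * GEN4_GBPS_PER_LANE:.1f} GB/s per direction")

# Output: x4 ~7.9, x8 ~15.8, x16 ~31.5 GB/s. A 100GbE NIC tops out around
# 12.5 GB/s, so even a shared x8 link has headroom; a SATA-SSD HBA needs far less.
```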
Just some food for thought, and I’m not saying the board is bad. You asked a question and I’m just trying to give you an accurate answer and opinion.
Last comment: This motherboard may need a lot of airflow to keep things cool.
But it does look like a nice motherboard and expensive CPU. Wish I had that kind of money in the bank.
I think only the four x16 slots (1, 3, 5, and 7) can be set to a real x16, but then the rest of the NVMe sockets and x8 slots get disabled. Dang shit. What a pity!
I was really okay with the mode where M.2-C01 and C02 are left unused and SLOT1 runs at x8. That would do the job for me. But then it says the following:
PCIe 4.0 x4 M.2 M-key Sockets (Support RAID 0, 1, 5, and 10)
*Populating M.2-C03 or/and M.2-C04 sockets might have a performance impact on CPU SLOT1.
*Small form factor devices and other portable devices for high speed NVMe SSDs
This is so weird now!
I needed at least three x16 slots + one x8 slot + two NVMe. I could have used the C03 and C04 NVMe sockets and gotten x8 on SLOT1, which was perfect for me. But I really have doubts about what Supermicro says in the manual regarding the performance impact. I’m so sad right now ;(
Is that a reason why people prefer to use AMD EPYC in the first place when doing NAS builds?
Yes, I need at least the three slots. Gen3 vs. Gen4 really does not matter on the built-in NVMe sockets, although I would love to have it.
How sad is that!
Yes, that’s what makes me especially worried.
Oh yeah, I do get it, mate.
I’m super thankful for your suggestions!
Yes, yes. I’m less worried about that for now, as I have a solution for it!
Looks exotic. I wish it had at least 3×x16 + 1×x8 + 1×NVMe. Maybe that’s possible via some BIOS options?
What if the device installed in SLOT1 is itself an x8 device? Being x8, I don’t think it can utilize more than x8 bandwidth anyway, unless an x16 device is installed in SLOT1. What do you think?
Great to hear. Do all high-end boards have such a switching mechanism, or only workstation/HEDT ones?
Bingo!
So, just to tell you, I plan to install a PCIe 4.0 x8 HBA card in SLOT1 and also plan to use the C03 and C04 onboard NVMe sockets. Would that really have any noticeable impact on performance?
This will have some D7-P5600 6.4TB PCIe 4.0 drives installed using PCIe adapters to keep the setup simple.
SLOT1 will be disabled when either M.2-C01 or M.2-C02 is in use.
SLOT1 will change to PCIe x8 when M.2-C03 or/and M.2-C04 are in use.
SLOT3/5/7 will change to PCIe x8 when SLOT2/4/6 is in use respectively
The “performance impact” to Slot 1 when you install M.2 drives will be that it will run at x8 if you install in C03 and/or C04, and will run at x0 (i.e., be completely disabled) if you install in C01 and/or C02.
Likewise, slots 3/5/7 will run at x8 if you install something in the corresponding slot that shares the PCIe switch (2/4/6). It’s really simple.
What I would do is start by figuring out how many M.2 devices you want to install. If it’s two or less, put them in C03 and C04, and use slot 1 as an x8 slot. If it’s more than two, then just live with the fact that slot 1 is useless.
Then, identify your add-in cards that run fine with an x8 slot. Start in slot 1 if you can, then put two of them in slots that share a PCIe switch (like 2/3, or 4/5). Keep doing that until you run out of cards that need no more than x8. After that, put cards that have to have 16 lanes into any remaining x16 slot. Note that GPUs absolutely do not need 16 lanes of PCIe 4.0…8 is fine. Cards that need 16 lanes might be a 4x M.2 carrier that needs motherboard bifurcation support.
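For what it’s worth, here is a quick sanity check of the plan you mentioned (an x8 Gen4 HBA in SLOT1 plus NVMe in C03/C04), using rough Gen4 per-lane numbers rather than anything measured:

```python
# Rough per-direction Gen4 figures (~1.97 GB/s per lane), not measurements.
GEN4_GBPS_PER_LANE = 1.97

card_lanes = 8     # the HBA is an x8 device
slot1_lanes = 8    # SLOT1 drops to x8 once C03/C04 are populated
link = min(card_lanes, slot1_lanes)

print(f"HBA in SLOT1: x{link}, ~{link * GEN4_GBPS_PER_LANE:.1f} GB/s")
for m2 in ("M.2-C03", "M.2-C04"):
    # each M.2 socket keeps its own x4 link
    print(f"{m2}: x4, ~{4 * GEN4_GBPS_PER_LANE:.1f} GB/s")
```

So on paper the HBA loses nothing, since it could never use more than x8 anyway.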
Thank you so much friend for the double confirmation!
So, an x8 device will work at full bandwidth, right?
Another question: on the Supermicro website I see that the motherboard has NVMe heatsinks, but the product page doesn’t mention them. Can anyone confirm whether Supermicro includes an NVMe heatsink, unlike with its previous X11SPA-TF model?
If it’s not in the “standard parts list”, it’s not included.
And again, every one of your questions is answered by the manual and website. The whole “run at x8 speed” question can be answered by tracing back from the slot to the CPU and finding the line labeled with the lowest number of lanes in the path. That’s the speed the slot will run at. If there are multiple paths through a switch chip, you add those together as if they were one line to get the max speed a slot might have.
An 8x device in a 16x slot will run at 8x
An 8x device in an 8x slot will run at 8x
An 8x device in a 4x slot will run at 4x
All that matters is the electrical lanes, not the physical size of the slot. The physical size only determines whether the card will physically fit in the slot.
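If it helps to see it spelled out, here is a tiny sketch of that rule (just an illustration; the names are mine):

```python
# The negotiated link is the narrowest hop between the card and the CPU, and a
# card can never use more lanes than it physically has.
def negotiated_lanes(card_lanes, path_lanes):
    """path_lanes: electrical lane counts along the path from slot back to the CPU."""
    return min(card_lanes, *path_lanes)

print(negotiated_lanes(8, [16]))      # x8 card in an x16 slot -> 8
print(negotiated_lanes(8, [8]))       # x8 card in an x8 slot  -> 8
print(negotiated_lanes(8, [4]))       # x8 card in an x4 slot  -> 4
print(negotiated_lanes(16, [16, 8]))  # x16 card behind an x8 uplink -> 8
```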
So will it be an AMD?
My NVMe system is AMD. I did a lot of research before I pulled the trigger. I got exactly what I expected. The only issue I had was I needed a new BIOS which the manufacturer provided me with. Almost a year later the BIOS was released to the public. But you won’t know about those things unless someone else tells you about it.
I know you have been on the forums for a while trying to figure out what you want to build. I’m just curious about all the PCIe lanes: do you need them or just want them? Big difference. You may need to compromise to get close to what you want. Need is the real requirement.
Time for sleep. Got to get up early again, darn kitten my wife brought home won’t leave the dogs alone to eat.
For NVMe, I’d go with something like a Xeon W-3400 or AMD Threadripper 7000 series.
With the right motherboard design, you can have 20 or more PCIe 5.0 NVMe drives with full bandwidth. We are also starting to see PCIe 5.0 x16 boards that have 8x M.2 slots and PCIe switches, which could give full bandwidth if you use PCIe 4.0 NVMe drives. Although you’d have to figure out cooling, 40 PCIe 4.0 M.2 drives would be a pretty amazing system.
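As a back-of-envelope check of why the switch-card idea works out (rough per-lane numbers on my part, not vendor specs):

```python
# Approximate per-direction throughput per lane (128b/130b encoding).
GEN4_GBPS_PER_LANE = 1.97   # 16 GT/s
GEN5_GBPS_PER_LANE = 3.94   # 32 GT/s

uplink = 16 * GEN5_GBPS_PER_LANE       # Gen5 x16 host link into the switch card
drives = 8 * 4 * GEN4_GBPS_PER_LANE    # eight Gen4 x4 M.2 drives behind the switch

print(f"uplink ~{uplink:.0f} GB/s, drives ~{drives:.0f} GB/s")
# ~63 GB/s both ways, so Gen4 drives behind a Gen5 x16 switch aren't
# oversubscribed in aggregate.
```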