X12SAE‑5 + 11th‑Gen CPU NIC issues: Is My TrueNAS SAN Build Still Viable?

Hello, I recently finished a server build that I planned to use as a SAN running TrueNAS for two XCP‑ng hosts.

Unfortunately (thank you, Intel), the Supermicro X12SAE‑5 motherboard paired with an 11th‑gen CPU is not compatible with any Intel fiber NIC. I have not tried Broadcom or Mellanox, but I doubt they would fare better: Intel’s 11th‑gen chips are officially validated only for storage and GPU devices, not NICs.

I had never encountered a limitation like this, but it seems real. No BIOS setting I tried makes the system POST with the 11th‑gen CPU and a NIC installed, and a Supermicro technician confirmed that all NICs are unsupported on this platform except in the two PCH PCIe 3.0 x1 slots, which is obviously useless.

My original plan was PCIe 4.0 x16 for the Intel E810‑XXVDA4, PCIe 4.0 x8 for the AOC‑S3008L‑L8E HBA, and the third M.2 slot as a SLOG for an HDD vdev that would host slower, non‑critical VMs. That plan is off the table.

With a 10th‑gen CPU the system does POST, but the two CPU‑attached slots drop to PCIe 3.0 x8. I know that link cannot deliver the NIC’s full 100 Gb/s. Does anyone see any issues with this compromise?
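For reference, a quick back‑of‑the‑envelope check of that gap (assuming PCIe 3.0’s 8 GT/s per lane and 128b/130b encoding; real throughput is a bit lower still after protocol overhead):

```python
# Can a PCIe 3.0 x8 slot feed a 4x25G NIC at line rate?
GT_PER_LANE = 8.0        # PCIe 3.0 raw rate, GT/s per lane
ENCODING = 128 / 130     # 128b/130b line encoding overhead
LANES = 8

lane_gbps = GT_PER_LANE * ENCODING   # usable Gb/s per lane
slot_gbps = lane_gbps * LANES        # ~63 Gb/s for the whole slot
nic_gbps = 4 * 25                    # E810-XXVDA4: four SFP28 ports

print(f"PCIe 3.0 x{LANES} payload ceiling: {slot_gbps:.1f} Gb/s")
print(f"NIC line rate: {nic_gbps} Gb/s")
```

So the slot tops out around 63 Gb/s against the NIC’s 100 Gb/s of ports; whether that matters depends on whether all four ports can actually be saturated at once.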

The two hosts would connect directly to my TrueNAS system over two SFP28 links each, bonded with LACP, avoiding an expensive SFP28 switch.
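One caveat worth checking before committing to LACP: standard 802.3ad hashing pins each flow to a single physical link, so any one iSCSI or NFS TCP connection tops out at one SFP28 link, not the bond’s aggregate. A sketch of the resulting numbers (assuming per‑flow hashing, as LACP normally does):

```python
# LACP aggregates across flows, not within a single flow.
LINK_GBPS = 25        # one SFP28 link
LINKS_PER_HOST = 2    # links in each host's bond

aggregate_gbps = LINK_GBPS * LINKS_PER_HOST  # best case, many flows
single_flow_gbps = LINK_GBPS                 # ceiling for any one connection

print(f"bond aggregate: {aggregate_gbps} Gb/s, single flow: {single_flow_gbps} Gb/s")
```

This is why many direct‑attach SAN setups use iSCSI multipath (MPIO) across the two links instead of an LACP bond.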

Current hardware

  • Motherboard: X12SAE‑5
  • Chassis: SCE826
  • CPU: Xeon W‑1270
  • NIC: Intel E810‑XXVDA4 (PCIe 3.0 x8)
  • Memory: 128 GB DDR4
  • Backplane: BPN‑SAS3‑826E
  • HBA: AOC‑S3008L‑L8E (PCIe 3.0 x8)

Drives

  • 5 × 8 TB SAS3 HDD
  • 4 × 1 TB SAS3 SSD
  • 2 × 2 TB NVMe

I lost the third M.2 slot when I reverted to the 10th‑gen CPU. I already have an Intel Optane P1600X, and with two PCH PCIe 3.0 x1 slots free I could add an M.2 adapter in one of them. I’ve read that SLOG throughput when paired with HDDs is well below the PCIe 3.0 x1 limit; would I still get the synchronous‑write benefits on this slower link?
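On the SLOG question: what matters most for a SLOG is write latency, which a narrower link barely affects; bandwidth only becomes the problem if the x1 ceiling were below what the pool can absorb. A rough comparison (the HDD‑vdev ingest figure is purely an illustrative assumption, not a measurement):

```python
# Ceiling of a PCIe 3.0 x1 link in MB/s (128b/130b encoding, before
# protocol overhead), versus a loosely assumed HDD-vdev write rate.
lane_gbps = 8.0 * 128 / 130        # PCIe 3.0, one lane, usable Gb/s
x1_MBps = lane_gbps * 1000 / 8     # ~985 MB/s ceiling
hdd_vdev_sync_MBps = 300           # assumed sustained ingest for a small HDD vdev

print(f"x1 ceiling: {x1_MBps:.0f} MB/s vs pool ingest ~{hdd_vdev_sync_MBps} MB/s")
```

Under that assumption, the Optane on a x1 link should still deliver the sync‑write latency benefit; the link just caps burst throughput near 1 GB/s.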

Does anyone see other issues with using this system as a SAN for two XCP‑ng hosts? (Please be gentle.)

If you are OK with using a U.2‑style PCIe switch carrier for those M.2 SSDs, you can get a used one off eBay. That would save you a native M.2 slot on the motherboard.

The specs say it needs 400 LFM of airflow though, so it runs pretty hot.

That won’t work; I only have the two PCH PCIe 3.0 x1 slots free. The two other PCIe slots already have devices in them.

You can connect that U.2 carrier to one of the usable M.2 slots via an M.2‑to‑U.2 cable. Otherwise, a PCIe‑to‑M.2 carrier for the Optane in the x1 slot would be fine too.

I’d rather not add all these extra pieces plus a PCIe switch. About that eBay link for the Viking Enterprise Solutions 2.5″ Performance U.2 4x NVMe SSD Module: isn’t that just something that plugs into a SAS backplane? What makes it U.2? I don’t have any experience with U.2.

Have you heard of anyone running a SLOG off a PCH PCIe 3.0 x1 slot before?

That Viking carrier is an NVMe/PCIe switch; it can be connected to an M.2 slot via a cable. SAS is its own protocol, totally unrelated to NVMe.

I’ll let someone else with more experience with SLOG chime in on the Optane.

One last thing you can try: taping over SMBus pins B5 and B6 on the E810 might help.

The connector on that tray looks like a SAS connector. I also don’t really have anywhere to mount that inside my case.

The SFF‑8639 physical connector is used by both SAS and NVMe, but those are different logical protocols, and different pins are used depending on whether the drive is SAS, U.2, or U.3.

Feeding a U.2 drive from an M.2 NVMe slot is merely a matter of cabling.

But from your first post it seems that the solution is to replace the Supermicro motherboard.


Unfortunately, this isn’t an option.

Would I see any bottleneck or throttling from feeding that U.2 connector off an M.2 slot? The idea is that I would put one of my mirrored NVMe drives on the U.2 carrier and also run the SLOG from it.