Has anyone run four NVMe drives on the ASUS Pro WS W680M-ACE SE with an Asus Hyper M.2 x16 Gen 4 Card?
Can anyone 100% definitely confirm via personal setup or testing, you see the 4 drives ready to run a RAID1 ZFS pool?
Thanks
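Not a definitive confirmation, but for anyone who wants to test this on their own hardware, here is a minimal sketch (Python on a Linux-based NAS OS such as TrueNAS SCALE) that counts the NVMe controllers that actually enumerated and prints what a 4-wide pool layout *could* look like. The pool name and device paths are placeholders, nothing from this thread:

```python
#!/usr/bin/env python3
# Rough check: how many NVMe controllers actually enumerated, and what a
# 4-wide RAIDZ1 layout *could* look like. Pool name and device paths are
# placeholders -- adjust for your own system before creating anything.
import glob
import os

controllers = sorted(glob.glob("/sys/class/nvme/nvme[0-9]*"))
print(f"NVMe controllers visible: {len(controllers)}")
for ctrl in controllers:
    model_path = os.path.join(ctrl, "model")
    model = open(model_path).read().strip() if os.path.exists(model_path) else "?"
    print(f"  {os.path.basename(ctrl)}: {model}")

if len(controllers) >= 4:
    devs = " ".join(f"/dev/{os.path.basename(c)}n1" for c in controllers[:4])
    # Printed only, never executed -- review before building any pool.
    print(f"\nExample: zpool create tank raidz1 {devs}")
else:
    print("\nFewer than 4 controllers visible -- the slot is probably not splitting into four x4 segments.")
```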
Gonna repost my previous response: it won't work, because the ASUS card just physically splits the PCIe lanes, which means the motherboard has to do the bifurcation. That specific motherboard can only split the slot as x8x8 (two NVMe), not x4x4x4x4 (four NVMe).
The solution is a PCIe switch card or a different motherboard.
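One way to see whether the bifurcation actually happened, assuming a Linux-based NAS OS: on a passive splitter card, drives on non-bifurcated segments simply don't show up, and each drive that does enumerate should report an x4 link. A rough sketch (the sysfs paths are standard Linux, nothing board-specific):

```python
#!/usr/bin/env python3
# Sketch: print the negotiated PCIe link width/speed for every NVMe controller.
# On a passive splitter card, only drives on segments the board actually
# bifurcated will appear at all, and each one that does should report a x4 link.
import glob
import os

def read(path):
    try:
        with open(path) as f:
            return f.read().strip()
    except OSError:
        return "?"

for ctrl in sorted(glob.glob("/sys/class/nvme/nvme[0-9]*")):
    pci_dev = os.path.realpath(os.path.join(ctrl, "device"))  # follow the symlink back to the PCI device
    width = read(os.path.join(pci_dev, "current_link_width"))
    speed = read(os.path.join(pci_dev, "current_link_speed"))
    print(f"{os.path.basename(ctrl)} ({os.path.basename(pci_dev)}): x{width} @ {speed}")
```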
Yeah, not the answer you want. You gotta find a motherboard willing to do a 4 way bifurcation on that x16 slot.
Was it you or the other OP that needed IPMI, and so had this board or a Supermicro as the choices? There are other ways to handle IPMI, which expands the landscape of boards with an x4x4x4x4-capable PCIe slot. JetKVM, for example, and I saw a PiKVM on a PCIe card too.
Having personally tested various cards of this nature, I consider myself somewhat knowledgeable here, because I've been trailblazing weird NVMe configs with esoteric hardware on TrueNAS for years now.
e.g. and many more permutations
I always recommend spending the extra $ on a PLX chip based solution, and I've personally had good luck with Linkreal adapters from AliExpress. Unfortunately it looks like their storefront is gone recently. I think I found the "replacement" Shenzhen company, but I haven't bought from them, so I won't share the link. Just search for "PEX8747" (or similar) on eBay or AliExpress and you'll find plenty; I just can't recommend a specific SKU right now.
OK cool: so the step up is the W790 chipset, right? Like the ASUS Pro WS W790E-SAGE SE… It's not mATX, but at least it has enough lanes and will do x4x4x4x4 guaranteed. Yes, it will use more power running 24/7, but it has IPMI. And you can get a suitable CPU with built-in graphics for HDMI… edit: but wait, the darn MB doesn't have an HDMI output port!?
Otherwise, attempting to stay with mATX, it's pretty much a matter of trying to find the older form factor ASRock Rack W680D4U-2L2T/G5. If you can find one, they pretty much sell at a premium now.
Which pathway would you recommend? Thanks
Is it for flexibility or is there something else to it?
Go to ARK on Intel’s site and check the bifurcation abilities of your CPU.
That stops at x8x4x4. So the answer is a resounding: NO. (And it’s not the motherboard.)
Ryzen CPUs can do x4x4x4x4. And so can Broadwell-E X10SDV boards…
W790 is an entirely different beast, for entirely different CPUs.
Follow the links and read up on Nick’s findings…
My Requirements
1/ Low energy use - particularly at idle
2/ Graphics via the CPU. Jellyfin (allegedly, according to their requirements) does not like AMD: hence the request for an Intel-based MB. HDMI out on the MB to allow the chip to pass HDCP 2.2 without the need for a graphics card
3/ 4x4x4x4 bifurcation on a PCIe 4.0 or 5.0 slot allowing 4x NVMe M.2 disks (attached to one of these "hyper cards") → all disks seen independently, ZFS RAIDZ1
4/ enough lanes to support 3/
5/ IPMI
6/ 10 Gb/s Ethernet
7/ micro/mini ATX MB (preferably)
that is pretty much it
Very few options as far as I can find
I did but could not find the reason. I might have missed something. I will check again.
There are YouTubers out there who say W790 is the only way to go for guaranteed 4x4x4x4 Gen 4 NVMe using these hyper PCIe cards. The only W790s around don't have HDMI out… They are gaming boards (gamers generally use GPUs, so don't need graphics on the chip). They are large boards with idle power use of at least 35 W (or more), and the compatible CPUs (Xeons) are also high energy use… Gamers have different requirements than NAS users (who tend to leave their gear on 24/7).
Gaming on W790 and Xeon Scalable??? I knew gamers are crazy, but not to that point…
Anyway, your requirements are essentially unworkable due to a conflict between 2/ and 3/: Ryzen CPUs do x4x4x4x4 (but not the APUs, which have lower idle power and always an iGPU), Intel Core/Xeon E don't (and the latest Xeon E/Xeon 6300P no longer have an iGPU).
You may drop hardware transcoding and do it by brute force on the CPU, in which case a Ryzen AM5 CPU on an MC13-LE1 or B650D4U-2L2T will just about fit (except for the lowest possible idle power).
You may drop one M.2 on the x16 slot and stick with Intel. Then your WS W680 can hold 3 M.2 in the x16 slot, a fourth in the x4 slot, and possibly a fifth from the SlimSAS port off the chipset. The x4 slot from the chipset takes the 10G NIC, and you boot from SATA since the x1 slot is for the IPMI AIC. Or you go back one generation with an E3C256-2L2T or X12STH-F and a Xeon E-23xxG with an iGPU: 3x M.2 in x16 + 1 in x4 (CPU), and a 10G NIC in the other x4 slot for the Supermicro board.
^ Well, the older ASRock Rack W680D4U-2L2T/G5 seems to be OK, but the prices have actually gone up, and in any case I can't seem to source one.
ASRock Rack is always a pain to find, man - but once again, you could use the motherboard you listed, you'd just have to use a different card for the NVMe drives.
There are basically two types of cards: your card is the type that just physically splits up the PCIe lanes and lets the motherboard do all the work. The second type is where the card itself does the splitting with an onboard switch chip, so no bifurcation is needed from the board.
I sadly have no experience, but maybe you can reach out to @NickF1227 & he can give some specific recommendations. So far we know that a card with a PEX8747 chip is apparently pretty solid based on his personal tests.
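If it helps, a quick heuristic for spotting a switch-based card once it's installed, assuming Linux and lspci: the PLX/Broadcom bridge shows up in the PCI topology and the drives hang off its downstream ports, regardless of what the motherboard's bifurcation menu says. A rough sketch (the vendor-string matching is just a guess at how the bridge is labelled):

```python
#!/usr/bin/env python3
# Sketch: look for a PLX/Broadcom PEX switch in lspci output. On a switch-based
# card (PEX8747 and friends) the drives sit behind the switch's downstream
# ports and enumerate even if the board can't bifurcate the slot.
# The vendor-string filter below is a heuristic, not an exact match.
import subprocess

out = subprocess.run(["lspci"], capture_output=True, text=True, check=True).stdout
bridges = [line for line in out.splitlines()
           if "PCI bridge" in line and any(v in line for v in ("PLX", "PEX", "Broadcom"))]

if bridges:
    print("Possible PCIe switch port(s):")
    for line in bridges:
        print("  " + line)
    print("\nRun 'lspci -t' to confirm the NVMe drives hang off these bridges.")
else:
    print("No PLX/PEX-looking bridge found -- drives would rely on motherboard bifurcation.")
```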
*Edit: I personally gave up chasing any kind of small form factor unless it's a tiny PC for some kind of office-level task. I quickly found out that I have no tolerance for scope limitations and want all the PCIe slots possible, because I keep wanting every device I own to do more. Personally I'd recommend any motherboard that comes with more slots than you'd ever want, to the point that using bifurcation feels silly.
@NickF1227 there are still a few "4 M.2 NVMe SSD PCIe x16 Adapter with PLX 8747 PCI Express 3.0 x16 Switch to 4 Ports M.2 Adapter Card - STC PE3162-4I" hanging around on AliExpress, but as everyone knows the whole NVMe M.2 world has pretty much moved on to Gen 4. My question to you, or anyone else crazy enough to try this stuff: have any of y'all found a Gen 4.0 card with one of those PEX8747-type chips, or is this pretty much where things stop right now? Thanks
If I had to guess, that specific chip does PCIe 3.0, and there is a newer version that'll do PCIe 4 or 5… but with that, costs likely increase quite a bit, and the recommendation for the 3.0 version was a cost and experience suggestion; though I don't mean to put words into anyone's mouth.
My Requirements
1/ Low energy use - particularly at idle
Well, for this we need to know exactly what your target for the server is.
My gut feeling is that you need some kind of consumer class CPU.
My current main server is running under Proxmox, in a VM. It has x QEMU Virtual CPUs assigned.
The host has 1x Intel Xeon E5-2697 v2 (12 cores/24 threads).
This arrangement sits usually at 0% CPU load.
(The server has two ZFS arrays. Usually the load is around 0, and between 50% and 75% during a scrub job.)
It has 32GB DDR3 ECC RAM
2/ Graphics via the CPU. Jellyfin (allegedly, according to their requirements) does not like AMD: hence the request for an Intel-based MB. HDMI out on the MB to allow the chip to pass HDCP 2.2 without the need for a graphics card
3/ 4x4x4x4 bifurcation on a PCIe 4.0 or 5.0 slot allowing 4x NVMe M.2 disks (attached to one of these "hyper cards") → all disks seen independently, ZFS RAIDZ1
4/ enough lanes to support 3/
It is usually not that difficult a requirement.
If the MoBo has a fully wired-up PCIe x16 slot with 4x4x4x4 bifurcation, it would be OK.
5/ IPMI
That will limit you to server-grade HW.
A possible workaround is to buy an external KVM device, like the NanoKVM (<100 USD) or the PiKVM (200-300 USD, as I remember right now).
6/ 10 Gb/s Ethernet
If you want it integrated on the MoBo, it limits you to server-grade or (very) high-end gamer boards.
If you are OK with an add-on card, another x4 or x8 PCIe slot is needed.
7/ micro/mini ATX MB (preferably)
That is the easiest I think
that is pretty much it
Conclusion:
I'm still going to argue that the size constraints are straight up the hardest part of getting this laundry list of features together on a budget… otherwise you're 100% right, but if we start looking at all these add-on cards then we're out of PCIe slots pretty quick.
I have not done any testing on Gen 4 for this kinda stuff personally. Anecdotally, I will say that on L1Techs and STH and various Reddits I have seen two things.
Also, FWIW, tolerances and specs are more rigid in PCIe 4/5, so this kinda thing is probably going to die out in favor of other technologies at some point. PCI Express fabrics are comin'.
Well, there are already a number of conflicts in this spec list.
But I think the OP can get a nice solution as soon as he finds out what he actually needs.