I currently have a Xeon E3-1230 v2 in a Supermicro X9SCL-F-O board with 32GB of memory. I use an LSI 9220-8i HBA to connect to three DS4243s and a tape library. I have 8 vdevs of 6 drives each, for a total of ~350TB of storage. The DS4243s and the tape library are daisy chained. I recently added a 10Gb networking card.
By way of background, I’ve had some issues with drives “disconnecting” in the past. Sometimes pulling them out and popping them back in causes them to reappear and all is well. Sometimes I have to shut the entire system down and bring it back up again. After jury-rigging a fan onto my HBA, the drive drops have been much less frequent, but I would like a motherboard/CPU with more PCIe slots and bandwidth so that I could install more HBA cards and avoid daisy chaining. I’m hoping that would improve stability somewhat, improve throughput, and provide more PCIe bandwidth for my networking card. It would also be nice to be able to upgrade my memory from 32GB to at least 64GB.
I’ve been looking at the Supermicro X11SPL-F, though even from eBay the price seems a bit steep. Does anyone have any recommendations for my upgrade?
If you’re ready to step up to Xeon Scalable, with its higher idle power, and a full ATX board is no issue, you have a lot to choose from. Look at all Xeon Scalable motherboards, including AsRock Rack, Gigabyte, and others, not only Supermicro, and not only the X11SPL.
As an alternative, also consider Xeon W-2000 with the server motherboards X11SRM or X11SRL (Gigabyte also has a server model).
I highly recommend you download the motherboard user manuals and read them cover to cover. Pay attention to PCIe bifurcation for the slots, and to which lanes are dedicated and which are shared. The CPU alone does not guarantee that plenty of lanes are actually available at the slots. Take your time; there is a lot of information out there on the internet.
I take the point about reading the manual. That’s part of the reason I posted here… I was hoping someone had already done the research and could make a recommendation.
If you’re going for Xeon Scalable/EPYC, there are just too many options to choose from, so it becomes a matter of opportunity and what’s available near you, and we cannot help with that.
It is also a matter of what you need. If you need high cores and/or lots of RAM, Scalable/EPYC is the way to go. If you only want one more PCIe slot to add a second HBA, Xeon E/Ryzen could still be an option, with the right combination of slots. Or you could go for a single 9305-16i instead of your 9220 (RAID controller “lobotomised” to HBA?) to get more SAS lanes.
All fair points. I would still need to upgrade to make use of a 9305-16i, since my motherboard only supports 5.0Gb across each PCIe2.0 x8 slot. And I already have a bunch of 9220s in IT mode sitting around that I purchased over time from eBay.
I probably don’t need high cores or lots of RAM (this is primarily a Plex and file server), but the X11DPH-T with a couple of Scalables is looking pretty tempting for the price.
Ha, considering I’m running 3 (probably 4, soon) DS4243s, I obviously do not prioritize power consumption!
I don’t know that I’ve found a really good price, but the board seems to be going for about $300 on eBay, which is in-budget. If I want to go with Gold CPUs, I can get a bundle shipped for under $400.
Why do you suggest running it with a single CPU? Does TrueNAS Core not do well with dual processors? I admit I haven’t looked into that issue, yet.
I’m wondering if I should check my priors before investing in a new motherboard.
The manual for the X9SCM/X9SCL has a block diagram that shows slots 6 and 7 as having a “5.0Gb” (with a small ‘b’) PCIe 2.0 x8 connection. I was assuming that this was the throughput for the slot, which is well below the 4.0GB/s throughput for PCIe 2.0 x8. But is that the slot’s total throughput, or the per-lane serial bit rate, which would make sense given that each lane should run at 5.0GT/s?
Then there’s the question of the HBA. It is PCIe 2.0 x8 and a “6Gbps” card. I assumed that it was limited to 6Gb/s overall, but it appears to support 8 SAS lanes at 6Gb/s each, so it should, in theory, be able to saturate the 4GB/s of a PCIe 2.0 x8 link, even if the JBODs are daisy chained?
If all that’s the case, then I’m confused as to why I’m seeing the performance issues I’m seeing. I’m obviously missing something, so I appreciate feedback…
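For reference, here’s the back-of-envelope math I’m trying to reconcile (a rough sketch assuming 8b/10b encoding on both the PCIe 2.0 slot and the SAS2 links; please correct me if the assumptions are off):

```python
# PCIe 2.0: 5 GT/s raw per lane, 8b/10b encoding -> ~0.5 GB/s usable per lane
pcie2_lane = 5 * (8 / 10) / 8      # GB/s per lane
slot_x8    = 8 * pcie2_lane        # ~4.0 GB/s ceiling for the x8 slot

# SAS2: 6 Gb/s raw per lane, 8b/10b encoding -> ~0.6 GB/s usable per lane
sas2_lane  = 6 * (8 / 10) / 8      # GB/s per lane
hba_total  = 8 * sas2_lane         # ~4.8 GB/s across the 9220-8i's 8 ports

print(f"x8 slot ceiling:    {slot_x8:.1f} GB/s")
print(f"HBA SAS-side total: {hba_total:.1f} GB/s")
# Read this way, the "5.0Gb" on the block diagram is a per-lane bit rate,
# and the slot (not the HBA's SAS side) is the ~4 GB/s limit.
```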
The slots are PCIe 2.0, and the “5.0Gb” in the block diagram is the per-lane raw bit rate: 5 GT/s per lane with 8b/10b encoding works out to about 500 MB/s usable per lane, so roughly 4 GB/s for a x8 slot.
Mind that no HDD can saturate even a 3 Gb/s link, much less 6 Gb/s. 6 Gb/s matters for SSDs, but then you’d rather want an HBA from the PCIe 3.0 generation.
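As a rough illustration (the ~200 MB/s sequential figure per HDD below is just an assumption for a modern 7200 rpm drive, not a measured number):

```python
# Reality check: one spinning disk vs its SAS link, then 48 of them vs the slot
hdd_seq   = 0.20                # GB/s, assumed sequential rate for one HDD
sas1_link = 3 * (8 / 10) / 8    # ~0.3 GB/s usable on a 3 Gb/s link
sas2_link = 6 * (8 / 10) / 8    # ~0.6 GB/s usable on a 6 Gb/s link

print(f"One HDD uses roughly {hdd_seq / sas1_link:.0%} of a 3 Gb/s link "
      f"and {hdd_seq / sas2_link:.0%} of a 6 Gb/s link")

# Aggregate: 48 drives could in theory stream more than the ~4 GB/s ceiling
# of a PCIe 2.0 x8 slot, but only under ideal, all-sequential conditions.
print(f"48 drives x {hdd_seq} GB/s = {48 * hdd_seq:.1f} GB/s (theoretical best case)")
```

So for day-to-day Plex and file-server traffic, the per-drive link speed is unlikely to be what’s holding you back.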