Build Report: Norco RPC-4224, Supermicro X10SRi-F, Xeon E5-2699A v4 (Part 2)

Continuing on from Part 1 on the old forum.


I recently upgraded the CPU from an E5-1650 v4 to the unicorn model… the E5-2699A v4. That’s a jump from 6 cores to 22, with a pretty good base clock!

Broadwell-era Xeons are cheap now, and I get a kick out of dropping in a CPU that had an RRP of around 10K.

With SCALE I now have a bunch of Windows VMs etc. running on this 7-year-old system :slight_smile:

Previously, I was planning to virtualize FreeNAS on top of ESXi, and as part of that I picked up an Intel 36-port SAS expander for cheap, so that all 24 front bays plus an additional 4 SSDs could be chained off a single 8i HBA passed through to the VM. Then FreeNAS 11’s bhyve support progressed to the point where I could skip ESXi, and I never bothered finishing the expander work.

Fast-forward a few more years, and SCALE makes it practical to load this server up again with more VMs, services, SSDs, and NVMe.

I’ve got a Hyper M.2 card on order… and I wanted to chuck all the 2.5" SATA SSDs I’m starting to collect as I decommission various PCs into this case… so… it was time to finish the expander installation!


I’m using an Intel RES2CV360 36-port SAS2 expander. I got it cheap; they’re expensive again now!

The 36-port expander means I can dual-link up to 28 bays off a single HBA (8 of the 36 ports form the dual link back to the HBA, leaving 28 for drives), reclaim a PCIe3 x8 slot, and reclaim an HBA too… and I have other things to do with a spare HBA :slight_smile:

And the 36-port expander was pretty much the same price as two HBAs… so it’s even.

The current PCIe2 x8 HBA has about 4GB/s of bandwidth, which works out to roughly 166MB/s per disk across the front 24 bays, which is fine. This also leaves an extra 4 expander ports for something else if I wish, and it vacates the 10 SATA ports on the motherboard.
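For anyone sanity-checking the maths, here’s the back-of-the-envelope version, assuming roughly 500MB/s of usable bandwidth per PCIe 2.0 lane (a rule-of-thumb figure, not a measurement):

```sh
# PCIe 2.0 x8 HBA: ~500MB/s usable per lane (rule of thumb)
echo "$(( 8 * 500 )) MB/s total"          # ~4000 MB/s to the HBA
echo "$(( 8 * 500 / 24 )) MB/s per disk"  # ~166 MB/s across the 24 front bays
```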

One day, if I wanted, I could replace this with a gen3 HBA and only need an x4 slot.

I plan to use the SATA ports on the motherboard for some SSDs, and upgrade from the USB boot mirrors to some 120GB SATA SSDs.

When the Intel RES2CV360 arrived I tested its functionality and mapped out where it would need to be installed…


Think I will mount it round about here, between the motherboard and the fan bulkhead. I’ve already pulled one of the Molex connectors back out from the drive bay section, and it will reach the power input on the SAS expander nicely. The heatsink gets hot, so being in the airflow path is not a bad thing. Unfortunately, the short cables that come with the expander won’t be much good.

While testing/setting up, I found that this section from the RES2CV360 manual

is fairly important: if you don’t cable the drive bays in order, A->F (and presumably G), the slot numbers in the sas2ircu <#> display output get confused.

If you do, you get a nice display with the SAS expander and its connected drives listed with increasing slot and enclosure numbers. (Each HBA is an enclosure, and each expander seems to be an “enclosure” too: 0 and 1 for my two HBAs, with the expander as enclosure 2 hanging off enclosure 1.)
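If you want to check your own cabling order, the sas2ircu invocations look like this (controller index 0 is just an example; run the list first to see which indexes you actually have):

```sh
# Enumerate the LSI SAS2 controllers sas2ircu can see
sas2ircu list

# Dump enclosures, slot numbers, and attached drives for controller 0
sas2ircu 0 display
```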

So, the next thing to do was to test whether I could boot a drive hanging off the expander…

Which apparently worked fine. You need to ensure the HBA has its option ROM installed and that option ROMs are enabled for the PCIe slot it’s installed in; you can then configure the HBA boot order in the Avago BIOS utility, and after that you can select any of the devices attached to the entire SAS topology (apparently) from the Supermicro BIOS boot screen, under HD disk priority.


I’ve now mounted the expander.


You can see the passive backplanes at the front. Each row of 4 drives has a single backplane with a single MiniSAS (SFF-8087) connector, which carries 4 drive ports and matches up with the MiniSAS connectors on the expander.

One of the nice things about the Norco backplane design is that the PCB is mounted flat, which allows maximum airflow.

I ended up drilling some mounting holes in the chassis. I made a template from the back of the expander.

I then screwed through the chassis into some hex stand-offs…

And then screwed the expander onto the other side of the stand-offs. The 3 other sets of screws rested on stand-off areas on the expander.


My longer SAS cables arrived. Everything is using the older SFF-8087 connectors.


All cabled up, time to do some cable management!


That’s a bit better. (You can see the Noctua NF-F12 industrialPPC-3000 fan upgrades.)


Backplane is looking neater

Now I have a bunch of reverse break-out cables spare :wink:


Next, I wanted to add more internal SSD bays, since I now had 14 free SATA ports! (Not counting the 8 from the other HBA I removed!)

I’m planning on using the on-board SATA ports, which are connected via the DMI interface.

But the current SSD mounting solution is a bit woeful


Two SSDs… on a tray… which doesn’t get much cooling (evidence of ghetto temperature testing present…)

So, I looked for some sort of bracket, originally to try to double-stack the drives… (the factory tray is actually a combi 3.5" / dual 2.5" mounting tray)

And I found this one cheap, with next-day delivery :slight_smile:

I figured it’d be perfect to hang UNDER the tray… and it would be right in the path of the cooling fan too.

<insert hammering/sawing/drilling montage>


Drives all mounted… I’m contemplating hanging another 4-bay rack off the bottom…


SATA data cable management


SATA power cable management


And done… still plenty of room for more 2.5" drives :wink:

I was now able to install TrueNAS to an SSD… and banish the USB sticks it had been moaning about… and performance is much improved. TrueNAS SCALE is slow off USB thumb drives.


The Noctua NF-F12 industrialPPC-3000 PWM fans :slight_smile:

These basically spin at twice the speed of the regular Noctua fans. The lowest stable duty cycle is 15%, and they draw 0.3A at 100% when they are spinning at about 3000rpm.

I had to update the upper and lower thresholds in the IPMI :wink:
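For reference, something like the following should do it via ipmitool. The threshold values here are illustrative assumptions, not my exact numbers; check your fans’ actual floor and ceiling with `ipmitool sensor` first.

```sh
# Lower thresholds: non-recoverable, critical, non-critical (example values)
ipmitool sensor thresh FANA lower 300 400 500
# Upper thresholds: non-critical, critical, non-recoverable (example values)
ipmitool sensor thresh FANA upper 3700 3800 3900
```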

I believe the fan headers are good for 2A, but even if they were only good for 1A, these 3 fans ganged off the FANA header (for my fan control script) should still be okay: 3 × 0.3A = 0.9A.
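For the curious, the core of a fan control script on these boards is usually a couple of raw IPMI commands. These Supermicro X10 raw codes are widely reported by the community rather than officially documented, so treat this as a sketch and test carefully:

```sh
# Set fan mode to Full, so the BMC stops overriding manual duty cycles
ipmitool raw 0x30 0x45 0x01 0x01
# Set zone 1 (the FANA/peripheral zone) to ~30% duty cycle
ipmitool raw 0x30 0x70 0x66 0x01 0x01 30
```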

In this case, these fans are needed if you’re going to run a lot of 7200rpm drives. With 5400rpm drives they weren’t necessary, but I’ve begun upgrading from 4TB drives to 8TB as I need more capacity and as the 4TB drives fail.


So how are we doing on those future plans?

Future plans include:
More RAM. :white_check_mark: (32GB → 128GB, contemplating another 128GB)
Another HBA. :white_check_mark: (now replaced with the expander)
2 more 8-drive vdevs. :white_check_mark: (now using mirrors, but contemplating a switch back to RaidZ2)
PCIe NVMe SSD drives :white_check_mark: (Optane P4801X for SLOG, have Hyper M.2 card on order)
10GbE networking :white_check_mark: (Intel X550-T2)
And maybe some 2.5" SSDs. :white_check_mark: (A handful of Samsung drives)
And maybe one day a bigger Xeon. :white_check_mark: (E5-2699A v4, doesn’t get any bigger on this board)

Upgrading to 10GbE networking forced me to re-architect the pool, but I think with special metadata vdevs and a handful of NVMe drives I may be able to go back to RaidZ2. I’ll contemplate this when I next run out of storage!
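A rough sketch of what that future pool might look like. Pool name and device names are placeholders, and whether to use special_small_blocks at all is very workload-dependent:

```sh
# Hypothetical layout: one 8-wide RaidZ2 data vdev plus a mirrored
# NVMe special vdev that holds the pool metadata
zpool create tank \
  raidz2 sda sdb sdc sdd sde sdf sdg sdh \
  special mirror nvme0n1 nvme1n1

# Optionally push small records onto the NVMe special vdev as well
zfs set special_small_blocks=16K tank
```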


They’re finger-slicing fast! Ask me how I know.


Yes. Yes they are. I know too.


Beginning to think about replacing the motherboard in this system before it dies of old age :wink:

If I use an ATX-sized board, the 24 bays and expander should still be good, but I’d like a lot more x16 PCIe4 slots.

Something Epyc might be nice.

Something like this:

https://www.asrockrack.com/general/productdetail.asp?Model=ROMED8-2T

Which should support circa 28 M.2 NVMe drives via carrier cards (7 x16 slots × 4 drives per card)! And still have 10 SATA etc., plus built-in 10GbE, depending on how you slice it up.