Finally upgrading after 10 years, advice on motherboard and CPU choices

Hi guys,

So after a recent drive failure I need to build a backup server. Something I’ve been putting off for the longest time.

My current server is about to be 10 years old and it’s:

Supermicro MBD-X11SSM-F-O
Intel Xeon E3-1230 v5 quad core 3.40 GHz

It's served me well, but since I haven't paid attention to hardware requirements in 10 years, I was hoping to get some quick advice on building a new server.

Basically, my new server will be my main one (used for storing important data - that’s all) and the old server will be used to backup critical files so I have a duplicate onsite.

The new pool I’m planning will be around 8x or 10x 26TB drives, so around 150-200 TB (really depends on how much the motherboard/CPU/ram eats into my budget).

Ideally I'd like a motherboard with an onboard 10GbE NIC, but if that greatly increases the cost of the board compared to just buying a separate 10GbE NIC, it's no big deal.

Is the Supermicro X12SPI-TF ATX server board a good choice? Is it overkill? It looks to be available for around $600.

Any recommendations for motherboards, or even a ballpark figure, would be great.

I just don’t want to spend $600 if that much is not required.

As for the CPU, is there a sweet spot? With the cost per kWh rising, I want to keep it as efficient as possible, but without hurting performance.

Same as with the motherboard, I don't mind investing in something if TrueNAS will actually use it, but I also don't want to burn money on overkill.

For RAM, is there also a sweet spot for the speed? Or just get the fastest I can afford?

Is 1 GB of RAM per 1 TB of storage still the recommended rule of thumb?

Thanks for your time, and for any recommendations.

I think it will be difficult to make recommendations without knowing more about what your server will be doing. Will it just be serving data? What protocols will you need (e.g. NFS, SMB, etc.)? Will you be running any services/VMs/containers?

If all you want is for it to serve files, I would expect a very modest hardware upgrade to suffice. You may not even need to upgrade at all; your existing system with a 10GbE NIC added may be sufficient.


Hi,

Thanks. Yeah, it's just SMB shares.

It's a huge repository for important files.

As for my current hardware, it already has a 10GbE NIC. I want to build a second server so I have an onsite backup of my backup (I'd use my old server as the backup, and the new server for my main storage).

Since my drive failed it made me realize I need to ensure I don’t lose my work even if one server goes down.
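For what it's worth, that main-to-backup setup maps directly onto ZFS replication, which TrueNAS exposes as a Replication Task in the GUI. Under the hood it amounts to roughly the following; the pool, dataset, and host names here are placeholders I made up for illustration:

```shell
# Snapshot the dataset on the main server...
zfs snapshot tank/work@backup-2025-06-01

# First run: full send of that snapshot to the backup server over SSH.
zfs send tank/work@backup-2025-06-01 | ssh backup-host zfs recv -u backup/work

# Later runs: incremental send between two snapshots, which only
# transfers the blocks that changed since the previous snapshot.
zfs snapshot tank/work@backup-2025-06-02
zfs send -i tank/work@backup-2025-06-01 tank/work@backup-2025-06-02 | \
    ssh backup-host zfs recv -u backup/work
```

The GUI task handles the snapshot scheduling and incremental bookkeeping for you; the commands above are just what it boils down to.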

It certainly is capable. Unless you require a large amount of RAM and/or plan to run some heavy applications/VMs on the NAS, it does look overkill.

Recent server motherboards start around $500. :frowning_face:

Then Xeon Scalable is not the most suitable choice.
What computing power do you require?

Then you want a few cores at high clocks. How many clients? 64 GB of RAM is probably comfortable for a 200 TB array at 10 Gb/s, and you could fit that in a Xeon E or EPYC 4000 platform for lower idle power.
The constraint is then having two CPU-attached PCIe slots for the NIC and the HBA if you run 10 drives; with 8 drives, chipset SATA ports might still suffice, which opens up some options.


Yeah, this server literally just serves data to about 5 clients locally on a 10GbE network.

I don't think I require that much computing power, tbh; all it does is serve data, plus scrubs and transfers to and from it.

I'll probably use an HBA I have lying around, as I run dual SSDs for the boot drive plus the drives in the pool.

I was looking at Supermicro because 10 years ago my first ever NAS was on a consumer ASRock board, and it died within a year.

I then went with my current Supermicro X11 board, and it's been rock solid for a decade and still chugging along.

Any recommendations for motherboard options? I’m so out of touch on hardware I have no clue right now.

Looking only at not-too-old server variants of consumer platforms (low idle power, but UDIMM rather than RDIMM):
Xeon E-2300:
Supermicro X12STH-F, AsRockRack E3C256D4U-2L2T (AsRockRack != AsRock)
Xeon E-2400 / 6300P:
Supermicro X13SCL-F, Asus P13R-E or P13R-E/10G-2T, AsRockRack EC266D4U-2L2T
EPYC 4000:
Gigabyte MC13-LE0 or -LE1, AsRockRack EPYC4000D4U or B650D4U variants, Supermicro H13SAE-MF

(bold points to on-board 10G)

Is it me, or is the bolded type kind of hard to distinguish?

@Istvan-5 have you considered using AWS as your backup of your backup? Or do you mean you want an on-site failover-type backup?

My 2 cents, exactly what it’s worth.

For your new array drives… the speed isn't nearly as important as the quantity. Lean towards lower spindle speeds: 5900rpm drives don't run as hot or transfer data as fast individually, but as a team they can still flood that 10GbE connection, and it costs less electricity to spin slower.
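To put rough numbers on that claim (the per-drive rate is an assumption; real sequential speeds vary by model and by how full the platters are):

```shell
# Back-of-envelope check: can 8 slower spindles flood a 10GbE link?
drives=8
per_drive_mbs=150        # assumed sequential rate for a ~5900rpm drive
pool_mbs=$((drives * per_drive_mbs))
wire_mbs=1250            # 10GbE is ~1250 MB/s before protocol overhead

echo "Pool aggregate: ${pool_mbs} MB/s"
echo "10GbE ceiling:  ${wire_mbs} MB/s"
```

Even at a conservative 150 MB/s each, eight drives together land right at the wire speed, so spindle speed isn't the bottleneck on a 10GbE network.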

For the CPU, if all you plan to do is serve files over SMB, which is largely single-threaded per client, you don't need an 18-headed monster. I bet that quad-core chip could just as easily be a fairly cheap modern 8-core with strong single-threaded performance as the focus. Go for the low-wattage varieties; motherboard manufacturers publish tested-CPU lists with cores, GHz, and TDP. I'd spend some time looking over benchmarks to figure out what is really good at single-threaded work.

Don't even bother insisting on 10GbE onboard. Get a proper Intel or Mellanox 2x SFP+ card. Why two ports? Well, you said you want it backing up to a twin host: throw a 10GbE card in that too and direct-connect them, no network needed for that link, just a DAC or a fiber cable between the SFP+ cages. (I'm assuming a proper fiber network/switch, not some "just cooking some hotdogs, ma" SFP+ to 10GBASE-T adapters. They get hot. Everything 10GbE over copper gets hot.)
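Setting up that direct link is just a private point-to-point subnet on each end. On TrueNAS you'd do this through the network settings UI, but a minimal sketch of the idea looks like this (the interface name and addresses are examples, not real ones):

```shell
# On the main server (find your actual interface name with `ip link`)
ip addr add 10.10.10.1/30 dev enp4s0f0
ip link set enp4s0f0 up

# On the backup server, the other address in the same /30
ip addr add 10.10.10.2/30 dev enp4s0f0
ip link set enp4s0f0 up

# Then point the replication target at 10.10.10.2 and backup
# traffic never touches the main LAN.
```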

Last word: 1GB/TB was some kind of old wives' tale that played it safe; I don't think it was ever tested for truth. Bottom line, give it all the RAM you can fit on the board and don't worry about it after that. My box has 128GB of DDR4, and it has all the headroom I will ever need, plus faster scrubs. Yeah, it's a modest homelab box that nobody depends on in a business, but that's OK.

Solid base. I can't believe that's already a 10-year-old system?! I'm getting old.

Is this primarily large stuff like video files served over SMB? Or is this something else?

Yes, but for this generation (X12/H12, insert other vendors' generations here) AMD really shines in my opinion. I recently began personally (re)using an H12SSL-I with an EPYC 7F52, so I put my money where my mouth is this year. Newegg was running a sale recently

But it's out of stock now, sadly. I'm sure it'll pop up again on eBay soon.
https://www.newegg.com/p/1B4-005W-008Z8?Item=1B4-005W-008Z8

Just a sort-of high-end overview; the single-threaded performance difference in this example is killer. AMD's chip has way fewer cores and still beats the Intel one.

https://www.cpubenchmark.net/compare/3662vs3753/Intel-Xeon-Platinum-8280-vs-AMD-EPYC-7F52

Other similar boards from Asrock Rack are also available e.g.
https://www.asrockrack.com/general/productdetail.asp?Model=ROMED8-2T#Specifications

You may even find a really good deal on a slightly used one, or an off-lease/off-contract server at this point; that whole generation is ending its 5-year lease cycle and showing up on eBay.


I mean. Fair, but OP's in this power envelope. :stuck_out_tongue:

Also, the idle power consumption of modern server CPUs is very good; this generation really shines in that arena relative to the parts both vendors replaced. Both vendors also have lower-power parts for these sockets. lol :slight_smile:

I just always start at the high end of each platform and work down when looking for parts. Older-generation high end is often better than newer-generation midrange at the same price point.

I’ve not gotten around to measuring, but I’m fairly confident that my EPYC system even at idle consumes less power than my gaming computer in use (not when gaming).


I was just laughing at the EPYC winning while being a 240W part. My old-ass (but still useful to someone) Xeon was like a 75W chip, complete with an on-die GPU capable of hardware transcoding. I have no doubt that, idling, that EPYC is very lazy. Probably lower wattage at idle than my main PC with a thirsty 9070 and 7900X3D or whatever I put in here. :sunglasses:

Your posts are usually interesting to me anyway. :rofl:

I know you probably know this, but in build threads I always like to say: "That's not a Xeon." It's a Core i(X) desktop processor re-binned and re-tooled with ECC UDIMM support. That's not a bad thing, and it really is badged a Xeon, but it was not designed from the ground up the way the bigger LGA sockets are. I prefer to buy parts that were engineered from scratch as server parts over re-tooled desktop parts :stuck_out_tongue:

/rant.

Hey me too! lol.

Also, I’ve had good luck with UCS servers in a previous life. Even if you stay within the same generation you have…but move to the bigger CPUs… @Istvan-5

You can probably save a lot of money.

I was deploying this model new in production 3 years ago when I was with another org. Bear in mind you may need to bring your own HBA and sacrifice a slot, but the cables should be reusable. This Cisco server was also available with an HBA option, so you can find that part number pretty easily, though this vendor's listing doesn't include it.

I’ve personally used this seller but no affiliation.


It’s you, your browser, choice of font and/or “dark theme”.

No, it's not. All desktop Intel chips these days support ECC UDIMM memory. The problem is that Intel's desktop chipsets don't, by design, because they want you to buy a Xeon for that. But the W680 chipset, found on embedded/industrial-class motherboards, does support ECC UDIMM. The catch is that these motherboards aren't easy to find and are very expensive.

Must be the dark theme.