Server Hardware Upgrade Check Request

An update: I’ve copied/pasted my original post below with edits in bold. I’ve decided to continue to run the NAS on bare metal and build a second server for all the virtualization down the road. So I basically just want a very affordable incremental upgrade for my NAS hardware.

Reasons for wanting to upgrade:

Increased RAM/Processing cores/power
Additional RAM (16-->64GB)
Marginal CPU Upgrade
Updated IPMI

No longer applicable:
2.5GbE networking (ISP 2Gb coax connection)
Possible future virtualization of TrueNAS through Proxmox and expansion of server capabilities
Future addition of graphics card (cloud gaming PC/hardware encoding)

Current build:

Fractal Design Define R5 ATX Case
ASRock E3C226D2I Mini ITX LGA1150
Intel Xeon E3-1275
2x Crucial CT102464BD160B 8 GB DDR3-1600 CL11

Current upgrade plan build:

**Silverstone RM22-312**
**X11SSi-LN4F**
**Intel Xeon E3-1245 v6**
**4x16GB PC4-25600 ECC UDIMM** ([these exact modules](https://www.ebay.com/itm/326160777856?_skw=MEM-DR416LD-EU32&itmmeta=01JM2DSYBKNBN4MGVV55566E8M&hash=item4bf0b28280:g:5OQAAOSwO9FmafGM&itmprp=enc%3AAQAKAAAA8FkggFvd1GGDu0w3yXCmi1cPFtvaKkYfkRYk14lm%2FvfbHhoECAs7c8lO03zdgDzN3I%2BstWiklYwo%2B%2BiwF0nZEJlLPwyswS02LdxRjH7abSr63GOBE%2BZmdlwFRYkHVXeb%2F9CXAJggHs5ihiuYUrSewrGGLvKuSouZqrnyZJ4OHoQg3SHYk%2BBLmd4sfJaKZ2m1UBWvHnXGmZu2hENpEfVdI2i5SOc2od0eBd0fWNLDLGef%2FfG16IsSD79Y7VbhMkUPoZCD3CweeZMBWZKYNvzFlDU%2BTRFr1jM55jFWipRAcONo44%2BAkxExclPK6XF0Zxrr%2FA%3D%3D%7Ctkp%3ABFBMjObnzaBl))

That’s about a $400 parts list, all through eBay. How does that parts list look?

This is a very useful post. Thank you.

In terms of noise, I’ve had good experience building from parts, with cases from Inter-Tech in particular. I run my NAS in an IPC 4U-40248 with a SuperMicro server board, an LSI HBA, 6x Exos HDDs, a bunch of 2.5" SSDs and 2x NVMe drives in PCIe. The fans are all Noctua, with 2x 12cm fans blowing straight across the HDDs, all RPM-controlled from a script using IPMI. HDD temperatures thus never exceed 40°C and the fans are basically inaudible. Can recommend.
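The core of such a script can be sketched like this. This is only a sketch: the raw bytes are the widely reported Supermicro X10/X11 fan-duty command (other boards and zones differ), and the thresholds and device names are made-up placeholders, so check everything against your own board before use:

```python
"""Sketch of an IPMI-based fan curve for HDD cooling.

Assumptions: the raw bytes are the commonly reported Supermicro
X10/X11 fan-duty command; thresholds and devices are placeholders.
"""
import subprocess


def duty_for_temp(temp_c: int) -> int:
    """Map the hottest HDD temperature (deg C) to a fan duty cycle (%)."""
    if temp_c < 30:
        return 25   # near-silent floor
    if temp_c < 35:
        return 40
    if temp_c < 40:
        return 60
    return 100      # drives should never sit above 40 deg C for long


def set_fan_duty(duty_pct: int, zone: int = 0) -> None:
    """Set a fan zone's duty cycle via raw IPMI (Supermicro X10/X11 bytes)."""
    subprocess.run(
        ["ipmitool", "raw", "0x30", "0x70", "0x66", "0x01",
         str(zone), str(duty_pct)],
        check=True,
    )


def hottest_drive(devices) -> int:
    """Return the highest Temperature_Celsius raw value among the drives."""
    temps = [0]
    for dev in devices:
        out = subprocess.run(
            ["smartctl", "-A", dev], capture_output=True, text=True
        ).stdout
        for line in out.splitlines():
            if "Temperature_Celsius" in line:
                temps.append(int(line.split()[9]))  # RAW_VALUE column
    return max(temps)


# On the actual server (from cron or a systemd timer), the loop body would be:
#   set_fan_duty(duty_for_temp(hottest_drive(["/dev/sda", "/dev/sdb"])))
```

The important part is the mapping, not the plumbing: a few temperature bands keep the fans at the quiet floor almost all the time and only ramp up when the drives actually warm up.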

I guess the trade-off is rack density - the same hardware could be crammed into 1u at the cost of much more aggressive cooling and therefore noise.


What board did you use? For some reason that’s the thing that’s got me stuck. I’m looking at an ATX X11 board, but there are so many options that I’m really scratching my head trying to find the “right” choice.

FWIW, all the issues you mention are valid, and should also be fixed by the switch to Incus as the hypervisor orchestrator in Fangtooth. It also brings LXC support, similar to PVE’s CT support.

In fact, the Proxmox Community Scripts have been ported to Incus and work on TrueNAS.

I’m not the person you asked, but - in the hope that this info might be of use to you - I am using a (pre-built) HL15 from 45Drives ( 45HomeLab Store ), which offers the X11SPH-nCTF / X11SPH-nCTPF. My configuration has the X11SPH-nCTPF, a Xeon Silver 4216 and the Noctua fan pack, with currently 4 WD Red Pro 20TB HDDs. The CPU has a passive cooler and the overall noise is fine for my daily use.
The system sits right next to my desk in a rack and the most obvious sound is the sound of - what I take to be - the HDD heads moving. Of course the system is effectively idle most of the time.

I’m not saying that you should order from them, just showing a board that - perhaps - might be a fit for you.

I use Norco RPC-4020 and RPC-4224 chassis. I took out the fan walls and put 140mm Noctua fans in instead. Three just fit next to each other, leaving a bit of room for cabling. They’re zip-tied together and held in place with hard foam packed around the cabling.

This gives much more airflow and lower noise than the default 120mm fan wall. I even lowered the RPM with a PWM controller when I had the servers in my home theater room. Since I moved they’re in a separate room, so I took the controllers out.


I built mine a number of years ago so it’s a X11SSM-F, which was current gen then. If I did it now I would use something more recent.

SuperMicro is my first choice as I find them solid, trustworthy and consistent. Start by considering which CPU family (Epyc, Ryzen, Xeon, etc.). Then there will be a couple of models with different variations of onboard SAS controllers, networking, etc. Current gen is X13, with X14 around the corner.

If you need a boatload of PCIe slots and huge expandability then go for higher end Epyc or Intel Scalable. Otherwise Ryzen offers amazing CPU performance for less. H13SAE-MF supports the most recent Ryzen CPUs, which would probably have been my choice today.

So, this is what has me so confused. That board is EIGHT HUNDRED DOLLARS on eBay and $650 from Amazon. I guess that’s because of the 10GbE and SAS (if I’m reading the SuperMicro naming scheme right)? This is for Rocket Lake/11th-gen Xeons that came out four years ago?

EDIT: OK, upon further reading, I think I’m seeing the source of my confusion. I’m lumping all X11 boards in with each other, where some I’m looking at are like 7th-gen Intel (E3-1200 v6/v5) and others are 11th-gen Intel (Scalable). Is this accurate? I was trying to figure out why some are like $800 boards and others are $150. Have I cracked the code?

I just need a board with a modest upgrade allowing for more than the 16GB of RAM in my current build. I’m not trying to drop $1,400 on an upgrade right now for a server running a NAS on bare metal.

The secret code is here:

Please tell everybody…

X11 is Intel’s eleventh generation… in Supermicro’s count.
Within that, ‘P’ is for the Purley platform (Xeon Scalable) while ‘S’ and ‘C’ (Skylake, Coffee Lake) are for LGA1151 and LGA1151-2 Core and Xeon E (E3 v5/v6, or E-2100/2200).
So, keeping with C2x6 chipsets, you’d be looking at an X11SSH or X11SCH board. Or the ASRock Rack equivalent (look up E246D4U2-2T on eBay).

Or X12STH (Xeon E-2300) or X12SCA (Xeon W-1200/1300), coming full circle back to your first post.
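If it helps, here’s that decode condensed into a tiny lookup. This is my unofficial summary of the prefixes above, not Supermicro’s official scheme, so verify the specific board before buying:

```python
# Unofficial decoder for the Supermicro family prefixes discussed above.
PLATFORM = {
    "X11SP": "Purley: Xeon Scalable, LGA3647",
    "X11SS": "Skylake: Xeon E3-1200 v5/v6, LGA1151, C2x6 chipset",
    "X11SC": "Coffee Lake: Xeon E-2100/2200, LGA1151-2",
    "X12ST": "Xeon E-2300, LGA1200",
    "X12SC": "Xeon W-1200/1300 (workstation), LGA1200",
}


def platform_for(board: str) -> str:
    """Look up a board model by its five-character family prefix."""
    return PLATFORM.get(board[:5].upper(), "unknown")
```

So `X11SPH-nCTPF` resolves to the Purley/Scalable row while `X11SSH-F` resolves to Skylake Xeon E3 - two very different platforms hiding behind the same “X11”, which is exactly why the prices differ so much.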

I have that link already in the post you replied to 🙂

But isn’t:

X11 = 11th gen. Xeon® (E3-1200 v6/v5)

and…

X11 = 11th gen. Xeon® Scalable Processors

…like four generations apart on Intel’s CPU architecture?

Also, X12SCA is a workstation board, so shouldn’t I be sticking to server boards?

E3 v5 use Skylake cores (6th-gen Core). E3 v6 use Kaby Lake (7th gen), a minor Skylake refresh.
First-generation Xeon Scalable (x1xx) are Skylake as well: same architecture as E3 v5!

If you’re a purist, possibly. But an X12SCA-F, with IPMI, should be capable enough in server duties.
The key here is that Intel bifurcated server Xeon E from desktop Core in this generation (LGA1200, 10/11th-gen Core): you need Xeon E-2300 with the X12STH but Xeon W-1200/1300 with the X12SCA. See what pairing you can find for an acceptable price…

My bet would be that E246D4U2-2T with Core i3-9100 (max. 64 GB RAM) or Xeon E-2100/2200 (128 GB RAM) is going to win over LGA1200. But I may be wrong.

OK, cool. I’m not a purist at all, just trying to understand the pretty serious number of variables when exploring older(ish) hardware. Thanks for the feedback!

One question: how many NVMe drives are you going to want to use? I found that once I considered this point and actually worked out how many PCIe lanes I needed, most things fell into place, simply because most options couldn’t do everything I wanted. Hence the used enterprise gear in my sig (which is total overkill but is also dead quiet because it’s never really stressed).

I don’t have any plans to use NVMe drives at all. What are they even useful for, cache drives?

Nice hardware, BTW. Again, that motherboard is close to $1K, which seems WAY outside of what I’d like to spend.

NVMe drives can be used for everything or nothing. But if you’re running VMs and apps it’s either NVMe or SSD, spinning rust being slow. NVMe drives are actually cheaper than SATA SSDs in the UK. And if you use a special vdev (requires planning), NVMe drives can bolster small-file seek times. I.e., pool layout planning needs completing before any purchase steps are taken.

Personally I’m using 28 PCIe lanes just for NVMe drives. If I were using SATA SSDs, I’d need 4 PCIe lanes (Coral TPU) and 6 SATA ports, plus the 13 SATA ports for the main pool. Which means I’d need something in the region of multiple HBAs plus an expander (no PCIe slot needed for that, though for SSD speed they would need a standalone HBA) and onboard SATA ports, plus a frankly massive case. And I’d still want a GPU for transcoding, pushing me well past 30 PCIe lanes. But that’s me; your mileage may vary.
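That lane arithmetic is easy to sanity-check in a few lines. The NVMe figure matches my build; the x8 GPU width is an assumption, so swap in your own numbers:

```python
# Back-of-envelope PCIe lane budget. NVMe figure matches my build;
# the GPU width (x8) is an assumption - adjust for your parts.
lanes = {
    "nvme_drives": 7 * 4,   # seven NVMe devices at x4 each = 28 lanes
    "coral_tpu": 4,         # accelerator card
    "gpu_transcode": 8,     # a transcoding GPU is usually fine at x8
}
total = sum(lanes.values())
print(total)  # 40 - already past the ~20-24 CPU lanes of a consumer platform
```

Running this kind of tally before shopping is exactly what pushed me toward used enterprise platforms: the total exceeds what any consumer socket provides before you’ve even added networking.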

I got the motherboard, ram and CPU plus some cables for low 4 figures, in the UK. Used is worth looking at. I just had to make a plan and wait a short while. But without the plan I’d not have known what to look for.

I’d recommend reading the L2ARC and special vdev threads in Resources.

Sounds good. I already have my pools built out, and just want the extra CPU/RAM capability.

My VMs/CTs are going to be on a separate server running Proxmox, so no need to dump money into providing capabilities I won’t need on the standalone NAS.


At the very least, a boot drive, so as not to spend a valuable SATA port on that.
And one or two drives for an app pool, if you use apps or VMs.
Beyond that lies the “most home users don’t need this” land, where L2ARC and SLOG belong.

I have an HBA, so I already have a bunch of open SATA ports (I boot from thumb drives anyway). And I don’t need it for apps either, since I’m only running really low resource plugins like radarr.