Sorry for another hardware/build question post, but I have a few specific questions I can’t find the answer to. I am upgrading from an old Synology NAS to a new build in the HL15. I plan to use this NAS for the following use cases:
Media vault for Plex
a. 15x ~18TB SATA HDDs
VM storage pool
Share for editing RAW photos
No services will be hosted on the NAS; everything (VMs, Plex, etc.) will be hosted on another server. I plan to add some U.2 or, at the very least, SATA SSDs to the build for the VMs/photo share for faster storage. Starting with 10Gb networking, but will upgrade to 25Gb later.
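For a rough sense of why the SSD tier matters, here's some back-of-the-envelope math. All drive speeds below are typical assumed figures, not measurements from this build:

```python
# Back-of-the-envelope throughput check.
# Drive speeds are rough, assumed typical figures.
GBIT = 1e9 / 8  # bytes/second in one gigabit

link_10g = 10 * GBIT   # 10GbE wire speed
link_25g = 25 * GBIT   # 25GbE wire speed

sata_ssd = 550e6   # SATA 6 Gb/s practical ceiling (assumed)
nvme_g4  = 7000e6  # fast PCIe 4.0 x4 NVMe sequential (assumed)

print(f"10GbE wire speed: {link_10g/1e6:.0f} MB/s")   # 1250 MB/s
print(f"25GbE wire speed: {link_25g/1e6:.0f} MB/s")   # 3125 MB/s
# A single SATA SSD can't even fill 10GbE; NVMe is what makes 25GbE worthwhile.
print("single SATA SSD fills 10GbE?", sata_ssd >= link_10g)   # False
print("single NVMe fills 25GbE?", nvme_g4 >= link_25g)        # True
```

The HDD pool can stream sequentially fast enough for Plex either way; it's the VM and photo-editing workloads where the NVMe tier earns its keep.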
My Main Concern
Samba is single-threaded per client, and reaching 10-25Gb speeds requires high single-core performance. As this is a home use case, I will be the only client, so I don’t need huge thread counts.
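One partial mitigation worth knowing about: SMB3 multichannel lets a single client spread traffic over multiple TCP connections (and NIC queues), so one transfer isn't pinned to one smbd thread. A minimal smb.conf sketch follows; the parameter name is real, but how well it works (and whether it's on by default) depends on your Samba version, so check the release notes before relying on it:

```ini
[global]
    # SMB3 multichannel: one client, several TCP connections.
    # Stability and defaults vary by Samba version; test before trusting it.
    server multi channel support = yes
```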
Based on this, my original plan was to use Epyc 4004 (AM5, basically Ryzen 7000-series CPUs) for the lower power draw (idle and under load), but this platform is limited to dual-channel UDIMMs and only 28 PCIe lanes.
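To make the lane crunch concrete, a quick budget sketch. The platform lane counts are published specs; the device list is my assumed version of this build (HBA + 25GbE NIC + a small NVMe pool), not anything confirmed above:

```python
# PCIe lane budget sketch. Lane counts are platform specs;
# the device list is an assumed build, for illustration only.
AM5_LANES   = 28  # Epyc 4004 / Ryzen 7000: 28 usable CPU lanes
SIENA_LANES = 96  # Epyc 8004 (Siena): 96 lanes

build = {
    "SAS HBA":            8,   # x8 card for the 15 HDDs
    "25GbE NIC":          8,   # x8
    "NVMe pool (4 x x4)": 16,  # four U.2/M.2 drives at x4 each
}
needed = sum(build.values())
print("lanes needed:", needed)                       # 32
print("AM5 shortfall:", max(0, needed - AM5_LANES))  # 4
print("Siena headroom:", SIENA_LANES - needed)       # 64
```

Even a modest four-drive NVMe pool overruns AM5's CPU lanes once the HBA and NIC are in place, which is the gap that chipset lanes (or a bigger socket) have to cover.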
My next thought was to go with Epyc 8004 (Siena), as they are cut-down versions of the Epyc 9004 (Genoa) platform: less cost, fewer PCIe lanes/memory channels, and most importantly less power (compared to Genoa, not AM5 obviously). But the boost clocks on these chips only go up to 3.0 GHz, when AM5 can hit 5.7 GHz.
So - for my setup, would AM5 with its higher single-core clock speeds be needed, or would Siena with more RAM/PCIe be enough for my requirements?
Threadripper isn’t a bad idea, but I feel there will be fewer choices from manufacturers and in the used market for motherboards with IPMI and other server features.
What exactly are these requirements?
AM5 can go to 128 GB RAM and above, which should be enough.
However, you’ve basically committed an x8 slot to the HBA and another to the NIC, which doesn’t leave much for an NVMe pool on CPU lanes. This would be a pointer to EPYC (or Xeon Scalable, but there seems to be an unstated requirement for an AMD CPU).
The requirements are the use case I outlined above: basically a single user and fast network speeds/low latency for VMs and photo (and some video) editing.
I think 192GB of RAM is enough, although UDIMMs are expensive compared to RDIMMs; the main issue is PCIe lanes, like you said.
AMD is not a requirement, but I know very little about Intel’s Xeon lineup and they have so many more product families and generations.
I will check out that MSI board, I haven’t seen it before.
That doesn’t tell us how many NVMe drives and/or what capacity you want for the “fast pool”. And that’s pretty much the crux of the matter.
If we had detailed requirements, we could point you towards suitable Xeons. (Despite Intel’s efforts, it’s not really more complicated than finding one’s way around the various EPYC…)
Each NVMe drive wants 4 PCIe lanes, so the drive count matters a lot.
Consumer platforms could, at best, fit 2-3 NVMe drives on CPU lanes alongside the HBA—assuming the NIC goes on chipset lanes. Any more than that and you’d need a PLX switch.
Or go to EPYC (Siena) or Xeon Scalable. If you don’t mind DDR4 platforms (= cheaper second-hand RDIMMs), you can find some low-core-count, high-clock parts (ca. 4 GHz turbo) in 2nd-gen Xeon Scalable (x2xx numbers, for instance most of the Gold 5200 range and some 6200/8200) or Xeon W-2200, say a W-2225 (C422, LGA2066 platform, “only” 48 PCIe lanes).
This would then be a matter of opportunity on the second-hand market for the CPU (and quite possibly the motherboard as well with W-2200).
For some context: I took examples from Cascade Lake (2nd-gen Scalable) because 3rd gen may not have hit the second-hand market yet, and 1st gen, which can now be found for pocket change, has distinctly lower peak clocks. Don’t get set on a particular part number too early; see what you can find with LGA3647 or LGA2066 (or earlier EPYC) on your local second-hand market.
I added my personal current price/performance winner to that comparison…the Xeon Silver 4510. Single-thread speeds better than a $2K Xeon Gold 6250 for less than $600 new, with 80 (yes, 80) PCIe 5.0 lanes and 12 cores. Yes, it’s not as fast as an Epyc 4464P, but I’ll take the extra 52 PCIe lanes over 20% faster performance any day.
Pair it with a sub-$600 X13SEI-F motherboard (also new), and you have a beast of a NAS:
2x M.2 slots
4x NVMe connections
10 SATA3 ports
8 DIMM slots with support for up to 2TB of ECC RAM
2x PCIe 5.0 x16 (dual-slot spacing)
3x PCIe 5.0 x8
For less than $700, you can get the -TF variant and add RJ-45 10Gbit.
The 8275CL is great if you want to do a lot of things at the same time, but none of them very quickly. Again, the whole 2nd-gen Scalable line suffers from very slow single-core performance.
As for the “Xeon Gold 6450C”: it’s available only from China, isn’t listed on ark.intel.com, and its spec number (SRMGH) is assigned to a different processor (Gold 5418Y), with documented reports of missing PCIe lanes and other strange behavior. Thanks, but no thanks.
I run most of my servers on Ivy Bridge-EP, so I know about buying cheap used hardware. After that generation, there is nothing until 5th-gen Scalable that improves on both single-core performance and PCIe lane count. There’s a lot in between that is significantly worse at one or both. But if you want a bunch of cores with mediocre single-core performance, Scalable gens 1-4 give you plenty of cheap choices on the used market.
For your use case, AM5 (Epyc 4004) with its higher single-core clock speeds (up to 5.7GHz) is likely the better choice. Since Samba is single-threaded and you need high single-core performance to achieve 10-25Gb speeds, the faster clocks on AM5 will benefit you more than the additional PCIe lanes or memory channels of Epyc 8004 (Siena). The lower power draw of AM5 is also a plus for home use.
However, if you anticipate needing more PCIe lanes for U.2 SSDs or future expansion, Epyc 8004 could still work, but its lower boost clocks (up to 3.0GHz) might bottleneck Samba performance. For your needs—media storage, VM pools, and photo editing—AM5 strikes a better balance between performance and efficiency. Stick with AM5 unless you foresee requiring the additional scalability of Siena.
Is the Asrock Rack ROMED8-2T ($639 on Newegg) out of your price range? It’s got 7x PCIe 4.0 x16 slots… 6 usable unless you disable other onboard features like NVMe. Each PCIe slot can be bifurcated to 4x4x4x4, so you can hang 4 NVMe drives off each slot if you want to. I’m doing that on my Proxmox VE server.
I think you hit the nail on the head. The AM5 platform doesn’t have enough PCIe expandability to add flash storage pools, and Siena has slower clock speeds that hamper flash storage access.