First of all, I’d like to say hi to all the users of this forum!
As the topic suggests, this is my first build, and after reading tons of information from various sources, I feel like I know even less than I did two weeks ago. Nevertheless, I’ve come up with a plan and need some help and clarification to ensure this project is a success rather than an expensive mistake.
Why do I want to build this?
Data backup for photos and videos
Nextcloud
Streaming via Jellyfin
Ad blocker
Photo management app
VMs
What do I think I want to build?
Something small with reasonable power consumption, ideally in an m-ITX case like the Node 304.
ECC support—family photos, documents, etc. need some protection (or should I not worry too much about it?).
AMD CPU—seems to be more budget-friendly when it comes to ECC RAM (if I decide to use ECC).
Did I mention “budget-friendly”? I know it will cost money, but I don’t want to pay for features I don’t need or won’t use.
The list:
Option 1:
Motherboard - ASRock B550 Phantom Gaming-ITX
CPU - AMD Ryzen 7 PRO 5750GE or Ryzen 5 PRO 5650GE
RAM - 2x 32GB DIMM 3200 MHz ECC
Storage
-128GB M.2 (system)
-3x 6TB HDD (WD Red Plus 6TB NAS Hard Drive, 5400 RPM; apparently not much difference from 7200 RPM, but less noise and power consumption)
-SSD for apps (not sure about the size yet. 256/500 GB?)
Although the price difference is around £400, it’s almost impossible to get the X570D4I-2T, the 32GB ECC SO-DIMM RAM, or even the Noctua NM-i115x mounting kit. I’ve seen the board listed by vendors in Poland or Germany for an eye-watering £550, plus customs duties. The RAM is available on eBay US, and the cooler mounting bracket can be found on eBay China.
Suggestions/Clarifications:
Do you think ECC is essential for this kind of setup, or can I skip it, save some money, and move to an Intel board?
Any thoughts on whether I should stick with the B550 option, or is the X570 option worth the extra cost, especially given the difficulty of buying one?
I’d love feedback on the expansion card—what options are available for future storage expansion that would make sense for this build?
Do you plan to deploy raidz1 with 3 drives? If so, it doesn’t look very optimal.
650 W looks like overkill. Each HDD usually needs about 10 W (20 W when spinning up). An enterprise SSD draws 10-25 W (consumer SSDs usually need less). I believe you can calculate the remaining parts yourself.
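To put rough numbers on that, here is a minimal worst-case budget sketch in Python. The HDD and SSD wattages are the figures quoted above; the CPU and board numbers are my own assumptions, so check your actual datasheets:

```python
# Rough worst-case power budget for the proposed build.
# HDD/SSD figures are from the post above; CPU and board
# numbers are assumptions for illustration only.
components = {
    "3x HDD spinning up (20 W each)": 3 * 20,
    "2x SSD (10 W each, generous)":   2 * 10,
    "CPU package at peak (assumed)":  60,
    "Board, RAM, fans (assumed)":     30,
}
total = sum(components.values())
with_margin = total * 1.3  # ~30% headroom for capacitor aging and transients
print(f"worst case: {total} W; with margin: {with_margin:.0f} W")
# worst case: 170 W; with margin: 221 W -> a quality 300-450 W unit is plenty
```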
If you plan to use it for Jellyfin transcoding, it seems like overkill, unless you would have many simultaneous clients that need transcoding. Can’t say for AMD, but an Intel 12th-14th gen iGPU can transcode 3-4 UHD streams at once.
IMO, you can save a lot by using non-ECC. Especially when you’re stuck with the m-ITX limitation. However, I highly recommend you take a look at this ECC vs non-ECC thread.
Yet you came up with two rather well-thought-out lists…
These are 7200 rpm drives sold as “5400 rpm-class” by WD’s marketing department. Do not refrain from going for (possibly larger) 7200 rpm drives if that is an option.
Why only 256 or 500 GB? 1 TB or even 2 TB M.2 (NVMe) drives are reasonably cheap. These would be hosted on an x8x4x4 riser in the x16 PCIe slot.
That would be a SAS HBA, based on an LSI 2308 or 3008, to do it properly. Which highlights that a motherboard with at least 6 SATA ports would be best for a Node 304 build.
Genuine server motherboard, but SO-DIMM, price… and being nearly impossible to get are a lot of drawbacks. Do you need 10G networking?
Do you need hardware transcoding? This wasn’t in the list for option 1.
Strictly speaking, it cannot be said to be “essential”, but ECC is very nice to have, and if you’re going to shop for a new build it would be a pity NOT to put ECC on the list. With ECC-capable Ryzen CPUs, like your carefully selected PRO APU picks, ECC doesn’t even have to cost extra. (Congratulations on picking the Ryzen PRO options, but I’m not sure the “GE” is doing you much good over a plain PRO xx50G; idle power matters much more than TDP.)
You intend to go with ZFS for storage, as data reliability is important for your setup.
If there is bit rot, you can trust ZFS to recover it for you, provided pool redundancy and proper scrub jobs are in place (which is one of the main reasons we go this route).
If there is corruption in memory, that’s where ECC is going to help.
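To make the bit-rot point concrete, here is a toy Python sketch of the idea behind ZFS’s per-block checksums. It is not ZFS’s actual on-disk format, and the function names are invented for illustration, but it shows why checksums plus redundancy let a scrub catch and repair a silent bit flip. Note that none of this helps if data is corrupted in RAM before the checksum is computed, which is exactly the gap ECC covers.

```python
# Toy model of checksummed, redundant storage (NOT real ZFS internals).
import hashlib

def write_block(data: bytes) -> dict:
    """Store a block together with a checksum of its contents."""
    return {"data": bytearray(data), "checksum": hashlib.sha256(data).digest()}

def read_block(block: dict, mirror: dict) -> bytes:
    """Verify on read; on mismatch, 'self-heal' from the redundant copy."""
    if hashlib.sha256(bytes(block["data"])).digest() != block["checksum"]:
        print("checksum mismatch: repairing from redundant copy")
        block["data"] = bytearray(mirror["data"])
    return bytes(block["data"])

primary = write_block(b"family photo bytes")
mirror = write_block(b"family photo bytes")
primary["data"][0] ^= 0x01  # simulate a silent bit flip on disk
assert read_block(primary, mirror) == b"family photo bytes"
```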
As you are carefully considering the components to assemble, it sounds like you are serious about your data and its protection. That should give you the answer to your own question of whether you need ECC or not.
I’d classify ECC as non-essential. It’s good to have, but there are more important items.
ITX seems too limiting for you, given that this will be a multipurpose machine and not pure data storage. By allowing larger form factors you gain expandability, and you may use more mainstream components (standard RAM instead of SO-DIMMs is one example). And you may get boards with 6, 8, or 10 SATA ports.
I’d postpone the processor selection and look for a good server mainboard. This is the foundation of everything. First the mainboard, then a suitable CPU and RAM. In this order.
After using IPMI hardware for years I consider this an essential item. This is a kind of hardware VNC that allows you to do everything remotely, even turning the power on or going into the BIOS setup. Given that servers often land in attics, closets, basements etc, this is a very neat thing, and you will need it when things don’t go as planned.
2.5G Ethernet has become cheap. It’s still a lot cheaper than 10G and can use the standard cabling. Go for it. It helps to have PCIe slots free for additional network cards.
Where do your backups go? RAID is not a backup. Data without backups are not important. This is one of the most important questions!
The “multipurpose” part does not require more than a pair of SSDs for apps/VMs. And it seems that (small) size and (low) noise are also on the requirement list; the Node 304 is a good choice for these. However, mini-ITX is a killer in that the AMD B550 chipset supports 6 SATA ports, BUT only expensive server boards such as the X570D4I-2T expose the full set; with Ryzen consumer boards, which would otherwise be reasonable choices thanks to unofficial ECC support, one has to go micro-ATX or larger to get 6 ports. (All my NAS have ECC and IPMI, but if I had to choose I’d take ECC over IPMI.)
My plan was actually to go with RAIDZ2, since I’d like to eventually expand the pool to 6x6TB + another pool with 2x SSD.
Do you think a 500W PSU would be more appropriate in that case?
I don’t expect more than 3 clients at the same time. To be honest, I still need to educate myself more in this area before deciding.
Interesting thread; it somehow goes from ECC to car accidents and back again. It seems like ECC vs. non-ECC is an eternal debate with no one-size-fits-all answer. Since I’m planning to store personal data that can’t be easily replicated, I’d definitely sleep better with that extra layer of protection.
I built a very similar machine last fall. I used the following:
Gigabyte B550I Aorus Pro AX
AMD Ryzen 5 PRO 5650GE
2x 32GB DDR4 3200 ECC Unbuffered
M.2 to SATA 6 port adapter, ASM1166
Noctua NH-L12Sx77 CPU fan
Fractal Design Node 304
3x Corsair individually sleeved SATA cables
10x 8 inch Thin-SATA Cables
Corsair RM Series RM650
10Gtek 10Gb PCI-E NIC
I chose the motherboard because it is the same motherboard that 45 Drives puts into their HL4/HL8 NAS offerings. The PCIe bifurcation works out great for me. I use a PCIe x16 to x8x4x4 expansion card that allows me to still use my 10 GbE NIC and add two M.2 NVMe drives to the machine. I used one of the motherboard M.2 slots for the M.2 to SATA adapter, allowing me to have 10 total SATA drives, and I still have three NVMe drives, plus my 10 gig NIC. I am not sure whether the ASRock boards support PCIe bifurcation, but if not, I would look at the Gigabyte offering.
I do think ECC is critical, which is why I went with the AM4 platform. AM4 gives me enough performance, and DDR4 ECC memory is reasonably priced. DDR5 ECC memory is just too costly right now.
I run Proxmox as my main OS, and I have virtualized TrueNAS. I use the four motherboard SATA ports for Proxmox, and I pass through the M.2 to SATA adapter directly to TrueNAS. It has been working quite well for me. Plenty of power for running VMs (5 total) and Docker containers (24 total). It’s not just my NAS; it’s my primary home server too.
BTW, I don’t think the RM650 is overkill. It is a unit that allows the fan to turn off when not needed. On my machine it almost never runs. With 10 SATA SSDs installed, the server only draws 40 watts at the wall when idling.
I have a mishmash of drives. My Proxmox OS and VMs sit on a pool made up of 2 mirrored vdevs (sort of the ZFS analog to RAID 10) using four Samsung SM863 drives. In TrueNAS I have 4 SM863 drives and 2 Intel D3-S4510 drives. I was able to find all of these for around $45/TB.
The SM863 drives are rated 3.6 DWPD and the S4510 drives 2 DWPD. Either is well suited for the workloads I run. I just can’t seem to control myself when I find a bargain LOL.
The price difference between 500, 550 and 650 W PSU is negligible when planning for over £1000.
Then ECC (easy enough with Ryzen…) and some backup strategy.
“RAID is not a backup.”
“ZFS is not a backup.”
Then you need at least 4 drives to begin with raidz2.
Total capacity would be 24 TB raw (ca. 20 TB usable), which makes it possible to back up to a single external drive, and raises the question of whether you could do with a smaller array of larger drives, even possibly a mirror (although at this size I’d personally want it to be 3-way).
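As a quick sanity check on those numbers, a back-of-the-envelope calculation (decimal TB as marketed, ignoring TiB conversion and metadata overhead, with the usual keep-pools-under-80%-full rule of thumb):

```python
# Back-of-the-envelope raidz capacity check; real usable space
# will be somewhat lower (TiB vs TB, padding, metadata).
def raidz_data_capacity(drives: int, size_tb: float, parity: int) -> float:
    """Data capacity of a raidz vdev: (drives - parity) * drive size."""
    return (drives - parity) * size_tb

data_tb = raidz_data_capacity(drives=6, size_tb=6, parity=2)  # raidz2
usable_tb = data_tb * 0.8  # keep the pool below ~80% full
print(f"data capacity: {data_tb} TB; comfortable usable: ~{usable_tb:.0f} TB")
# data capacity: 24 TB; comfortable usable: ~19 TB
```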
I am using a 750 W Seasonic Focus Gold because I was planning to run 12 drives. Plans have changed and I only run 6, with plans to expand to 8, but I also have a GPU which was never accounted for.
Get a good quality PSU that is a bit over what you need as long as it does not cost too much.
I mostly mean the price point. However, a PSU’s efficiency-vs-load curve is usually an upside-down bathtub. This can also be taken into account when you plan to feed a 200 W system with a 650 W PSU.
For example, at 230 V, 80 Plus Gold requires 88% efficiency at 20% load, and Silver requires 89% at 50% load (per Wikipedia). So, for a 200 W system, a 400 W Silver PSU would be about as good as a 1000 W Gold PSU. But again, my main point is about price, not about efficiency.
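For illustration, a rough wall-draw comparison for that example, using the 80 Plus figures quoted above (real efficiency curves vary per unit):

```python
# Compare AC draw at the wall for the same 200 W DC load on two PSUs.
def wall_draw(dc_load_w: float, efficiency: float) -> float:
    """AC input power for a given DC load at a given efficiency."""
    return dc_load_w / efficiency

load = 200.0  # W DC
gold_1000w  = wall_draw(load, 0.88)  # 1000 W Gold at 20% load -> 88%
silver_400w = wall_draw(load, 0.89)  # 400 W Silver at 50% load -> 89%
print(f"1000 W Gold:  {gold_1000w:.0f} W at the wall")
print(f"400 W Silver: {silver_400w:.0f} W at the wall")
# ~227 W vs ~225 W: effectively a wash, so buy on price and build quality.
```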
I would suggest the ASRock B650I for the motherboard. And as already said, 650 W is total overkill and will inflate the build price and your electricity bill for nothing, even if it’s Gold. 350 W should be plenty. I run a B650I with a Ryzen 8500G and 6 SATA disks on a 250 W (industrial-quality) PSU without any hardware issue.
Wow, I didn’t expect the topic to bloom so quickly with so many suggestions and questions. Thanks for all the support.
Let me quickly answer some of the questions:
No, I don’t, but it would come as part of the package, with 8 SATA connections via OCuLink on the X570D4I-2T.
That’s because the option 1 ASRock B550 has only 4 onboard SATA ports, and the PCIe slot would need an HBA to get me to 8+.
That looks like exactly what I want. I bet it’s dead quiet and fast.
How’s the adapter working for you? What about the temperatures—did you make any modifications to the case to improve cooling? Can you share more details about the exact component names and how they’re arranged?
I didn’t really consider an M.2 to SATA 6-port adapter, since in all the articles and threads I’ve read, everyone seems to prefer a PCIe HBA over an adapter.
Quite rightly so, as you’d need to be very careful with what’s on this M.2 stick (no port multiplier!), and then very careful with cooling the tiny controller and with never bending the M.2 card when plugging or unplugging cables.
With mini-ITX one really has to get everything right on the motherboard, to keep the sole PCIe slot for expansion that cannot be done in any other way.
A very good match for the Node 304 is an X10SDV motherboard: 6 SATA, 1 M.2 for boot, an x16 slot which can bifurcate all the way to x4/x4/x4/x4, IPMI, and plenty of cheap (ECC) DDR4 RDIMM; 10GBase-T is a common bonus. But one has to find one at a reasonable price second-hand, and these boards are getting old, although they are still sold new… for eye-watering prices.
Thanks. It helps to clarify the hierarchy of requirements.
6+ SATA over transcoding GPU; 10G not needed.
6 SATA comes easily in micro-ATX size.
X570D4I-2T brings everything, and some more, in mini-ITX size but at high costs (plural: the board itself is costly, and then ECC SO-DIMM is a further bummer).
Even switching to socketed Intel CPUs may not help: SATA is falling out of fashion, so you might have to look back at older boards for 6+ SATA in mini-ITX, which brings back the availability issues. Xeon E now even lacks an iGPU…
Something has to give, but you’re the only one who can decide what.
Very fast, and super quiet. I didn’t need to make any modifications to improve the cooling. I am using a 35 watt TDP processor, and the SSDs really don’t make much heat at all. It idles at 36 degrees C, drawing only 40 watts of power.
With the exception of the M.2 adapter, I think all of the exact component names are listed above. If not let me know which items you want more details about. I used this adapter: https://www.newegg.com/p/17Z-0061-000D5?Item=9SIARE9K9G2395
Other than the SSD brackets there’s not much to say about how I arranged things in the case. I had to do one case modification to get the Noctua CPU cooler to fit: I chopped off the corners of the drive caddies as shown in this picture.