Hello folks!
I’m the quintessence of the “long-time reader, first-time poster” kind of user: I first toyed with the idea of my own FreeNAS/TrueNAS around 15 years ago, and here I am finally in the process of building my first one!
I’d love your feedback on my plan and I also have some questions about my setup and TrueNAS SCALE.
Use case
Media server for pictures and videos
Backup server for laptops, workstations and other devices
SMB file server
Nextcloud
Several jails including DBMS, network services, Home Assistant
VMs, but I’m not sure; I’ll try to avoid them
Experiment with sandboxes using systemd-nspawn and Gentoo (requires compiling from source), without Docker
Won’t use / Won’t need
Transcoding
Plex
Production level SLA or high performance on the DBMS
Read cache L2ARC (?)
SLOG device
Priorities
Low noise
Low power
Reduced dimensions
Performance
HW
MOBO: MSI D3052, ASRock Rack B650D4U3-2Q/BCM as a second option
The MSI has an Intel NIC and the ASRock a Broadcom, but the M.2 slot is PCIe 4.0 x4 on the MSI while on the ASRock it is PCIe 5.0 x4.
DISKS: 8 x 14 TB SATA 5400 rpm WD white label (WD Elements shucked)
SSD: Crucial T500 2TB M.2 (PCIe 4.0 x4)
BOOT DISK: SATA DOM 64 GB (?)
CASE: Fractal Design Node 804
PSU: Seasonic Prime Fanless TX 600W
UPS: APC 1000VA
The NAS will go in a small office area, so my first priority is having it running as quietly as possible.
I have a 25 Gbps fiber link reaching the NAS, connected directly to the router. Having an Intel NIC on the MSI motherboard is what made me pick it over the ASRock.
I was thinking about a single vdev with 8 disks in RAIDZ2.
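Rough math for that layout, just to double-check my expectations (a sketch only; the TB-to-TiB conversion is exact, but ignoring ZFS metadata/padding overhead is a simplifying assumption):

```python
# Back-of-the-envelope capacity for a single 8-wide RAIDZ2 vdev of 14 TB disks.
# Assumptions: parity-only math, ignoring ZFS metadata/padding overhead and
# the usual "keep it under ~80% full" guideline.

disks = 8
parity = 2
disk_tb = 14                                  # marketing TB (10**12 bytes)

raw_tb = disks * disk_tb                      # 112 TB raw
usable_tb = (disks - parity) * disk_tb        # ~84 TB before overhead
usable_tib = usable_tb * 10**12 / 2**40       # ~76 TiB, as TrueNAS reports it

print(f"raw: {raw_tb} TB, usable: ~{usable_tb} TB (~{usable_tib:.0f} TiB)")
```

So I’d be looking at roughly 76 TiB before applying the usual free-space headroom rule, which is plenty for my use case.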
I can see the benefit of having the sandboxes/jails on the SSD, but I’m not sure how to structure the SSD.
For example, can I use one partition as the boot drive and the rest for the sandboxes? If not, maybe add a SATA DOM as the boot drive and use the SSD only for the sandboxes?
What about L2ARC: should I add it, and can it be a partition, or must it be a whole dedicated SSD?
What’s your take on spare drives? A hot spare attached to one of the motherboard’s SATA ports, or a cold spare to be attached only when a disk fails?
MSI only lists the Ryzen 7000 series as supported CPUs, but I see no reason why it should not work with the newer series as well. It’s the same B650 chipset as the ASRock, which supports the Ryzen 7000/8000/9000 series. Am I wrong?
Thanks for the help, I definitely missed this and it’s pretty important. Why go for the EPYC when the problem is the RAM and not the CPU? Can’t I just replace the RAM with “plain” ECC modules?
You can, of course, but as far as I know the EPYC 4004 is not much more expensive than consumer Ryzen.
Then, of course, you have a brand new DDR5 motherboard with on-board 25G… and shucked white-label drives in a single RAIDZ2 which will not come even close to saturating that link. A bit imbalanced.
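Rough numbers to illustrate (the per-disk sequential figure is just an assumed ballpark for 5400 rpm drives; real RAIDZ behaviour varies and random I/O delivers far less):

```python
# Why a single 8-wide RAIDZ2 of 5400 rpm disks won't saturate a 25 Gbit/s link:
# large sequential reads scale roughly with the 6 data disks; small/random I/O
# delivers far less. The per-disk throughput below is an assumed ballpark.

data_disks = 8 - 2                   # 6 data disks in the RAIDZ2 vdev
per_disk_mb_s = 180                  # assumed sequential MB/s for a 5400 rpm drive

pool_gbit_s = data_disks * per_disk_mb_s * 8 / 1000
print(f"best-case sequential: ~{pool_gbit_s:.1f} Gbit/s vs a 25 Gbit/s link")
# -> roughly 8-9 Gbit/s at best, well short of the link
```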
It used to be like that, but ECC support for the Ryzen 7 9700X is in the specs on the AMD product page, so it’s pretty official.
I didn’t look at the EPYC 4004 in detail, but I’ve read it is essentially a Ryzen 7000-series CPU, so an older manufacturing process and most likely higher power. I don’t see the point, except maybe saving a few $ on the price tag.
I’ll check the RDIMM/UDIMM RAM though, because that can change things based on what I can find available.
Yeah, I know, but I really hate the “cartel pricing” of the so-called NAS disks. I know they are supposed to be a better fit, but in my very personal opinion they are not worth the extra money, especially when buying several of them.
My second option was refurbished server disks, but I found nothing at 5400 rpm, and 7200 rpm disks are too loud.
I don’t care much about saturating the fiber link; I already have everything in place on the network. Having 25 Gbps support on the mobo was just a nice addition, I didn’t pick it for that reason, a 10 Gbps port would have worked just fine.
Mind that Ryzen/EPYC 4004 is strictly UDIMM. RDIMM is better for capacity and price (especially DDR4 RDIMM), but then you’re looking at Xeon Scalable or “genuine” EPYC (SP3, SP5, SP6 sockets).
Yes, a rebranded Ryzen 7000, and it gets official ECC certification. If that means nothing to you, feel free to ignore it, but don’t get lured into thinking that Ryzen 9000 is “lower power”: it’s still the same I/O die, and the same relatively high idle power.
For lower idle power, go for an APU (PRO for ECC): The monolithic die puts I/O on a finer process than in chiplet-based CPUs and uses less idle power.
Ok, I guess I don’t have a lot of choice then, going up to “genuine” EPYC makes little sense for my use.
I did a bit of research on this. I can see the benefit of having certified server features on the 4004; that’s the whole reason it was introduced. However, for a home/office NAS certifications mean little, and certainly nothing to me. The only thing I need is ECC, and that is officially supported even if it is not certified.
What I found out is that 4004 CPUs are a big plus for people who need to build small servers where certification matters, for example for BMCs, NICs, or even server software like hypervisors.
I have to disagree. It’s Zen 5 vs Zen 4; maybe not a huge step, but there is a difference: in the benchmarks the 9000 goes as low as 7.6 W at idle, while the 4004 is almost double that at 13.5 W. Same TDP, same core count.
The Zen 4 APU (Ryzen 8000G) does not support ECC.
The previous APU that I could find with ECC support is the Zen 3 5750G, but it is almost 4 years old and most likely not as efficient as modern Ryzens. The 5750G does have a monolithic die, but it’s on 7 nm, against the 4 nm cores + 6 nm I/O die of Ryzen 9000. Add a similar TDP and I find it difficult to believe it will be lower power than the 9000, even discarding the performance gap that is definitely there.
Did I miss anything here? If you had a precise model in mind, please let me know.
Anyway, it was a good pointer. It’s a real pity the 8000G family can’t be used; it would have been perfect: extremely efficient and low power.