AMD Ryzen 5900X Reused for a TrueNAS SCALE Build: Hardware and Power-Efficient Build Recommendations

Thanks for sharing this, was going to be searching for a nice set up guide once I got the Hardware and OS set up and configured. Sandboxes seem like a really excellent solution for a lot of what I want to do. Absolutely will check this out in detail when I get to that stage!

Yep, the X470D4U seems like a great solution; bummed I can’t find it for sale anywhere at a reasonable price. Glad you had success with Ryzen though and that the path is decently trodden. By any chance, have you ever been able to measure power numbers with that nice underclock you are running, or experimented with undervolting? I forget all the details of how a PBO underclock affects power draw on Ryzen platforms; there’s a chance it might not affect power draw at all (assuming CO -15 is referring to a -15 Curve Optimizer offset via PBO here).

The true struggle of speccing out a NAS build: separating what you want from what you need; especially difficult when you’re just starting out lol.

1 Like

You could likely make your own IPMI if the X570D4U is even less obtainable than it used to be a year or two ago…

Otherwise I see no obvious flaws. Consider making use of whatever NVMe lanes are available for VMs & apps instead of dumping everything onto the SSDs. Also, you’ll need an HBA if you’re going with 10 SSDs - that frees up some motherboard SATA ports for the boot pool (yes, you can boot from the HBA, but it’s a pain IMO).

I’d argue that you don’t need 128GB of ram… but I do sometimes regret only getting 64.

*Edit: 5900x is a great choice, been using it for a while now. Limited it to uhh… ‘eco mode’? if I remember correctly. Saves the hassle of downclocking/undervolting and potentially introducing instability to something that is meant to be running 24/7 by locking it to no more than 65w.

1 Like

Just saw a video on PiKVM the other day and thought it might be useful for this. Guess I’ll have to dive in deeper and learn how it works and what I need to set it up. Hopefully Raspberry Pi’s are easier to get these days.

Great to know. Now I’m considering adding two NVMe drives for VMs and apps. I might copy some of the builds I’m seeing here and go with two 1 TB NVMe drives in a mirrored array, since that seems like a solid way of doing this. I notice a lot of folks running 4+ NVMe drives. Do they plug into the HBA card, or are folks using PCIe adapters to mount them?

Also, an aside, but I see a lot of folks are running mirrored boot drives. I was just planning on using a single Samsung 980 NVMe drive for this to preserve PCIe lanes, and assumed it would be fine since TrueNAS used to boot off of a USB (USBs being famed for their reliable flash memory). Is it standard practice to boot off of mirrored drives now, and what reliability benefits does that bring?

Great point on preserving motherboard SATA. From what I understand, I’d just need to buy a SAS HBA, and then I can break out each of its SAS ports into 4 SATA ports with breakout cables and just plug the drives in. Anything else I should be aware of?

Also, by any chance, do you know if the LSI/IBM SAS9211-8i you are running supports ASPM? It’s been hard to find details on that, and I’m trying to be diligent to make sure anything I pick up supports it so the CPU can deep idle. That’s the annoying thing I’ve found with ASPM: one bad apple spoils the bunch.
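
In the meantime, here’s the rough check I’ve been running on my desktop to see which devices actually advertise and enable ASPM - just a sketch that parses lspci -vv output, assuming a Linux box with pciutils installed (and probably root):

```python
#!/usr/bin/env python3
"""Rough ASPM survey: print what each PCIe device advertises vs. what is enabled.

Parses `lspci -vv` (pciutils); run as root so the LnkCap/LnkCtl lines are visible.
"""
import re
import subprocess

lspci = subprocess.run(["lspci", "-vv"], capture_output=True, text=True, check=True).stdout

device, name, advertised = None, "", "unknown"
for line in lspci.splitlines():
    if line and not line[0].isspace():       # device header, e.g. "01:00.0 Serial Attached SCSI controller: ..."
        device, name = line.split(" ", 1)
        advertised = "unknown"
    elif "LnkCap:" in line:                  # what the device advertises, e.g. "ASPM L0s L1" or "ASPM not supported"
        m = re.search(r"ASPM ([^,;]+)", line)
        advertised = m.group(1).strip() if m else "unknown"
    elif "LnkCtl:" in line:                  # what is actually enabled right now
        m = re.search(r"ASPM ([^,;]+)", line)
        enabled = m.group(1).strip() if m else "unknown"
        print(f"{device}: advertised={advertised!r} enabled={enabled!r}  ({name[:60]})")
```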

I’m leaving two slots open, but the RAM is fairly cheap compared to the total BOM cost of the build (and probably could find it cheaper if I searched around for used)… so I’m tempted just to go for it. No, begone RAM devil, I don’t need it! I’ll just download more later.

Glad to see someone who has had success running the 5900X. Reassuring to know it’s a reliable and supported choice. I’ll absolutely lock it to eco mode, great piece of advice from you and ThreeDee.

Thanks so much for the feedback! You guys are awesome.

1 Like

It’s not bad practice per se, especially considering that you’re running an SSD pool (this is actually something I missed last night - I assumed you were running hard drives).

Mirrors are the general recommendation for block storage. If you’re only having a play around with a couple VMs I can’t see any reason why there’d be an issue, and if you decide you want to upgrade in the future you can just replicate the VM’s zvol(s) over to a pool made up of mirrored SSDs.
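
If you do go that route later, the move is just a snapshot plus send/receive. A minimal sketch of what that could look like (the pool and zvol names below are placeholders, not anything from your setup):

```python
#!/usr/bin/env python3
"""Minimal local zvol replication sketch: snapshot, then zfs send | zfs receive.

Pool/zvol names are placeholders; run as root on the ZFS host.
"""
import subprocess

SRC = "tank/vms/vm1"        # hypothetical existing zvol backing the VM
DST = "fastpool/vms/vm1"    # hypothetical target on the new mirrored SSD pool
SNAP = f"{SRC}@migrate"

# 1. Take a snapshot so we send a consistent point-in-time copy.
subprocess.run(["zfs", "snapshot", SNAP], check=True)

# 2. Pipe `zfs send` into `zfs receive` on the destination pool.
send = subprocess.Popen(["zfs", "send", SNAP], stdout=subprocess.PIPE)
subprocess.run(["zfs", "receive", DST], stdin=send.stdout, check=True)
send.stdout.close()
if send.wait() != 0:
    raise RuntimeError("zfs send failed")
```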

1 Like

I’ll piggyback off this thread, as I think there is a relevant question here regarding drives for VMs: at a given price point, it is often possible to buy, for example, an NVMe drive with significantly faster read/write characteristics, or one with double the capacity.

I understand that ZFS starts slowing down significantly the fuller a pool gets, at least as explained here by jgreco. Does that apply more to spinning HDDs than to NVMe and SATA SSDs? If so, is there a general rule of thumb about favouring capacity over read/write speed for M.2 NVMe?

1 Like

For a motherboard, I suggest the Gigabyte MC12-LE0. It has fully-featured IPMI and explicit ECC RAM support. They can be acquired very cheaply from a German retailer, see here: Gigabyte Mainboard MC12-LE0 Re1.0 AMD B550 AM4 Ryzen. Shipping is high to the USA, but I happen to know someone who bought a few extras to sell (and amortize shipping, PM me for details :wink: ).

With just a 5700x CPU, 128GB ECC RAM, a single SATA SSD, and a single 92mm CPU fan, you can expect idle power consumption around 24 Watts. I’m using this board for my home server (running Proxmox, I have a separate system for TrueNAS), and average power draw is about 44 Watts (final config has several fans, Solarflare 10gbe card, and I’m running a three-camera Zoneminder system on it, among other things, so it’s never idle, but generally a fairly low load).

For 10gbe NIC, keep in mind, all else being equal, SFP+ with TwinAx DAC will always have lower power consumption than RJ45 10GBase-T. It takes a fair amount of power to do 10gbe over standard Ethernet cable, whereas direct attach copper sidesteps the need for the PHY thus saving a lot of power (with the caveat that max cable length is rather limited). Some kind of optical (fiber) connection is next-best.

For a 10gbe NIC, I’d suggest getting a Solarflare 5000-series from ebay, e.g. SFN5122 or SFN5152. The 5000-series specifically has very low power draw (their newer cards are still fairly efficient, but higher) and are readily available on ebay for $20 or less.

I’m not sure about the Solarflare’s support of ASPM. In my use case, the network card will pretty much always be active (because of the cameras), so I don’t think ASPM would ever have a chance to be utilized. As for Ryzen C6 C-State: on my (Proxmox) system, it appears C3 is the lowest C-State (e.g. via powertop). I’ve only done some very cursory research, but at first glance it appears the 5700x doesn’t actually implement the lower C-states.
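
If anyone wants to double-check without powertop, the kernel’s own cpuidle counters tell the same story. A minimal sketch, assuming a Linux host that exposes /sys/devices/system/cpu/*/cpuidle (Proxmox and SCALE both should):

```python
#!/usr/bin/env python3
"""Print the idle states the kernel exposes for CPU0 and how long it has spent in each.

Reads the standard Linux cpuidle sysfs interface; no third-party tools needed.
"""
from pathlib import Path

cpuidle = Path("/sys/devices/system/cpu/cpu0/cpuidle")
for state in sorted(cpuidle.glob("state*")):
    name = (state / "name").read_text().strip()        # e.g. POLL, C1, C2, C3/C6 depending on platform
    usec = int((state / "time").read_text().strip())   # cumulative residency in microseconds
    print(f"{state.name}: {name:6s} {usec / 1_000_000:10.1f} s")
```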

Edit: one thing I should add - that Gigabyte board has two PCIe slots, and one is only x4. I would expect a single-port Solarflare SFN5152 to work in that x4 slot. I haven’t tried that exactly, but I have used that NIC in a Supermicro A2SDi-8C-HLN4F, which has only an x4 slot. So presumably you can put your NIC in the x4 slot and use the x16 for your HBA.

2 Likes

I guess the main benefit of NVMe for VMs would be the massively higher IOPS and lower latency vs SATA, at a sweet consumer price point.

I’m 99% sure that the 80% utilization guidance would still apply. I’d personally take double the capacity over double the speed in this hypothetical if the cost is the same. This is assuming the same quality/brand and something like a PCIe Gen 5 1 TB vs a Gen 4 2 TB (which is a somewhat realistic hypothetical).

2 Likes

The 90% utilization limit certainly still applies (that’s when ZFS switches allocation algorithms).

Until then, the basic problem is that fragmentation increases and finding “holes” to write into takes longer. But on an SSD there’s not much of a seek penalty.

Put another way, “80%” is guidance because you really want to avoid 90% :wink:
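
If you want to keep an eye on those two numbers, zpool already reports them. A minimal sketch of a check script, assuming the standard zpool CLI is on PATH (the 80% threshold is just the usual guidance, not something ZFS enforces):

```python
#!/usr/bin/env python3
"""Warn when any pool crosses the usual ~80% capacity guidance, and show fragmentation.

Wraps `zpool list` in script-friendly mode (-H no headers, -p parsable numbers).
"""
import subprocess

WARN_AT = 80  # percent used; informal guidance, not a hard ZFS limit

out = subprocess.run(
    ["zpool", "list", "-Hp", "-o", "name,capacity,fragmentation"],
    capture_output=True, text=True, check=True,
).stdout

for line in out.strip().splitlines():
    name, cap, frag = line.split("\t")
    cap = int(cap.rstrip("%"))      # strip '%' defensively in case it is still appended
    frag = frag.rstrip("%")
    flag = "  <-- consider expanding" if cap >= WARN_AT else ""
    print(f"{name}: {cap}% full, {frag}% fragmentation{flag}")
```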

4 Likes

I am a spoiled brat, and I refuse to use a system without IPMI for even my homelab at this point. I’m probably a little less risk averse for homelab use cases, and I wouldn’t die on the not having ECC hill. I will die on the not having IPMI hill. :slight_smile: Thankfully for me they typically come hand in glove.

Stick with the tried-and-true server grade cards. In SCALE you should have decent drivers for the Aquantia card, but that doesn’t mean the hardware is necessarily any good. IIRC in CORE, it’s not even supported.

I’d recommend that Intel card for sure. I also really like these cards:
Dell Mellanox ConnectX-4 Lx CX4121C Dual 25GbE SFP28 FH (MRT0D) | eBay

I have no affiliation with this seller, and always be wary when buying used cards on eBay. Do your own research; see:
Fake server cards - Resources - TrueNAS Community Forums

But that being said, I’ve happily purchased several ConnectX cards and run them on TrueNAS and Windows systems. No complaints. They’ll even work on CORE, though they don’t have a large market share there AFAIK so YMMV on FreeBSD. I could be wrong.

You can use 10 gig optics in the 25 gig cards, and you’ll probably benefit from having better silicon. For 40 bucks it’s a steal IMO.

Also, as far as RAM goes: personal experience, backed by plenty of others on Reddit and other forums - the X470 D4U board is picky about RAM. It could just be that first-gen Ryzen was generally picky about RAM. In any case, you’ll probably have to run at JEDEC speeds with poor timings, not XMP/AMP, if you load her up with RAM.

That being said, for anything 64 GB or more especially, I’d check the QVL of whatever board you end up with and either buy that exact SKU or find one that reports the same timings and JEDEC speed.

When I had 64 in my X470 D4U and 1700x, I had to run at 2400 with all 4 DIMM slots populated, even though the RAM was spec’d for 3200 and ran at that speed with only 2 DIMMs.

I’ve seen weird problems, both myself and from others. Once I just ran a normal update after everything had been fine for months; after it completed, I restarted and the system didn’t come back up. It couldn’t boot, and I had to hard-code the speed to the slowest setting with 1 DIMM, reboot, install another, reboot, install 2 more, reboot… It was obnoxious.

This “problem” is true for most desktop-adjacent platforms in general, not just the one in question here.

This is also why I now typically run enterprise-grade stuff at home, power usage be damned. :slight_smile:

2 Likes

@OP, not sure where you’re based, but this Aussie site has the x570D4U in stock.

They shipped an x470d4u board to me in NZ for a reasonable price.

2 Likes

Thanks. The context is that SATA SSDs in the UK cost significantly more (about 20%+) than equivalent-quality M.2 NVMe drives, but within the M.2 NVMe drives there is overlap between, say, 1x read/write speed and 1x capacity vs 0.5x read/write speed and 2x capacity with the same general endurance rating (note: one would expect the higher capacity to have higher endurance, but that is not the case in the exact price-overlap area I’m seeing). It’s still a fair comparison though, as the same data would be put through both drives in this scenario.

Considering my use case (staging small game-save files for a game server before they get moved to long-term archival storage, plus staging larger audiovisual files before the same move, rather than block storage for VMs), I might be better off with the larger capacity, as I don’t need extremely fast access for VMs - that is already covered by the current pool. In general though, it looks like another interesting trade-off for homelabs :face_with_diagonal_mouth: :money_mouth_face:

Generally speed is a concern if you have a specific use case that drives it. And then it gets more complicated as you need to know what kind of speed it is you want:

  • Raw sequential read/write?
  • Random read/write?
  • Lower latency?
  • Sustained read/write speeds?

Hence, unless you have a specific need, capacity is best IMO (cost, endurance, and quality being equal).

The strange thing is that I wouldn’t just expect it - every manufacturer I’ve seen rates higher endurance for higher-capacity drives (approximately double the TBW per doubling of capacity).
Looking up the Total Bytes Written rating on any of these randomly chosen PCIe Gen 4 NVMe drives would argue that I’m not crazy:

Unless you meant DWPD in terms of endurance - in which case I’d argue equivalent DWPD for disk A & B, but with higher TBW for disk A, means that disk A still has higher endurance.
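
For anyone following along, the two ratings are related by simple arithmetic. A quick worked sketch (the 1 TB / 600 TBW / 5-year figures are illustrative, not a specific drive):

```python
# Rough relationship between TBW and DWPD (illustrative numbers, not a specific drive).
capacity_tb = 1.0        # drive capacity in TB
tbw = 600.0              # rated Total Bytes Written, in TB
warranty_years = 5

# DWPD = total rated writes spread evenly over the warranty period,
# expressed as full-drive writes per day.
dwpd = tbw / (capacity_tb * warranty_years * 365)
print(f"{dwpd:.2f} drive writes per day")   # ~0.33 DWPD

# Doubling capacity along with double the TBW keeps DWPD the same,
# but the absolute endurance (total TB you can write) doubles.
```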

That 5900X is overkill; I am planning to downgrade my CPU from a 5600X to a Ryzen 3 PRO 3200G. The first thing I did was undervolt the 5600X and disable Turbo - that thing heats up for nothing, just idling. Since this will run 24/7, consider the electricity bill.
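
Rough numbers, assuming something like 30 W at the wall and $0.30/kWh (plug in your own rate):

```python
# Rough 24/7 running-cost estimate (the 30 W draw and $0.30/kWh rate are assumptions).
idle_watts = 30.0
price_per_kwh = 0.30      # adjust for your local electricity rate

kwh_per_year = idle_watts * 24 * 365 / 1000    # ~263 kWh/year
print(f"{kwh_per_year:.0f} kWh/year ≈ ${kwh_per_year * price_per_kwh:.0f}/year")
```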

The missing IPMI feature is not as bad as you think; once you set everything up you will (at least in my case) rarely touch the server.

I have the ASRock X570M Pro4 with ECC RAM and it just works. The fan on the motherboard (bottom right) got stuck in the first week, so that’s that.

The board supports bifurcation, which is neat for my ASUS Hyper M.2 x16 Gen 4 card.

The CPU is probably overkill but I am unsure if it has much influence on idle power consumption given that the X570 chipset is not exactly well suited to a low power system.

I recently built a system with an ASRock Rack X570D4U and a 3900X CPU that was repurposed from another build, along with 2x 16 GB ECC 2666 memory sticks. The system idles at 29 W with an NVMe boot disk on a 550 W Platinum PSU. That doesn’t seem excessive considering the idle load of the X570 chipset, which, with its TDP of 11 W, is known to overheat on this board. Active cooling with an after-market solution is required in anything other than a well-ventilated 1U chassis; a 14 cm fan mounted on the side of a Fractal Design Define R5 does the trick silently.

The IPMI on the board is quite nice, since Zen 2 and Zen 3 Ryzen CPUs do not feature an integrated GPU. I would consider it optional on a Zen 4 system, but not on this generation.

1 Like

Sorry for the late response, it’s been one of those weeks. Thanks to everyone for the excellent resources and advice, I’ve taken the time to read what everyone shared and do some more research.

I was aware of the 80%-90% rule, but not the fragmentation cost over time. I was surprised that ZFS has no defragmentation built in, but I can imagine the difficulty and trade-offs involved in integrating a defragmentation algorithm into a CoW file system. Fascinating to learn about; I’m glad I came to the forums.

Based on the resources Essinghigh and Krill shared, I am considering a two-drive 1 TB NVMe mirror for my block storage needs. Figure it’s better to keep unnecessary writes off the primary storage, even if SSDs don’t suffer the same fragmentation penalties as HDDs, as Stux mentioned (a nice bonus for SSDs that I was not aware of). I imagine this should be plenty, but let me know if there’s any reason to go with 2 TB drives or a 2x2 mirror instead (I don’t imagine I’ll need too much capacity here).

1 Like

Thanks for the input, Stux. To make sure I understand correctly: the fragmentation concerns discussed here by JGreco don’t apply to SSDs, because data on an SSD is effectively “fragmented” by default (dynamic wear leveling acts in a similar manner to CoW, to my knowledge, and the SSD controller spreads contiguously written data across multiple chips to even out wear).

So keeping the pool occupancy rate low (around 10%-20%) is not a concern with an SSD pool, from a fragmentation perspective, and you will see no appreciable performance impact until you hit ~90%, when ZFS switches to a slower allocation algorithm to save space.

I didn’t even think to check B550 boards, thanks for bringing this board to my attention! Of course, just when I’m also considering adding some NVMe drives for block storage. Maybe I can break up the x16 slot into 2 NVMe drives plus an HBA for 8 SSDs. Otherwise I have some decisions to make and trade-offs to consider between having IPMI or more PCIe lanes.

Great to have a nice real world estimate, thanks for that.

Absolutely, going with an SFP+ card and a DAC cable for that reason. The switch is mainly RJ45 2.5GBase-T, but has two 10g SFP+ ports as well.

Didn’t even check Solarflare NICs, thanks for the tip. Looks like they came out around the same time as the X710s, so there’s a decent chance they have ASPM support. Another rabbit hole awaits lol.

I’d be surprised if the 5700x doesn’t support the C6 power state, as my understanding was that all Zen 3 CPUs supported it. Though the 5700x is a bit of an odd one, since it was released much later along in the Zen 3 lifecycle.

Not surprising that the lowest it’s gotten is C3 though, as my understanding is that Zen 3 AMD CPUs go C1-C2-C3-C6. (PowerTOP is an Intel utility, so it’s a bit wonky with AMD CPUs, which follow a different power-state system than Intel CPUs.) Could be your use case, or could be some device in your system not supporting ASPM and thus preventing the deep idle (the CPU can idle down to C3 with non-ASPM-compliant PCIe devices, but not C6). That seems to be the annoying thing about achieving a C6 power state: all it takes is one device not supporting ASPM to keep the processor from reaching C6. Trying my best to avoid that fate in the buying phase, but easier said than done.


Lots more to reply to, but I got to get to bed, so I’ll reply to the rest tomorrow. Thanks so much everyone for the excellent thoughts and input. Really helped me hone in and clarify some concepts.

You could do that with this adapter and an HBA.

But realistically, the MC12-LE0 is well suited to 6 SATA and 4 NVMe in the bifurcated x16 slot (ASUS Hyper M.2 and similar) OR to an HBA for many SAS/SATA drives but no NVMe (save for the x1 boot drive). 10 GbE NIC in the x4 slot in either case.
For more than 6 SATA plus multiple NVMe drives and a NIC, you may look into refurbished Xeon Scalable/EPYC to have more PCIe lanes to begin with; of course, idle power will be higher. Using a 5900X just because you have it may not be the best starting point.

Lack of support for C6 may be on the TrueNAS side, not on the hardware side.

2 Likes

Hypothetically, this may be true.

2 Likes