AMD Ryzen 5900X Reused for a TrueNAS Scale Build: Hardware and Power-Efficient Build Recommendations

This is my first TrueNAS build, and first homelab build of any kind, so any advice whatsoever would be highly appreciated!

I have a spare Ryzen 5900X from a previous computer build and wanted to get some more use out of it by moving it into a NAS. Little did I know what I was getting into. I think I’ve ironed out most of the parts and the build, but could use help with a few details from those with experience, namely:

  1. Motherboard selection: My original choice was the ASRock Rack X570D4U, but both that board and its predecessor, the X470D4U, appear to be unobtainium right now, at least from what I’ve seen. I read through the excellent thread on Pixelwave’s Ryzen build, and he also recommended the ASRock X570M Pro4, but that board lacks IPMI and I’m not sure how difficult it would be to set up a board like that headless (I do have a spare graphics card though, so it’s doable). Any thoughts on the importance of IPMI, or other boards I should consider?
  2. NIC selection: deciding between Intel X710-DA2 vs the TRENDnet TEG-10GECSFP or equivalent, which to my understanding is a rebranded Marvell AQN-100. Chose these two NICs for their support of ASPM, to allow the Ryzen processor to idle down to a C6 C-State. Does anyone have any experience running the AQN-100 with TrueNAS Scale, or any issues with it, particularly with a Ryzen processor?
  3. 64 vs 128 GB of RAM: 64 GB should be plenty of ARC to start, but I’m tempted to get 128 GB given what I hear about Scale and Linux only assigning half of memory to ARC by default. I imagine this can be changed in the configs, and I can always upgrade easily down the line.
  4. Any suggestions for good low-power SAS HBA cards that support ASPM? This is one part of the build I haven’t figured out yet, as I’ve been trying to nail down the motherboard first to figure out how many additional SATA ports I’ll need.
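On point 3: in SCALE (Linux) the ARC ceiling is exposed as the `zfs_arc_max` kernel module parameter, so the half-of-RAM default can be raised. A minimal sketch of checking and setting it from a root shell (the 48 GiB figure is just an example, and newer SCALE versions may manage this for you):

```shell
# Current ARC ceiling in bytes (0 = use the built-in default)
cat /sys/module/zfs/parameters/zfs_arc_max

# Example: let ARC grow to ~48 GiB on a 64 GiB box (value is in bytes)
arc_max=$((48 * 1024 * 1024 * 1024))
echo "$arc_max" > /sys/module/zfs/parameters/zfs_arc_max
```

Note this doesn’t persist across reboots on its own; TrueNAS has its own mechanism for setting tunables, so treat the above as a way to experiment rather than permanent config.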
Use Cases
  • Long-term storage archive (OS image backups, documents, pictures, etc.)
  • Music server (Samba for home speakers, maybe Plex Amp or something similar for phones)
  • Game drive (directly load games off the NAS and network)
  • Homelab experimenting
    • Private cloud server (i.e. WireGuard for remote access to NAS)
    • Home assistant
    • Pi-hole or AdGuard Home
    • Maybe a VM or two
  • Optional:
    • Gaming server (Minecraft, Palworld, etc.)
    • Git Server
    • Plex/Jellyfin Media Server
    • VPN/Proxy

Detailed parts breakdown (besides the ones mentioned above) - Please feel free to point out any mistakes I’ve made or other things I should consider:

  • Processor: Ryzen 5900X
    • spare part, wanted to give it a new home. Admittedly there are likely better platforms out there now, and the Ryzen solution lacks Intel Quick Sync (but media encoding is low on the expected use case list). Welcome to hear from anyone who thinks this would be throwing good money after bad by reusing this processor though.
  • Boot Drive: Samsung 980.
    • Cheap and extremely power efficient, with ASPM support.
  • Memory: 64 GB - 2 x Micron 32GB DDR4-3200 ECC UDIMM 2Rx8 CL22
    • Memory speed matches the Ryzen 5900X well; debating 64 vs 128 GB.
  • Pool storage: 8-10 4 TB WD Red SA500 SATA SSDs in a single RAIDZ-2 VDEV
    • Probably the most controversial part of the build. I wanted the pool to be fairly performant to allow for playing around with a variety of homelab applications, and based on the ZFS reading I did, the best way to do this is to have a large quantity of drives, at least when it comes to streaming reads and writes. On the other hand, IOPS performance in RAIDZ is tied to the speed of a single drive, and having many smaller HDDs isn’t power efficient compared to a few large drives. Furthermore, I don’t anticipate our storage needs being extremely high. That’s how I landed on SSDs, for their speed, power efficiency, and flexibility (and honestly, because it sounded fun. The worst reason, but it’s a reason). Considered a two-VDEV RAIDZ-2 setup, but I believe the IOPS of a single SSD will be sufficient for what I want to accomplish. Please feel free to talk me out of this one if you feel I’m making a mistake.
  • Power Supply: Corsair RM750x (2021)
  • Case: Sliger C4712
    • Only downside of this rackmount case is the lack of an air filter. Looks better quality than the Rosewill cases though. That being said, if anyone has a suggestion for an alternative quiet 4U rackmount case with a filter, let me know.
Other Relevant Info
  • The NAS will be plugged into a UPS, which is backed by a standby generator
  • The 10g NIC will connect to a 10g port on a 2.5g switch. Plan on upgrading to 10g, but it’s an older home, so probably will have to pull cat6a rather than fiber. The 10g will still be useful of course, if multiple systems are pulling from the NAS at the same time.
  • Chose TrueNAS Scale for its containerization advantage over Core, with Docker and Kubernetes being more robust than iocage. Fits my overall use case better. Still a NAS first, but it’s a NAS+. Also suspect it will have better hardware compatibility for AMD Ryzen and be easier to make a low-power build with, though admittedly I haven’t looked into FreeBSD alternatives to utilities like PowerTOP on Linux. Still a lot to learn here.
  • Don’t believe I’ll need L2ARC, SLOG, or a metadata VDEV, since I’m working with an all SSD pool.

Any answers to my above questions or input on my build is much appreciated, and a big thank you in advance.

This is no longer the case as of 24.04.0 (though be aware there is a related bug causing excessive swap usage and slowdowns; it will be fixed in 24.04.1, set to be released later this month, and there is a workaround until then).

I typically prefer to stick with Intel as they have great compatibility. I haven’t had much interaction with Marvell, so no idea how well it would work in SCALE. Also had no idea TRENDnet made NICs; I’ve got tens of their PoE injectors lying about.
X710-DA2 also has two SFP+ ports, so you’ve got the potential to scale (pun intended) there with multipathing.

Nothing a quick Amazon search for “PVC dust filter sheet” won’t fix (nice and cheap too)

cpupower perhaps? I haven’t used powertop before
Tiredness getting the better of me… powertop exists in SCALE, and I’ve used it, God knows what I was thinking of.
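For what it’s worth, the idle states the CPU actually reaches can also be read straight from sysfs without powertop; a quick sketch, assuming the standard Linux cpuidle layout:

```shell
# List the idle states exposed for CPU 0 and how often each was entered
for d in /sys/devices/system/cpu/cpu0/cpuidle/state*; do
    printf '%s entered %s times\n' "$(cat "$d/name")" "$(cat "$d/usage")"
done

# Interactive view with per-state residency percentages:
# powertop --time=10
```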

If you’re looking to play around with VMs I’d recommend creating a small mirror pool to run them from, even if it’s a 2x2-way mirror of small-ish SSDs with a replication task to back up to the primary storage pool. Mirrors are recommended for block-storage due to the higher IOPS (although it looks like you’ve done a lot of research already, so you may already know this :P)

Have probably missed a lot, it’s 2 in the morning and my eyes have started going halfway through my reply.


When you’re ready, you’ll probably want to play around with Sandboxes/Jails too.

I demo installing docker, dockge and jellyfin in a sandbox here:


I run the X470D4U with a 5800X… I run the CPU in 65 W Eco mode with CO -15 on all cores.
I just put a cheap 10G Mellanox MCX311A-XCAT NIC ($20) in it hooked up to a cheap 4 port 2.5g switch w/ 2 x 10g SFP+ ports … I run CORE though … it’s been rock solid 99% of the time … that 1% was from my screw ups

I’ve thought of building an AM5 desktop setup to run SCALE on, but IPMI is very handy … and the X470D4U far surpasses what I actually “need” anyways, with just running Plex and UniFi Controller in separate jails … and like I said … it’s been rock solid. Everything I use it for just works, and works well.


Excellent to hear that. Assumed that might be the case. And by the time I get all the hardware, I imagine the swap usage fix will be out as well.

These two reasons are why I was leaning towards Intel as well. Though the TRENDnet NIC came onto my radar when I saw this forum post and was impressed by how low the power numbers were. Wasn’t able to find a good apples-to-apples comparison with the X710 power-wise, however. Still leaning towards the X710 for the above reasons though. How useful would you say multipathing is?

Exactly what I was thinking.

Absolutely tried to do as much research as I could to make sure I had a basic grasp on things before coming to the forum, and knew of the concept, but hadn’t considered having a smaller mirror back up to the main storage pool. This is why real-world application input and experience is extremely useful, thanks for that! I’ll probably start by using the main pool for this (unless that’s bad practice), since the SSDs should be decently fast on their own, but I’ll absolutely keep this in mind and maybe add some mirrored NVMe drives in the future for that purpose if I do end up heavily using VMs. Still dipping my toes into the rabbit hole, so unsure how deep I will go (I was theorycrafting about hosting a vulnerable VM on the NAS for cybersecurity target practice, before quickly realizing how bad an idea that is - duh!).

No worries, and sorry for such an information dense post. It’s my first time sorting through all this stuff, and I tried my best to highlight the most relevant details, but still a lot of things I’m trying to fully wrap my head around. Hopefully you get some good rest, and thank you for your input!

Thanks for sharing this, was going to be searching for a nice set up guide once I got the Hardware and OS set up and configured. Sandboxes seem like a really excellent solution for a lot of what I want to do. Absolutely will check this out in detail when I get to that stage!

Yep, the X470D4U seems like a great solution; bummed I can’t find it for sale anywhere at a reasonable price. Glad you had success with Ryzen though, and that the path is decently trodden. By any chance, have you ever been able to measure power numbers with that nice underclock you are running, or experimented with undervolting? I forget all the details of how a PBO underclock affects power draw on Ryzen platforms; there’s a chance it might not affect power draw at all (assuming CO -15 is referring to a -15 Curve Optimizer offset here).

The true struggle of speccing out a NAS build, separating out what you want from what you need; especially difficult when you are just starting out lol.


You could likely make your own IPMI if the X570D4U is even less obtainable than it used to be a year or two ago…

Otherwise I see no obvious flaws. Consider making use of whatever NVMe lanes are available for VMs and apps instead of dumping everything onto the SSDs. Also, you’ll need an HBA if you’re going with 10 SSDs; that leaves some motherboard SATA ports for the boot pool (yes, you can boot from the HBA, but it is a pain IMO).

I’d argue that you don’t need 128GB of ram… but I do sometimes regret only getting 64.

*Edit: the 5900X is a great choice; been using it for a while now. Limited it to, uhh… ‘Eco mode’? if I remember correctly. Locking it to no more than 65 W saves the hassle of downclocking/undervolting and potentially introducing instability in something that is meant to be running 24/7.


Just saw a video on PiKVM the other day and thought it might be useful for this. Guess I’ll have to dive in deeper and learn how it works and what I need to set it up. Hopefully Raspberry Pis are easier to get these days.

Great to know. Now I’m considering adding two NVMe drives for VMs and apps. I might copy some of the builds I’m seeing here and go with two 1 TB NVMe drives in a mirror, since that seems like a solid way of doing this. I notice a lot of folks running 4+ NVMe drives. Do they plug into the HBA card, or are folks using PCIe adapters to mount them?

Also, an aside, but I see a lot of folks are running mirrored boot drives. I was just planning on using a single Samsung 980 NVMe drive for this to preserve PCIe lanes, and assumed it would be fine since TrueNAS used to boot off of a USB (USBs being famed for their reliable flash memory). Is it standard practice to boot off of mirrored drives now, and what reliability benefits does that bring?

Great point on preserving motherboard SATA. From what I understand, I’d just need to buy a SAS HBA, and then I can break out each of those SAS ports into 4 SATA ports with breakout cables and just plug the drives in. Any other things I should be aware of?

Also, by any chance, do you know if the LSI IBM SAS9211-8i you are running supports ASPM? It’s been hard to get details on that, and I’m trying to be diligent to make sure anything I pick up supports it so the CPU can deep idle. That’s the annoying thing I’ve found with ASPM: one bad apple spoils the bunch.
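One way to answer the ASPM question empirically once a card is in hand: `lspci` reports both what a device supports (LnkCap) and what is currently negotiated (LnkCtl). A sketch of filtering that output, run as root on a SCALE shell:

```shell
# Print each PCIe device followed by its ASPM capability and current state
lspci -vv 2>/dev/null | awk '
    /^[0-9a-f]/         { dev = $0 }        # device header line, e.g. 01:00.0
    /LnkCap:/ && /ASPM/ { print dev; print }
    /LnkCtl:/ && /ASPM/ { print }
'

# Kernel-wide ASPM policy (default / performance / powersave / powersupersave)
cat /sys/module/pcie_aspm/parameters/policy
```

A line like `LnkCtl: ASPM Disabled` on any device in the path is the “one bad apple” that keeps the package out of deep idle.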

I’m leaving two slots open, but the RAM is fairly cheap compared to the total BOM cost of the build (and probably could find it cheaper if I searched around for used)… so I’m tempted just to go for it. No, begone RAM devil, I don’t need it! I’ll just download more later.

Glad to see someone who has had success running the 5900X. Reassuring to know it’s a reliable and supported choice. I’ll absolutely lock it to eco mode, great piece of advice from you and ThreeDee.

Thanks so much for the feedback! You guys are awesome.


It’s not bad practice per se, especially considering that you’re running an SSD pool (this is actually something I missed last night; I assumed you were running hard drives).

Mirrors are the general recommendation for block storage. If you’re only having a play around with a couple of VMs, I can’t see any reason why there’d be an issue, and if you decide you want to upgrade in the future you can just replicate the VM’s zvol(s) over to a pool made up of mirrored SSDs.
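For reference, that replication can be set up through the SCALE Replication Task UI, but under the hood it’s just ZFS snapshots plus send/receive; a hand-rolled sketch with made-up pool and zvol names:

```shell
# Initial full copy of the VM's zvol to the big pool (names are examples)
zfs snapshot fast/vms/vm1@backup-1
zfs send fast/vms/vm1@backup-1 | zfs recv -u tank/backups/vm1

# Subsequent runs only send the delta since the previous snapshot
zfs snapshot fast/vms/vm1@backup-2
zfs send -i @backup-1 fast/vms/vm1@backup-2 | zfs recv -u tank/backups/vm1
```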


I’ll piggyback off this thread, as I think there is a relevant question regarding drives for VMs: at a given cost point, it is often possible to buy, for example, an NVMe drive with significantly faster read/write characteristics, or one with double the capacity.

I understand that ZFS starts slowing down significantly the fuller a pool gets, at least as explained here by jgreco. Does that apply more to spinning HDDs than to NVMe and SATA SSDs? In which case, is there a general rule of thumb in relation to focusing on capacity over read/write speeds for M.2 NVMe?


For a motherboard, I suggest the Gigabyte MC12-LE0. It has fully-featured IPMI and explicit ECC RAM support. They can be acquired very cheaply from a German retailer, see here: Gigabyte Mainboard MC12-LE0 Re1.0 AMD B550 AM4 Ryzen. Shipping is high to the USA, but I happen to know someone who bought a few extras to sell (and amortize shipping, PM me for details :wink: ).

With just a 5700x CPU, 128GB ECC RAM, a single SATA SSD, and a single 92mm CPU fan, you can expect idle power consumption around 24 Watts. I’m using this board for my home server (running Proxmox, I have a separate system for TrueNAS), and average power draw is about 44 Watts (final config has several fans, Solarflare 10gbe card, and I’m running a three-camera Zoneminder system on it, among other things, so it’s never idle, but generally a fairly low load).

For 10gbe NIC, keep in mind, all else being equal, SFP+ with TwinAx DAC will always have lower power consumption than RJ45 10GBase-T. It takes a fair amount of power to do 10gbe over standard Ethernet cable, whereas direct attach copper sidesteps the need for the PHY thus saving a lot of power (with the caveat that max cable length is rather limited). Some kind of optical (fiber) connection is next-best.

For a 10gbe NIC, I’d suggest getting a Solarflare 5000-series from ebay, e.g. SFN5122 or SFN5152. The 5000-series specifically has very low power draw (their newer cards are still fairly efficient, but higher) and are readily available on ebay for $20 or less.

I’m not sure about the Solarflare’s support of ASPM. In my use case, the network card will pretty much always be active (because of the cameras), so I don’t think ASPM would ever have a chance to be utilized. As for Ryzen C6 C-State: on my (Proxmox) system, it appears C3 is the lowest C-State (e.g. via powertop). I’ve only done some very cursory research, but at first glance it appears the 5700x doesn’t actually implement the lower C-states.

Edit: one thing I should add, that Gigabyte board has two PCIe slots, one is only x4. I would expect a single-port Solarflare SFN5152 to work in that x4 slot. I haven’t tried that exactly, but I have used that NIC in a Supermicro A2SDi-8C-HLN4F, which has only an x4 slot. So presumably you can put your NIC in the x4 slot and use the x16 for your HBA.


I guess the only benefit of NVMe for VMs would be the hugely higher IOPS and lower latency vs SATA at a sweet consumer price point.

I’m 99% sure that the 80% utilization limit would still apply. I’d personally get double the capacity vs double the speed in this hypothetical if cost is the same. This is assuming the same quality brand and something like PCIe Gen 5 1 TB vs Gen 4 2 TB (which is a somewhat realistic hypothetical).


The 90% utilization limit certainly still applies (that’s when ZFS switches allocation algorithms).

Until then, the basic problem is that fragmentation has increased and finding “holes” to write into takes longer. But on an SSD there’s not much of a seek penalty.

Put another way, “80%” is guidance because you really want to avoid 90% :wink:
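Both numbers are easy to watch from the shell; a small sketch (the pool name `tank` is an example) that reads a pool’s capacity and warns past the 80% guidance:

```shell
# CAP is percent used; FRAG is free-space fragmentation, not file fragmentation
zpool list -o name,size,alloc,free,cap,frag tank

# Strip the trailing '%' and compare against the 80% rule of thumb
cap=$(zpool list -H -o cap tank | tr -d '%')
if [ "$cap" -ge 80 ]; then
    echo "tank is ${cap}% full - plan expansion before hitting 90%"
fi
```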


I am a spoiled brat, and I refuse to use a system without IPMI for even my homelab at this point. I’m probably a little less risk averse for homelab use cases, and I wouldn’t die on the not having ECC hill. I will die on the not having IPMI hill. :slight_smile: Thankfully for me they typically come hand in glove.

Stick with the tried-and-true server grade cards. In SCALE you should have decent drivers for the Aquantia card, but that doesn’t mean the hardware is necessarily any good. IIRC in CORE, it’s not even supported.

I’d recommend that Intel card for sure. I also really like these cards:
Dell Mellanox ConnectX-4 Lx CX4121C Dual 25GbE SFP28 FH (MRT0D) | eBay

I have no affiliation with this seller, and always be wary when buying used cards on eBay. Do your own research, see:
Fake server cards - Resources - TrueNAS Community Forums

But that being said, I’ve happily purchased several ConnectX cards and run them on TrueNAS and Windows systems. No complaints. They’ll even work on CORE, though they don’t have a large market share there AFAIK so YMMV on FreeBSD. I could be wrong.

You can use 10 gig optics in the 25 gig cards, and you’ll probably benefit from having better silicon. For 40 bucks it’s a steal IMO.

Also, as far as RAM goes: personal experience, backed by plenty of others on Reddit and other forums. The X470D4U board is picky about RAM. It could just be that first-gen Ryzen was generally picky about RAM. In any case, you’ll probably have to run at JEDEC speeds with poor timings, not XMP/AMP, if you load her up with RAM.

That being said, for anything >= 64 GB especially, I’d find something on the QVL of whatever board you choose and either buy that exact SKU or find one that reports the same timings and JEDEC speed.

When I had 64 GB in my X470D4U and 1700X, I had to run at 2400 with all 4 DIMM slots populated, even though the RAM was spec’d for 3200 and ran at that speed with only 2 DIMMs.

I’ve seen weird problems, from myself and others. I once just ran a normal update; everything had been fine for months. After it completed, I restarted, and the system didn’t come back up. It couldn’t boot, and I had to hard-code the speed to the slowest setting with 1 DIMM, reboot, install another, reboot, install 2 more, reboot… It was obnoxious.

This “problem” is true for most desktop-adjacent platforms in general, not just the one in question here.

This is also why I now typically run enterprise-grade stuff at home, power usage be damned. :slight_smile:


@OP, not sure where you’re based, but this Aussie site has the x570D4U in stock.

They shipped an x470d4u board to me in NZ for a reasonable price.


Thanks. The context is that SATA SSDs in the UK cost significantly more (about 20%+) than equivalent-quality M.2 NVMe drives, but within the M.2 NVMe drives there is overlap between, say, 1x read/write speed and 1x capacity vs. 0.5x read/write speed and 2x capacity but the same general endurance rating (note: one would expect the higher capacity to have higher endurance, but this is not the case in the exact cost overlap areas I’m seeing). It’s still a fair comparison though, as the same data is put through both in this type of scenario.

Considering the use case (increasing storage of small game save files for a game server before they get moved to long-term archival storage, and storage of larger audiovisual files before the same move, rather than block storage for VMs), in my case I might be better off with the larger storage, as the extremely rapid data access for VMs is already met by the current pool. In general though, it looks like another interesting trade-off for homelabs :face_with_diagonal_mouth: :money_mouth_face:

Generally speed is a concern if you have a specific use case that drives it. And then it gets more complicated as you need to know what kind of speed it is you want:

  • Raw sequential read/write?
  • Random read/write?
  • Lower latency?
  • Sustained read/write speeds?

Hence unless you have a specific need capacity is best imo (cost, endurance, & quality being equal.)

The strange thing is that I wouldn’t just expect it; every manufacturer I’ve seen has higher endurance for higher-capacity drives (approximately double the TBW per doubling of capacity).
Looking up Total Bytes Written on any of these randomly chosen PCIe Gen 4 NVMe drives would argue that I’m not crazy.

Unless you meant DWPD in terms of endurance, in which case I’d argue that equivalent DWPD for disks A and B, but with higher TBW for disk A, means that disk A still has higher endurance.
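The two ratings are just unit conversions of each other: TBW is roughly DWPD times capacity (in TB) times the warranty period in days. A quick shell sanity check with illustrative numbers (not from any specific datasheet):

```shell
# 0.3 DWPD on a 1 TB drive with a 5-year warranty
# (DWPD scaled by 10 to keep the arithmetic in integers)
dwpd_x10=3
capacity_tb=1
warranty_days=$((5 * 365))
tbw=$((dwpd_x10 * capacity_tb * warranty_days / 10))
echo "~${tbw} TBW"   # prints "~547 TBW"
```

Doubling the capacity at the same DWPD doubles the TBW, which matches the roughly-double-TBW-per-capacity pattern in the datasheets.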

That 5900X is overkill; I am planning to downgrade my CPU from a 5600X to a Ryzen 3 PRO 3200G. The first thing I did was undervolt the 5600X and disable Turbo. That thing heats up for nothing, just idling. Since this will run 24/7, consider the electricity bill.

The missing IPMI feature is not as bad as you think; once you set everything up, you will (at least in my case) rarely touch the server.

I have the ASRock X570M Pro4 with ECC RAM and it just works. The fan on the motherboard (bottom right) got stuck in the first week, so that’s that.

The board supports bifurcation, which is neat for my ASUS Hyper M.2 x16 Gen 4

The CPU is probably overkill but I am unsure if it has much influence on idle power consumption given that the X570 chipset is not exactly well suited to a low power system.

I recently built a system with an ASRock Rack X570D4U and a 3900X CPU that was repurposed from another build, along with 2x 16 GB ECC 2666 memory sticks. The system idles at 29 W with an NVMe boot disk on a 550 W Platinum PSU. That doesn’t seem excessive considering the idle load of the X570 chipset, which, with its TDP of 11 W, is known to overheat on this board. Active cooling with an after-market solution is required in anything other than a well-ventilated 1U chassis. A 14 cm fan located on the side of a Fractal Design Define R5 does the trick silently.

The IPMI on the board is quite nice since Ryzen 2 and 3 CPUs do not feature any GPU. I would consider it as optional on a Ryzen 4 system but not on this generation.
