Advice for TrueNAS Build

Hi everyone,

I’m completely new to TrueNAS (I set up a test system on an old PC for a few days, that’s it), but I’ve already gone through the basics and I’m fairly comfortable with IT in general. Now I want to build a proper system and have been researching for a while what would make sense. I’m really focusing on energy efficiency, since electricity costs in Germany aren’t exactly cheap.

The main purpose of the system would be as a NAS that I can also actively work on, while running a few VMs and Docker containers. I’m also considering using it to run Nextcloud for my family in the future. Performance is important to me, so I’d like to build a fairly powerful system from the start.

Here’s the setup I’m currently considering:

TrueNAS Scale

Hardware:

Server: Supermicro AS-2015SV-WTNRT
CPU: AMD EPYC 8124P
RAM: 4x Kingston Server Premier 64 GB reg. ECC DDR5-5600 KSM56R46BD4PMI-64HAI
HDD: 4x Seagate Exos X18 ST16000NM004J 16TB 24/7 512e SATA (RAIDZ2)

For the boot drive, I’m still unsure whether to:

Use 2 SATA SSDs in the front bays in a mirror setup
Or go with 1 NVMe SSD and leave the other slot free in case I want to add a SLOG or L2ARC later on

I’d also like to keep a few PCI slots free for future upgrades (HBA, NIC, GPU, etc.).

Now, I’m not sure if this setup is overkill for what I need or if it’s even worth going with brand-new hardware (especially in terms of energy efficiency).

For roughly the same price, I could go with a more powerful system:

Server: Supermicro AS-2024S-TR
CPU: 2x AMD EPYC 7313

My question is: is it worth it? Are these CPUs much less efficient? Do I even need that much power? (Though having extra power isn’t exactly a bad thing, right? :D)

If you have any alternative builds with similar specs that are more affordable and just as energy efficient, I’d love to hear about them.

I’ve also noticed that a lot of people go with Intel CPUs and fewer with AMD. Does TrueNAS run more smoothly on Intel than AMD? If so, is there a comparable or better system with energy-efficient Intel CPUs?

I also have a few more general questions:

  1. Is 256GB of RAM total overkill? Would 128GB or even less be enough to start with?
  2. Are the HDDs I’ve chosen appropriate, or would it make more sense to go with more, smaller drives or a different configuration?
  3. If I use 10GbE ports, what kind of performance can I expect? And if I’m working actively on the system, would more RAM (related to question 1) be beneficial?
  4. Does anyone know how much power the processors consume when idle? The system will be running 24/7 and likely idle quite a bit, so I'd like to understand the impact on energy usage.
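On the 10GbE question, here is the back-of-envelope I did myself (the per-drive throughput is an assumed ballpark from vendor spec sheets, not a measurement):

```python
# Back-of-envelope: what 10GbE can deliver vs. what 4 HDDs can stream.
# Assumption: ~250 MB/s sequential per Exos-class drive (rough vendor spec).

line_rate_gbit = 10                        # 10GbE nominal line rate
line_rate_mb = line_rate_gbit * 1000 / 8   # 1250 MB/s raw
usable_mb = line_rate_mb * 0.94            # ~6% TCP/SMB overhead (rough guess)

hdd_seq_mb = 250                           # per-drive sequential throughput (assumed)
data_drives = 4 - 2                        # raidz2 on 4 drives: 2 data, 2 parity
pool_seq_mb = hdd_seq_mb * data_drives     # streaming roughly scales with data drives

print(f"10GbE raw: {line_rate_mb:.0f} MB/s, usable ~{usable_mb:.0f} MB/s")
print(f"4-wide raidz2 sequential (rough): ~{pool_seq_mb} MB/s")
```

So with those assumptions, four spinning drives wouldn’t saturate a 10GbE link on their own; cached reads from RAM (ARC) would, which is part of why I’m wondering about RAM sizing.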

I’m still pretty new to server hardware, as I’ve mostly worked with consumer hardware in the past. This year I started with “server” hardware and built myself an OPNsense box. If what I’ve picked out here is way off the mark, please feel free to point it out!

I know system-building questions are asked frequently, and I’ve already looked through a lot of threads, but I just want to be sure before making such a large investment. If you have any relevant threads or posts, I’d really appreciate it if you could share them.

Thanks in advance for any help, ideas or suggestions!

Please pick ONE…
Seeing someone come for advice on a power-efficient system and then go straight for EPYC is quite surprising. As serious hardware goes, Siena is probably not bad, but then going from there to a DUAL Milan… you do realise this thing will use 150 W just to idle?

Define what you want and how many resources the apps and VMs require, because 256 GB looks like massive overkill for a home NAS. Just four spinning drives? With a pair of SSDs for apps/VMs?
You can probably save a lot of money (including on the electricity bill) by going for older hardware—and quite possibly a class under EPYC/Scalable.
A few large hard drives is the right direction for saving power, and 4 is just enough for a reasonably safe raidz2.
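As a quick sanity check on what four 16 TB drives in raidz2 actually yield (a rough sketch that ignores ZFS metadata, padding, and reservation overhead):

```python
# Rough usable capacity of a raidz2 vdev: (n - 2) data drives' worth.
n_drives = 4
drive_tb = 16        # marketing terabytes (10^12 bytes)
parity = 2           # raidz2 tolerates two simultaneous drive failures

raw_tb = n_drives * drive_tb
usable_tb = (n_drives - parity) * drive_tb
usable_tib = usable_tb * 1e12 / 2**40   # roughly what the OS will report

print(f"raw: {raw_tb} TB, usable: {usable_tb} TB (~{usable_tib:.1f} TiB before ZFS overhead)")
```

So four drives already give on the order of 29 TiB usable, which is a lot of headroom for a home NAS.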

Yeah, I’ve also noticed that it’s a bit contradictory. I probably should have mentioned that, in the end, performance is more important to me (my goal isn’t extremely low power, but rather efficiency with good performance; if that makes sense, anything under 80-100 W idle will do). Regarding the processors, I wasn’t aware that Milan also consumes so much power at idle (that’s why I’m asking; as I said, I’m new to server hardware). My thought process with the first system was that relatively new processors are likely to be much more efficient than older ones. Then I roughly put something together and saw that for a lower price, I could get the other system, which is much more powerful but apparently very inefficient.

Could you give me a tip in the right direction regarding other systems “a class under EPYC/Scalable”?

For the 256GB of RAM, I figured that if I’m working with a lot of data simultaneously, more could be cached in RAM, and I’d get better performance rather than constantly loading from the HDDs (I’m working with Unreal Engine, and the assets can get quite large). And yeah, 2 mirrored SSDs just for the VMs and apps sounds pretty good, especially since it’s not a huge cost compared to everything else.

As for what my VMs will use, I’m not exactly sure yet. I have a few things planned for the future, like Nextcloud, OpenProject, and maybe a game server or two (Minecraft), though I’m not sure if that’s a good idea yet.

As I see it, EPYC is for a ton of PCIe lanes, which is useful for a lot of NVMe.

If you only want a handful of NVMe that can be accomplished with PCIe bifurcation cards and one or two slots.

Otherwise, modern-generation high-clock CPUs are for 25 Gbps+ workloads, where very high single-thread performance can help.

What do you want to achieve?

For a “powerful” starter NAS, I’d be looking for a microATX board (or better) that supports 10GbE, ECC, and ideally an Intel/AMD iGPU. And IPMI.

And the capacity to hold both a GPU, an 8i LSI HBA, and a few M.2 NVMe drives.

If you want more NVMe, then you think about EPYC :wink:

Yeah, that sounds like the right direction you’re suggesting. (I wasn’t aware that the current Intel Core CPUs support ECC.)

I did some more research and came across a few boards, like the Pro WS W680-ACE IPMI, X13SAE-F, and the Pro WS W790-ACE (the only one with 10GbE).
However, I wasn’t able to find any AM5 boards with official ECC support.

Do you have any boards or barebone servers that you would recommend?

If you are not doing so already, I highly recommend that you download the User Manual for each motherboard and read it from front to back (this gets the motor running), then read it again so you might understand it a little more. Look for features you want. You may not understand much, or you might understand a lot, but you will at least be familiar with it.

What does “actively work on” mean? I ask because some people want to edit video files, which is a demanding task; however, if you just want to use the NAS as a shared space (SMB) for your computer to access and create/edit Word documents, spreadsheets, or similar things, that does not require much power. The same question applies to whether the Docker containers you want to run need a lot of CPU power and RAM.

Write down a list of “Requirements” that the system must support. Forget about 256GB of RAM for now; it really is about what you need the system to do, and then choosing the correct parts to make that happen. You should not just choose parts that sound cool.

You have people providing you good advice, telling you how much power a CPU draws, talking about PCIe lanes. These are important things to know but you must know the requirements.

  1. How much storage capacity do you need for the next 5 years? (5 years because hard drives typically last about that long.) You can always add more storage later, and soon it will be easier (I hope).
  2. Do you need really fast network speeds, or is 1Gbit/s good enough? And don’t forget the infrastructure you will need for faster speeds, unless you already have a 10Gb network.
  3. How fast does data need to be accessed? Do you need it instantly upon request, or is a 1-second delay okay? This drives decisions about the pool design and how much RAM you may need.
  4. Will it be an iSCSI host? This too is important for capacity concerns.

I think you catch my drift. You need to know what is expected of your system (requirements) before you choose the parts.

And you probably already know this… The hard drives are “consumable” items, meaning they will be replaced as they fail. The fans are consumable as well; they will fail eventually, but some fans last a very long time and some have a short life. The rest of the components are likely to last the life of the system before you decide to upgrade it. With that said, buy good-quality parts or you will likely be buying something again sooner than expected.

These are just my comments and I hope you find the correct parts and build a great server.

Good luck.

All these big server CPUs, Scalable and (non-embedded) EPYC, have quite high idle power… because low idle draw is not what they are designed for.
If you find you do need many cores and/or PCIe lanes, keep it to a single EPYC.

That would be Xeon E (= Core with ECC UDIMM), EPYC 4004 or consumer Ryzen, most of which unofficially support ECC.
Another option would be the embedded Xeon D: D-2100 (Skylake, i.e. embedded 1st gen. Scalable), D-1700/D-2700 (Ice Lake, 3rd gen.).

Two classes under, you’ll find the low power storage champions: Xeon D-1500 (Broadwell) and Atom C3000. These, however, may not be fit for demanding VMs.

Is this about running Unreal in a VM, or about serving assets stored on the NAS to Unreal running on your client?

These are workstation boards. They are certainly capable of server duty, though not most suited for this application.
Look into AsRock Rack W680D4U and variants for genuine server boards using 12-14th gen. Core, some including genuine 10G server NICs.
Mind that the W790 board is for Xeon W-3400, which is a variant of Sapphire Rapids (4th gen. Xeon Scalable). And the AQC113 is not exactly a good choice for server use.

Most AM4/AM5 support ECC, without certification. The safest AM4 options would be server boards: AsRock Rack X470D4U, X570D4U, B550D4U, and Gigabyte MC12-LE0 (this one being available very cheap right now).
AM5 possibly comes with official ECC support if you look for EPYC 4004, though the very same boards would likely work as well with ECC-enabled consumer Ryzen: AsRock Rack B650D4U, Gigabyte MC13-LE0/LE1, Supermicro H13SAE-MF (workstation).
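If you do go the unofficially-supported ECC route on Ryzen, it is worth verifying after assembly that ECC is actually active. A suggested check (my sketch; output details vary by board and BIOS) is to compare each module’s total width against its data width, and to look at the Linux EDAC counters:

```shell
# Needs root. In DMI type 17 (Memory Device), "Total Width: 72 bits" against
# "Data Width: 64 bits" indicates the extra ECC byte lane is in use.
dmidecode --type 17 | grep -E 'Total Width|Data Width'

# DMI type 16 (Physical Memory Array) reports the platform's correction mode.
dmidecode --type 16 | grep 'Error Correction Type'

# On Linux (TrueNAS SCALE), the EDAC subsystem exposes corrected-error
# counters only when ECC reporting is actually wired up:
grep . /sys/devices/system/edac/mc/mc*/ce_count 2>/dev/null
```

These are diagnostics to run on the finished box, so treat the exact output as board-dependent.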

Follow @joeschmuck’s advice and list your functional requirements: SMB or iSCSI? How many clients? Networking speed? Storage capacity?
Then, from the number of drives and PCIe lanes/slots, and the useful amount of RAM, we can narrow it down.

Features I’d like to have:

  • IPMI for remote management
  • 10GbE (not a must-have right now, as I can add a 10GbE NIC later)
  • 3-4 PCIe slots for future expansion (HBA, NIC, GPU)
  • 6 or more SATA ports (2 for boot drives, 4 for HDDs, plus additional SATA or NVMe ports for VMs and containers; fewer SATA ports are fine if I can use an HBA or onboard SAS)
  • 19-inch chassis with >6 front bays or a combination of 2 back and >4 front bays
  1. My current storage needs aren’t huge—maybe 2-3TB of data and another 2-3TB for backups. However, I’d like to have some extra space for the near future and plan to add more drives as needed.

  2. My main PC already has 10GbE, so I’d like to set up a direct 10GbE connection between the TrueNAS system and the PC. While I’m planning to eventually upgrade my entire network to 10GbE (with fiber), that’s probably at least two years away. For now, I would like to have one PCIe slot reserved for that.

  3. Instant access is very important to me. I don’t mind a slight delay the first time I access something, but if I’m actively working on the system, delays would be frustrating. I’m also thinking about spinning down the drives when idle for more than an hour to save energy.

  4. I’ll mainly be using SMB, with maybe 1-2 NFS clients. I had initially planned on using iSCSI, but setting it up seems complex. If I do go down that route, I’ll likely create a separate pool and dedicate it to a single iSCSI connection to my PC. If anyone has good guides or videos on this topic, I’d really appreciate the help, though I might save this for a future project.

Client Setup:

Main client: me (the one that needs the performance for active work).
Other clients: around 3-4 additional clients, mostly for backups and office documents.

VMs and Containers:

I’m planning to run a few services like Nextcloud (with around 10 users), OpenProject, a wiki system, a Git server (like Gitea), and maybe Perforce. I also want to run a Windows VM for testing purposes and might explore GPU passthrough.

Overall, I’m looking for a system with a bit more performance than I currently need, so I can play around a bit, test new things, and spin up additional VMs or Docker containers in the future.

I know it’s impossible to plan for everything, and maybe it’s not even wise to try. It might be better to build a second system in 2-3 years if I need it for additional tasks. But that’s exactly why I’m asking for advice here in the forum – to get a sense of what makes the most sense now and what I can leave for later.

The NAS should only serve the assets and Project files, I already have a powerful PC for tasks like 3D rendering. However, I’ve considered offloading tasks like compiling code to a system with more cores, as even with my i9 12900K, it’s a slow process. I’m not sure if the TrueNAS system would be ideal for this, but it’s something I was considering (but probably won’t do).

And thanks for pointing out the different platforms! I’m gonna take a closer look at those options and see if I can find something that fits my needs.

Your storage needs are limited, and call for a limited number of drives. I don’t think that you’d need a very large number of cores to run these VMs, or 256 GB RAM. RAM is always nice, but you might get away with 64-128 GB and fit that on a UDIMM platform (Xeon E/Core 12-14/Ryzen/EPYC 4004) with limited PCIe lanes but high clocks for SMB to a limited number of clients (SMB is single-threaded per client).
Preferably an SFP+ NIC in a PCIe slot.

As I see it, your server could fit in a MC12-LE0. Onboard M.2 x1 to boot. NIC in x4 slot.
x16 slot bifurcated x4x4x4x4 with a Ryzen CPU for NVMe pool (no Windows VM).
Or x16 slot bifurcated x8x4x4 with Ryzen CPU for mirror NVMe pool and low-profile GPU for Windows.
Or x16 slot bifurcated x8x4x4 with Ryzen PRO APU for mirror NVMe pool; Windows VM on the iGPU… and there’s still 8 lanes.

I let you work out a budget, or options for similar setups with Ryzen 7000/EPYC 4004 or Xeon E. Hopefully, this comes cheaper than EPYC 8000.

With how I understand your requirements, you are looking at way(!) too much power. I am running a NAS without VMs on a 4-core E5-1620 and a Supermicro X9SRi-F. The same board and an 8-core Xeon E5 run 20 VMs (although mostly idle), which are stored on the NAS.

You don’t give us quantitative information, which is ok. But in that light I would suggest looking at something really cheap (and old) to get a better understanding first. An 8-12 core Xeon E5 is probably more than enough, and those can be found (together with DDR3 RDIMMs) for very little money.

BTW: I am also in Germany and my NAS (with 8 HDD plus 4 SSDs plus 10 Gbps NIC) idles at around 100 Watts. The VM system idles at around 75 Watts.

Another thing to consider is the noise level. Any rack-mounted chassis will be loud. Very loud. Of course you can mod them by replacing the fans, but that will not work with EPYC or Xeon Scalable and their standard 2U passive heatsinks. The active 2U and 4U heatsinks are howling as well. You will probably also run into cooling problems with the HDDs.

I have a tower cooler from Noctua with a 14cm fan. It’s silent, but won’t fit in a 2U chassis.

System is idling at 160 Watts…

With your PCIe slot requirements, it’s looking like an ATX sized motherboard.

I would love to be able to recommend a mobo, but I haven’t really determined that since the X10 generation.

It can be very beneficial to run compilations on the NAS, especially if you can get it done with gitlab runners etc.

I run GitLab + CI runners etc. using Docker on mine. Lots of cores make quick work of compiling, with ARC keeping the hot set loaded.
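The runner side of that is simple to host. A sketch using GitLab’s official container image (the standard documented invocation; adjust the config path and registration details for your own instance):

```shell
# Run the GitLab Runner as a container. Mounting the Docker socket lets it
# spawn sibling containers for CI jobs (the "docker" executor).
docker run -d --name gitlab-runner --restart always \
  -v /srv/gitlab-runner/config:/etc/gitlab-runner \
  -v /var/run/docker.sock:/var/run/docker.sock \
  gitlab/gitlab-runner:latest

# Register it against your GitLab instance (interactive prompts for URL/token).
docker exec -it gitlab-runner gitlab-runner register
```

Once registered, compile jobs dispatched to the runner benefit from the NAS cores and from ARC-cached sources.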

Maybe EPYC is the way to go. That’s what I’m planning for my upgrade for my Primary NAS.

(See signature)

Primarily so I can get more NVME.

If cost is important, you can start with a smaller number of cores and an older-gen EPYC CPU.

I did a similar thing with my Primary… started with an E5-1650v4, now have an E5-2699Av4, which was a fraction of the price of the original CPU a few years later.

I’d be looking at ASRock Rack and Supermicro.

First of all, thank you all very much for your answers.

After thinking more about what I really need and want at the moment, I’ve decided that a smaller (and cheaper) system will be enough for now. My plan is to primarily use it for storage, along with a few small VMs and Docker containers, and see how much power they actually consume. If, over time, I find I need more performance, I can always upgrade to a more powerful system in the future and repurpose the old one for backups or other tasks. With that in mind, I took a closer look at the system that etorix recommended.

I researched the components and selected a few to compare with other systems.

I found the MC12-LE0 for around 70€, whereas most other comparable boards (for AM4, AM5, EPYC 4004, Intel 12th-14th gen, or Xeon E) were at least 275€. I paired the MC12-LE0 with a Ryzen 5 PRO 5650G (160€) and 2x Kingston 32GB DDR4-3200, KSM32ED8/32HC (210€), so the base system would cost around 440€. The cheapest alternative system I found was with an X470D4U2/1N1 at 645€ (using the same CPU and RAM). So, I figured I might as well go with the MC12-LE0 and save some money.

I checked out several other systems as well, including:

EC266D4-4L + Xeon E-2436 + 2x32GB DDR5 for around 1145€
X11SCL-F + Xeon E-2234 + 2x32GB DDR4 for around 785€
H13SAE-MF + EPYC 4244P + 2x32GB DDR5 for around 1080€

Just to list a few for comparison. But overall, I think the MC12-LE0 offers great value for the price.

For the other components, I was thinking of something like:

Case: Inter-Tech 4U-40248 (130€) (max CPU cooler height 150mm)
Power Supply: be quiet! Pure Power 11 600W (65€)
CPU Cooler: Noctua NH-D9L (60€) (height 110mm)
HDDs: 4x Exos X18 ST16000NM004J 16TB (1065€)

The total system cost would be around 1760€ + boot SSD and SSDs for the VM pool.

(For reference, the system I posted earlier would have cost around 5500€ + boot SSD and SSD pool for VMs).

What do you think about the setup?

I have a few questions left:

  1. Regarding the bifurcation card - can I use any card, or are there specific ones I should look for?
  2. For NVMe SSDs, are there special models for NAS/server systems, or can I just use something from the consumer market like the Samsung 980 (Pro)?
  3. Does it make a huge difference if I boot from an NVMe SSD versus a SATA SSD? (The system will likely be running 24/7).
  4. Is it worth mirroring the boot drive? If so, I’d probably go with two mirrored SATA SSDs, but if not, I’d stick with one NVMe as etorix suggested.
  5. On a similar note, would it make sense to use two SATA SSDs for the VM/container pool? That way, I could use the NVMe slot for boot and keep the x16 slot free for a GPU or additional storage in the future.

@ChrisRJ: I don’t have a fixed budget as you can see. I was ready to spend over 5000€, but I’d much rather “only” spend around 2000€ if I can!

@Farout: Noise won’t be much of an issue. I have a separate cellar room with 30-50cm thick stone walls, and with the setup I posted, I don’t think it’ll be too noisy. Thanks for the advice, though!

@Stux: Your setup for compiling sounds nice, and I might come back to you in the future to ask how you’ve set it up. For now, I think I’ll manage with my main PC, and I’ll consider building a more powerful system once I have a better idea of what exactly I need.

Here’s a Micro-ATX board for consideration,
ASRock Rack B650D4U-2L2T/BCM Micro-ATX Server Motherboard Single Socket AMD EPYC™ 4004 and AMD Ryzen™ 8000/7000, DDR5-4800/5200 PC5-38400 DDR5 SDRAM
I would have preferred going with the EPYC 4004, but decided for now to use an AMD Ryzen™ 7 7700X (8-core, 16-thread) as I wasn’t able to find the EPYC 4004. I will eventually switch it out and use the Ryzen elsewhere. I’ll be using an 850W PSU, a total of four 3.5″ drives at 8TB each, four 2.5″ 2TB SSDs, and one M.2. It will be a bit of a power hog, but fortunately I live in British Columbia, Canada. We have an abundance of cheap, reliable hydropower… (I’m sure some enviro people will complain.)

I think I’ve come across this board before, but I’m not sure if it’s worth almost €700 more than the system I’m considering. The board alone would cost me around €520, the EPYC 4244P is about €275, and 64GB of DDR5 would be another €325. I’m just not sure if the CPU would offer enough of a performance boost to justify the €700 price difference, especially if I’m planning to build another system in the future.

WOW, that’s expensive. My cost here in Canada was $545 (about 400 €). The EPYC 4244P is about the same cost. It’s strange how the prices are so different. I’m retired from IT, so this is a hobby and helps me keep my brain greased and working… lol.

That’s quite a list. Thanks for the research and sharing the results.
AsRock Rack boards would expose a bit more of the X470/X570/B550 chipset I/O, but for about 400 € more that’s just not worth it.

No, but the M.2 slot on the MC12-LE0 is PCIe 3.0 x1, so it is purposefully flagged for use as a boot device and crippled for any other use. Keep your SATA ports for hard drives; that’s more useful than mirroring the boot device in a home NAS.

You have an APU, so bifurcation stops at x8x4x4. You can have up to three drives, not four, in an Asus Hyper M.2 or equivalent, but the riser of choice is something like that:

Two M.2 for your app pool (directly on CPU lanes), and a x8 slot for a low-profile GPU or HBA if you ever need one. For graphical VMs, you already have the iGPU in your PRO 5650G, and the latest BIOS (F13) should make it possible to pass it through.

I would like to join the thread because I am currently also planning to replace my home server.
I have shortlisted the ASRock Rack X570D4U-2L2T as a motherboard because it has enough SATA interfaces. Motherboards with current chipsets usually only have 4 SATA ports.
Is the onboard graphics only intended for the IPMI, or can it also be used for a Windows VM?

Opening your own thread may be more appropriate.
The BMC has a basic 2D GPU. For the sake of fun, you might run a basic Linux desktop on an ASP2500, but Windows will not like it.