New to TrueNAS & Server Building – Looking for Advice on Planned (Theoretical) Build

Hey everyone,

I’m fairly new to TrueNAS and to proper server building in general. Up to now, I’ve mostly used old desktops for small projects like game servers, but I want to explore something a bit more serious—and fun. At the moment, this is more of a theoretical build that may turn into a real project later on, depending on what I learn.

Right now, I’m torn between using TrueNAS or Proxmox as the main OS. I’m not sure whether I should dedicate the system purely to storage or allow some room for virtualization as well. ZFS is new territory for me too. I originally thought of setting up something like RAID10, but I’m definitely open to better suggestions.

I’d really appreciate any advice or insights, especially on the following questions:

  • Is the chosen CPU complete overkill? Are the PCIe lanes sufficient?

  • RAID controller or HBA? With or without cache?

  • Do HBAs need additional cooling/fans?

  • How much RAM is realistically needed for a system used primarily for storage?

  • Is this the kind of setup a newcomer can handle, or should I seek help from someone with more experience?

Below is the hardware list I’m currently considering:

  • Chassis: 4U AMD single-CPU RA1436-AIEP server

  • Mainboard: Supermicro H13SSL-N

  • TPM: Trusted Platform Module 2.0

  • Networking: 2x 1 Gbit/s onboard LAN (Broadcom BCM5720)

  • CPU: AMD EPYC 9555P (3.20 GHz, 64 cores, 256 MB cache)

  • RAM: 128 GB (4x 32 GB) ECC Reg DDR5 5600 (Samsung)

  • Drives: 24x 24 TB Western Digital Ultrastar DC HC590 SATA HDD

  • Cabling: Broadcom SlimLine SAS cable (SFF-8654 to dual SFF-8643)

  • Backplane: SAS-III/SATA expander backplane (single expander chip)

  • HBA: Broadcom 9500‑16i SAS/SATA/NVMe

  • Networking (additional): Intel X710‑T2L Dual‑Port 10‑Gbit RJ45

  • PSU: 2× 1200W redundant hot‑swap (80 Plus Platinum)

Intended usage:
Video playback, fast backup/restore operations, storage of RAW video files, and possibly light virtualization (web servers, data gateways, maybe a VS Code server).

If there’s anything missing that would help you advise me better, please feel free to ask.
Thanks in advance - D.

First: welcome to the forums.
I use TrueNAS quite a lot and also have quite a bit of Proxmox experience; we also use some Thomas Krenn servers at work, so I hope I can give a somewhat qualified answer here.

TrueNAS also allows for some virtualization. It’s nowhere near as advanced as Proxmox in that regard, but out of the box TrueNAS is definitely a more capable hypervisor than Proxmox is a NAS.

Imho: yes. For a few web servers, whatever exactly you mean by a data gateway, and maybe a VS Code server, an EPYC 9555P is way overkill. Looking at the configurator, this upgrade alone will cost you 4610€, while even the base EPYC 9015 would probably be enough.

For ZFS: definitely an HBA; never use a RAID controller with ZFS. HBAs typically don’t have cache.
A UPS would be very advisable though.

It depends. In a rackmount case it will probably be fine if the intake air isn’t too warm. Newer HBAs have integrated temperature sensors, so you can check once the server is deployed and strap a fan to the heatsink if needed.

For ZFS? The more the better. If you want to spend the few thousand you could save on the CPU, put it toward upgrading the RAM to 256GB (or more).
Honestly though, with current pricing, I’d probably “only” order 128GB for now and hope that’s enough until RAM gets cheaper again.

Hard to say. I think the “hardest” part here will be choosing a vdev layout that matches your needs. For 24x 24TB drives and the use case “video playback, backup, storage” I’d probably do two 12-wide RAIDZ3 vdevs or three 8-wide RAIDZ2 vdevs. Generally speaking, a RAIDZ vdev only delivers about the random IOPS of a single disk, and while large sequential transfers scale better with vdev width, real-world throughput usually lands well below the sum of the disks. I don’t think you need to guarantee saturating a full 10G link with writes anyway - to do that reliably you’d need a very different (mirror-heavy) layout, and that would be a major waste of space for this use case.
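If it helps, here’s a quick back-of-the-envelope sketch comparing the two layouts. The numbers are my own assumptions (roughly 250 MB/s sequential per drive, about 10% of data capacity lost to ZFS metadata/padding), so treat the output as ballpark only, not a benchmark:

```python
# Back-of-the-envelope comparison of the two suggested layouts.
# Assumptions (mine, not measured): ~250 MB/s sequential per 24 TB HDD and
# roughly 10% of data capacity lost to ZFS metadata/padding.

DISK_TB = 24        # raw capacity per drive
DISK_MBPS = 250     # optimistic sequential speed per drive
OVERHEAD = 0.90     # fraction left after ZFS overhead

def layout(vdevs: int, width: int, parity: int) -> None:
    data_disks = width - parity
    usable_tb = vdevs * data_disks * DISK_TB * OVERHEAD
    # Streaming transfers scale at best with the data disks;
    # random/small-block I/O only scales with the number of vdevs.
    best_seq = vdevs * data_disks * DISK_MBPS
    print(f"{vdevs} x {width}-wide RAIDZ{parity}: ~{usable_tb:.0f} TB usable, "
          f"best-case sequential ~{best_seq} MB/s, IOPS of ~{vdevs} disk(s)")

layout(vdevs=2, width=12, parity=3)   # two 12-wide RAIDZ3
layout(vdevs=3, width=8, parity=2)    # three 8-wide RAIDZ2
```

Both layouts end up at the same usable capacity (~389 TB); the choice is really about resilver times and how many disk failures each vdev can survive.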

I would very much recommend adding a few SSDs though for the VMs - even if it’s just for their boot disks. You could take two relatively large SAS SSDs and mirror them as a separate fast pool, but even a single SSD will probably outperform basically any plausible HDD-only vdev configuration. You could also use multiple SSDs as a 3- or 4-way mirrored special vdev and adjust the dataset parameters so a VM dataset only gets stored on the SSDs, but I don’t think that would be worth the effort, and it would be complex for someone new to ZFS.

It would be interesting to know what “fast” backup/restore means to you :smiley: If you actually want to saturate a 10G (or even a bonded 20G) link for backups then my advice would of course be very different.

Hey, thanks so much for the in‑depth response!

I forgot to mention that I’m also planning to add around 4 TB of SSD cache. I’m still undecided whether it should be NVMe or SAS SSDs.

You’re absolutely right about the CPU: the one I listed is probably way overkill, so I’ll most likely choose something more reasonable.
Is it fair to say that RAM is more important than CPU for ZFS? If so, why is that the case?

I think it’s feasible to reduce the CPU a bit and increase the RAM to 256 GB instead.

Regarding “fast restores”: what I mean is restoring around 600–700 GB of work‑related data, mostly SQL databases. It’s not strictly required that it’s extremely fast, but since I’m already investing a lot into this build, I wanted to make sure the performance is strong.

One more thing:
Does CPU cache matter much for this use case, or should I focus more on core count or clock speed?

Thanks again for all the help :slight_smile:

ZFS uses your RAM as the primary cache, so adding “cache” drives has, in most cases, no benefit.

Databases mostly require sync writes, and sync writes to HDDs are really slow. So invest in a proper SLOG device: either an Intel Optane drive or an SSD/NVMe with PLP (power loss protection).
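To make the sync-write point concrete, here’s a tiny illustrative test you could run against a file on the HDD pool (my own sketch, with a hypothetical dataset path). The fsync-per-write case mimics what a database forces; without a fast SLOG, every one of those flushes waits on the spinning disks:

```python
# Sketch only: buffered writes vs. forcing an fsync after every write,
# which is roughly what a database does. On an HDD-only pool without SLOG
# the fsync case will be dramatically slower.
import os
import time

def write_test(path: str, sync_every_write: bool,
               count: int = 200, size: int = 16 * 1024) -> float:
    buf = os.urandom(size)
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(count):
            f.write(buf)
            if sync_every_write:
                f.flush()
                os.fsync(f.fileno())   # synchronous commit, like a DB would force
    os.remove(path)
    return time.perf_counter() - start

if __name__ == "__main__":
    target = "/mnt/tank/test/slog_demo.bin"   # hypothetical dataset path
    print(f"buffered writes:   {write_test(target, False):.2f} s")
    print(f"fsync every write: {write_test(target, True):.2f} s")
```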

And just for comparison: my “new” SSD build uses a puny Intel Xeon D-1521, which is 11 years old, has 4 cores and clocks at max 2.7 GHz. It can saturate 10G no problem.

My other build you can see in my signature. It’s now a Proxmox host and is mostly bored with what I throw at it.

So yes, your build is total overkill, but if you want it for fun, go ahead. Just don’t expect it to really matter for your use case.

Heyy, thanks a lot for the explanation!

I’ll definitely do some more research on ZFS caching in general and especially on how SLOG works.
About SLOG devices: would you recommend something like the Intel Optane H10 (1 TB + 32 GB Optane), or is that not suitable as an actual SLOG device?
From what I’ve read so far, it seems that a proper PCIe Optane drive might be the better option, but I’m not fully sure yet.

Your CPU comparison really helped put things into perspective, thanks a lot for that.

Just so I understand correctly:
You’re running Proxmox as the base OS and TrueNAS inside a VM/container and you’re not seeing any performance issues at all?

Yeah, that wouldn’t be suitable. It needs to be a regular Optane. The 900p/905p was a “prosumer” drive, not datacenter, but it works very well for a SLOG. The SLOG also doesn’t need to be big.

The Proxmox host is running a TrueNAS VM with 32GB of RAM and the SATA controller passed through (and the driver blacklisted in Proxmox). It also has its dedicated 10G Ethernet passed through from the host. The HDDs are the bottleneck. It’s the backup machine for my new primary.

TrueNAS on Proxmox has a lot of caveats. If you don’t do it right, you can lose your data.

Well, as described you’d be using 8 lanes for an HBA and 8 for a 10G NIC. EPYC 4000 could handle that, with higher clocks but more expensive UDIMMs.

EPYC 9000 might make sense if you were designing an all-flash NAS for a platoon of video editors working simultaneously, editing over 25GbE.

Ha! So far we had data storage, assumed to be bulk storage, meaning RAIDZ2/3.
But a live database part would be best handled on mirrors, at low occupancy (< 50%). Possibly on SSDs, alongside the apps and VMs.
So change the design to have at least two different pools.
Or is it NOT the actual database but only a backup?

How are you sharing? And how many concurrent users?
SMB is single-threaded per user.

I think a lot of the answers depend on your use case. If you are simply using it as a file server, it is massive overkill.

If you intend it to be your main virtualization machine, I would go with either PVE or an all-in-one NAS OS (TrueNAS SCALE, fnOS or ZimaOS…). Those all-in-one NAS OSes are nothing but a PVE with a built-in NAS server. I’m using fnOS and am quite happy. Thinking about completely moving away from PVE, btw.


I asked TrueNAS for a quote and they recommended something like this. Not really the recommendations you guys in the forum are giving me. From what I researched on my own, you all seem to be correct though xD.

So, this is what I’ll likely go with for now. I did some research based on the answers I got here, and this is the summary:

CPU →
Probably too powerful for my needs. Even with KVM + containers, I’d benefit more from investing in RAM.

RAM →
Since I’m using ZFS, RAM is important. I’ll start with 128 GB RDIMM and keep the option open to expand to 256 GB or more later.

HBA →
Yes, I’ll need one. If you have any specific recommendations, I’d appreciate it.

Optane →
Still unsure. I’ve read a few Reddit posts saying the performance difference may not be worth it. More input here would be helpful.

OS →
TrueNAS SCALE. The system will mainly serve as a storage server, and the built‑in VM and container support is all I really need.

Pools →
Not decided yet. I need to figure out the best balance between redundancy and usable capacity. I’m aiming for a 36-bay 4U chassis with around 300 TB usable to start, and ideally the option to grow toward 600 TB later (rough capacity math sketched after this list). Recommendations welcome.

SSD Cache →
Seems mostly unnecessary.

Drives →
Likely Seagate Exos or WD UltraStar, probably 24 TB+ in 3.5". Whether I choose SAS or SATA will mostly depend on pricing.
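For the pool question, here is the rough math I did to sanity-check the 300 TB / 600 TB targets. These are my own assumptions (24 TB drives, 8-wide RAIDZ2 vdevs, about 10% ZFS overhead), so it’s a sketch, not a plan:

```python
# Rough capacity check (assumed values): how many 24 TB drives a 36-bay
# chassis needs for ~300 TB usable with 8-wide RAIDZ2 vdevs, and how far
# the remaining bays can take it later.

DISK_TB = 24
OVERHEAD = 0.90
WIDTH, PARITY = 8, 2
USABLE_PER_VDEV = (WIDTH - PARITY) * DISK_TB * OVERHEAD   # ~130 TB per vdev

BAYS = 36
for vdevs in range(1, BAYS // WIDTH + 1):
    print(f"{vdevs} vdev(s) = {vdevs * WIDTH:2d} bays -> "
          f"~{vdevs * USABLE_PER_VDEV:.0f} TB usable")
# Three vdevs (24 bays, ~389 TB) cover the 300 TB start; the remaining
# 12 bays fit a fourth vdev (or a 12-wide RAIDZ3, ~194 TB) on the way
# toward ~600 TB.
```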

I am still not sure if you want to only store database backups (using xtrabackup or something similar) or if you actually want a database server to directly access the NAS.
If the first is the case then you won’t need an SLOG device or anything like that.
If it’s the second case then you will definitely need an SLOG device for proper performance (read SLOG Devices | TrueNAS Documentation Hub and ZFS ZIL and SLOG | TrueNAS Documentation Hub for details).

The 9500-16i you planned to use should be a good choice, while a 9400-16i would most probably suffice too.

Using an Optane drive instead of a normal NVMe SSD isn’t as much about performance as it is about write endurance - Optane is crazy good at that.

Again: it depends on your workload. As @etorix said, if you want the database to run on the NAS and, for example, be an SQL replication target, then <50% filled mirrors would be advisable.
If not, I’d stick with my first recommendation (8-wide RAIDZ2s or 12-wide RAIDZ3s).

Keep in mind how extremely different the use cases for L2ARC (which is called a cache vdev in ZFS) and SLOG/special devices are.


It would just be database backups. The databases run on other servers.

Is there an Optane device you would recommend?

I’ll read into that, thanks.

In that case I don’t see why you’d need an Optane device.

Honestly, if you want to use one for SLOG, an Intel Optane M10 module (which can be had very cheaply on AliExpress and the like) will probably be enough, even at 16GB - they also come in 32GB and 64GB but are much “more expensive” than the 16GB version (still pretty cheap though).

These also make great boot drives - 16GB is enough for a few TrueNAS boot environments.
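And for a rough sense of why even the 16GB version is plenty for SLOG duty, here’s the rule-of-thumb sizing I use (my assumed numbers: 10 GbE line rate, the default ~5 s transaction group interval, a safety factor of two - not an official formula):

```python
# Rule-of-thumb SLOG sizing sketch (assumed values, not an official formula):
# the SLOG only needs to hold the sync writes that arrive between transaction
# group commits, so size ~= link speed x txg interval x safety factor.

LINK_GBIT = 10      # 10 GbE uplink
TXG_SECONDS = 5     # default ZFS transaction group flush interval
SAFETY = 2          # keep a couple of txgs worth of headroom

needed_gb = LINK_GBIT / 8 * TXG_SECONDS * SAFETY
print(f"~{needed_gb:.1f} GB absorbs {SAFETY} txgs of full-rate 10 GbE sync writes")
# ~12.5 GB, so even the 16 GB M10 has headroom; larger modules only add margin.
```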