Tips for Rackmount TrueNAS build

Hi folks,

I would like to replace my 10-year-old Synology RS815 with a self-built TrueNAS box.

Not sure if it’s going to be Core or SCALE. I would prefer Core, as I’m pretty used to FreeBSD and ZFS, but the rumors about it being dead unsettle me.

# My workloads and expectations

It will be focused on storage only. It doesn’t have to do any virtualization, containers, transcoding, etc. I already have a separate machine for that.

Currently, I’m accessing the data over NFS and iSCSI.

It will run 24/7, so a good balance between energy consumption and performance is important to me.

I’m based in Europe (Switzerland, to be exact), which sometimes makes it hard to get certain parts. This is what I’ve found so far:

# Case

Must be rack mountable. This one looks nice:

Fantec SRC-2080X07-12G (2023)

It’s 2U and has 8x 3.5’’ bays with a 12G SFF-8643 backplane.

# PSU

I’m pretty confused about the form factor required for the SRC-2080X07 case.

They sell an official one called the NT-2U60E. The data sheet only mentions “2U” as the form factor.

I do not like having to buy a proprietary(?) PSU for my case, but I’m also not that familiar with the form factors that exist.

# HBA

I found the LSI SAS9300-8i controller which looks affordable and is mentioned a lot for TrueNAS setups.

Also a pair of SFF-8643 to Mini SAS cables.

Is this good or does anyone have alternatives?

# HDDs

Currently, I have about 20TB of data in use.

I got a pretty good deal on 6x WD Gold 10TB drives.

I’ll start with 5 disks in raidz2 and keep one as a spare drive. This should give me roughly 30TB (about 27TiB) of usable storage, which should be enough for a while.
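For reference, the napkin math behind that number (my own rough sketch; it ignores ZFS metadata overhead and the slop space reservation, so real usable space will be a bit lower):

```python
def raidz_usable_tib(disks: int, parity: int, disk_tb: float) -> float:
    """Rough usable capacity of a single raidz vdev in TiB.

    Only (disks - parity) drives' worth of raw space holds data;
    metadata and the ~3% slop reservation are not accounted for.
    """
    raw_bytes = (disks - parity) * disk_tb * 1e12  # data disks, decimal TB
    return raw_bytes / 2**40                       # bytes -> TiB

# 5x 10TB drives in raidz2: 3 data disks' worth of space
print(round(raidz_usable_tib(5, 2, 10), 1))  # ~27.3 TiB
```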

# Mainboard & CPU Combination

I am having a hard time finding a motherboard and CPU combination that is power-efficient, reasonably priced, and supports ECC. What are your opinions on the

AsRock Rack X470D4U with a Ryzen 5 5600?

What I like about it:

  • It has 2 M.2 slots which I could use for the OS (mirrored). Something like 2x WD Red SN700 500GB should be enough?
  • 2x 1Gbit/s Intel NICs (support for Intel NICs is better, right?). Could even do LACP.
  • IPMI
  • The CPU is very affordable, has ECC support, and a 65W TDP. Is the performance enough?

# RAM

I think a total of 64GB (ECC) should be enough?

# CPU Cooler

The Silverstone AR09-AM4 looks like it can fit into the case.

This is my first build and I may have forgotten to consider some things.

What do you think, guys? Did I forget anything? Does this make any sense at all? :smiley:

Look at the system in my signature; I’m using the same case in a self-built 19" 6U rack.

Edit: The only issue I had when I built the system around 4 years ago was that the rails for the rack were not available. I found someone on eBay who sold the original rails, and they work, but IMO they’re not of good quality…


Hey, thanks! What PSU do you use? The Fantec one or something else?

Rails are also available here. But yeah, I don’t expect that much quality :smiley:

I’m using this PSU with an adapter bracket I cut from a sheet of metal.


BTW here’s a picture of my “rack”


I’d look for a Pro 5600G or GE model; these use the mobile silicon and draw far less power than the regular desktop CPUs. Just be sure to get an unlocked model. The “Pro” buys you official ECC support, whereas non-Pro models only imply ECC support.

As for the LSI, I’d try to find a 9400-era model for lower power consumption. Just like the X710 vs. the X520-era Intel NICs, later LSI HBAs offer lower power consumption and more advanced power-saving modes.

Anywhere power is expensive, more efficient PSUs make a difference.


What “rumors”? CORE has been on life support for at least the last five years; iX haven’t done anything meaningful with it since they announced SCALE. iX have been recommending SCALE for new deployments for years now, and have said that CORE is in a “sustaining engineering phase.” There will never be another major release of CORE, it’s based on an EOL version of FreeBSD, and the most you can hope for is that you might see a hotfix if there’s a showstopper of a security issue. iX haven’t always been honest about these things, but in the past year or so they finally admitted what many of us had guessed: CORE is all but dead.

Use SCALE, or use a different OS.

That’s what I meant. Thank you for clarifying. I’m completely fine with SCALE.

I also saw the GE models, but they are really hard to get. I first had the Pro 5650G on my list, then realized that I don’t need integrated graphics and assumed the 5600 draws less power. But I see your point about ECC support. I may go back to the 5650G.

And thanks for the tip regarding the HBA. I just realized that the 9440-8i is even cheaper than the 9300 where I order from… wow.

See this thread for a really interesting discussion re: what Ryzen to go for, how to save power, etc. @Mark_the_Red did some really great research into how to build a very efficient NAS using the Ryzen 5650G.


That pointed me in the right direction. I found the Silverstone TX500 Gold.

There is a 3D-printable mounting bracket on Printables made exactly for my case.

If anyone is interested, search for SRC-2080X07 site:printables.com


haha so where’s my unicorn then? :rofl:

Have a look at ricardo.ch.

Good rackmount chassis use dedicated server-style PSUs. Rackmount chassis which use desktop-style PSUs are designed to lure consumers looking for “a rack”; expect poor build quality and dubious designs.

If you do want a rack, your best option is to buy a complete server, second hand (ricardo) or refurbished. These will not be the most power-efficient however.

With these requirements you could build in a Fractal Design Node 304 with an X10SDV-4C-TLN2F motherboard (eBay, from China). 6 drives, no HBA, low power (the CPU is 45 W at full load), your choice of ATX PSU. But not a rack case.

Oops! Raidz2 does not fit well with iSCSI.
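For anyone wondering why: zvols backing iSCSI tend to use small block sizes, and raidz rounds every allocation up to a multiple of (parity + 1) sectors, so small blocks on raidz2 carry a large parity-plus-padding overhead. A rough sketch of the allocation arithmetic (my own illustration, not from this thread; assumes ashift=12, i.e. 4 KiB sectors, and a 5-wide raidz2):

```python
import math

def raidz_alloc_sectors(block_bytes: int, width: int = 5, parity: int = 2,
                        sector: int = 4096) -> int:
    """Sectors a raidz vdev allocates for one block (data + parity + padding).

    ZFS writes `parity` parity sectors per stripe of up to (width - parity)
    data sectors, then rounds the whole allocation up to a multiple of
    (parity + 1) sectors.
    """
    data = math.ceil(block_bytes / sector)          # data sectors needed
    stripes = math.ceil(data / (width - parity))    # stripes used
    total = data + stripes * parity                 # data + parity sectors
    mult = parity + 1                               # padding granularity
    return math.ceil(total / mult) * mult

# 16 KiB blocks on a 5-wide raidz2: 9 sectors stored for 4 data sectors,
# i.e. 2.25x overhead instead of the nominal 5/3 = 1.67x.
print(raidz_alloc_sectors(16 * 1024))   # 9
# Large 128 KiB blocks come much closer to the nominal ratio.
print(raidz_alloc_sectors(128 * 1024))  # 54 sectors for 32 data sectors
```

This is why mirrors are usually recommended for block storage: a mirror’s overhead is a flat 2x regardless of block size.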


Yeah, I looked at some of them. I was even at a local recycler that gives some of them away for free, but these enterprise chassis are often proprietary as f**. I guess it depends on the vendor. The build quality may be superb and they look cool, but they also have very dubious designs, at least the ones I saw there. Getting spare parts in the future would be a nightmare. And as you said, they are not that power-efficient, so I don’t think that’s an option for me at the moment.

Interesting. Do you have more information on that? I had some really bad experiences using database containers over NFS; after switching to iSCSI, no problems at all. But maybe I’ll replace the two M.2 drives that were originally planned as OS drives with bigger ones and use them in a mirror for the DB workload. I guess two SATA SSDs for the OS should also be sufficient.

Yes, they are. They were also produced in huge numbers, so spares are widely available and inexpensive.


If you stick to name-brand chassis providers, spare parts will be plentiful. Supermicro is but one of them. Chassis don’t break, components do, and those components are usually standardized: fans, PSUs, etc.

What I would stay away from is hardware combinations that don’t use standardized motherboards, backplanes, and the like. Based on the very limited eBay listings I’ve seen, some of the larger computer OEMs (Dell, Fujitsu, etc.) built some funky boards/chassis/etc., but I have no experience with those. It likely very much depends on the particular chassis.

So if rack stuff is of interest, I’d first look into the rack size you’re after (1U, 2U, etc.), then get familiar with common components, OEMs you like, etc., and then jump over to your friend the scrapper.

For example, over here Supermicro chassis can be dirt cheap. Supermicro still makes PSUs, and they can be had in many capacities, Platinum/Titanium rated, at prices comparable to ATX. They may be louder, however, as Supermicro pretty much ignores noise as a design criterion.

Good luck. :smiley:

My experience: don’t build it yourself. Get a refurbished machine from a reputable brand via a reseller. The overall experience is much better: everything fits, everything works, the airflow is optimal, …

Then the little things pile up: multiple 10G interfaces, IPMI with remote management (VNC in hardware), redundant power supplies, etc.

Example:

You won’t get it cheaper by building yourself.


I built it myself. After seeing this, I would not do that again.

I generally agree, but with this counterpoint: if chassis depth is a concern, you’ll find precious little commercial gear that will suit. Anything I’ve encountered that would be anywhere near suitable as a NAS has been at least 750 mm deep, with 1000 mm (or close to it) fairly common. If you need much less depth than that, you may need to roll your own.

The second counterpoint is power consumption: anything you find commercially available is almost certain to be grossly overpowered in the CPU department (probably two-socket), which is going to burn lots of watts.

If those two things are acceptable, you’re going to get better hardware, cheaper, by going the “used server” route.