Looking for recommendations on build

I have 24 professional 4TB SAS SSDs that I would like to build a new NAS server around. I might only use 20 of them and keep 4 in reserve. I am open to any recommendations for what would work best with TrueNAS Core, but I have the following requirements:

  • ECC memory
  • As silent as possible, within reason
  • Power consumption not important, but nice if it uses less
  • Size doesn’t matter (ATX/ITX)
  • Reliable for 24/7/365 home use
  • Preferably 2.5GbE connection(s)
  • At least 2 M.2 slots, or PCIe expansion for this
  • Preferably some sort of IPMI/ILOM/service controller (optional)
  • Preferably on a budget (no professional servers, but home built economical solution)

I have another server for all my home server stuff, so this server doesn’t need to be beefy for those use cases. Just NAS.

I am looking for recommendations on motherboard/CPU/memory/PSU/SAS controller/M.2 drives, and also whether anyone has a recommendation for a case that would fit this number of SSDs. Maybe a 3D printed case could be viable, since this is only for SSDs?

Preferably this should be available in Norway, but ordering from abroad (AliExpress or similar) is a second option; however, customs duties in Norway usually add a lot to the cost.

Thank you for reading this far. I hope for some good recommendations.

Please state your use case for the server and the budget. This will ultimately dictate what can be recommended.

What I picked up on immediately: go 10G or stay at 1G. 2.5G is not recommended.


Use case is home use. I have a server running Proxmox with around 50 Docker containers doing the usual stuff: Plex, torrent downloading, Home Assistant, an audiobook and paper archive, and so on. It’s definitely overkill for SSDs, but I got them for free, so I am not looking to max out the SSDs here, just to get a good NAS that is also silent and stable. No video editing or anything else that would demand high performance.

Budget is difficult to say, since I don’t know what I would have to dish out. I am not looking for the absolutely cheapest solution, but definitely nothing professional. “Good enough” that makes economic sense.

SAS drives mean you need an HBA.
A genuine server case with a SAS backplane would make your life easier. Otherwise you’re looking at an LSI 9305-24i, lots of cables, and trouble finding a consumer case with that many 2.5" bays.
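To put rough numbers on the cabling and capacity: a minimal Python sketch, assuming a 9305-24i (six SFF-8643 ports, four lanes each, so one breakout cable serves four drives) and, purely as an example, a 2×10-drive RAID-Z2 layout. Real usable space will come out somewhat lower once ZFS overhead is accounted for.

```python
# Rough cabling and capacity math for a 20-drive SAS SSD build.
# Assumptions: LSI 9305-24i HBA (6x SFF-8643 ports, 4 lanes each) and
# an example pool of two 10-wide RAID-Z2 vdevs; ZFS metadata/padding
# overhead is ignored, so real usable space is a bit lower.

DRIVES = 20
DRIVE_TB = 4.0
LANES_PER_CABLE = 4                      # one SFF-8643 breakout -> 4 drives

cables = -(-DRIVES // LANES_PER_CABLE)   # ceiling division
print(f"Breakout cables needed: {cables}")

vdevs, width, parity = 2, 10, 2          # two RAID-Z2 vdevs of 10 drives
usable_tb = vdevs * (width - parity) * DRIVE_TB
print(f"Raw: {DRIVES * DRIVE_TB:.0f} TB, usable (pre-overhead): {usable_tb:.0f} TB")
```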

I have several old Sun X4250 servers and was considering using one of them, so I fired one up yesterday. They take up to 16 drives in front, but it makes noise like a jumbo jet and is probably crazy on power consumption. So that is a no-go.

I was thinking more along the lines of octopus (breakout) cables and possibly a 3D printed case, if anyone has a link to something that would fit. Since it’s only SSDs, it should be possible to get something small, and it shouldn’t require that much cooling?

Most of what I find in the bay/expander area is for 3.5" drives, and it would be nice to use less space than that. There are some really cheap ones on AliExpress though, like this one: https://www.aliexpress.com/item/1005005901398655.html
Two of those would be an option.

I did consider the new N100 boards, but I guess that would be difficult with only SATA ports and a very weak PCIe slot. https://www.aliexpress.com/item/1005006391942404.html

Would something like this be viable? https://www.youtube.com/watch?v=29bkHx2Kgd8

You could use the Node 804: with some 3.5’’ to 2.5’’ adapters[1] you will be able to easily fit 20 drives.

You could perhaps even go Node 304 (plenty of 3D printable improvements) with a more compact build that would pose different challenges.

If you go for the ITX build you could use a Supermicro X11SDV-8C-TLN2F, which gives you:

  • plenty of RAM and enough cores (from 4 to 16)
  • a PCIe x8 slot for an LSI 9305-24i HBA
  • a PCIe x4 slot via OCuLink that can be used for a M.2 drive
  • Dual LAN with 10GBase-T
  • low power due to SoC

  1. plenty of 3D printable options, like these on Printables ↩︎

That Node 804 with the 3D printed addons seems like a very good candidate.

The motherboard seems quite costly though, at least in Norway.

If you opt for the 804 you can go mATX, which brings down the price… and, strangely enough, ups the compromises.

What do you think of the Supermicro X11SSH-F? You can find it used on eBay for around 220€.

  • PCIe 3.0 x8 (in x16) for the HBA
  • PCIe 3.0 x8 for the 10G network card
  • PCIe 3.0 x4 (in x8) for a full speed M.2 slot
  • PCIe 3.0 x2 M.2 slot on the board
  • great availability of CPUs to choose from; an i3-7100 seems to be a good option

Also, it seems right to hit you with the 10 Gig Networking Primer | TrueNAS Community.

In STH testing, the X11SDV idles at or above 60 W! These are not exactly “low power”.
Finding a motherboard with enough slots for the HBA, a NIC and some extra NVMe is the lesser concern: there are many candidates… to be whittled down by availability in Norway.
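For perspective, a back-of-envelope sketch of what idle draw costs over a year of 24/7 operation. The 1.5 NOK/kWh all-in rate and the 30 W comparison figure are assumptions, not measurements; adjust for your own contract.

```python
# Yearly cost of idle power draw, 24/7 operation.
# PRICE is an assumed all-in rate (spot + grid fees); it varies a lot.

HOURS_PER_YEAR = 24 * 365
PRICE_NOK_PER_KWH = 1.5                  # assumption, adjust to your contract

for label, watts in [("X11SDV idle", 60), ("lower-power board", 30)]:
    kwh = watts * HOURS_PER_YEAR / 1000
    print(f"{label}: {kwh:.0f} kWh/year ≈ {kwh * PRICE_NOK_PER_KWH:.0f} NOK/year")
```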

As I wrote, low power is not my main concern here. I would like a candidate that is reasonably modern, with enough PCIe lanes and ECC memory, and that is kind of quiet; those are the main issues. Of course power is tied to how much cooling you need, which in turn affects the noise. Also, Xeon and 10GbE are not really high on my wish list. This is solely for NAS use, and not heavy use either, so 1GbE would possibly be enough, but since 2.5GbE is getting so cheap these days I think that would be sensible. A 10GbE switch also has issues when it comes to cost and cooling/noise, so I think 2.5GbE is more sensible for my use.
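For scale, a quick sketch of how long a bulk copy takes at each link speed, assuming the wire can be saturated; real-world throughput will be somewhat lower.

```python
# Transfer time for a bulk copy at each link speed (line rate assumed).

SIZE_GB = 100                            # example transfer size

for name, gbit_per_s in [("1GbE", 1.0), ("2.5GbE", 2.5), ("10GbE", 10.0)]:
    seconds = SIZE_GB * 8 / gbit_per_s   # GB -> gigabits, then / rate
    print(f"{name}: ~{seconds / 60:.1f} min for {SIZE_GB} GB")
```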

Those reasons are why I was looking at this as a possible candidate, if I could find one with a CPU that supports ECC memory… https://www.youtube.com/watch?v=29bkHx2Kgd8

If you are using SSDs, this is very realistic.

Well, is it really a no-go? And are you handy with electronics? Here is my point: remove the high-airflow fans. Those are great for high-density hard drives generating lots of heat, but you have SSDs, which generate very little heat, and the spacing will give very good airflow. That takes care of the jumbo jet. But you still need airflow for the motherboard and CPU, so you will need to look into a solution for that. Maybe mount a fan on top of the heatsink?

The only downside may be the power consumption of an older unit. I’m not sure if a new motherboard could be retrofitted into that server case, but it is worth looking into. If you can use a backplane for that many drives, you will be so much happier.

You added another requirement: “enough PCIe lanes”. What exactly is enough, to you? I assume you mean PCIe lanes exposed to the card slots, and what about bifurcation? These go hand in hand. All I will say here is: download the user manuals and read them, and check out other reviews of the motherboards you are looking at. I had to do a lot of research myself, as I too needed exposed PCIe lanes that could be bifurcated 2x2x2x2 for what I wanted to do.
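To make the lane question concrete, a toy budget check; the 16-lane CPU and the slot widths below are hypothetical placeholders, so plug in the figures from your candidate board’s manual.

```python
# Toy PCIe lane budget: do the cards fit the CPU's lane count?
# All numbers are placeholders for illustration.

cpu_lanes = 16                           # e.g. a desktop-class CPU

devices = {
    "HBA (9305-24i, x8)": 8,
    "2.5GbE NIC (x1)":    1,
    "NVMe #1 (x4)":       4,
    "NVMe #2 (x4)":       4,
}

used = sum(devices.values())
print(f"Requested {used} lanes, CPU provides {cpu_lanes}")
if used > cpu_lanes:
    print("Over budget: something drops to chipset lanes or a narrower link.")
```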

The folks here have given you a lot of good ideas; what you need to do is figure out how much effort you want to put into it. As for cost, well, nothing is cheap, but the fact that you already have the drives covers a huge part of it. I have an all-NVMe system, dead quiet, 41 watts idle and a little more when active. It was not cheap, but I wanted it, so I pulled the trigger. But I run ESXi and am very happy with it.

Good luck, you have a lot to think about.


ECC and “enough PCIe lanes” quickly drop you into Xeon/EPYC territory, which may well come with 10 GbE anyway.
MikroTik CRS305 and CRS309 switches are silent (passively cooled!). My QNAP QSW-M408-2C has a fan, but it cannot be heard. All of these are reasonably priced.


2.5G and CORE don’t mix: FreeBSD driver support for the common 2.5GbE NICs is poor, so if you want to go greater than 1G you have to go 10G.