Just want to be sure about hardware components before buying

Hi,

I’m planning to switch from Synology to TrueNAS Scale and have spent several hours researching my future TrueNAS build.

Before purchasing all the components, I would appreciate the opinion of the TrueNAS experts in this forum on whether my setup looks solid for running TrueNAS Scale and if it’s suitable for my intended use case.

Hardware:
Mainboard: ASRock E3C246D4I-2T (https://www.asrockrack.com/general/productdetail.asp?Model=E3C246D4I-2T#Specifications)
CPU: Intel Xeon E-2104G (https://www.intel.com/content/www/us/en/products/sku/134929/intel-xeon-e2104g-processor-8m-cache-3-20-ghz/specifications.html)
RAM: 2x Kingston 16GB DDR4-2666 ECC SO-DIMM (KSM26SED8/16HD) → Total 32 GB
HDDs: 8x Seagate IronWolf Pro 24 TB

My use case:
This system will serve purely as data storage for daily use and as a backup target for other devices. It won’t be used for VMs or any heavy processing tasks.

Reason for choosing these components:
I need support for 8 drives, and having 10 GbE onboard is nice. ECC memory support is also a good add-on.

Do you think this build is suitable for my use case? And will TrueNAS Scale run smoothly on this hardware?

Thank you for your input.


The CPU is pricey for what you get; other than that, I guess it would work.

Have you mayhap considered buying used instead?
You would probably get something with equivalent performance for much less.

Don’t forget the case, fans, and possibly a CPU cooler; I’m not sure if that CPU has a SKU that comes with one. Edit: And the PSU… duh, thanks to SinisterPisces for remembering. :slight_smile:

This is me being pedantic, but you say this device will be a backup target. Does that mean you will, in total, have one copy of the data, or more than one?

Hello!

I’m fairly new to all this, so unfortunately I can’t help answer your questions directly, but from my own experience of things I didn’t consider when I was setting up my server:

  1. Try to get some idea of power consumption/heat/noise at idle for the components you’ve chosen. Your server will spend most of the time idling. That CPU is from 2018, so even though the TDP is relatively low, it might not be the most efficient of the older Xeon/EPYC options.
  2. ASRock Rack isn’t kidding about their RAM QVLs. If RAM isn’t on their list, it might not work. They’re also not kidding about the speed the RAM will run at depending on how many sticks are installed. (On some boards, more sticks == slower RAM so things stay stable.)
  3. Likewise, X550 NICs can get pretty hot. I’d suggest planning for some dedicated cooling on their chipsets.
  4. You’ve got one PCIe 3.0 x16 expansion slot. See if you can confirm whether it supports bifurcation. If it does, you might be able to break it out into more than one PCIe card, which could be useful later.
  5. You mentioned doing two 16 GB RAM sticks. Any chance you could bump that to two 32 GB sticks?
    5.1 The easiest way to increase ZFS performance is to give the ARC more RAM to work with, so you’re most likely to want to max the RAM in the future (see the rough sketch after this list).
    5.2 If you go with 2x32 GiB now, and want to expand later, you won’t have to try to sell/trade the 2x16 GiB modules, which probably won’t hold their value given that DDR4-2666 is getting a bit old at this point.
    5.3 The system also only supports DDR4-2666, so it’s not the fastest RAM; more RAM (and thus a larger ARC) helps compensate for that.
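
To put a rough number on 5.1, here’s a back-of-the-envelope sketch. The 50%-of-RAM figure is the traditional OpenZFS-on-Linux default ARC cap (recent TrueNAS SCALE releases allow more), so treat it as an assumption for illustration:

```python
# Rough ARC sizing sketch. Assumption: OpenZFS on Linux has traditionally
# capped the ARC at ~50% of physical RAM by default; newer TrueNAS SCALE
# versions raise that, so these are ballpark figures only.
def default_arc_max_gib(ram_gib: float, fraction: float = 0.5) -> float:
    """Estimate the default maximum ARC size for a given amount of RAM."""
    return ram_gib * fraction

for ram_gib in (32, 64):
    print(f"{ram_gib} GiB RAM -> ~{default_arc_max_gib(ram_gib):.0f} GiB ARC")
# 32 GiB RAM -> ~16 GiB ARC
# 64 GiB RAM -> ~32 GiB ARC
```

Doubling the RAM roughly doubles the read cache, which is the cheapest ZFS speed-up there is.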

I’m not suggesting at all that the config you’ve chosen won’t meet your needs, but those are the sorts of things I failed to think about the first time I built a server and I ended up sad about it later. :wink:


I considered buying second-hand, but I’m okay with buying “shiny” new (I know it’s not really necessary for server-grade hardware, just my personal preference).

Of course ;). The main point of my first post was to “fix” the main components, like the mainboard and CPU. All other components will follow afterwards. I plan to install a CPU fan and 2 additional fans for general cooling.

Thank you for your feedback!

I should be more precise: This build will be one of the backup targets ;). I am aware of the 3-2-1 rule.


A very good point. I have just checked the RAM QVL of the ASRock mainboard. Unfortunately, the list is very short.

My preferred module, the Kingston 16GB DDR4-2666 ECC SO-DIMM (KSM26SED8/16HD), is not exactly on the list: the last two letters of the module code differ. The QVL mentions “KSM26SED8/16ME” instead of “KSM26SED8/16HD”.

However, I can only buy the “HD” variant in my country, not the “ME” variant. Google says the last two letters denote different chip suppliers. I hope this will work fine.

Thanks for the tip! The mainboard only has 3 fan connectors. Maybe I will search for a case with a fan hub included, so I can use more fans for specific cooling purposes.

There is only one 32 GB stick on the QVL, and it is not available anymore. However, I can upgrade to 4x16 GB.

Thank you for your help!

There’s nothing wrong with your list, but for your use and limited RAM amount (no need to increase performance on a backup target…), a Core i3-8100/9100 is likely cheaper, and it does support ECC.
Likewise, if your case takes micro-ATX boards, an E3C246D4U2-2T is a better choice: standard UDIMMs rather than SO-DIMMs, more expansion, and no need to chase an SFF-8611 breakout cable.


This is a server-grade board. Check with ASRock Rack to confirm, but those fan headers should have quite a bit of amperage on them, so driving multiple fans off one header with a splitter should be fine. The last ASRock Rack board I owned had 3 amps per fan header (36 watts), so I could do pretty much whatever I wanted since I wasn’t using Delta fans. :stuck_out_tongue:

Likewise, if your case takes micro-ATX boards, an E3C246D4U2-2T is a better choice: standard UDIMMs rather than SO-DIMMs, more expansion, and no need to chase an SFF-8611 breakout cable.

+1 for full sized RAM and fewer disk breakout cables. :slight_smile:

Nice, I didn’t know that Core i3 CPUs support ECC.

The E3C246D4U2-2T sounds like a good improvement ;).

While 10 Gbps Ethernet is nice, do you need it right now? Do your backups need that kind of speed?

I ask because if you run a backup in the middle of the night, time is likely not an issue. However, if you run multiple backups a day, time may be an issue.
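
To put rough numbers on that, here’s a quick sketch; the 2 TB backup size and ~70% effective link efficiency are assumptions for illustration only:

```python
# Back-of-the-envelope backup window over 1 GbE vs. 10 GbE.
# The data size and link efficiency below are assumptions, not measurements.
def transfer_hours(data_tb: float, link_gbps: float, efficiency: float = 0.7) -> float:
    """Hours needed to move data_tb terabytes over a link_gbps link."""
    data_bits = data_tb * 1e12 * 8                 # TB -> bits
    effective_bps = link_gbps * 1e9 * efficiency   # usable throughput
    return data_bits / effective_bps / 3600

for link_gbps in (1, 10):
    print(f"2 TB over {link_gbps} GbE: ~{transfer_hours(2, link_gbps):.1f} h")
# 2 TB over 1 GbE: ~6.3 h
# 2 TB over 10 GbE: ~0.6 h
```

In practice a pool of spinners may not saturate 10 GbE anyway, so the real gap is smaller.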

Why? The cost of the overall system, including cooling and fan noise. There are some nice server boards with a built-in CPU, low power, everything you would need for a backup server, and you could still run some jails or Docker containers as well.

As said already, some of us, even experienced people, have made choices they wish they could change. The unforeseen stuff happens.

You are right - I don’t need it now. It’s just a nice add-on.

Without staggered spin-up you should get a PSU with at least 750 W.
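
For a rough sanity check of that figure, here’s a sketch; the per-drive surge and platform wattages are assumptions, so check the IronWolf Pro datasheet for real numbers:

```python
# Rough PSU sizing for simultaneous (non-staggered) HDD spin-up.
# All wattage figures are assumptions for illustration; consult the
# drive datasheet and your board's specs for real values.
DRIVES = 8
SPINUP_W_PER_DRIVE = 35   # assumed 12 V + 5 V surge while platters spin up
PLATFORM_W = 120          # assumed CPU, board, RAM, NIC, fans at boot
HEADROOM = 1.3            # ~30% margin so the PSU never runs at its limit

peak_w = DRIVES * SPINUP_W_PER_DRIVE + PLATFORM_W
print(f"Peak at power-on: ~{peak_w} W -> PSU of ~{peak_w * HEADROOM:.0f} W or more")
# Peak at power-on: ~400 W -> PSU of ~520 W or more
```

A 750 W unit comfortably clears that estimate even if the assumed figures are on the low side.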


Only up to 9th gen.

Indeed, 10GbE may be overkill for a backup server on spinners, but the trick here is that most C246 boards at retail are either unobtainium or priced in accordance with their rarity (try finding a Supermicro X11SCH…), except for the E3C246D4U2-2T, which is widely available from Chinese sellers on eBay.
U2 is the second revision of the design, and it could be that the -2T variant came after its -2L2T sibling, which is better documented online. This board was released late in the LGA1151-2 product cycle, very late, certainly too late. So there are lots of “new old stock” boards to offload.

I might have missed something.

How do we know there’s no staggered spin-up? Wouldn’t that depend on what HBA is being used?

But yes; it certainly didn’t occur to me when I started out that I needed to account for power-on being one of the most demanding things the server would do because of HDD spin-up.

Those drives pull approx. 8 watts each and are 7200 RPM, which means more heat and power consumption. I don’t personally know whether you can sleep these drives, or if you plan to power the system down when not actively in use, but power consumption and cooling could become an issue.
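
As a rough illustration of the running cost (the electricity price is an assumption; substitute your local rate):

```python
# Idle power and yearly energy for the drive set alone, using the
# ~8 W/drive figure above. The price per kWh is an assumed placeholder.
DRIVES = 8
IDLE_W_PER_DRIVE = 8
PRICE_PER_KWH = 0.30      # assumed; use your local electricity rate

idle_w = DRIVES * IDLE_W_PER_DRIVE
kwh_per_year = idle_w * 24 * 365 / 1000
print(f"{idle_w} W idle -> ~{kwh_per_year:.0f} kWh/yr -> ~{kwh_per_year * PRICE_PER_KWH:.0f} per year")
# 64 W idle -> ~561 kWh/yr -> ~168 per year
```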

As for the 10 Gbps NIC, if you can find it built into a motherboard, as said earlier, it is apt to cost a lot less than an aftermarket card.

What is your planned setup/configuration for the 8 hard drives? And do not forget that you need a single boot drive as well, preferably not USB.

As you can see, you are getting a lot of advice. If I could rebuild my system, I would make a few small changes, one of those being a 10Gbps NIC by the way. But my money is spent and I will live with this setup for about 7 years, or until it dies on me. And to be honest, I don’t even need a NAS, it was just a fun project back in 2008 for me and I’m still enjoying it after all these years.

Add a cheap Optane M.2 H20. Put your OS on the SSD portion and a metadata-only L2ARC on the Optane portion. If you’re using large block sizes (you should), you don’t need a big metadata-only disk, you just need a fast one to back the spinning rust.

I’m not sure you can break out the Optane and QLC parts as separate drives, and these drives are out of support, so the pool could not be moved to a more recent system if need be.
Bad advice, IMHO.


I thought by default they presented as two separate drives. But on looking closer, that appears to be more down to luck, or a seemingly frustrating firmware adventure.

But the point stands. Put a low-latency, metadata-only L2ARC SSD somewhere. It need not be large; 32 GB is probably enough unless you’re using small blocks. It is not required for the pool to function (unlike a special vdev), but it will speed up seeks to blocks whose metadata isn’t cached in ARC.
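
If you want to try it, here’s a minimal sketch of the two steps involved; “tank” and the device path are placeholders for your pool name and SSD, and it just shells out to the standard zpool/zfs commands:

```python
# Minimal sketch: attach a small SSD as a metadata-only L2ARC.
# "tank" and the device path are hypothetical placeholders; run as root
# and test on a non-production pool first.
import subprocess

POOL = "tank"                                # placeholder pool name
CACHE_DEV = "/dev/disk/by-id/nvme-example"   # placeholder device path

# Add the SSD as an L2ARC (cache) vdev. Unlike a special vdev, losing
# a cache device never endangers the pool, and it can be removed later.
subprocess.run(["zpool", "add", POOL, "cache", CACHE_DEV], check=True)

# Restrict L2ARC to metadata only for this pool's datasets.
subprocess.run(["zfs", "set", "secondarycache=metadata", POOL], check=True)
```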

Thank you, will buy a PSU with “power” ;).

My setup will be 1 vdev, RAIDZ2, with 8x Seagate IronWolf Pro 24 TB.
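
For reference, the rough usable capacity of that layout (this ignores ZFS overhead and reserved free space, so it’s an upper bound):

```python
# Rough usable capacity for one 8-wide RAIDZ2 vdev of 24 TB drives.
# Ignores ZFS metadata/padding overhead and reserved free space, so
# treat the result as an upper bound.
DRIVES, PARITY, TB_PER_DRIVE = 8, 2, 24

raw_tb = DRIVES * TB_PER_DRIVE
usable_tb = (DRIVES - PARITY) * TB_PER_DRIVE
usable_tib = usable_tb * 1e12 / 2**40   # drives are sold in TB, ZFS reports TiB

print(f"raw {raw_tb} TB, usable ~{usable_tb} TB (~{usable_tib:.0f} TiB) before overhead")
# raw 192 TB, usable ~144 TB (~131 TiB) before overhead
```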

I’m also looking into SSDs as a boot drive. I think the Western Digital WD Red SN700 with 500 GB is a nice choice. I’ve read some reviews of this SSD, which is said to be especially suitable for NAS systems.

Thank you @etorix for this recommendation. I’ve just ordered the ASRock Rack E3C246D4U2-2T. :slight_smile:

I am reading the documentation for this board. At first, I got a little confused by this line regarding SATA / M.2: “The M.2 slot (M2_1) is shared with the SATA_0 connector. When M2_1 is populated with a M.2 SATA3 module, the Pin 7 of SATA_0 is disabled”.

My interpretation of this sentence: since the Western Digital WD Red SN700 is an NVMe module, not a SATA one, using it in the M.2 slot would leave all 8 SATA ports of the E3C246D4U2-2T available for the 8x Seagate IronWolf Pros. Can somebody confirm this?

:laughing:

I like your optimism. Your interpretation of the manual telling you a pin will be disabled if you use the M.2 slot is that all ports will be enabled? My interpretation is that SATA_0 will not be usable if you have an M.2 module slotted in M2_1.

But, “M2_1” makes me think you have an M2_0 that you can use and keep all 8 SATA ports… Guess I’ll look at the manual…