Updated ASRock Build Plan Feedback

After learning my lesson in Resilvering Stuck after changing disk array connector, I would like to run our new setup plan past the community :wink:

Here are the new components we are planning to use:

  • Mainboard: ASRock B650 PG Lightning, connecting 3x1TB 2.5" SSDs directly to the board, plus 192GB of RAM and an NVMe boot disk
  • Broadcom SAS 9300-16i, PCIe 3.0 x8, to connect 8x18TB TOSHIBA_MG09ACA18TE drives as ZFS raidz2 (I saw some internet posts suggesting splitting such an array into two? But that again means there is no redundancy to spare during resilvering, which I am not a fan of - e.g. debian - ZFS endless resilvering - Server Fault)
  • A bifurcation PCIe 5.0 x16 adapter for the 2TB Lexar NVMe SSDs (as already running now), which will run as ZFS raidz1 with regular backups to the HDDs

Sounds good?

For 8 drives, you could go with a 9300-8i. Remaining motherboard SATA ports can serve for spares or replacement drives; ZFS does not care where the drives are attached… as long as it is a proper controller. If you do want a -16i, go for a 9305-16i: it is a single controller, while the 9300-16i is two controllers and a switch on the same card; the 9305 uses less power.

I don't like the gaming motherboard and its distribution of slots (x16 + x2); you'd want x8 + x8. And for that amount of RAM and an NVMe array (how many of these Lexar? 4?), you may consider Xeon Scalable/EPYC.

What is the budget? This is no longer a cheap "data dump". (Although actually a second-hand Xeon Scalable or EPYC system with refurbished DDR4 RDIMM may well come out cheaper than an AM5 build with 192 GB of DDR5.)

I havenā€™t looked at the specific motherboard or whether it will interact with the PCIe cards OK, and someone knowledgeable should double check this, but at first sight it looks OK to me.

Make sure you buy or flash this into IT mode.

The maximum recommended width of a vdev is 12; 8-wide is pretty normal.

From a redundancy perspective, IMO you are better off with one 8x RAIDZ2 than two 4x RAIDZ1. If you have particularly high performance requirements (both throughput and response times) for random I/O, you may get better performance from two smaller vdevs, but unless this is a specific factor I would personally stick with an 8-wide RAIDZ2.
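To make the trade-off concrete, here is a rough back-of-the-envelope sketch (assuming 18 TB drives and ignoring ZFS metadata, padding and slop space, so treat the numbers as ballpark only):

```python
# Rough comparison of one 8-wide RAIDZ2 vs two 4-wide RAIDZ1 vdevs.
# Assumes 18 TB drives and ignores ZFS metadata, padding and slop space,
# so the numbers are ballpark figures, not exact capacities.

DRIVE_TB = 18

def raidz_usable(drives_per_vdev, parity, vdevs=1):
    """Approximate usable capacity in TB for one or more RAIDZ vdevs."""
    return vdevs * (drives_per_vdev - parity) * DRIVE_TB

layouts = {
    "1x 8-wide RAIDZ2": {
        "usable_tb": raidz_usable(8, parity=2),
        "survives": "any 2 drive failures in the pool",
    },
    "2x 4-wide RAIDZ1": {
        "usable_tb": raidz_usable(4, parity=1, vdevs=2),
        "survives": "1 failure per vdev; 2 failures in the same vdev lose the pool",
    },
}

for name, info in layouts.items():
    print(f"{name}: ~{info['usable_tb']} TB usable, survives {info['survives']}")
```

Same usable capacity either way; the difference is that RAIDZ2 survives any two failures, while the two RAIDZ1 vdevs only survive a second failure if it lands in the other vdev.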

Are you intending to retire the 8x 8TB drives? Are these 8x 18TB drives new or a move of the existing ones?

Once you have a migration plan, we would be happy to review it if you like.

We do not have a fixed budget, but the intention is to make it work on consumer-grade hardware. We already have the CPU and RAM, and those server CPUs are boldly priced: Prozessoren (CPUs) AMD mit CPU-Serie AMD: Epyc 9004/Ryzen 9000, Kerne: 16-Core heise preisvergleich Deutschland

Yes 4 Lexars, see Resilvering Stuck after changing disk array connector - #4 by melonion for details.
To what extent would the x2 slot negatively impact the HDDs connected via the Broadcom? I hope it is not as bad as this:

We are considering connecting two arrays of 8 disks - maybe that is what the two chipsets are built for? Unsure though; feel free to enlighten me, and thanks for that insight already!

Yes the 8TB disks may become part of that, and the 18TB disks are the existing ones.
Since our total current data adds up to only around 60TB, we can easily migrate by moving back and forth between the arrays.

Would love an elaboration on this, not quite sure what you mean.

Thanks for the detailed feedback already both of you!

You don't have to buy new gear. Used server gear is more suitable than gaming gear, which is not designed to run 24/7.

DDR5 RAM will give you no benefit.

Look for bundles on eBay (with this one you need to replace the CPU cooler).

The 9300-16i and also the 9305-16i are designed for server cases with lots of airflow. They will overheat in tower cases, so you need to strap a fan to the card.

You say 2 arrays of 8 disks. Are they all gonna be internal disks?

IT Mode


I'm certainly NOT advising the latest, greatest, and most expensive gear to build a NAS. (Although EPYC 8004 Siena could make for an attractive proposal.)
It's all about 1st/2nd generation Xeon Scalable (dirt cheap by now) or EPYC 7002/7003, second-hand for a home build, or refurbished from a professional for business use with an invoice.
But interestingly, just ticking some additional boxes in your price engine (AM5 Ryzen 7000/8000, the APUs being lower power, and EPYC Rome / Milan / Siena) shows that retail EPYC can come in cheaper than Ryzen! (And for 12 € extra, I'll take a 7313P over a 7950X any day…)

EPYC can run all your NVMe drives, and some more, off inexpensive adapters, whereas consumer Ryzen will require a PCIe switch if you also need lanes for an HBA. A 4-M.2-in-x8 card with a switch will cost around 200 € for PCIe 3.0, 2000 € for PCIe 5.0.
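For illustration, a crude lane tally under some assumptions (an HBA at x8, each NVMe at x4, roughly 24 usable CPU lanes on AM5 versus 128 on SP3 EPYC; exact counts vary by board):

```python
# Back-of-the-envelope PCIe lane budget for this build.
# The lane counts are illustrative assumptions: a 9305-16i wants x8, each
# M.2 NVMe wants x4, and rough platform totals are ~24 usable CPU lanes on
# AM5 consumer Ryzen vs 128 on SP3 EPYC (exact numbers vary by board).

devices = {
    "HBA (9305-16i)": 8,
    "Lexar NVMe #1": 4,
    "Lexar NVMe #2": 4,
    "Lexar NVMe #3": 4,
    "Lexar NVMe #4": 4,
    "Boot NVMe": 4,
}

needed = sum(devices.values())
platforms = {"AM5 Ryzen (CPU lanes)": 24, "EPYC 7002/7003 (SP3)": 128}

print(f"Lanes wanted at full width: {needed}")
for name, available in platforms.items():
    verdict = "fits" if available >= needed else "needs a PCIe switch or narrower links"
    print(f"{name}: {available} lanes -> {verdict}")
```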

You want 192 GB RAM? Fine. With consumer-grade DDR5 that's 4x 48 GB (and praying that your motherboard plays nice at 2 DPC), which starts at 139 € apiece. (Admittedly, not as bad as I feared.) Second-hand DDR4-3200 (because EPYC; 2666 for Xeon Scalable is even cheaper) RDIMM comes at 50 € apiece. Take 8 to fill all of the memory channels of an EPYC and that's still 400 € for 256 GB vs. 560 € for 192 GB, with proper ECC for ZFS.
(Professionally refurbished DDR4-3200 32 GB: 72 € / 87 € in the Netherlands.)
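The arithmetic, spelled out (using the street prices quoted above; treat them as snapshots from this thread, not current quotes):

```python
# Re-running the RAM price comparison, with the street prices quoted above
# (snapshots from this thread, not current quotes).

ddr5_stick_eur, ddr5_stick_gb, ddr5_sticks = 139, 48, 4   # 4x 48 GB on AM5
ddr4_stick_eur, ddr4_stick_gb, ddr4_sticks = 50, 32, 8    # 8x 32 GB RDIMM, filling all EPYC channels

ddr5_total = ddr5_sticks * ddr5_stick_eur   # 556 €, ~560 € as rounded above
ddr4_total = ddr4_sticks * ddr4_stick_eur   # 400 €

print(f"DDR5 UDIMM: {ddr5_sticks * ddr5_stick_gb} GB for {ddr5_total} €")
print(f"DDR4 RDIMM: {ddr4_sticks * ddr4_stick_gb} GB for {ddr4_total} €")
```

So the second-hand server RAM buys a third more capacity for roughly 150 € less, and it is proper ECC.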

Server-grade hardware can come cheaper than consumer-grade. Put the savings towards a better motherboard.


I'm with @etorix here - my experience with 2nd hand hardware is pretty good. My NAS is used (I have had it for a year but it is quite old), my laptop was used (but I have had it for over a decade) and ditto for my wife's laptop.

I tend to buy new disks, because I want the longest life for my use and the lowest risk of buying a dud - and because prices can be relatively high for the % of useful life already gone.

Whether that's "two pools" or "two vdevs in the same pool", you don't need two discrete controllers to run two arrays. The 9300-16i is put together with two 8-lane LSI 3008 controllers; the 9305 family uses the LSI 3216/3224 controllers, which were later designed to provide more lanes in a smaller package and lower power budget.

Not as bad, but a bottleneck, and the x2 electrical slot may not deliver enough power to the HBA card.
And starving the HBA of lanes to give them all to the NVMe drives is clearly an imbalanced design.

On second thought, I might have misread that as a bifurcating (active) adapter where you really meant a passive adapter, relying on bifurcation by the board.
Anyway, there's no escaping that a consumer-grade platform does not quite have enough PCIe lanes for all the storage you want. There might be some AM5 boards with 2 or 3 M.2 slots and switchable x16/x0 or x8/x8 slots from the CPU, which would allow running an HBA at x8, two M.2 drives in motherboard slots and two more M.2 in the x8 slot if it can still bifurcate further to x4/x4 - but that's not easy to vet, especially as block diagrams are rarely available for consumer boards. Supermicro H13SAE-MF? Asus ProArt B650?

The new EPYC 4004 series is another option. Not a lot of options from Supermicro, but there's Gigabyte and ASRock Rack:

ASRock Rack

Gigabyte

EPYC 4004 are essentially rebranded consumer Ryzen and run on the same motherboards. Not much of "another option" here.

Going for the 9305 because of lower power use at a marginally higher price, thanks!

When it comes to mainboards and processors, the decision for this build is made since, as I said, we already own the CPU and RAM. But I am open to learning.
The question with server-grade hardware like the eBay build shared is cooling - our server is next to the bedroom, so it should be quiet. Is that possible with such boards?

But generally we are also fans of second hand / refurbished hardware - even for disks if they are nicely redundant.

Why? More lanes? Lower power? Or is it about the auxiliaries you mentioned after that? Is there an advantage of the CPU itself?
Cause 50% more frequency per core sounds like a sizeable plus for the Ryzen 9.

Those look interesting. Do they support bifurcation as well for the NVMes?
But they seem to be about 100 € more than the mainboard we are planning to get, and if the only difference is marginally higher speeds for the HDD connection, I don't see the point.
The question for me is how much of a bottleneck the lane count is - do two lanes mean it can only access two disks at a time, so ZFS has to jump around all the time, or is it simply a measure of speed?
Cause honestly, for that data lake we do not need maximum speed.

The issue is with server cases, not with server boards or CPUs in themselves. But if noise and cooling are a concern, I'd go for Xeon D-1500 or Atom C3000.
And the noise floor is defined by the spinning drives. 16 drives are not going to be silent…

More, a lot more, PCIe lanes. And cheap DDR4 RDIMM. Idle power, however, is going to be much higher on the EPYC.

To do WHAT exactly? A NAS, in itself, is mostly idle; only the monthly scrub will bring it close to full CPU use. So these 16 cores have been picked up, hopefully, for another reason than running ZFS… Then, by design, an EPYC can run 24/7 at full speed on all cores within its declared thermal envelope - try that on a consumer Intel CPU for a laugh!

That is the point I cannot clarify without going through the full manual - and maybe not even then. There may be other candidates as well, but that's a lot to research.

I don't see your use case and the design goal.
Why precisely 16 cores and 192 GB RAM? (That should not be for ZFS alone.)
Why a 4x NVMe pool next to the big and bulky HDD storage?
And why is there no mention of a NIC, ever? What's the point of any flash storage if it serves files at 1 Gb/s (or 2.5)?

Two lanes limit the total throughput to and from all drives, which is an obvious bottleneck with 16 drives.
More critically, it also limits the power that is available to the HBA. A 9300-16i has an extra power connector because it could draw more than what the PCIe slot can deliver. A 9305-16i can be fed entirely from a PCIe x8 slot, but probably not from an x2.
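Roughly what the throughput side looks like in numbers (assuming about 1 GB/s usable per PCIe 3.0 lane and ~250 MB/s sequential per large CMR drive; both are ballpark figures, not measurements):

```python
# Rough throughput check for an HBA sitting behind a narrow PCIe 3.0 link.
# Assumes ~0.985 GB/s usable per PCIe 3.0 lane (after 128b/130b encoding)
# and ~250 MB/s sequential per large CMR HDD; both are ballpark figures.

GBPS_PER_PCIE3_LANE = 0.985
HDD_SEQ_MBPS = 250
drives = 16

aggregate_hdd = drives * HDD_SEQ_MBPS / 1000  # GB/s the drives could deliver
print(f"{drives} drives could stream roughly {aggregate_hdd:.1f} GB/s in aggregate")

for lanes in (2, 4, 8):
    link = lanes * GBPS_PER_PCIE3_LANE
    note = "bottleneck" if link < aggregate_hdd else "enough headroom"
    print(f"x{lanes} link: ~{link:.1f} GB/s -> {note}")
```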

Otherwise, get any board which can bifurcate the "GPU lanes" x8/x8 to two slots and throw in an adapter with a PCIe switch for the M.2.

Generally, I'm looking for balance in the allocation of resources - balance which I do not see, for instance, in the pairing of a cheap consumer motherboard with a large amount of expensive DDR5 RAM (it's still two times more expensive per GB than DDR4).

The NAS serves two main purposes:

  • Being a data lake for our media and backups, with network shares and a media server - the latter could benefit from CPU power when live-re-encoding a 4K movie, for example.
  • Running applications and VMs - this is where the RAM, CPU and NVMes come in handy, and we also thought that when VMs are down, the RAM can be utilised nicely by ZFS. I think I once heard something about 1 GB of RAM per 1 TB of storage being ideal for ZFS; that might have been one of the reasons we went that high back then, because I think our previous rig had 64 GB (a rough check of that rule of thumb follows below).
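For what it's worth, here is that old rule of thumb applied to this build; it is folklore rather than a hard ZFS requirement, and actual ARC needs depend on workload, so take it as a sanity check only:

```python
# The old "1 GB of RAM per 1 TB of storage" folklore applied to this build.
# It is a rule of thumb, not a hard ZFS requirement; actual ARC needs depend
# on workload, and here much of the RAM is earmarked for VMs anyway.

raw_hdd_tb = 8 * 18      # the 8x 18 TB RAIDZ2 drives
raw_nvme_tb = 4 * 2      # the 4x 2 TB Lexar NVMe
rule_of_thumb_gb = raw_hdd_tb + raw_nvme_tb

installed_gb = 192
print(f"Rule of thumb suggests ~{rule_of_thumb_gb} GB for {raw_hdd_tb + raw_nvme_tb} TB raw")
print(f"Installed: {installed_gb} GB -> {installed_gb - rule_of_thumb_gb} GB left over for VMs and apps")
```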

A third use we thought of was throwing in a graphics card and running some AI stuff on it, but so far TrueNAS did not seem particularly up for that.

We have a 1Gb NIC.

As for the noise: true, the drives are the main factor and they are spinning nicely (spindown is something I once investigated but so far discarded because of the arguments I found against it), but they are encased and padded on a wall mount so that their noise doesn't penetrate the walls - unlike the server case fans we once had while repairing a customer server…

According to a quick Wikipedia read, power should not be an issue, because the 9305 purportedly only uses 10 W:

  • All PCI express cards may consume up to 3 A at +3.3 V (9.9 W)
  • x1 cards are limited to 0.5 A at +12 V (6 W) and 10 W combined.
  • A full-sized x1 card may draw up to the 25 W limit after initialization and software configuration as a high-power device.

But PCIe lanes are not something I am very familiar with, so feel free to correct me here.


The Broadcom doc indicates "16.2 W typical". Try at least to find a board with an x4 slot (25 W) for it.
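Putting the figures side by side (slot limits as commonly cited for the PCIe CEM spec, 16.2 W typical from the Broadcom datasheet quoted above; a rough sketch, not a substitute for checking the board manual):

```python
# Comparing common PCIe slot power limits with the HBA's typical draw.
# Slot limits are the figures commonly cited from the PCIe CEM spec;
# 16.2 W typical for the 9305-16i is the Broadcom datasheet figure above.

HBA_TYPICAL_W = 16.2

slot_limits_w = {
    "x1 (standard)": 10,                               # ~10 W combined, as quoted above
    "x1 (high-power, after configuration)": 25,
    "x4 / x8 (standard)": 25,
    "x16 (standard, after configuration)": 75,
}

for slot, limit in slot_limits_w.items():
    verdict = "OK" if limit >= HBA_TYPICAL_W else "too tight"
    print(f"{slot}: {limit} W limit vs {HBA_TYPICAL_W} W typical -> {verdict}")
```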

There's a big difference between the motherboards I linked and a ProArt B650: the boards I linked are "server" motherboards with IPMI.