I plan to upgrade my main PC, so I will have a spare 3950X that I could either sell or repurpose into a more strapping NAS, since I will need more storage for high-resolution video; I will also run a few jails and use SMB as my main file-sharing protocol.
I have a few options going into this: keeping the Node 804 means going mATX, so a motherboard like the AsRock Rack X570D4U would be nice: two M.2 slots and two PCIe x8 slots (one in an x16 physical slot) would be enough for an HBA and, eventually, an SFP+ or 10GBase-T expansion card.
I will need the HBA because I plan to have at least 10TB of storage, and here in Italy I can find new 4TB IronWolf drives for 110€ each: with up to 10 slots in my case, I was planning a 6- to 10-wide RAIDZ2 VDEV.
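To get a feel for the width-versus-cost trade-off, here is a back-of-the-envelope sketch (my own simplifications: plain TB→TiB conversion, ignoring ZFS allocation overhead and the usual ~80% fill guideline, so real usable space will come out lower):

```python
# Rough RAIDZ2 sizing for 4TB drives at 110€ each (figures from above)
DRIVE_TB, DRIVE_EUR = 4, 110

for width in range(6, 11):
    data = width - 2                          # RAIDZ2 spends 2 drives on parity
    usable_tib = data * DRIVE_TB * 1e12 / 2**40
    cost = width * DRIVE_EUR
    print(f"{width}-wide: ~{usable_tib:.1f} TiB for {cost}€ "
          f"({cost / usable_tib:.0f}€/TiB)")
```

Wider vdevs amortise the two parity drives better (6-wide lands around 45€/TiB, 10-wide around 38€/TiB), at the cost of longer resilvers.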
If we leave the mATX format and go ATX, then we have the AsRock Rack B550D4M, which has a dedicated mezzanine slot for the network card, a PCIe x8 slot for an HBA, and a second PCIe x4 slot that can easily accommodate a second M.2. I would have to change case, finally finding a justification to get my own rack… and I think I am in love with the Sliger CX4712… I could go 12 drives in RAIDZ2, or maybe 5 drives in RAIDZ1 plus a hot spare (easily scalable to another 5+1), or 6 drives in RAIDZ2 (again, quite easy to double the VDEV).
Another option is going AMD EPYC™ 7F32 coupled with a ROMED8-NL (BTO) or an EPYCD8[1] for the high RAM capacity (not that I am going to use more than 500GB) and abundance of PCIe lanes, likely more than I will ever need.
All of this without considering AM5, which I have not evaluated yet. I’m open to opinions.
Also available with integrated 10GBase-T, i.e. the EPYCD8-2T. ↩︎
The B550D4M is not available. I guess I could repurpose my ASUS PRIME X570-PRO as well… giving up IPMI in the process. I admit that it’s a tempting solution, more for not contributing to the creation of more e-waste than for the money saved.
Here in Italy, 16TB IronWolfs seem to start at 360€, refurbished.
If I go with a 6-wide RAIDZ2 VDEV of 4TB drives, I get two parity drives and around 13TB of usable space for 660€, while doing so with the larger 16TB drives means going three-way mirror for a total of 1.080€: even if we add a generous 100€ for an HBA, the first option is still about 300€ cheaper.
I do realize that the cost per TB is way lower with bigger drives, but realistically speaking I don’t think I will need more than 20TB of storage in the next 5 to 10 years.
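Making the comparison explicit (a quick sketch using the prices quoted above; the usable-space figures are my rough approximations):

```python
# Option A: 6-wide RAIDZ2 of 4TB drives, plus a generous 100€ for an HBA
raidz2_cost = 6 * 110 + 100                   # 760€
raidz2_raw = (6 - 2) * 4                      # ~16TB raw, ~13TB usable

# Option B: three-way mirror of 16TB drives, no HBA needed
mirror_cost = 3 * 360                         # 1080€
mirror_raw = 16                               # one drive's worth of capacity

print(mirror_cost - raidz2_cost)              # 320€ in favour of RAIDZ2

# Per-drive raw €/TB: the big drives really are cheaper per TB...
print(110 / 4, 360 / 16)                      # 27.5 vs 22.5 €/TB
# ...but the three-way mirror's redundancy eats that advantage:
print(raidz2_cost / raidz2_raw, mirror_cost / mirror_raw)   # 47.5 vs 67.5 €/TB
```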
I selected that specific EPYC Rome mainly because of the abundance of PCIe lanes and SATA ports, as well as the accessibility of the price at around 200€ used; it has the highest clock speeds amongst those with fewer cores. Plus, an Italian with an EPYC Rome CPU? That’s sick, I could name my new NAS CAESAR!
But then I realized I could use my ATX motherboard as well, since it has an Intel I211 and supports[1] ECC… and having the chance to repurpose a significant part of my current hardware is a big plus.
The way Ryzen supports ECC on consumer-grade hardware, that is. ↩︎
Three-way mirror? I would feel safe with that, if I had a proper backup… You could consider running your current server as a backup server (replication).
No HBA needed then either, which allows you to consider the Gigabyte MC12-LE0 with your 3950X and ECC RAM.
10G NIC in the PCIe x4 slot, and the x16 slot can be used either for 4x NVMe with bifurcation, or for an x8x4x4 adapter with 2x NVMe + a small GPU for transcoding (if needed).
That was my intention if I were to use my 3950X, at least for a few really-can’t-lose-them files. For most of the pool, though, I wouldn’t have enough space. It could be a compromise.
I could also take that mATX board, slap my 3950X on it along with 128GB of ECC RAM and a 500GB M.2 SSD for L2ARC, and swing it inside my current case. But I don’t know how I would feel having just two 18TB drives in mirror… 3TB is one thing, six times that another.
I have the mainboard here with a 4650G and 128GB ECC RAM.
It works very solidly with BIOS F14 and is also very power efficient.
It’s so good that I will make it my main server, and my AsRock Rack X470D4U with a 2700X becomes my backup server.
I’ve already tested it with an HBA, an Intel X520-DA2, an ASUS Hyper M.2 and the TrueNAS SCALE Dragonfish beta - works pretty well so far.
My main server has 4x16TB Toshiba (helium) drives as a striped mirror, and I’ll add 2x Optane P1600X as a special vdev for the new setup.
The Toshibas have been running very smoothly for more than two years - enterprise grade, helium filled (lower power draw), and the best price per TB.
Check, Check, Check
I had WD Gold enterprise 10TB drives prior to that and both died - got my money back thanks to the 5-year warranty…
As you are in Italy:
I’m a German in Portugal, and prices for hardware (new and used) are very high here.
I ordered the Toshibas from Alternate Germany and had them shipped to Portugal. Thanks to EU laws, it was cheaper even including shipping costs 😁 You have to ask via a specific email, but it’s no problem at all.
How “sensitive” is TrueNAS SCALE to the speed of the boot drive?
I have ignored it so far and used leftover 250GB NVMe SSDs (from laptop SSD replacements) for the OS - therefore I figured that the PCIe x1 interface of the MC12-LE0 is no problem at all (still roughly twice the speed of a SATA SSD).
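For what it’s worth, a quick sanity check of that “twice the speed” figure (textbook line rates, not measurements, and assuming the x1 link is PCIe 3.0):

```python
# PCIe 3.0: 8 GT/s per lane with 128b/130b encoding
pcie3_x1 = 8e9 * 128 / 130 / 8    # ≈ 985 MB/s per lane
# SATA III: 6 Gb/s with 8b/10b encoding
sata3 = 6e9 * 8 / 10 / 8          # ≈ 600 MB/s (≈ 550 in practice)

print(pcie3_x1 / sata3)           # ≈ 1.64x in theory, ~1.8x vs real SATA SSDs
```

So it’s closer to 1.6-1.8x than a full 2x, but either way the boot device is, to my understanding, mostly idle after startup, so it shouldn’t matter.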
“Bifurcation riser” or “bifurcation adapter”. Since these are essentially traces on a PCB, I would trust even the most generic variety, especially for PCIe 3.0 (4.0 may require more engineering care, and possibly retimers, bumping the cost a lot).
Variants: vertical, or with a fan, like the ASUS Hyper M.2.
An x8x4x4 riser lets you fit an x8 card (HBA) and two NVMe drives (mirrored app pool, or L2ARC + single-drive app pool). (OCD warning: half-height cards have their screw in the wrong place for a full-height slot.)
So, a few things changed. I will totally need a second system, and not for backup purposes.
While fishing for possible chassis (beyond that lovely rackmount, which would however be hard to justify if I were to use the MC12-LE0), I stumbled upon the new JONSBO N4. I’m mainly concerned about the half-height PCIe slots and possible overheating issues.
The N4 appears to be a strange beast: rather than 8 HDD bays in a symmetrical 4+4 layout with symmetrical cooling, like the N3, it has 4 HDD bays on a backplane on the left and 2 HDD + 2 SSD bays on the right (no backplane, no fan but the PSU). What was the designer thinking? What’s the intended use case?
In the Jonsbo N line, the even numbers have a dubious cooling model: a 15mm-thin fan for the N2, where some static pressure is required to push air through the backplane, and half-cooling for the N4.
Half-height slots are no concern for an HBA, if needed (best not!), or an SFP+ NIC.
I’m now a huge fan of the Lian Li PC-Q26 case… but its material has long since transmuted from aluminium to unobtainium.