Looking to build my first NAS (TrueNAS Scale) to run a 5 x 24TB vdev in RAIDZ2 (I'll add another 5 x 24TB vdev in a few years). I was going to do a simple Gen 4 build, but since longevity (hopefully lasting 10+ years) is something I've had in mind, some have strongly suggested I go Gen 5.
This will mainly be for automated weekly backups of all my data; I'll also power it on once or twice a week (as needed) to access large video and music project files.
Would it be way better in my case to spend about $1000 on Gen 5 parts:
$500 (?) (Gen 5; 2 x 32GB ECC RAM)
$325 (mobo, Gen 5) - ASRock Rack B650D4U-2L2T/BCM
$130 (CPU) - any AMD Ryzen 9000/8000/7000 off eBay (?)
Or go with the specs I'm currently looking at (Gen 3/4), which should cost me about $630?
$350 (Gen 4; 2 x 32GB ECC RAM)
$150 (mobo) - ASRock B550 Pro4
$130 (CPU) - Ryzen 7 PRO 4750G (used off eBay)
Considerations:
Looking to get a mobo + CPU that supports ECC functionality
Looking to get a CPU with onboard graphics (to avoid running a separate GPU that draws power and takes up an extra PCIe slot)
Other parts I will get:
HDD storage: 5 x Western Digital Ultrastar DC HC580 24TB 7.2K RPM SATA 6Gb/s 512e 3.5in Recertified Hard Drive
HBA: LSI 9300-8i (still the best value HBA) - $99
Parts I Already Have:
Cooler Master HAF 922 Case (5 x 3.5" HDD bays + 5 x 5.25" bays)
Who suggested it, and WHY? As stated, you do not have any part that needs PCIe 5.0 for now, and the HBA is PCIe 3.0.
Note that with a server motherboard such as the B650D4U you do not need a GPU at all: TrueNAS runs headless.
So my answer would point towards a Gen3/4 build… but preferably with a server motherboard rather than a consumer one—that’s also a consideration to last “10+ years”.
Considering this is a backup target and your budget, I'll be very impressed if your data transfer needs manage to saturate a PCIe 4.0 x16 link within the next 10 years, let alone Gen 5.
If you’re planning something like having a bastardized NAS/Hypervisor/AI training powerhouse then PCIe gen 5 could be interesting. If you’re sticking to something that’ll be occasionally powered on for backups, go with the cheaper option.
I forgot to bring up noise: most likely I'll be keeping this NAS in my bedroom (at least for now; I might move it to the garage down the line).
So noise is also a concern for me.
Would running server grade hardware be much louder or hotter, versus the consumer parts I’m looking at?
Eventually I do want to run server-grade hardware, once I learn more and can set aside space in a separate room or the garage.
If you're only using it for file storage, a used Supermicro X10 or X11 board with a Xeon CPU of 2 GHz or more will work just fine. And use registered DDR4 RAM.
The drives will be noisy, but can be set to spin down when not in use.
Also, that SAS controller should have a fan aimed at it to prevent overheating.
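On the spin-down point: on Linux this is typically done with `hdparm -S`, which TrueNAS SCALE surfaces as the "HDD Standby" disk setting. A minimal sketch, assuming SATA drives that honor ATA standby timers; `/dev/sdX` and the 2-hour timeout are placeholders, not recommendations:

```shell
# Sketch: translate an idle timeout in minutes into hdparm's -S encoding,
# then print (not run) the resulting command. /dev/sdX is a placeholder.
# hdparm -S: values 1-240 mean N*5 seconds; 241-251 mean (N-240)*30 minutes.
spindown_value() {
  mins=$1
  if [ "$mins" -le 20 ]; then
    echo $(( mins * 60 / 5 ))            # encode in 5-second units
  else
    echo $(( 240 + (mins + 29) / 30 ))   # encode in 30-minute units, rounded up
  fi
}

# A long timeout (e.g. 2 hours) keeps the daily spin-up count low:
echo "hdparm -S $(spindown_value 120) /dev/sdX"
```

Picking a timeout measured in hours rather than minutes is what keeps the cycle count per day reasonable.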
Just the same: Same CPU, same heat. Only fan management might be different, but if you go to ten spinners, THAT will be your main source of noise.
Fair advice, though a used X10SDV board (for RDIMM) would be about ten years old already…
An X11SS_ or X11SC_ board would still use ECC UDIMM but provide 8 SATA ports; if that's enough (two 4-wide Z2 vdevs, or a single Z2 vdev expanded up to 8-wide) you might do without the HBA. In this generation the ASRock Rack E3C246D4U2-2T (equivalent to a Supermicro X11SCH-TLN2F… which does not exist with this networking configuration) can be found new (old stock) on eBay from China, but if you're in the US, tariffs probably ruin this deal.
Hard drive spin-down is not recommended for ZFS file systems and can cause file corruption. My understanding is that ZFS counts on the file system being available for 'heartbeat checks' on a timely basis (usually every 5 minutes or so). When drives are spun down, the ZFS system can be subject to cache corruption caused by delayed writes.
Wrong. It is not recommended to spin drives down and up repeatedly, due to concern for mechanical wear. It is however reasonable to spin drives down if they can remain so for hours—implying a limited number of cycles per day.
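The "limited number of cycles per day" point can be put in rough numbers. A back-of-the-envelope sketch; the 50,000 rated start/stop cycles below is an assumed figure, not a spec for any particular drive, so check your drive's datasheet:

```shell
# Rough wear budget: spin-up cycles per day a drive can tolerate
# across its service life. The rated figure is an ASSUMPTION; check
# the start/stop cycle spec on your drive's datasheet.
rated_cycles=50000   # assumed rated start/stop cycles
years=10             # target service life from the original post
per_day=$(( rated_cycles / (years * 365) ))
echo "~$per_day spin-up cycles/day stays within the rated budget"
```

A handful of spin-ups per day fits comfortably in that budget; spinning down after every few minutes of idle does not.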
Beside the fact that this implies that there really is no activity on the pool (system dataset), the main issue is that TrueNAS-the-company appears to have decided, for the sake of simplicity, that drives should be spinning 24/7 in all cases and is actively removing power management features from TrueNAS-the-OS, while enforcing “monitoring” which forcefully wakes drives up.
I guess we’ll have to agree to disagree on that one.
If you have a cached write and take a power hit before your drives spin up to actually perform the write, that's a recipe for data corruption. So it's your data… you do you.
Secondly, many spinning-rust hardware failures occur on spin-up… so repeatedly spinning drives down over time is, in my opinion, counter-productive: the little you save in power doesn't account for the extra wear and tear you put on the drives. If you want power savings, use SSDs.
My plan is to keep it in my room at first, for setup / troubleshooting / etc., and also to utilize the 10GbE port on my desktop PC in my room, which will be 2 feet away.
Later I'll move it to the garage, but I need to figure out wiring via the attic, set up a switch, etc. (that'll be the next project).