I’m in the process of building a storage server. This machine is primarily going to be used for file storage, media, Plex, etc. It’s common to have 2-5 users streaming simultaneously. It does some transcoding, and the files will get a lot of read access, but writes will be rare, only when uploading new content. In the future I plan on building another machine and syncing it off-site, but that may be 6 months to a year out. For the drives, here’s what I’ve got:
18 × 24 TB HDDs
My first thought was putting all 18 drives in a single RAID-Z3, which would give me a 3-drive failure tolerance.
The other option was to run a stripe of two RAID-Z2 vdevs, with 9 drives in each. This would give me a 2- to 4-drive failure tolerance (depending on which drives fail) and the added benefit of increased performance. I guess my only hesitation in going this route is having only 2 parity drives per vdev.
A third option would be the same layout as above, but striping two RAID-Z3 vdevs instead of Z2. I kinda ruled this out, as giving up 6 drives and a third of my total storage to parity seemed a little ridiculous.
Just looking for opinions on the best option here. What do you all recommend?
I would create 3 RAID-Z2 vdevs of 6 drives each, or 2 RAID-Z3 vdevs of 9 drives each. Anything wider than that goes against my gut feeling.
I do not have hard numbers, sorry.
Except that I’d call using a third of your raw capacity for redundancy a really good and efficient use of it: mirrors need at least half, so a third is great.
I’d recommend going with multiple RAID-Z2 vdevs instead of one giant RAID-Z3. Here’s why:
1. Failure domain & resilver risk
A single 18-wide RAID-Z3 looks nice with its 3-disk tolerance, but it’s one huge failure domain. If you lose more drives than the remaining parity can absorb during a days-long resilver (rough math on “days-long” after this list), the whole pool is gone.
Two 9-wide RAID-Z2 vdevs still give you 2-disk tolerance per vdev, so you survive any two failures and up to 4 if they’re spread across both. Resilvers are shorter.
Three 6-wide RAID-Z2 vdevs shrink the failure domain even more and make rebuilds quicker, though at the cost of 6 total parity disks.
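To put a rough number on “days-long” (back-of-envelope only, and the ~200 MB/s sustained figure is an optimistic assumption for a drive that is also serving reads): rewriting a 24 TB drive end to end takes about 24,000,000 MB ÷ 200 MB/s ≈ 120,000 s ≈ 33 hours, and a real resilver on a busy, wide RAID-Z vdev typically runs well past that, since it also has to read and reconstruct data from every other member of the vdev.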
2. Performance multipliers
This is the part that often gets overlooked. For ZFS, random IOPS scale roughly with the number of vdevs, not with the number of disks inside a vdev.
1× 18-wide Z3 → ~1× random IOPS
2× 9-wide Z2 → ~2× random IOPS
3× 6-wide Z2 → ~3× random IOPS
The same applies to multi-stream sequential reads (like your 2–5 Plex users): each additional vdev adds parallelism to the pool, so concurrent streams contend with each other less, and aggregate throughput under mixed load can approach 2–3× that of a single huge vdev. Latency also improves since there’s less queueing per vdev.
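For a rough sense of scale (rule-of-thumb numbers, not benchmarks, assuming ~150 random IOPS per 7,200 rpm HDD and the usual ZFS guidance that a RAID-Z vdev delivers roughly one member disk’s worth of random IOPS): 1 vdev ≈ 150 random IOPS, 2 vdevs ≈ 300, 3 vdevs ≈ 450. Your 2–5 Plex streams won’t stress any of these layouts, but metadata-heavy work like library scans and backup runs is where the extra vdevs get felt.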
3. Capacity tradeoff
1× 18-wide Z3 = 3 parity drives total
2× 9-wide Z2 = 4 parity drives total
3× 6-wide Z2 = 6 parity drives total
Yes, Z3 wins the raw space game, but with 24 TB drives, resilvers take a long time. Burning a couple extra disks on parity is a fair “speed and safety tax” in exchange for more IOPS and less rebuild stress.
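For concreteness with 18 × 24 TB drives (counting data disks only, before ZFS metadata/padding overhead and the free space you’d want to keep anyway): 1×18-wide Z3 leaves 15 data disks ≈ 360 TB, 2×9-wide Z2 leaves 14 ≈ 336 TB, and 3×6-wide Z2 leaves 12 ≈ 288 TB. Stepping down from the big Z3 to 2×9 Z2 costs only one drive’s worth of space (~24 TB).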
If it were my NAS, I’d pick multiple RAID-Z2 vdevs—either 2×9-wide Z2 or 3×6-wide Z2, depending on priorities.
2×9-wide Z2 → my default if I want strong performance without giving up too much space. You get ~2× the random IOPS of a single big vdev, shorter resilvers than an 18-wide, and only 4/18 disks are parity (~22%). Great balance for Plex + general file serving.
3×6-wide Z2 → my choice if I care most about resilver safety and snappy small-IO workloads. You get ~3× the random IOPS, the smallest failure domains, and the fastest rebuilds. Tradeoff: 6/18 disks are parity (~33%), so a bit less usable capacity.
I’d avoid 1×18-wide Z3: it maximizes capacity (only ~17% parity) but concentrates risk and makes resilvers long and stressful. The 6–10 disk vdev width sweet spot keeps both performance and rebuild times sane.
If you’ve got the bays, I’d lean 3×6-wide Z2; otherwise 2×9-wide Z2 is an excellent practical choice.
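If it helps to visualize the two layouts, here’s a rough command-line sketch (the pool name “tank” and the da0–da17 device names are placeholders; on TrueNAS you’d build the same thing in the GUI by grouping the disks into two or three RAID-Z2 vdevs in one pool):

# 2 × 9-wide RAID-Z2: two vdevs striped in one pool
zpool create tank \
  raidz2 da0 da1 da2 da3 da4 da5 da6 da7 da8 \
  raidz2 da9 da10 da11 da12 da13 da14 da15 da16 da17

# 3 × 6-wide RAID-Z2: three vdevs in one pool
zpool create tank \
  raidz2 da0 da1 da2 da3 da4 da5 \
  raidz2 da6 da7 da8 da9 da10 da11 \
  raidz2 da12 da13 da14 da15 da16 da17

Either way, ZFS stripes writes across all the RAID-Z2 vdevs in the pool automatically; there’s no separate “stripe” layer to configure.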
I had a 12-wide Z2 pool once, built years ago back in the old FreeNAS days. Z2 or Z3, it doesn’t matter: I would never run a wide pool again. That server was one heck of a power-hungry SOB, so I ended up not leaving it on 24/7 when electricity prices started soaring (as the likes of Al Gore stole the power narrative) and the wife started complaining about the power bill. So it became a backup storage server and I moved the “active” media library to more power-efficient hardware.

However, ignoring a big server like that (not attending to it regularly) has its own problems. That sucker started dying without me realising it, and I ended up losing a lot of the data on that pool. Long story short: it’s real hard to just pull out 12 disks and send ’em off to some data recovery centre (many of whom don’t have the faintest clue what TrueNAS actually is).
Point is: whatever you do, it’s only as good as the hardware you run it on and how well you maintain the server.
If I could afford to buy/build a big server like you’re planning, plus the power bill to leave it on 24/7, I would definitely go with the less-wide layout, i.e. 3×6. That way, if you’re really stuck and need to pull out those 6 disks and send ’em off for data recovery (or move the pool to another server, or whatever), it’s a heck of a lot less hassle than with a really wide array. As a DIY type (which I am), you can always find someone with a 6-bay NAS to rebuild/port a pool onto, or quickly build one yourself to get up and running again. Finding someone with a spare 18-bay NAS at the drop of a hat (or DIYing another big build from scratch) is another thing entirely. Just my 2c.