20TB disk RAID level suggestions

Hey all, I’m finally about to build my first NAS and was hoping to get some insight into the best RAID levels to use.

The NAS will be used primarily to store large media files, and secondarily for Plex. The workload will be read-heavy, with far more reads than writes.

The disks I’ll be using are 20TB Seagate EXOS X20 drives, and these are the two options I’ve considered:

  • 8-disk vdevs in raidz2
  • 5-disk vdevs in raidz2
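
For reference, a minimal sketch of how the two layouts could be created (the pool name `tank` and the plain `diskN` device names are placeholders, not from the thread — in practice you’d use stable `/dev/disk/by-id/` names):

```shell
# Option 1: one 8-wide raidz2 vdev (6 data + 2 parity disks)
zpool create tank raidz2 disk1 disk2 disk3 disk4 disk5 disk6 disk7 disk8

# Option 2: one 5-wide raidz2 vdev (3 data + 2 parity disks);
# a second 5-wide vdev can be appended to the same pool later
zpool create tank raidz2 disk1 disk2 disk3 disk4 disk5
zpool add tank raidz2 disk6 disk7 disk8 disk9 disk10
```

ZFS stripes writes across all vdevs in a pool, so either way Plex sees one filesystem.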

In terms of resiliency to data loss and rebuild times, I’m guessing the 5-disk raidz2 is going to be a lot better? It’s also my preferred configuration, but if anyone has insight into what might be best, it would be great to hear.

I’m also not fussed about pooling all the vdevs together, so long as Plex can see them all.

How many total disks do you have? How much storage are you needing?

Don’t have the disks yet (except two that are being used as general storage, but they’ll be repurposed once my first vdev goes live).

The case I have can take 16 drives natively, or 20 if you want to get creative. If I went with 16 disks total, 2x 8-disk z2 vdevs work out to about 220TB usable, but so would 20 disks as 4x 5-disk z2 vdevs, since both layouts have 12 data disks.

And yeah, about 200TB is my ideal target: I’ve got about 80-100TB of data spread across various old drives that needs backing up, plus I’d like headroom for future storage.
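
The ~220TB figure checks out for both layouts: 12 data disks either way, and 240TB of raw data capacity is about 218 TiB once you convert from the drive makers’ decimal TB to the binary TiB that ZFS tools report. A quick sanity check (ZFS metadata/slop overhead ignored, so treat it as an upper bound):

```python
# Rough usable-capacity check for the two raidz2 layouts.
# Drives are marketed in TB (10^12 bytes); zfs/zpool report TiB (2^40 bytes).

DRIVE_TB = 20
TB = 10**12
TIB = 2**40

def usable_tib(vdevs: int, width: int, parity: int = 2) -> float:
    """Usable space of `vdevs` raidz vdevs, each `width` disks wide."""
    data_disks = vdevs * (width - parity)
    return data_disks * DRIVE_TB * TB / TIB

eight_wide = usable_tib(vdevs=2, width=8)  # 2x 8-wide = 12 data disks
five_wide = usable_tib(vdevs=4, width=5)   # 4x 5-wide = 12 data disks too

print(f"2x 8-wide raidz2: {eight_wide:.0f} TiB")  # → 218 TiB
print(f"4x 5-wide raidz2: {five_wide:.0f} TiB")   # → 218 TiB
```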

Avoid being creative and go for a maximum of 2x 8-wide then.
You don’t have to bring in all the vdevs at pool creation.
And thanks to raidz vdev expansion (which has some drawbacks), you don’t even have to bring in all the drives of a vdev at creation.
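
For what it’s worth, raidz expansion (OpenZFS 2.3+) widens an existing raidz vdev one disk at a time with `zpool attach`; a sketch with placeholder pool, vdev, and device names:

```shell
# Widen the first raidz2 vdev of pool `tank` by one disk.
# Caveat (one of the drawbacks mentioned above): data written before
# the expansion keeps its old data:parity ratio until it is rewritten.
zpool attach tank raidz2-0 /dev/disk/by-id/new-disk
```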


By creative I mean you’re able to put 2x 2-drive cages up at the top of the case where you’d normally mount fans. The drives end up upside down, but apparently that’s not a problem.

Since posting that I’ve kind of been talking myself into 5-drive raidz2: buying 5 drives is way cheaper than buying 8, and with there being fewer drives and less total data per vdev, a rebuild should hopefully be faster :thinking: I’d also much prefer to buy all the drives at once and not have to concern myself with expansion.

Note that with 100TB of existing data and 200TB of total pool size, you will hit 50% occupancy right away. Read/write speeds will start to deteriorate (likely slowly) once you pass 50% capacity, so if speed is an issue or a requirement, maybe you should aim for an even larger pool.
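
The occupancy math is simple enough to eyeball, but it also shows what a bigger pool buys you on day one (the 240TB and 280TB pool sizes below are just illustrative, not from the thread):

```python
# Day-one occupancy after loading ~100 TB of existing data,
# for a few hypothetical pool sizes.
data_tb = 100

for pool_tb in (200, 240, 280):
    occupancy = data_tb / pool_tb
    print(f"{pool_tb} TB pool: {occupancy:.0%} full on day one")
# → 200 TB pool: 50% full on day one
# → 240 TB pool: 42% full on day one
# → 280 TB pool: 36% full on day one
```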

P.S. There are plenty of resources on the forum regarding ZFS speed vs. occupancy (somewhere there is even a table of expected speeds vs. % of free space).