Hello guys,
So, I’ve finally got all the drives and I’m planning to create a pool from scratch. I see four options for the layout:
- Single VDEV: 16x16TB disks in RAID-Z2. However, the ZFS primer says: "We do not recommend using more than 12 disks per vdev. The recommended number of disks per vdev is between 3 and 9. With more disks, use multiple vdevs." So this is definitely not a good option.
- 4xVDEV: 4x16TB disks per VDEV, each in RAID-Z2. Since RAID-Z2 uses 2 disks for parity, 4 disks is the minimum vdev size, so this just works, and multiple VDEVs should give better performance (more IOPS). However, I lose 2 disks per VDEV, i.e. 8 disks to parity in total (2 disks x 4 VDEVs = 8 disks). That leaves me only half the raw space, but with good IOPS I think.
- 8xMirrors: 8 two-way mirrors would work, but I’d rather not in my case since I don’t need that much redundancy at the moment. I’m trying to strike a balance between performance and redundancy.
- 2xVDEV: 8x16TB disks per VDEV in RAID-Z2. This is what I’m actually planning to go with. That’s 2 parity disks per VDEV, i.e. 4 disks in total (2 disks x 2 VDEVs = 4), which leaves 12 disks’ worth of space for data, which is fine. I think this layout strikes the right balance between performance and redundancy: two VDEVs, RAID-Z2, and the pool can tolerate up to 4 failed disks and stay operational, as long as no more than 2 fail in the same VDEV (4 failures in one VDEV would kill that VDEV, and with it the pool). As far as I’m aware, 2 disks failing in each VDEV at the same time is very unlikely, though it could happen in a catastrophic event (PSU short circuit, fire, a bad batch, or just extreme bad luck). So if a disk in VDEV1 fails, I can replace it easily, and if another disk fails during the resilver, the pool will still survive (am I right here?), unless a third disk in VDEV1 fails during the resilver, which I guess makes the VDEV unusable, maybe dead? I’m not sure about that.
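For what it’s worth, the 2xVDEV layout above would be created with a single `zpool create` listing two `raidz2` groups. This is only a sketch: the pool name `tank` and the `sda`..`sdp` device names are placeholders I made up, and in practice you’d use stable `/dev/disk/by-id/` paths instead. The command is echoed here rather than executed; drop the `echo` to actually run it.

```shell
# Placeholder device names; use /dev/disk/by-id/ paths in a real deployment
# so device naming survives reboots and cabling changes.
VDEV1="sda sdb sdc sdd sde sdf sdg sdh"   # first 8-disk RAID-Z2 vdev
VDEV2="sdi sdj sdk sdl sdm sdn sdo sdp"   # second 8-disk RAID-Z2 vdev

# Print the create command for review (remove the echo to execute):
echo zpool create tank raidz2 $VDEV1 raidz2 $VDEV2
```

ZFS stripes writes across the two vdevs, which is where the extra IOPS over a single 16-disk vdev comes from.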
My second question: with 2 VDEVs in RAID-Z2 of 8 disks each, suppose 2 disks in VDEV1 fail in a row. The pool should still be operational, I guess; I can replace the two faulty disks and resilver. But I’m worried about what happens if another disk (a third one in VDEV1) fails in the same VDEV. Then I think the pool is toast, yeah? I know that’s unlikely, as I wrote above with the exceptions, but I’m just trying to understand the basics before the actual deployment.
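The failure rule being asked about can be written down as a tiny (toy, not ZFS-specific) shell check: each RAID-Z2 vdev survives up to 2 failed disks, a 3rd failure in the *same* vdev loses that vdev, and because the vdevs are striped, losing any one vdev loses the whole pool. The function and the scenarios below are purely illustrative.

```shell
# pool_ok takes one argument per vdev: the number of failed disks in it.
# RAID-Z2 tolerates at most 2 failures per vdev; a 3rd kills the vdev,
# and losing any vdev loses the pool (vdevs are striped, not redundant).
pool_ok() {
    for failed in "$@"; do
        [ "$failed" -le 2 ] || return 1   # >2 failures in one vdev = pool lost
    done
    return 0
}

pool_ok 2 2 && echo "pool DEGRADED but alive"   # 2 failures in EACH vdev: survives
pool_ok 3 0 || echo "pool LOST"                 # 3 failures in one vdev: gone
```

So yes: 2 failures in one vdev leaves the pool degraded but working, and a 3rd failure in that same vdev before the resilver completes means the pool is lost, regardless of the other vdev being healthy.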
Also, I’ll do the burn-in tests as per the experts’ recommendations on this forum. Any idea if I can do that through SeaTools? It’s available for Windows, Linux and DOS. I’m thinking of running a SMART test + short test + long test on each drive. Would that be sufficient for catching pre-failure disks, or are there any SMART scripts out there for this specific burn-in test?
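In case it helps others: a common alternative to SeaTools on Linux is `smartctl` (smartmontools) plus `badblocks`, and the usual burn-in loop looks roughly like the sketch below. This is an assumption on my part, not a forum-blessed script; the device list is a placeholder, `badblocks -w` is DESTRUCTIVE and must only be run before the pool is created, and the commands are echoed here as a dry run (drop the `echo` to execute, as root).

```shell
# Burn-in sketch with smartmontools + badblocks (placeholder device list).
# badblocks -w OVERWRITES the disk: only run it BEFORE creating the pool.
for d in sda sdb sdc; do
    echo smartctl -t short /dev/$d        # quick SMART self-test (~2 min)
    echo smartctl -t long  /dev/$d        # extended self-test (hours on 16TB)
    echo badblocks -b 4096 -ws /dev/$d    # full destructive write/read pass
done
# Afterwards, review each drive with: smartctl -a /dev/sdX
# Watch Reallocated_Sector_Ct, Current_Pending_Sector, Offline_Uncorrectable;
# non-zero or growing values there are the classic pre-failure signs.
```

The long self-test plus a full `badblocks` write pass exercises every sector, which is what catches most infant-mortality failures that a short test misses.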
Thanks