Pool Creation

Hello Guys,

So, I’ve got a quick question. People say RAID-Z1 is dangerous because if one drive fails and you replace it with a new one, the other disks are stressed during the resilvering, so another one might fail, and if it does, all the data is lost. Isn’t this the same for a Mirror as well?

Yes, in theory. With a mirror you are resilvering from just one disk (in a two-wide mirror), so you have one point of failure.

In a six-wide RAIDZ1 you are resilvering from five disks, so five points of failure.

Mirrors have no “parity to calculate”. A resilver for a mirror vdev basically copies (at near full speed) the data from the “good” member drive(s) to the newly added drive.

That means a smaller timeframe during which another drive failure could occur.
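
For reference, replacing a failed mirror member is a single command. The pool name tank and the device names sdb/sdx below are just placeholders; your actual device names will differ.

```
# Ask ZFS to replace the failed member sdb with the new disk sdx;
# for a mirror this copies data straight from the surviving member(s)
zpool replace tank sdb sdx

# Watch the resilver progress and the estimated time remaining
zpool status tank
```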

But with RAID-Z the disks are less stressed since there are multiple disks, right?

No, parity calculations will hit all disks.

Well, then even in a mirror, the other drive might be stressed during resilvering or simply fail, and then again the data is lost, right?

Yes.

Some people use 3-way Mirrors, either for speedier reads or to reduce risk when using Mirror vDevs for more IOPS.

Remember, if any Special or data vDev, Mirror or RAID-Z1, is completely lost, the ZFS Pool is toast. Even if you have other data vDevs that you think have complete data…

So, any way to reduce the risk? I guess multiple mirrors is the way, yeah?

For example, let’s say I have 8 disks, so I create a 4 way mirror and each will have 2 disks. So within a vdev, if a disk fails, the other can still work and the data is not lost, or am I wrong here?

Two 4-way mirrors would mean 4+4 disks (capacity of 2 disks). You mean 4 2-way mirrors, so 2+2+2+2 (capacity of 4 disks). In this config you could theoretically lose 4 disks, but not if 2 of them are in the same vdev.
You could also do an 8-wide RAIDZ2 (capacity of ~6 disks) or even RAIDZ3 (capacity of ~5 disks). Then you could lose any 2 or 3 disks.
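
To make those layouts concrete, here is roughly what the zpool create commands would look like (pool name tank and devices sda through sdh are placeholders):

```
# 4 2-way mirrors: capacity of 4 disks, survives 1 failed disk per mirror pair
zpool create tank mirror sda sdb mirror sdc sdd mirror sde sdf mirror sdg sdh

# 8-wide RAIDZ2: capacity of ~6 disks, survives ANY 2 failed disks
zpool create tank raidz2 sda sdb sdc sdd sde sdf sdg sdh

# 8-wide RAIDZ3: capacity of ~5 disks, survives ANY 3 failed disks
zpool create tank raidz3 sda sdb sdc sdd sde sdf sdg sdh
```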

Yes, it is all about risk and what you, as the owner (the decider / chooser) of the hardware and configuration, want in terms of risk, or risk reduction.

There are no magic configurations. Well, a 4 disk setup can have a somewhat magic configuration: either 2 2-way Mirrors, where you can lose 1 disk per Mirror pair but NOT both in a Mirror pair, or a 4 disk RAID-Z2, where you can lose ANY 2 disks.
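
As a quick sketch, those two 4-disk options would be created roughly like this (placeholder names again):

```
# 2 2-way Mirrors: can lose 1 disk per pair, but NOT both disks of the same pair
zpool create tank mirror sda sdb mirror sdc sdd

# 4-disk RAID-Z2: can lose ANY 2 of the 4 disks
zpool create tank raidz2 sda sdb sdc sdd
```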

ZFS does attempt to make things more reliable than prior file systems. By default, ZFS keeps 2 copies of standard metadata (like file directory entries), and, again by default, if there are 2 or more vDevs, it puts the copies on different vDevs. Further, more critical metadata has 3 copies by default, again spread around if possible.

Even though this takes more storage space, ZFS takes the approach that any Metadata is important, and deserves extra copies.
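
If you want to check or tune this, the relevant dataset properties can be inspected with the zfs command; tank/media below is just a placeholder dataset name.

```
# Show how much metadata redundancy is in effect (default: all)
# and how many copies of each data block are kept (default: 1)
zfs get redundant_metadata,copies tank/media

# Optionally extend the extra-copies idea to file data as well;
# this roughly doubles the space used by that dataset's data
zfs set copies=2 tank/media
```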

This can allow some odd behavior. My miniature media server has 2 storage devices (not running TrueNAS, just Linux). I partitioned both and made a small portion available as a boot Mirror using ZFS. The rest is striped (aka no data redundancy) for my Media, which I have multiple backups of.
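
A minimal sketch of that kind of layout, assuming each disk was split into a small first partition and a large second partition (device names are placeholders, and an actual boot setup involves more than just the pool):

```
# Small mirrored pool for the OS, built from the first partition of each disk
zpool create rpool mirror /dev/sda1 /dev/sdb1

# Striped pool (no redundancy) for the media, from the big second partitions
zpool create media /dev/sda2 /dev/sdb2
```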

Every now and then I lose a video file (because they are larger than Music or Photos, statistics mean a higher chance of losing a larger file…). I simply remove the bad file and restore it from backups. That’s really nice of ZFS: all on-line recovery, with no reboot or other funny steps to restore functionality.

But once I saw a read error, except ZFS fixed it, with no pool data loss. HOW??? It took me a few weeks to realize it must have been redundant metadata. ZFS had a redundant copy, and when the copy it read was bad, it simply read the other copy and repaired the bad one.
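
For anyone curious, this is the sort of thing zpool status and a scrub surface (tank is a placeholder pool name):

```
# Show read/write/checksum error counters and, with -v, the affected file paths
zpool status -v tank

# Re-read and verify every block so ZFS can repair anything it has redundancy for
zpool scrub tank

# After removing/restoring the bad files, reset the error counters
zpool clear tank
```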

Now does this help most people?
No. But it does show some of the differences that were designed into ZFS.
