New to TrueNAS; I created a new RAIDZ with 3 drives. How can one check the status of the initial RAID build?
I presume it works similarly to creating a RAID 5: on my QNAP NAS, creating a RAID 5 shows the initial build progress as a percentage (0 to 100%). I can't seem to find anything like that in TrueNAS.
Thanks for the response.
Out of curiosity, how does it work in TrueNAS?
I have 3x 18 TB drives; on the QNAP that would usually take days to build before I could use it.
On TrueNAS, I created the RAIDZ1 pool with the 3 disks in one vdev, and it was ready to use within minutes.
Typical RAID (hardware / mdadm) operates blindly at the block level, with no understanding of the filesystem. It has to initialise every block across the disks so that parity is consistent across the entire array.
ZFS is different. It is both the filesystem and the volume manager. It knows the disks are empty, so it can initialise the pool immediately. Parity is then calculated on the fly as each write is received.
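As a rough illustration from the shell (pool and disk names here are made up, and on TrueNAS you would normally do this through the web UI), creating a RAID-Z1 pool is essentially instant and the pool is usable straight away:

```
# Create a 3-disk RAID-Z1 pool (hypothetical pool/disk names).
zpool create tank raidz1 /dev/sda /dev/sdb /dev/sdc

# The pool reports ONLINE and is usable immediately; there is no
# initial build/sync phase to wait for.
zpool status tank
```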
I’m thinking of getting another hard drive to expand the vdev from 3 to 4 disks (from what I have been reading, that’s doable with the latest TrueNAS SCALE version).
How would that work with ZFS? Would it be similar to a RAID 5 expansion, where you just add another drive, or does it work differently?
ZFS’s RAID-Zx expansion does have to copy all the used data blocks around so that they include the new column (disk). Then, as a final step, ZFS automatically performs a scrub of the data to verify that there are no bad RAID-Zx stripes.
However, this can be faster than RAID-5 because ZFS only copies data blocks that are in use. RAID-5, on the other hand, has to read each stripe, re-calculate the stripe’s parity, then write the new column. ZFS does have to do the scrub as well, though, which re-reads the entire set of data a second time…
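For reference, here is a sketch of what the expansion looks like from the shell on a SCALE release that supports it (pool, vdev, and disk names are hypothetical; the vdev label comes from `zpool status`):

```
# Attach a fourth disk to the existing 3-disk RAID-Z1 vdev.
zpool attach tank raidz1-0 /dev/sdd

# Monitor the expansion (and the automatic scrub afterwards).
zpool status tank
```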
Note that there is a quirk with RAID-Zx expansion. Some would call it a bug, but it is one that does not affect data integrity: the free space calculation stays at the old data-to-parity ratio. Annoying, but it is what we have for now. In addition, existing data keeps the old data-to-parity ratio; only newly written data gets the new one. Of course, you can re-write the old data to force it onto the new data-to-parity ratio.
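If you do want to rewrite the old data, one way to do it (dataset names below are hypothetical, and a plain file-level copy works too) is to replicate the dataset within the pool and swap it in, which writes every block fresh at the new ratio:

```
# Replicate the old dataset into a new one (rewrites every block).
zfs snapshot tank/data@migrate
zfs send tank/data@migrate | zfs recv tank/data_new

# After verifying the copy, drop the old dataset and rename the new one.
zfs destroy -r tank/data
zfs rename tank/data_new tank/data
```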
Many people feel that single parity is a bit dangerous with disks larger than 1 TB. Partly because of the time it takes to re-silver (aka re-sync) the replacement disk into the RAID-Zx vdev, and partly because the more disks you have, and the larger they are, the greater the chance of an unrecoverable read error.
ZFS handles this better than other RAID schemes, because one bad block would only affect a single file. And with ZFS’s redundant metadata (like directory entries), if a bad block hits metadata during a RAID-Z1 re-silver, no problem: ZFS reads the other copy and fixes the broken one. A second failed disk during a RAID-Z1 re-silver, however, damages the whole pool…
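And if a bad block does slip through, ZFS will name the affected file(s) rather than silently corrupting the array. A quick sketch, again assuming a pool called tank:

```
# Verify every block against its checksum and parity.
zpool scrub tank

# Show pool health plus any files with permanent (unrecoverable) errors.
zpool status -v tank
```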