While mostly true, there are rare cases where errors can be corrected.
ZFS by default keeps 2 copies of metadata (directory entries, etc.) and 3 copies of critical metadata. This applies EVEN on single disks or redundant pools. ZFS considers the loss of metadata worse than the loss of a file block or even an entire file, because losing a single directory entry could take out hundreds of files.
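You can see (and raise) the relevant dataset properties yourself. The pool/dataset names below are placeholders; substitute your own:

```shell
# Show how many copies of data and metadata a dataset keeps.
# "copies" covers data blocks; "redundant_metadata" (all/most) covers metadata.
zfs get copies,redundant_metadata tank/mydata

# Optionally keep 2 copies of DATA blocks too, even on a single disk.
# Only affects newly written blocks, and doubles the space they use.
zfs set copies=2 tank/important
```

Note that `copies=2` is not a substitute for a mirror; a dead disk still takes everything with it.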
I’ve seen this in action. A scrub on my non-redundant media pool found an error but corrected it. Puzzled me for weeks until I figured it out. (Other faults on that non-redundant media pool required me to restore the file(s) from backups.)
One thing about cold storage is that you probably want to bring the storage devices back in for regular ZFS scrubs (and SMART tests...). Whether that is every 3 months or yearly is your choice. But if you wait too long and too much bit-rot has accumulated, you might lose data, even on a 2-way mirror.
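A periodic check-up session might look like this. The pool name "coldpool" and device path are placeholders; substitute your own:

```shell
# Bring the cold pool online, verify every used block, then detach cleanly.
zpool import coldpool       # attach the previously exported pool
zpool scrub coldpool        # read and checksum-verify every block ZFS is using
zpool status coldpool       # watch scrub progress and any repaired errors

# Also exercise the drive's own self-test (needs the smartmontools package).
smartctl -t long /dev/ada1  # long SMART self-test; check later with: smartctl -a

zpool export coldpool       # detach before powering the disks back down
```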
One thing a scrub can find is a block that is failing, but not yet failed. In theory, ZFS itself won't need to do anything. The storage device (disk or SSD) applies its error detection and correction code against the failing block. If recovery succeeds, it supplies the corrected block to ZFS and spares out the block. Sparing out means writing the corrected data to a new location, then updating the translation lookup table so all future references to that block use the new location.
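The spare-out mechanism is easier to see in miniature. Here is a toy Python model of it; everything here (the `Disk` class, CRC standing in for the drive's real ECC, the spare-area address) is a made-up illustration of the idea, not how any actual firmware is written:

```python
import zlib

SPARE_START = 1000  # pretend physical addresses >= 1000 are the spare area

class Disk:
    """Toy model of a drive sparing out a failing-but-recoverable block."""
    def __init__(self):
        self.physical = {}       # physical address -> (data, "ecc" checksum)
        self.translate = {}      # logical block -> physical address
        self.next_spare = SPARE_START

    def write(self, logical, data):
        phys = self.translate.get(logical, logical)
        self.physical[phys] = (data, zlib.crc32(data))
        self.translate[logical] = phys

    def read(self, logical):
        data, ecc = self.physical[self.translate[logical]]
        if zlib.crc32(data) != ecc:      # "ECC" can no longer recover the block
            raise IOError(f"unrecoverable block {logical}")
        return data

    def spare_out(self, logical):
        """Relocate a still-readable, failing block to a spare location."""
        data = self.read(logical)        # recover via ECC while we still can
        self.physical[self.next_spare] = (data, zlib.crc32(data))
        self.translate[logical] = self.next_spare  # future reads go to the spare
        self.next_spare += 1
```

The key point the model shows: after the spare-out, the host (ZFS) keeps using the same logical block number and never knows anything moved.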
None of that happens on a cold (un-powered) storage device. Nor does the storage device verify all its blocks just because it has power. That is where the strength of a ZFS scrub comes into play: it forces the storage device to read every block that ZFS is using, letting the device find, and potentially spare out, failing blocks.
I say potentially spare out. A completely failed block, where the error detecting and correcting code can't recover it and ZFS has no redundancy, means the data is gone. Thus, backups are useful.
One side note. A totally bad block will stay on a SATA disk forever, even if there are spare blocks available. The mechanism that forces the sparing is a write to that block. Then it will be spared out, with the newly written data safely stored in the new location. That is what ZFS does IF it has redundancy available to re-create the missing data block(s).
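The self-heal-by-rewrite idea can also be sketched in a few lines. This is a made-up toy model of a 2-way mirror, not ZFS code: each block's checksum lives outside the data (as it does in ZFS block pointers), a read tries each copy against that checksum, and a good copy is rewritten over the bad one; in the real world that rewrite is exactly what would force the drive to spare out the bad sector.

```python
import hashlib

def sha(data):
    return hashlib.sha256(data).digest()

class Mirror:
    """Toy 2-way mirror with ZFS-style out-of-band checksums."""
    def __init__(self):
        self.disks = [{}, {}]   # two member "disks": block number -> data
        self.checksums = {}     # checksum stored separately, like a block pointer

    def write(self, block, data):
        self.checksums[block] = sha(data)
        for disk in self.disks:
            disk[block] = data

    def read(self, block):
        want = self.checksums[block]
        for disk in self.disks:
            data = disk[block]
            if sha(data) == want:
                # Self-heal: rewrite the good copy over any bad copies.
                # On a real drive this write forces the bad sector to spare out.
                for other in self.disks:
                    if sha(other[block]) != want:
                        other[block] = data
                return data
        raise IOError(f"block {block}: no good copy left; restore from backup")
```

If both copies are bad, the `IOError` branch is the "gone, reach for backups" case from above. With no redundancy (a single-disk pool), every bitflip lands in that branch.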
Sorry for the epic response. But others in the future may want some of these details.