How bad is the failing disk? You could attach the 12TB disk to get a 3-way mirror, then once the 12TB disk has resilvered, remove the failing disk. You could even do it all at once as a 4-way mirror if you have both 12TB disks.
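A minimal sketch of that attach-then-detach sequence from the command line, assuming a hypothetical pool named `tank` with device names made up for illustration (on TrueNAS you'd normally do this through the UI instead):

```shell
# Attach the new 12TB disk alongside an existing mirror member,
# turning the 2-way mirror into a 3-way mirror (hypothetical device names)
zpool attach tank /dev/sdb /dev/sdd

# Wait for the resilver to finish before touching the failing disk
zpool status tank

# Once resilvered, drop the failing disk out of the mirror
zpool detach tank /dev/sdc
```

`zpool attach` always pairs the new disk with an existing member of the mirror vdev; `zpool detach` only works on mirror members, which is what makes this shuffle safe for mirrors but not for raidz.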
Otherwise what you’re suggesting makes sense. My VM array (3 way mirror of SSDs) has gone through 2 rounds of a similar process.
The failing disk has hundreds of checksum errors, so I guess pretty bad. I’m still wondering whether I should just offline it at this stage or keep it going. I’ve already replicated the dataset on this pool to another healthy pool, which I did as soon as I saw the disk was failing.
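For anyone weighing the same call, the checksum error counts and the offline step both go through `zpool` (pool and device names below are hypothetical; check the error counts first so you know the rest of the vdev is healthy enough to lose a member):

```shell
# Show per-device read/write/checksum error counters and any affected files
zpool status -v tank

# Take the failing disk offline; the pool keeps running degraded
# as long as the remaining mirror member(s) are healthy
zpool offline tank /dev/sdc
```

Offlining stops ZFS from issuing I/O to the bad disk, which avoids further retries slowing the pool down, at the cost of running with reduced redundancy until the replacement resilvers.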
That’s exactly the way it works, once you click the “expand pool” button. iX broke pool auto-expansion in SCALE and have decided “we meant to do that.”
On 25.10 I saw my raidz2 pool’s space automatically grow after replacing a 6TB drive with a 12TB drive. The same should happen as long as you replace the drives directly rather than creating another vdev.
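The direct-replacement path described above looks roughly like this at the CLI, assuming a hypothetical pool `tank`; the pool only grows once every drive in the vdev has been replaced with a larger one and autoexpand is enabled:

```shell
# Ensure the pool is allowed to grow into newly available space
zpool set autoexpand=on tank

# Replace the old small drive with the new larger one (hypothetical names);
# ZFS resilvers onto the new disk automatically
zpool replace tank /dev/sdb /dev/sde

# After the last small drive in the vdev is replaced, check the new capacity
zpool list tank
```

With `autoexpand=off` the extra space stays unused until the pool is expanded manually (the “expand pool” button mentioned earlier), which is why the behavior can look broken when the property isn’t set.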