If you have both the old 44x 18T JBOD and the new 78x 22T JBOD available at the same time, I would suggest you simply create a second, new pool on the new JBOD and use a ZFS replication job to send your data and existing snapshots over to the new pool.
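A minimal sketch of what such a replication does under the hood, assuming hypothetical pool names `oldpool` and `newpool` (in TrueNAS you would normally set this up as a replication task in the GUI rather than type these by hand):

```shell
# Take a recursive snapshot of everything on the old pool
zfs snapshot -r oldpool@migrate1

# Send the full dataset tree, including descendants and their
# existing snapshots (-R), to the new pool
zfs send -R oldpool@migrate1 | zfs receive -F newpool/migrated

# Later, catch up on changes made since the first send with an
# incremental stream, then cut services over to the new pool
zfs snapshot -r oldpool@migrate2
zfs send -R -i oldpool@migrate1 oldpool@migrate2 | zfs receive -F newpool/migrated
```

The incremental second pass is what keeps the service-disruption window short: the bulk copy runs while the old pool is still live, and only the final catch-up needs quiet data.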
The fastest thing to do is likely what our resident “don’t give a sh*t” suggests, with the additional (potential) advantage that you can change your pool layout as (or if) desired. But you can’t do that without at least some disruption of services. OTOH, that method leaves all your data intact on the old JBOD until you decide to decommission it.
Not quite as fast, but I’d probably vote for:
It’s what I did the last time I did an in-place upgrade on my pool: replaced 6x 4 TB disks with 6x 16 TB units, simultaneously. I expect it would have worked just as well if I were upgrading two or more vdevs at a time, but that wasn’t my situation.
@blanchet - One neat feature of OpenZFS is that you can replace in place. This is what you are doing with the extra JBOD & disks.
How this works: ZFS creates a temporary mirror between the source and destination disks. If the source suffers a block problem, the vDev's redundancy is used to populate the destination disk. If the destination disk dies during the process, no problem: your source disk is still present. When the destination disk is fully synced, the source disk is automatically detached, leaving the RAID-Z2 vDev as normal.
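In command terms that is a single `zpool replace` per disk; pool and disk names below are hypothetical placeholders:

```shell
# Attach the new disk as a temporary mirror of the old one; when the
# resilver finishes, ZFS detaches the old disk automatically
zpool replace tank da3 da44

# While it runs, the vDev shows a "replacing-N" entry containing
# both disks; watch resilver progress here
zpool status tank
```

The old disk keeps serving reads (and its redundancy) the whole time, which is why this is safer than pulling a disk and resilvering onto a blank one.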
What this means is that it is reasonably safe to perform 100% of the disk replacements at the same time. Of course, performance will suck, and managing that many simultaneous replacements is a concern too.
So if performance is not too bad, you can safely start another disk replacement per vDev. For example, if an existing set of replacements is almost complete but you want to go home, start the next set. By morning the earlier set should be done, and you did not waste any time overnight.
But replacing just 1 disk per RAID-Z2 vDev at a time is both safe and the lowest performance hit.
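The one-disk-per-vDev pattern might look like this for a pool with three RAID-Z2 vDevs (again, pool and disk names are made up for illustration):

```shell
# One replacement per vDev can run concurrently; each vDev keeps
# its full RAID-Z2 redundancy plus the temporary mirror
zpool replace tank da0  da44   # disk in first vDev
zpool replace tank da11 da45   # disk in second vDev
zpool replace tank da22 da46   # disk in third vDev

# All three resilvers proceed in parallel
zpool status tank
```

Because each replacement only touches its own vDev, no vDev ever has more than one disk in flight, which is the "safe and lower performance hit" trade-off described above.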