Partial Pool Snapshot Transfer Possible?

Good evening. For the last couple of weeks I have been struggling with the announced resume replication feature in SCALE. This new feature is not working for me, even though both NASes are running, can communicate, and have the latest pool feature flags enabled.

A large 2.9TB dataset keeps getting its initial replication interrupted, which is not unusual given my ISP (Comcast) and occasional summer storms around here. The pipe is only 10Mbit/s wide, so the process takes a while.

Any interruption creates an error. Sometimes the two NASes simply start the replication anew; sometimes the remote dataset is flagged as busy, which I have only been able to clear by restarting the remote NAS. Either way, from my limited user perspective, the resume feature appears to be non-functional, or perhaps quite picky about when it will resume.

So, I want to consider additional options. For example, along the lines of the sneakernet question I posed a few weeks ago, would it be possible to copy only the missing snapshots to an external drive (the whole pool doesn't fit on a single drive anymore), take those to the remote NAS, and upload them directly via an import pool command? Or can only the whole pool be replicated like this?

Sure, just take the same output of zfs send, redirect it to a file rather than piping to zfs recv, sneakernet it over, and load it into zfs recv from the file.

Of course, the devil is in the details, in getting the flags right to match the existing replication.
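As a sketch (pool, dataset, and snapshot names here are placeholders, and the flags would need to match whatever your existing replication task uses):

```shell
# On the source NAS: write an incremental replication stream to a file
# on the external drive. -i sends only the delta between the two
# snapshots; add -R only if the original replication was recursive.
zfs send -i tank/data@snap1 tank/data@snap2 > /mnt/external/data.zstream

# On the destination NAS, after attaching the drive: feed the file
# into zfs recv. -s keeps a resume token if this too is interrupted;
# -F rolls the destination back to the common snapshot first.
zfs recv -s -F backup/data < /mnt/external/data.zstream
```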


Or, if you have smaller datasets / zvols, you can replicate each dataset / zvol separately.

For non-ZFS backups, I normally do rsync by file system. That way I can see which file system(s) may have errored out. Plus, there are some file systems I don't want to bother backing up, (so they are normally unmounted, but if I forget, this reminds me).

I wonder why the receive resume token is not being kept / used?

On the destination, do you see a resume token after an interrupted replication?

zfs get receive_resume_token mypool/dataset
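If a token does show up there, you should be able to resume manually from the command line. A rough sketch, with placeholder host and dataset names:

```shell
# On the source NAS: fetch the resume token from the destination...
token=$(ssh remote-nas zfs get -H -o value receive_resume_token backup/data)

# ...then restart the send from that token. -s on the receive side
# keeps the transfer resumable if it is interrupted again.
zfs send -t "$token" | ssh remote-nas zfs recv -s backup/data
```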

You could, next time, do the initial transfer via the command-line.
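Something along these lines (names are placeholders; adjust to your datasets):

```shell
# Initial transfer over SSH. -s on the receive side is the important
# part: it stores a resume token on the destination if the stream is
# interrupted, so the transfer can be picked up where it left off.
zfs send tank/data@snap1 | ssh remote-nas zfs recv -s -u backup/data
```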

Bypassing snapshot-to-snapshot isn't necessary for an initial replication. You can create a (temporary) ZFS pool on a spare drive and replicate the dataset's latest snapshot onto it. Then just replicate that dataset to the destination pool once you have physical access.
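The steps above, sketched out (device, pool, and dataset names are placeholders):

```shell
# On the source NAS: create a temporary pool on the spare drive.
zpool create shuttle /dev/sdX

# Replicate the dataset's latest snapshot onto the spare drive.
zfs send tank/data@latest | zfs recv shuttle/data

# Detach the drive cleanly before carrying it over.
zpool export shuttle

# On the destination NAS: import the pool and replicate locally.
zpool import shuttle
zfs send shuttle/data@latest | zfs recv backup/data
zpool export shuttle
```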