Interesting Replication Behavior

So, I had a replication target go offline for a little while and now it’s back. As the replication catches up, the run is showing an increasing amount of data sent (as expected), while the estimated total that still needs to be sent is decreasing over time (which was not expected).

At the moment, there is pretty much a 1-1 correlation between the increase in data sent and the decrease in the estimated transfer total. For example, when this replication started, over 5 TiB allegedly had to be transferred.

As of now, it looks like the two measures will converge right around 4 TiB.

Is this something common in 25.10 or just an issue with 25.10.0.1, which is what my NAS is running at the moment?

So we are a few days later and the replication continues merrily.

Note how the slow progress has not stayed 1-1 for transfer increase vs. payload decrease, as it looked a few days ago. Instead, it’s now about 2-1: the total transferred rose from 3.09 to 3.71 TiB while the total to be transferred dropped from 4.93 to 4.66 TiB.

So now it looks like the cross-over will happen closer to 4.3 TiB.
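
For what it’s worth, that estimate is just a linear extrapolation from the two readings above, assuming both counters keep moving at their current rates:

```sh
# Sent rose 3.09 -> 3.71 TiB while the estimated total fell 4.93 -> 4.66 TiB.
# They meet k intervals later, where 3.71 + 0.62k = 4.66 - 0.27k:
awk 'BEGIN { k = (4.66 - 3.71) / (0.62 + 0.27); printf "%.2f TiB\n", 3.71 + 0.62 * k }'
# -> 4.37 TiB
```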

Here we are, several days later, and as predicted, the estimated total and the amount transferred are converging around 4.3 TiB. Has anyone else seen this behavior, where the expected total payload decreased by ~20% as the replication progressed?

Different compression between source and target?
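
If that’s worth ruling out, comparing the properties on both sides should show it; something like this, with the dataset names being placeholders:

```sh
# Compare compression settings and achieved ratios, source vs. target
zfs get compression,compressratio,recordsize tank/timemachine
ssh remote-nas zfs get compression,compressratio,recordsize backup/timemachine
```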

Any block-cloning used on the source?
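
On OpenZFS 2.2+ the pool-level clone counters would reveal that; non-zero values mean cloning is in play (pool name is a placeholder):

```sh
# Check how much data on the source pool is shared via block cloning
zpool get bcloneused,bclonesaved,bcloneratio tank
```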

The TrueNAS middleware/UI might calculate the total differently than a manual command-line send, because it uses “passing the baton” replication (one incremental send per snapshot) rather than a single combined stream.
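
Roughly, a minimal sketch of the two approaches, with all dataset and snapshot names invented for illustration:

```sh
# Single combined stream: one send covering all intermediate snapshots
zfs send -I tank/ds@snap0 tank/ds@snap9 | ssh remote zfs recv backup/ds

# "Passing the baton": one incremental send per adjacent snapshot pair,
# so every completed step survives a later interruption
zfs send -i tank/ds@snap0 tank/ds@snap1 | ssh remote zfs recv backup/ds
zfs send -i tank/ds@snap1 tank/ds@snap2 | ssh remote zfs recv backup/ds
# ...and so on, one hop at a time, up to @snap9
```

Each per-snapshot send reports its own size estimate, so the running total the UI shows may be recomputed slice by slice rather than fixed up front.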


One difference is that the remote pool does not feature an sVDEV. Other than that, the snapshots should be 1-1 between the main pool and the replication target. Either way, I don’t quite understand why the total transfer needed would decrease…
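
If anyone wants to double-check that kind of parity, a diff of the snapshot name lists on both sides does it (host and dataset names are placeholders):

```sh
# Compare snapshot names between source and target, oldest first
diff <(zfs list -H -t snapshot -o name -s creation tank/timemachine | sed 's/.*@//') \
     <(ssh remote-nas zfs list -H -t snapshot -o name -s creation backup/timemachine | sed 's/.*@//')
```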

AFAIK, the answer is no. It’s just a “basic” Time Machine dataset being replicated rather slowly to a remote host.

Yeah, judging by the counter behavior, this replication seems different than in the past. It suggests that the TrueNAS app now splits the replication into smaller slices, so that an interruption doesn’t require the entire snapshot to be resent.
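
A rough sketch of why slicing would help, with names invented for illustration: after an interruption, the sender only has to restart from the newest snapshot that fully arrived on the target, not from the beginning.

```sh
# Find the newest snapshot that fully arrived on the target
last=$(ssh remote zfs list -H -t snapshot -o name -s creation backup/ds \
       | tail -1 | sed 's/.*@//')

# Resume replication from that point; earlier slices are never resent
zfs send -I "tank/ds@$last" tank/ds@snap9 | ssh remote zfs recv backup/ds
```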