Replication yields empty snapshots

Good morning Community.
I need some help again in understanding snapshots a bit better.
Environment
Two TrueNAS SCALE servers: one primary and one failover.
The primary storage server is on 24.04.2.1.

The failover is on 24.04.2.2.
Both datasets are standard filesystems with LZ4 compression, and both have a one-week snapshot retention policy.

The replication is set up from the primary storage to the failover storage with the following settings:
[screenshot: replication task settings]

We set up a replication task from one TrueNAS SCALE server to the other.

Originally we replicated the dataset from the primary storage to the failover storage.
The datasets on the two servers are identical up to a certain point in time; after that, the failover server's snapshots show no change and remain that way.

Here are my primary server's snapshots:
[screenshot: primary server snapshot list]

and below are the failover server's snapshots.

Why would the failover server now show the snapshots as 0B from 20:00 onwards?

The primary server is the storage server for my hypervisor, and its data changes constantly, so the replicated snapshots should reflect the same changes. What am I missing here?
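
For what it's worth, in `zfs list` a snapshot's USED column only counts space unique to that snapshot, so 0B on its own does not necessarily mean the snapshot is broken; the WRITTEN property (data written since the previous snapshot) is more telling. A minimal sketch for spotting such snapshots (the sample output below is made up for illustration; on the failover you would feed in real output from `zfs list -t snapshot -o name,used,written -s creation <dataset>`):

```shell
# Made-up sample of `zfs list -t snapshot -o name,used,written` output,
# for illustration only -- replace with real output on the failover.
zfs_output='NAME                      USED  WRITTEN
ds@auto-2024-12-17_19-00  1.2G     3.4G
ds@auto-2024-12-17_20-00    0B       0B
ds@auto-2024-12-17_21-00    0B       0B'

# Flag snapshots where nothing was written since the previous one
# (skip the header row, check the WRITTEN column).
printf '%s\n' "$zfs_output" | awk 'NR > 1 && $3 == "0B" { print $1 }'
```

On this sample it prints the two 20:00+ snapshot names, which matches the pattern in the screenshots.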

To follow on from the above (we’re on the same team): we started a fresh replication yesterday after we noticed that replication had “stopped” working at some point. I have gone through the replication logs.

  1. The initial seed started at 18:20 yesterday, and it appears to have seeded all data to the destination by 20:30.

The last log entries from the initial seed:

[2024/12/17 20:42:03] INFO     [replication_task__task_3] [zettarepl.replication.run] For replication task 'task_3': doing push from 'RAID10-4TB-HPE/XCPVHDNFS' to 'RAID10-30TB/VHDXCPSTORAGE/PSS1/RAID10-4TB-HPE/XCPVHDNFS' of snapshot='auto-2024-12-17_17-00' incremental_base='auto-2024-12-17_16-00' include_intermediate=False receive_resume_token=None encryption=False
[2024/12/17 20:42:13] INFO     [replication_task__task_3] [zettarepl.replication.run] For replication task 'task_3': doing push from 'RAID10-4TB-HPE/XCPVHDNFS' to 'RAID10-30TB/VHDXCPSTORAGE/PSS1/RAID10-4TB-HPE/XCPVHDNFS' of snapshot='auto-2024-12-17_18-00' incremental_base='auto-2024-12-17_17-00' include_intermediate=False receive_resume_token=None encryption=False
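
Each of those log lines corresponds to one incremental send. As a sketch of roughly what that amounts to at the ZFS level (the exact flags zettarepl passes may well differ, and `failover` below is a placeholder hostname, not our real one):

```shell
# Dataset and snapshot names taken from the log lines above.
src='RAID10-4TB-HPE/XCPVHDNFS'
dst='RAID10-30TB/VHDXCPSTORAGE/PSS1/RAID10-4TB-HPE/XCPVHDNFS'
base='auto-2024-12-17_17-00'   # incremental_base in the log
snap='auto-2024-12-17_18-00'   # snapshot being pushed

# Roughly the equivalent manual incremental push; zettarepl's real
# invocation may add further flags.
cmd="zfs send -i ${src}@${base} ${src}@${snap} | ssh failover zfs recv ${dst}"
printf '%s\n' "$cmd"
```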

As the task is set to hourly replication, the next automated job (the first one was triggered manually when we started the replication) ran at 21:00 last night. The logs indicate no snapshots to send:

root@PSS1[/var/log/jobs]# cat 322506.log
[2024/12/17 21:00:01] INFO     [Thread-27980] [zettarepl.paramiko.replication_task__task_3] Connected (version 2.0, client OpenSSH_9.2p1)
[2024/12/17 21:00:01] INFO     [Thread-27980] [zettarepl.paramiko.replication_task__task_3] Authentication (publickey) successful!
[2024/12/17 21:00:02] INFO     [replication_task__task_3] [zettarepl.replication.pre_retention] Pre-retention destroying snapshots: []
[2024/12/17 21:00:02] INFO     [replication_task__task_3] [zettarepl.replication.run] No snapshots to send for replication task 'task_3' on dataset 'RAID10-4TB-HPE'
[2024/12/17 21:00:02] INFO     [replication_task__task_3] [zettarepl.replication.run] No snapshots to send for replication task 'task_3' on dataset 'RAID10-4TB-HPE/XCPVHDNFS'

As the OP’s screenshots above show, this is not the case: new snapshots were being created on the source (hourly). It’s as if the initial job ran through the initial seed, but when the second job started, it was unable to work out which snapshots to send and ultimately created blank snapshots on the destination side.
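
Since the task reported “No snapshots to send”, one thing we can do is compare the snapshot lists on both sides by hand and see what the newest common snapshot actually is. A sketch using made-up snapshot names (on real systems each list would come from `zfs list -H -t snapshot -o name -s creation <dataset>` with the dataset prefix stripped):

```shell
# Made-up snapshot name lists for illustration; on real systems each
# file would be filled from:
#   zfs list -H -t snapshot -o name -s creation <dataset> | sed 's/.*@//'
printf '%s\n' auto-2024-12-17_16-00 auto-2024-12-17_17-00 \
    auto-2024-12-17_18-00 auto-2024-12-17_19-00 \
    auto-2024-12-17_20-00 | sort > /tmp/source_snaps.txt
printf '%s\n' auto-2024-12-17_16-00 auto-2024-12-17_17-00 \
    auto-2024-12-17_18-00 | sort > /tmp/failover_snaps.txt

# Snapshots present on both sides; the last one is the newest common
# snapshot, i.e. the incremental base the next run should use.
newest_common=$(comm -12 /tmp/source_snaps.txt /tmp/failover_snaps.txt | tail -n 1)
echo "$newest_common"
```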

It’s worth noting that:

  1. Replication is hourly
  2. Snapshot task is also hourly
  3. Retention is 7 days
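
Since both the snapshot task and the replication fire hourly at the top of the hour, one thing worth ruling out (this is an assumption on our part, not something the logs confirm) is a timing race: if the 21:00 replication listed snapshots before the 21:00 snapshot had actually been created, there would indeed have been nothing new to send. Snapshot creation times can be pulled with `zfs get creation` and compared against the log timestamps; a minimal sketch with hypothetical times:

```shell
# On the source, the real creation time would come from something like:
#   zfs get -H -o value creation RAID10-4TB-HPE/XCPVHDNFS@auto-2024-12-17_21-00
# Both timestamps below are hypothetical, for illustration only.
snap_created='21:00:03'   # hypothetical creation time of the 21:00 snapshot
repl_started='21:00:01'   # from the 322506.log excerpt above

# Bash string comparison suffices because HH:MM:SS sorts lexicographically.
if [[ "$repl_started" < "$snap_created" ]]; then
    echo "replication listed snapshots before the 21:00 snapshot existed"
fi
```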

We are baffled as to why this would happen.