Snapshots after Upgrade

Original Problem: Snapshot Retention

We have two TrueNAS servers, one of which was upgraded to SCALE (Dragonfish 24.04.2.3) today.
For this scenario: TrueNAS Server 1 = Failover Server 1 and TrueNAS Server 2 = Failover Server 2.
Failover Server 1: SCALE Dragonfish 24.04.2.2
Failover Server 2: CORE 13.0-U3

Prior to the upgrade we had a replication task, along with a snapshot policy, created on Failover Server 1; this task then had to pull from our destination server, Failover Server 2. We set it up this way because CORE did not want to replicate (push) down to the SCALE TrueNAS server.

That being said, we had a snapshot policy on both servers that was set to automatically destroy the snapshots on the local servers after two weeks.

This had been working 100% up to this point.

We have now upgraded our Failover Server 2 to SCALE Dragonfish 24.04.2.3.

During the upgrade we lost our fibre connectivity; we believe this is due to the installed Emulex cards not loading the proper drivers on the SCALE server. (This resulted in using the onboard cards for access to the TrueNAS server GUI to further troubleshoot and assess the network.)
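
Assuming we still have shell access on the SCALE box, a quick check we are planning to run is to confirm whether a driver for the Emulex cards was loaded at all. We believe the relevant Linux drivers would be lpfc for the FC HBAs or be2net for the Ethernet/OneConnect cards, but that part is our assumption:

lspci | grep -i emulex                      # confirm the cards are still detected on the PCI bus
dmesg | grep -i -e emulex -e lpfc -e be2net
lsmod | grep -e lpfc -e be2net

If nothing shows up in lsmod, that would point at a missing driver rather than a cabling or switch issue.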

We also lost all of the NFS shares that had been configured, which meant re-creating the shared NFS mount points for the respective servers.
We also lost the snapshot retention policy on the local TrueNAS server (Failover Server 2).

However, we were able to import the pool without any problems, and the dataset data is active and working.

We are now restoring most of the functionality back to where it was prior to the upgrade. Regarding the snapshot retention policy, we do not want to lose the base snapshot when putting a new policy in place.

The goal we wish to achieve is to set up our snapshot task so that it continues with the snapshot history we already have on the server.
Our idea is to recreate the retention policy with the default settings and apply it at the pool level with recursive enabled, leaving the snapshot naming convention as auto-%Y-%m-%d_%H-%M, which would basically carry on the original retention policy. To give a visual, here is what the current naming scheme looks like for comparison:

RAID10-30TB/VMVHDSTORAGE/RICK@auto-2025-01-25_08-00
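
One sanity check we are planning (assuming shell access, and using the dataset above purely as our own example) is to list what is already on disk before enabling the new task:

zfs list -t snapshot -o name,creation -s creation RAID10-30TB/VMVHDSTORAGE/RICK

If the recreated task keeps the same auto-%Y-%m-%d_%H-%M naming scheme and the same two-week retention, our expectation (please correct us if this is wrong) is that new snapshots will simply continue that list and the existing ones will age out under the new policy rather than being orphaned.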

That said, on the other server in the replication pair (Failover Server 1) we obviously have a set of snapshots and data from the original replication task, with its own dates and data tied to that set.
Once we have fixed the source dataset snapshot policy on Failover Server 2:

Would it be recommended to recreate the replication task on Failover Server 2, which would then push the retention policy down to Failover Server 1?
Or do we reconstruct the original replication task so that it pulls from the source, Failover Server 2?
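
For context on why we are hesitant: our understanding is that replication in either direction can only continue incrementally if both servers still share at least one common snapshot, and that the replication task is effectively doing an incremental send from that common snapshot to the newest one. A rough manual sketch of that idea (the ssh hostname, destination dataset, and second snapshot date below are purely illustrative, not our real values):

zfs list -H -t snapshot -o name RAID10-30TB/VMVHDSTORAGE/RICK    # run on both servers to find the newest snapshot they still have in common
zfs send -I RAID10-30TB/VMVHDSTORAGE/RICK@auto-2025-01-25_08-00 RAID10-30TB/VMVHDSTORAGE/RICK@auto-2025-02-01_08-00 | ssh failover-server-1 zfs recv -F RAID10-30TB/VMVHDSTORAGE/RICK

So whichever direction we choose, the key point seems to be making sure neither retention policy destroys that common/base snapshot before the recreated task has run at least once.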

With the above theory in mind, are there any concerns with this approach, or other methods worth looking into, as a possible solution to the problem we are experiencing?