Backup (snapshot/replication) of a Backup TrueNAS Scale Server

Hello Community!
I am not totally new to TrueNAS but far from being an expert, so I hope you can help me or give me some tips on my question/problem.

In my local homelab I have the following running:
TN Scale Prod → TN Scale BackupLocal (via Snapshots and Replication)

This has been running flawlessly for several months. Now I wanted to implement a second TN Scale backup server, which I wanted to place remotely at my parents' house (after the initial backup has been done locally) - so a “backup of my backup server”:
TN Scale Prod → TN Scale BackupLocal → TN Scale BackupRemote

Ideally the snapshot policy for “TN Scale BackupLocal → TN Scale BackupRemote” would be less frequent but with a longer retention period.
The reason for using TN Scale BackupLocal instead of TN Scale Prod for the remote backup is that Prod is quite busy while BackupLocal is mostly idle.

Anyhow, I tried various snapshot and replication settings between BackupLocal and BackupRemote but never got this to work.
Usually there will be one snapshot on BackupLocal that then gets replicated to BackupRemote, but the next snapshot/replication from Prod to BackupLocal removes it from BackupLocal and I have to start from scratch.

I wonder if this is really not doable at all (again: I am not a ZFS snapshot/replication expert), or if I am just too stupid to get it to work.

Or should I maybe do it this way? Not tested, though …
TN Scale Prod → TN Scale BackupLocal
TN Scale Prod → TN Scale BackupRemote

Any comments or ideas are welcome.
THANKS for your feedback and have a great day!
Bye,
CoolWolf

You shouldn’t be creating new snapshots on BackupLocal. Instead, use existing snapshots to send from BackupLocal → BackupRemote.

Here’s an illustration for reference.


Yes, this can work.

1 Like

I do this.

The way I do it is to take snapshots on the “middle” or secondary NAS with a different naming schema from the parent/primary NAS.

These snapshots get replicated to the tertiary NAS. The middle/secondary NAS is responsible for deleting those snaps.

Each NAS needs its own schema.

So, for example: “nasname-auto-hourly”, etc.
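To illustrate the schema idea above, here is a minimal Python sketch (the snapshot names are made up, and this is not how TrueNAS is implemented internally): each tier filters on its own naming schema, so one tier's replication and pruning never touch another tier's snapshots.

```python
import re

# Hypothetical snapshot list on the secondary NAS: two snaps replicated
# from the primary ("prod-*") and one created locally ("backuplocal-*").
snapshots = [
    "tank/data@prod-auto-hourly-2024-06-01_12-00",
    "tank/data@prod-auto-hourly-2024-06-01_13-00",
    "tank/data@backuplocal-auto-monthly-2024-06-01_00-00",
]

def match_schema(snaps, prefix):
    """Return only the snapshots that belong to one NAS's naming schema."""
    pattern = re.compile(rf"@{re.escape(prefix)}-auto-")
    return [s for s in snaps if pattern.search(s)]

# The secondary -> tertiary replication selects (and later prunes) only
# its own snapshots; the primary's snaps are left alone.
offsite = match_schema(snapshots, "backuplocal")
print(offsite)  # ['tank/data@backuplocal-auto-monthly-2024-06-01_00-00']
```

Because the patterns never overlap, the Prod → BackupLocal task can expire its “prod-*” snaps without ever deleting a “backuplocal-*” snap that the offsite replication still depends on.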

1 Like

Thanks @winnielinnie and @Stux for your fast feedback - much appreciated.
I will look deeper into this in the next days as being a bit busy at the moment.

PS: I knew I was too stupid for this :wink:

Name: Wolfgang
Username: CoolWolf
Avatar: A howling wolf. (A howling cool wolf?)

You win the internet for today. (Just for today. The competition is fierce.)

1 Like

I made a video explaining how I set up my Tiered Snapshots schedule.

I like @Stux's way, but I also see no reason why this wouldn't work too. Perhaps a new snapshot schedule that takes one every month and keeps it for a year gets sent offsite?

Yeah, the above will work fine…

BUT a snapshot taken on a replicated dataset (i.e. on the destination side) will get wiped the next time the dataset is replicated from the source.

So, you need to take the snapshot on the source side… and that could be monthly, and that could be replicated.

It appears that you can set up two replication tasks as well: one with, say, hourly+daily snapshots and source-controlled retention… and a second with the monthly snapshots and permanent retention… and this seems to work… more testing required :slight_smile:

Also, Prod → BackupLocal → BackupRemote won't work using a snapshot task initiated on BackupLocal… you can replicate the snaps from Prod to BackupRemote via BackupLocal, but you need to drive the replication task off a schedule, and then specify the naming schema to replicate.

1 Like

Yes. The key here is to create and assign separate snapshot schedules for each replication task from the source server.

1 Like

For replication tasks, I don't think you need separate snapshot schedules.

You can gang two replication tasks off one snapshot schedule.

The way it seems to work is: the snapshots all run… then any replication tasks run (serially, I believe)… then, when the final task runs, any expired and already-replicated snaps get deleted…

and then the next time a replication runs, the destination is maintained to remove snaps removed from the source…
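That ordering can be sketched in Python (illustrative only; the function names and dates are mine, not TrueNAS's): the source only prunes snaps that are both expired and already replicated, and the destination is then trimmed to match the source.

```python
from datetime import datetime, timedelta

def prune_source(source_snaps, replicated, now, lifetime):
    """After the final replication task runs, delete only the snaps that
    are BOTH expired and already replicated; keep everything else."""
    return {name: ts for name, ts in source_snaps.items()
            if now - ts <= lifetime or name not in replicated}

def sync_destination(dest_snaps, source_snaps):
    """On the next replication, the destination drops any snaps that no
    longer exist on the source."""
    return {name: ts for name, ts in dest_snaps.items() if name in source_snaps}

now = datetime(2024, 6, 1)
lifetime = timedelta(days=7)
source = {
    "auto-2024-05-01": datetime(2024, 5, 1),   # expired and replicated -> pruned
    "auto-2024-05-30": datetime(2024, 5, 30),  # still within retention -> kept
}
dest = dict(source)        # both snaps were already replicated
replicated = set(dest)

source = prune_source(source, replicated, now, lifetime)
dest = sync_destination(dest, source)
print(sorted(source))  # ['auto-2024-05-30']
print(sorted(dest))    # ['auto-2024-05-30']
```

Note how a snapshot that is expired but not yet replicated would survive the prune, which is what keeps an interrupted replication from losing its incremental base.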

Yeah, you don't need separate schedules; multiple replication tasks can use the same schedule. However, assuming you want a different schedule/retention for your offsite copy, it may make sense to have a separate schedule assigned to that specific replication task.

1 Like