Hello! I’m migrating from btrfs where I used btrbk, like this:
transaction_log /var/log/btrbk.log
snapshot_dir _btrbk_snap
snapshot_preserve no
snapshot_preserve_min latest
target_preserve 10d 6w 6m 1y
target_preserve_min no
# Back up to external disk subvolume auto-mounted on /misc/btrbk
volume /data
  subvolume one
    target send-receive /misc/btrbk
  subvolume two
    target send-receive /misc/btrbk
This simple config takes a daily snapshot, and then pushes it to an external USB disk. One snapshot is retained locally (so I can quickly look at yesterday’s data while it’s still on solid-state storage). On the external disk, the snapshots are gradually thinned to match the target_preserve specification.
Once I set this up, I pretty much stopped thinking about it. The backups run every night, with a dense recent set and sparse older set.
I’ve read the docs on TrueNAS snapshots and replication, but I’m struggling to find an easy path to a configuration like this. It seems like I have to set up numerous individual periodic replication tasks, or something like that. Is there something that I’m missing, or what are other people doing?
I guess that I’m still a bit uncertain about best practice between snapshots and replications. If I naively follow my btrbk setup, it’s something like this:
snapshot once per day, retain for two days
replication once per day, retain for 10 days
replication once per week, retain for 6 weeks
replication once per month, retain for 6 months
replication once per year, retain for 1 (or more) years
But you said “define four snapshot tasks” and I’m not sure if that’s literal and specific or if it translates to this snapshot/replication setup.
I don’t know btrbk, so I can’t comment on what setup would match your btrbk setup.
I would not recommend creating more than one replication task per dataset. It seems like a source of trouble, though I have never tried it. Please note that replication does not create any snapshots and will only replicate snapshots that already exist when the replication starts.
What you can do is set up tiered-retention snapshots on the source. That would involve creating the four snapshot tasks I previously described. A single replication task can then be set up to continuously replicate all snapshots from the source to a remote host. If you select Same as Source in the Snapshot Retention Policy, then the destination host will have the same tiered snapshots as the source.
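To make that concrete, the four snapshot tasks might look something like this (the naming schemas and lifetimes here are illustrative, chosen to mirror a btrbk-style 10d/6w/6m/1y policy; adjust to taste):

```text
Task 1 (daily):   schedule: 00:00 every day       naming schema: auto-daily-%Y-%m-%d    lifetime: 10 days
Task 2 (weekly):  schedule: 00:00 every Sunday    naming schema: auto-weekly-%Y-%m-%d   lifetime: 6 weeks
Task 3 (monthly): schedule: 00:00 on the 1st      naming schema: auto-monthly-%Y-%m-%d  lifetime: 6 months
Task 4 (yearly):  schedule: 00:00 on Jan 1        naming schema: auto-yearly-%Y-%m-%d   lifetime: 1 year
```

A single replication task with Snapshot Retention Policy set to Same as Source would then carry all four tiers to the destination.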
In that setup you can easily recover deleted/modified files without having to restore from a backup, because you have the tiered snapshots available directly on the source.
What you described might work, but I’m concerned that I don’t really have capacity on the source volume to keep snapshots that long, whereas my replication target has more capacity for the longer retention.
It feels like there’s a lot of power and expressiveness available in the TrueNAS-provided options, and yet it seems like a small overlap with what I’m actually trying to do.
Interesting, I haven’t had to deal with a space constraint on the source itself.
TrueNAS itself uses its own zettarepl for snapshots and replication. Unfortunately no tiered retention options exist in zettarepl (other than the “workaround” of defining multiple tasks).
In your case, I would create a single daily snapshot task and one replication task with Snapshot Retention set to None. Then write a custom script which deletes snapshots to match the wanted retention policy. But I don’t know how other people would approach this.
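For what it’s worth, the thinning logic such a script would need is fairly compact. Here’s a minimal sketch, assuming one snapshot per day and a btrbk-style “10d 6w 6m 1y” policy; the function name is my own invention, and the actual `zfs destroy` step is left as a placeholder you’d adapt to your dataset names:

```python
from datetime import date, timedelta

def snapshots_to_keep(snap_dates, today, days=10, weeks=6, months=6, years=1):
    """Return the subset of snapshot dates to retain under a btrbk-style
    "10d 6w 6m 1y" policy: the newest snapshot in each of the most recent
    `days` daily, `weeks` weekly, `months` monthly and `years` yearly buckets."""
    keep = set()
    tiers = (
        (lambda d: d.toordinal(), days),         # one bucket per day
        (lambda d: d.isocalendar()[:2], weeks),  # one bucket per ISO week
        (lambda d: (d.year, d.month), months),   # one bucket per month
        (lambda d: d.year, years),               # one bucket per year
    )
    for bucket_of, limit in tiers:
        newest = {}  # bucket key -> newest snapshot date in that bucket
        for d in sorted(snap_dates, reverse=True):  # walk newest first
            if d > today:
                continue
            key = bucket_of(d)
            if key in newest:
                continue            # already have this bucket's newest snapshot
            if len(newest) >= limit:
                break               # collected enough buckets for this tier
            newest[key] = d
        keep.update(newest.values())
    return keep
```

Every snapshot whose date is not in the returned set would then be handed to `zfs destroy` (e.g. via subprocess). This ignores edge cases such as multiple snapshots per day, but zettarepl’s naming schemas make the date easy to parse out of each snapshot name.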
Snapshot Task: This is your primary tool for point-in-time backups. You schedule it to take snapshots of your data (datasets/zvols) on the source system.
Replication Task: This task takes those already created snapshots (or the initial full dataset) and sends them to a destination (local or remote).
Initial Replication: The first run copies everything; subsequent runs copy only the changes (incremental replication) and so are much quicker.
The Process: Your source needs snapshots to replicate. If you use the wizard, it offers to set up the periodic snapshot task for you, creating the necessary base snapshots for the data you wish to replicate.
If you do a local replication on the same server rather than to a remote server, you have just made a complete second copy of all the data you wanted replicated. So yes, you can run into space issues.
You can edit and set retention policies on snapshots in the GUI; no need for the command line. In fact, command-line deletions are a good way to get things out of sync, as the GUI may not be properly informed.
You can have basically whatever retention and frequency you wish on snapshots, within reason.