Backup like btrbk

Hello! I’m migrating from btrfs where I used btrbk, like this:

transaction_log            /var/log/btrbk.log
snapshot_dir               _btrbk_snap
snapshot_preserve          no
snapshot_preserve_min      latest
target_preserve            10d 6w 6m 1y
target_preserve_min        no

# Back up to external disk subvolume auto-mounted on /misc/btrbk
volume /data
  subvolume one
    target send-receive /misc/btrbk
  subvolume two
    target send-receive /misc/btrbk

This simple config takes a daily snapshot, and then pushes it to an external USB disk. One snapshot is retained locally (so I can quickly look at yesterday’s data while it’s still on solid-state storage). On the external disk, the snapshots are gradually thinned to match the target_preserve specification, so currently I have this:

2025-12-23	2025-12-18	2025-12-13	2025-11-09	2025-07-06
2025-12-22	2025-12-17	2025-12-07	2025-11-02	2025-06-01
2025-12-21	2025-12-16	2025-11-30	2025-10-05	2025-01-05
2025-12-20	2025-12-15	2025-11-23	2025-09-07	2024-01-07
2025-12-19	2025-12-14	2025-11-16	2025-08-03

Once I set this up, I pretty much stopped thinking about it. The backups run every night, with a dense recent set and sparse older set.
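
In case the retention model is unfamiliar, here is a rough sketch of the idea in Python. This is not btrbk's actual algorithm, just an approximation of what "10d 6w 6m 1y" means: at each granularity, keep the first snapshot of each of the N most recent days/weeks/months/years.

from datetime import date

def thin(snapshots: list[date]) -> set[date]:
    """Approximate a '10d 6w 6m 1y' target_preserve policy."""
    kept: set[date] = set()
    tiers = [
        (10, lambda d: d.toordinal()),        # daily:   10 most recent days
        (6,  lambda d: d.isocalendar()[:2]),  # weekly:  6 most recent weeks
        (6,  lambda d: (d.year, d.month)),    # monthly: 6 most recent months
        (1,  lambda d: d.year),               # yearly:  1 most recent year
    ]
    for count, bucket in tiers:
        first: dict = {}
        for d in sorted(snapshots):
            first.setdefault(bucket(d), d)    # first snapshot in each bucket wins
        kept.update(sorted(first.values())[-count:])
    return kept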

I’ve read the docs on TrueNAS snapshots and replication, but I’m struggling to find an easy path to a configuration like this. It seems like I have to set up numerous individual periodic replication tasks, or something like that. Is there something that I’m missing, or what are other people doing?

Thanks!



That is exactly how you have to do it in TrueNAS. It would be nice to have an easier way to define tiered retention backups.

If I understand your example correctly (10d 6w 6m 1y), you would have to define four snapshot tasks:

  1. Snapshot daily, lifetime 10 days
  2. Snapshot weekly, lifetime 6 weeks
  3. Snapshot monthly, lifetime 6 months
  4. Snapshot yearly, lifetime 1 year
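
For what it's worth, the same four tasks could also be created from a script instead of the GUI. A rough sketch follows; the dataset name and schedules are made up, and the pool.snapshottask.create fields are my reading of the middleware API, so verify against the API docs for your TrueNAS version before running anything like this.

import json, subprocess

# (naming schema, lifetime value, lifetime unit, cron-style schedule)
TASKS = [
    ("daily-%Y-%m-%d_%H-%M",   10, "DAY",   {"minute": "0", "hour": "0", "dom": "*", "month": "*", "dow": "*"}),
    ("weekly-%Y-%m-%d_%H-%M",   6, "WEEK",  {"minute": "0", "hour": "0", "dom": "*", "month": "*", "dow": "0"}),
    ("monthly-%Y-%m-%d_%H-%M",  6, "MONTH", {"minute": "0", "hour": "0", "dom": "1", "month": "*", "dow": "*"}),
    ("yearly-%Y-%m-%d_%H-%M",   1, "YEAR",  {"minute": "0", "hour": "0", "dom": "1", "month": "1", "dow": "*"}),
]

for schema, lifetime, unit, schedule in TASKS:
    payload = {
        "dataset": "tank/data",        # source dataset (placeholder)
        "recursive": False,
        "naming_schema": schema,
        "lifetime_value": lifetime,
        "lifetime_unit": unit,
        "schedule": schedule,
    }
    # midclt talks to the TrueNAS middleware from the shell
    subprocess.run(["midclt", "call", "pool.snapshottask.create", json.dumps(payload)],
                   check=True)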

Thanks for the confirmation. This is a bit disappointing, but I’ll manage. :smiley:


I guess that I’m still a bit uncertain about best practice between snapshots and replications. If I naively follow my btrbk setup, it’s something like this:

  • snapshot once per day, retain for two days
  • replication once per day, retain for 10 days
  • replication once per week, retain for 6 weeks
  • replication once per month, retain for 6 months
  • replication once per year, retain for 1 (or more) years

But you said “define four snapshot tasks” and I’m not sure if that’s literal and specific or if it translates to this snapshot/replication setup.

I don’t know btrbk, so I can’t comment on what setup would match yours.

I would not recommend creating more than one replication task per dataset. It seems like a source of trouble, though I have never tried it. Please note that replication does not create any snapshots; it will only replicate snapshots that already exist when the replication starts.

What you can do is set up tiered retention snapshots on the source. That would involve creating the four snapshot tasks I previously described. A single replication task can then be set up to continuously replicate all snapshots from the source to a remote host. If you select Same as Source in the Snapshot Retention Policy, the destination host will have the same tiered snapshots as the source.
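
If it helps, this is roughly what that single task could look like when created through the middleware rather than the GUI. Everything here is a placeholder (credential id, dataset names, task ids), and the replication.create field names are from memory, so treat it as a sketch and check the API reference.

import json, subprocess

# One PUSH replication task carrying all snapshots from the four periodic
# snapshot tasks to a remote host.  "SOURCE" below is what the GUI calls
# "Same as Source" for the snapshot retention policy.
payload = {
    "name": "data-to-backup-host",
    "direction": "PUSH",
    "transport": "SSH",
    "ssh_credentials": 1,                      # id of an existing SSH connection (placeholder)
    "source_datasets": ["tank/data"],
    "target_dataset": "backup/data",
    "recursive": False,
    "auto": True,
    "periodic_snapshot_tasks": [1, 2, 3, 4],   # ids of the four snapshot tasks (placeholders)
    "retention_policy": "SOURCE",
}
subprocess.run(["midclt", "call", "replication.create", json.dumps(payload)], check=True)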

In that setup you can easily recover deleted/modified files without having to restore from a backup, because you have the tiered snapshots available directly on the source.

Thanks, I really appreciate your replies.

What you described might work, but I’m concerned that I don’t really have capacity on the source volume to keep snapshots that long, whereas my replication target has more capacity for the longer retention.

It feels like there’s a lot of power and expressiveness in the TrueNAS-provided options, and yet they only overlap a little with what I’m actually trying to do.

This is btrbk, if you’re curious: GitHub - digint/btrbk: Tool for creating snapshots and remote backups of btrfs subvolumes

It seems like sanoid/syncoid are supposed to be the zfs equivalent, but I haven’t dug into how I might use them on TrueNAS: GitHub - jimsalterjrs/sanoid: These are policy-driven snapshot management and replication tools which use OpenZFS for underlying next-gen storage. (Btrfs support plans are shelved unless and until btrfs becomes reliable.)

Interesting, I haven’t had to deal with a space constraint on the source itself.
TrueNAS uses its own zettarepl for snapshots and replication. Unfortunately, zettarepl has no tiered retention options (other than the “workaround” of defining multiple tasks).

In your case, I would create a single daily snapshot task and one replication task with Snapshot Retention set to None. Then write a custom script which deletes snapshots to match the desired retention policy. But I don’t know how other people would approach this.
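
To sketch the shape of such a script (the dataset name and naming schema are placeholders, and the tiered retention decision itself is left as a stub):

import subprocess
from datetime import datetime

DATASET = "backup/data"             # replication target dataset (placeholder)
SCHEMA = "auto-%Y-%m-%d_%H-%M"      # naming schema used by the snapshot task (placeholder)

def list_snapshots(dataset: str) -> dict[str, datetime]:
    """Map snapshot name -> creation time parsed from the snapshot label."""
    out = subprocess.run(
        ["zfs", "list", "-H", "-t", "snapshot", "-d", "1", "-o", "name", dataset],
        check=True, capture_output=True, text=True,
    ).stdout
    snaps: dict[str, datetime] = {}
    for name in out.splitlines():
        label = name.split("@", 1)[1]
        try:
            snaps[name] = datetime.strptime(label, SCHEMA)
        except ValueError:
            pass                     # ignore snapshots that don't match the schema
    return snaps

def wanted(dates: list[datetime]) -> set[datetime]:
    """The tiered retention policy goes here; placeholder keeps the 10 newest."""
    return set(sorted(dates)[-10:])

snaps = list_snapshots(DATASET)
keep = wanted(list(snaps.values()))
for name, when in sorted(snaps.items(), key=lambda kv: kv[1]):
    if when not in keep:
        print("would delete", name)  # dry run; wire up real deletion once happy

It only prints candidates; actual deletion is left out on purpose.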

That’s a great suggestion, thanks!

Just to be clear:

  • Snapshot Task: This is your primary tool for point-in-time backups. You schedule it to take snapshots of your data (datasets/zvols) on the source system.
  • Replication Task: This task takes those already created snapshots (or the initial full dataset) and sends them to a destination (local or remote).
  • Initial Replication: The first run copies everything; subsequent runs copy only the changes (incremental replication) and so are much quicker.
  • The Process: Your source needs snapshots to replicate. If you use the wizard, it offers to set up the periodic snapshot task for you, creating the necessary base snapshots for the data you wish to replicate.
  • So if you do a local replication to the same server rather than to a remote server, you have just made a complete second copy of all the data you wanted replicated. So yes, you can run into space issues.
  • You can edit and set retention policies on snapshots; no need for the command line. In fact, command-line deletions are a good way to get things out of sync, since the GUI may not be properly informed.
  • You can have basically whatever retention and frequency you wish on snapshots, within reason.

How would you configure tiered retention on a replication task?

You can use the pool.snapshot.delete API method in scripts if you’re worried about the middleware getting out of sync.
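
For example, a deletion routed through the middleware (rather than zfs destroy) might look like this, assuming the method takes the snapshot's full name as its argument; the name here is just a placeholder:

import json, subprocess

snapshot = "backup/data@auto-2025-06-01_00-00"   # placeholder snapshot name
subprocess.run(["midclt", "call", "pool.snapshot.delete", json.dumps(snapshot)],
               check=True)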

I decided that if I’m going to implement the tiered retention algorithm, I might as well go all the way. So here is ztrbk: GitHub - agriffis/ztrbk: ZFS snapshot manager in the spirit of btrbk

I’m using this now on my TrueNAS system with this config:

{:datasets
   [{:source "puddle/space"
     :snapshot-preserve-min {:days 3}
     :targets
       [{:target "slush/space"
         :target-preserve-min :no
         :target-preserve {:days 10 :weeks 8 :months 14 :years 3}}]}
   ]}

It’s barely tested and there are probably bugs, but I’d rather debug this than bang my head against the GUI.

I’ll probably write a blog post about it. Let me know if there’s interest.
