Protect replication backup from bad actors

I have two TrueNAS systems that sync using a replication task over SSH.
The TrueNAS that pushes stores SSH credentials for the remote, and it can also destroy snapshots on the remote.

This is convenient, but also a little scary. Is there any way I can secure this a bit more?

  • Old snapshots deleted on a schedule on the remote itself
  • The master only having access to push new snapshots, not to delete or overwrite them

The goal is to protect the backup so that there is no feasible way for someone to destroy both copies.

If a bad actor gains access to your credentials, they can do whatever that user is permitted to do.

It might be possible to run the replications from the source server as a user that has only the permissions needed on each end to send and receive snapshots.

Then you can have a separate pruning schedule on the backup server that runs locally.
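A minimal sketch of what that local prune could look like (the dataset name and the 30-day window are placeholders, and date -d assumes GNU date, as on SCALE):

    #!/bin/sh
    # Local prune on the backup server: destroy snapshots older than 30 days.
    # DATASET and the 30-day window are placeholders. Keep the window longer
    # than your replication interval so the most recent common snapshot
    # always survives, or incremental replication will break.
    DATASET="tank/backups"
    CUTOFF=$(date -d "30 days ago" +%s)

    zfs list -H -p -t snapshot -o name,creation -r "$DATASET" |
    while read -r snap created; do
        if [ "$created" -lt "$CUTOFF" ]; then
            echo "destroying $snap"
            zfs destroy "$snap"
        fi
    done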

You can see how this adds complexity and could cause breakage with future incremental replications.

If these servers are only on your local network, with no inbound traffic allowed from outside (the default on any home router), then I think your concern is really about who could gain physical access.

The easiest approach would probably be, as you suggest, a user with minimal access. I found a “Use Sudo For ZFS Commands” option when I set up the replication task, so I could probably use sudo to grant only what it needs.

The question is, what does it need… 🙂 If everything goes via sudo, it might not be that hard to figure out using sudo’s input logging.
I might dig into it and write up the result here, but I was hoping I was missing something in the initial setup and this was easier to configure…
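If I get that far, I imagine the sudoers entry would end up looking roughly like the sketch below. The username, the zfs binary path, and the command list are pure guesses on my part until I’ve confirmed from sudo’s logs what the task actually runs:

    # /etc/sudoers.d/replication -- hypothetical; verify the real zfs path
    # and the exact commands the replication task runs before trusting this
    repl ALL=(root) NOPASSWD: /sbin/zfs receive *, /sbin/zfs list *, /sbin/zfs get *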

What is your concern? That someone will hijack the replication task? If they gain any admin access, whether as the root user directly or with sudo, what’s stopping them from running zfs destroy?

What’s stopping someone from inserting a Linux USB, booting into a live session, and running zfs destroy?

What’s stopping someone from physically stealing or breaking the server or drives?

You can get granular by delegating permissions to specific users with zfs allow, but it still doesn’t prevent any of the above.
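For completeness, a sketch of what that delegation could look like (the user and dataset names are made up):

    # on the source: enough to send incremental snapshots
    zfs allow repl send,snapshot,hold tank/data

    # on the backup: enough to receive, but no destroy or rollback
    zfs allow repl receive,create,mount tank/backups

    # review what is currently delegated
    zfs allow tank/backups

Keep in mind the exact permission set depends on the options the replication task uses; a forced receive, for instance, may need more than this.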

If you really want to prevent a worst-case scenario, you can have a small server or DAS that receives occasional backups and is then unplugged and stored somewhere safe.

zpool checkpoints might also be another belt for those braces you could consider.
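For reference, the basic usage is below. Note that a pool can only have one checkpoint at a time, and some pool operations are blocked while it exists, so it’s a short-term safety net rather than a retention policy:

    zpool checkpoint tank          # take the (single) pool-wide checkpoint
    zpool checkpoint -d tank       # discard it when no longer needed

    # after a disaster, rewind the whole pool at import time
    zpool export tank
    zpool import --rewind-to-checkpoint tank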

From my limited experience, my understanding is that having the target PULL from the source is quite a bit more secure, for several reasons.
It’s a well-covered topic, including in older threads; give it a look.

I think this does depend on what services you’re running on your backup server, however. The assumption is that PULL is more secure because the backup runs fewer services than the primary, and I’m sure that’s often true. However, some people (myself included) use the backup/secondary system as an active standby, meaning that in the event of a disaster the secondary needs to take over services. For this to work seamlessly, those services need to already be exposed and running, which makes the backup system no more secure.

Totally right; my intention was more to point the OP to this topic. Sorry if my previous post wasn’t clear!

The best way to protect your replication tasks is to run them the other way (pull from the backup server). Tom Lawrence did a video on this about six months ago.

That’s what was already said above. However, as I mentioned, it’s a bit more nuanced than that.