Today I finally looked into setting up a replication task to back up to an offsite TrueNAS system to protect against ransomware.
I created a replication task that (I think) uses root on the local TrueNAS and the user "backup" on the remote system. The owner of the remote dataset is the user "backup".
Snapshot retention policy is set to none.
Side note: "none" is a little misleading IMHO; a better label would be "keep forever".
That is all well and good, but assuming the worst case happens and somebody gains root access to my local TrueNAS (which hopefully should not be easy, thanks to 2FA), would it not be easy for the attacker to just change the retention policy?
That is why I am wondering: how do you guys do immutable backups?
Make snapshots of the snapshots on the remote?
Use pull, instead of push?
Your backups are more at risk of physical theft or destruction, rather than someone gaining access to your local network and TrueNAS root user account.
It makes sense to use a high-quality, steel-reinforced door and a lock that is hard to pick. After a certain point, though, you might want to consider someone breaking the back window rather than continuing to focus on the security features of your house's front door.
So I guess nobody thinks it is worth going the extra mile to use immutable backups?
I was thinking about using Backblaze with lifecycle rules or rsync and WORM.
Or using pull instead of push, but with a user that has only read permissions on the source?
@etorix shared the easiest way to do this. Keep your backups unplugged and offline. Store them somewhere safe when they are not currently being used to receive a replication from the main pool.
If it's not feasible, then yes, using "pull" instead of "push" for your replications can help.
Don't forget that physical access trumps any kind of sophisticated software security.
I use many different backup methods. After all, a single copy might be corrupted; who knows. I don't put all my eggs into one backup. I don't want to lose my important data, ever, so I use different tools for it too. And I back up offsite as well, to protect against disasters like the house being destroyed (mine almost was recently: it got hit by lightning, fortunately not as powerful as it can be, but I still lost some 20 devices and it took 6 weeks to fix all but one of them), theft, etc.
One of my backups is very simple: I replicate to an external USB drive, disconnect it, and it goes to the bank safe deposit box every so often. I use Kopia. I use TrueNAS replication to a VPS. I even use rsync for a few things.
For me, I'd never trust a single backup or method. Maybe ZFS has a rare random error, and when I go to restore from a replicated copy, bad things happen. Or maybe my backup job does something stupid (a glitch caused by it or by me), I lose my backup, and then my system dies. So many possibilities.
Cheers for all your great input! I think I will not worry too much about a TrueNAS takeover and will either add some rsync to a Synology with versioning, or Backblaze, to have another backup method.
It's a great question. Here are some of my thoughts:
PUSH vs PULL - PULL can give you another level of security IF your PULL system is running fewer services than your primary. The idea here is that the more services you run on your TrueNAS, the more potential attack vectors. If you only need your PULL system to act as a backup target, then you may only need the SSH service enabled.
SSH - Having said that, SSH is probably the most likely way you are going to be hit, so it makes sense to secure it as best you can. Use key-based authentication and disable password authentication for the entire service. Keep your network secure and, if possible, use network segmentation to limit certain services to certain addresses/ranges, for example with VLANs and network ACLs.
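As a rough illustration, the relevant sshd_config directives might look like this (the username and address are placeholders I made up; on TrueNAS these settings are normally managed through the SSH service page in the UI rather than by editing the file directly):

```
# Illustrative /etc/ssh/sshd_config hardening (placeholder user/address)
PasswordAuthentication no
KbdInteractiveAuthentication no
PermitRootLogin no
# Limit SSH logins to the dedicated replication user, from the backup peer only
AllowUsers repuser@192.0.2.10
```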
As mentioned, use 2FA for access to the WebUI, and apply the network segmentation suggested above to limit which addresses can access it. Disable the default truenas_admin user and create your own bespoke admin user.
Create a specific and dedicated replication user and limit their access to only what is required.
Don't forget IPMI (if applicable). If I had a pound for every time I came across a server that had been locked down hard but had its IPMI address exposed to all and sundry with default login credentials...
Disable default console access without a password. As others have mentioned, physical access to your system is just as likely, if not more so, regardless of where it's kept.
Explore the idea of creating periodic zpool checkpoints to protect against accidental deletion and malicious attacks.
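For example, a sketch assuming a pool named "tank" (note a pool can only carry one checkpoint at a time, and the checkpoint keeps consuming space until it is discarded):

```shell
# Create a pool-wide checkpoint (run as root)
zpool checkpoint tank

# Check how much space the checkpoint is consuming
zpool get checkpoint tank

# Worst case: rewind the whole pool to the checkpoint
# (destructive for anything written after the checkpoint)
zpool export tank
zpool import --rewind-to-checkpoint tank

# Discard the checkpoint once it is no longer needed
zpool checkpoint -d tank
```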
Consider the use of ZFS holds for snapshots, ensuring they can't be easily deleted.
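A quick sketch of how holds work (the dataset and snapshot names here are made up):

```shell
# Place a named hold on a snapshot
zfs hold backup_keep tank/data@auto-2024-01-01

# Attempts to destroy the snapshot are refused while any hold exists
zfs destroy tank/data@auto-2024-01-01

# List the holds, and release one when it is no longer needed
zfs holds tank/data@auto-2024-01-01
zfs release backup_keep tank/data@auto-2024-01-01
```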
My line of thinking was something else.
If I use push, I need to have rw permissions on both systems.
If I use pull, the remote system only has r permissions to the local system.
if my remote TrueNAS gets taken over, it can't destroy the data on local
if my local TrueNAS gets taken over, it can't initiate a connection to the remote TrueNAS from a firewall standpoint. So while an attacker might still pull the encrypted data, they can't destroy the remote copy.
Sure. I always disable password auth for all SSH stuff. And from a firewall perspective, the only connection allowed over port 22 is from the static IPv6 of local to the static IPv6 of remote.
Own VLAN.
Points 7 and 8 (zpool checkpoints and ZFS holds) sound very interesting; I will take a look into those.
You can use zfs allow to assign a given user "rights" to send and receive a given dataset. That means no sudo rights are needed for that user, and the user would not have the rights to delete datasets on your pool.
zfs allow repuser send,receive pool/dataset
If you want the user to be able to create and destroy snapshots (if you are trying to keep the two systems in sync) then you can also add the snapshot element.
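For instance, extending the delegation above (the user and dataset names are placeholders; note that zfs receive typically also needs the create and mount permissions on the target, and that the destroy permission lets the user destroy the snapshots they can reach, so grant it sparingly):

```shell
# Basic send/receive delegation for a dedicated replication user
zfs allow repuser send,receive,create,mount pool/dataset

# Optionally let the user manage snapshots too (to keep the two systems in sync)
zfs allow repuser snapshot,destroy pool/dataset

# Review everything that has been delegated on the dataset
zfs allow pool/dataset
```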