Hello everyone.
I should say up front that I am new to TrueNAS and ZFS, so my knowledge in this area is limited. I searched the Internet and this forum first, but I did not find much about what is happening to me.
I have two identical TrueNAS systems (SCALE ElectricEel-24.10.2) in two separate locations, connected to each other via VPN. The two NAS can reach each other. I have a dataset of about 400 GB that I am replicating remotely (I used the wizard for ZFS replication with snapshots); I followed the guide and everything works properly.
As far as I can tell, ZFS-level compression is enabled (the default), and I have disabled encryption in the replication job (the traffic already goes through the VPN).
Here is the strange part.
The snapshot is taken daily, and the replication job is tied to it, so it starts automatically.
New snapshots always show about 14 GB Used, with Referenced at about 400 GB, so I expect roughly 14 GB of data to be sent over the VPN. Instead the transfer is always about triple that: every day, instead of 14 GB, the traffic is around 40-50 GB. I see this from the firewall statistics, and no other task or network activity is running in the meantime. How is this possible? Am I doing something wrong?
Thanks for any reply.
It’s probably how you’re writing or modifying files on the dataset between snapshots.
The “used” property of a snapshot only measures how much unique data it holds (i.e., blocks not shared with the live filesystem or any other snapshot).
Whenever you do an incremental replication, all blocks written between the two snapshots must be transferred, even if those blocks are still shared with the live filesystem and therefore don't count toward the snapshot's “used”. This means that a snapshot showing 0 bytes “used” can still require many GB to be transferred to the destination.
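If you want to check this yourself, a dry-run incremental send reports the estimated stream size before anything is transferred. Here is a minimal Python sketch of that comparison; the pool, dataset, and snapshot names are placeholders, and it assumes the `zfs` CLI is available on the sending box:

```python
import subprocess

# Hypothetical names -- replace them with your own pool/dataset/snapshots.
DATASET = "tank/mydata"
OLD_SNAP = f"{DATASET}@auto-2025-01-01_00-00"
NEW_SNAP = f"{DATASET}@auto-2025-01-02_00-00"

def zfs(*args: str) -> str:
    """Run a zfs command and return its stdout as text."""
    return subprocess.run(
        ["zfs", *args], check=True, capture_output=True, text=True
    ).stdout

# "used" of the newest snapshot: only the blocks unique to that snapshot.
snap_used = int(zfs("get", "-Hp", "-o", "value", "used", NEW_SNAP))

# Dry-run incremental send (-n): nothing is sent, but -P prints a parseable
# estimate of the stream size, i.e. every block written between the two
# snapshots, whether or not the live filesystem still references it.
estimate = 0
for line in zfs("send", "-nvP", "-i", OLD_SNAP, NEW_SNAP).splitlines():
    if line.startswith("size"):
        estimate = int(line.split()[1])

gib = 1024 ** 3
print(f"snapshot 'used':       {snap_used / gib:6.2f} GiB")
print(f"estimated send stream: {estimate / gib:6.2f} GiB")
```

If the estimated stream size is close to the 40-50 GB you see on the firewall, the replication itself is behaving normally and the real question is what rewrites that much data on the dataset every day.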
This does not address your original problem, but it does suggest that you are deleting a lot of data (by size) after creating each new snapshot.
Or just a lot of general churn
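To see where that churn comes from, one option (again just an illustration, with placeholder snapshot names) is to diff the last two snapshots and count how many files were removed, created, or modified between them:

```python
import subprocess
from collections import Counter

# Hypothetical names -- substitute your dataset and its two most recent snapshots.
OLD_SNAP = "tank/mydata@auto-2025-01-01_00-00"
NEW_SNAP = "tank/mydata@auto-2025-01-02_00-00"

# `zfs diff -H` prints one tab-separated line per changed path:
#   -  removed,  +  created,  M  modified,  R  renamed
out = subprocess.run(
    ["zfs", "diff", "-H", OLD_SNAP, NEW_SNAP],
    check=True, capture_output=True, text=True,
).stdout

counts = Counter(line.split("\t")[0] for line in out.splitlines() if line.strip())
labels = {"-": "removed", "+": "created", "M": "modified", "R": "renamed"}
for change, n in counts.most_common():
    print(f"{labels.get(change, change):>8}: {n}")
```

The `written@<snapshot>` property of the live dataset is also worth a look: it reports how much data has been written since that snapshot, which should line up much more closely with what an incremental replication actually has to send than the snapshot's own “used” value.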