Cloud Sync is resending all data

Hi,
I am setting up Cloud Sync jobs to back up 2 datasets to S3 buckets (provided by the Mega S4 service). I am using encryption and filename encryption.
My broadband is slow (20Mbps upload) and ping times go to >1000ms when the transfer is running, so this could be a contributing cause of my problems.

When the job is run for a 2nd time all the data is re-transferred.

Does this work correctly with other S3 providers or could it be a problem with the new Mega S4 service?
Alternatively, it could be due to file modification times not being stored accurately enough. This can be worked around with an rclone setting which TrueNAS does not expose through the UI.
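For reference, the setting I mean is rclone's --modify-window. A minimal sketch of running the same sync from the shell with it, assuming a remote named mega-s4 configured like the Cloud Sync task (remote name and paths are placeholders):

rclone sync /mnt/data-pool1/testing mega-s4:data-pool1-testing --modify-window 1s

This treats modification times that differ by less than 1 second as equal.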

I tested without filename encryption and saw that an uploaded file test.avi appeared in the S3 storage as test.avi.bin. It seems likely that this would cause issues!
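As far as I can tell, the .bin suffix is simply what rclone's crypt backend produces when filename encryption is off but content encryption is still on. A rough sketch of the equivalent rclone.conf section (remote name and password are placeholders):

[s4-crypt]
type = crypt
remote = mega-s4:data-pool1-testing
# names stay readable; rclone appends .bin to each encrypted file
filename_encryption = off
directory_name_encryption = false
password = <obscured password>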

I then tested with no encryption at all and found that the resync operation no longer uploads all the data again, but it does update the timestamp on every file. Is this expected?
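A quick way to see which attribute rclone thinks has changed on the second run (again assuming a remote named mega-s4 set up like the Cloud Sync task) is a verbose dry run:

rclone sync /mnt/data-pool1/testing mega-s4:data-pool1-testing --dry-run -vv

The -vv output logs, per file, whether the size, the modification time or the hash differs before rclone decides to copy it.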

To be clear, my preference is to use encryption :slight_smile:
Any suggestions?

Post your backup job settings here. I back up to Storj and Backblaze and both transfer only changed files for me.

Here are the job settings:

Direction: Push
Transfer Mode: Sync
Directory/Files: /mnt/data-pool1/testing

Destination
Bucket: data-pool1-testing
Folder: /

Schedule: Disabled

Use Snapshot: Off
Create Empty Dirs: Off
Follow Symlinks: Off
Pre Script: blank
Post Script: blank

Exclude: /.Trash-1000/**

Storage Class: blank
Use Fast List: Off
Remote Encryption: On
Filename Encryption: On

Transfers: Low Bandwidth (4)
Bandwidth Limit: 2.5MiB/s


Here are the results of two successive Sync Jobs:

root@truenas[/home/truenas_admin]# cat /var/log/jobs/3345.log
2025/02/10 18:55:33 INFO : Starting bandwidth limiter at 2.500Mi Byte/s
2025/02/10 18:55:34 INFO :
Transferred: 2.001 MiB / 31.946 MiB, 6%, 0 B/s, ETA -
Transferred: 0 / 2, 0%
Elapsed time: 1.0s
Transferring:

 *                                Squeak.AVI:  6% /31.946Mi, 0/s, -
 *                                 test2.txt:1060% /5, 0/s, -

2025/02/10 18:55:35 INFO : test2.txt: Copied (new)
2025/02/10 18:55:57 INFO : Squeak.AVI: Copied (new)

root@truenas[/home/truenas_admin]# cat /var/log/jobs/3370.log
2025/02/10 19:06:05 INFO : Starting bandwidth limiter at 2.500Mi Byte/s
2025/02/10 19:06:06 INFO : test2.txt: Copied (replaced existing)
2025/02/10 19:06:06 INFO :
Transferred: 2.001 MiB / 31.946 MiB, 6%, 0 B/s, ETA -
Checks: 2 / 2, 100%
Transferred: 1 / 2, 50%
Elapsed time: 1.0s
Transferring:

 *                                Squeak.AVI:  6% /31.946Mi, 0/s, -

2025/02/10 19:06:22 INFO : Squeak.AVI: Copied (replaced existing)

All the above was supposed to be plain text but some of it was converted to bold etc.

I have tested the same settings with an Amazon S3 account and everything works correctly. The problem here seems to be with the Mega S4 storage service. I will raise a ticket with them.

I’ve also discussed this on the rclone forum: “rclone-sync-behaviour-with-encryption/50091”

It’s definitely due to my provider. I’ll update this ticket if I make any progress.

I have stopped using Mega S4 and moved to Hetzner Object Storage which works correctly with encryption enabled (including file name encryption) using the included settings of the TrueNAS cloud sync tasks.

Mega gave me the following reply as to why the features I was trying to use didn’t work and whether they intend to fix it:

According to our developers, there are 3 possible solutions:

  1. Use the aws cli with the sync command, which uses last modification date + file size to check whether the object is the same or not (see the example after this list).

  2. Use --update --use-server-modtime; the modification time becomes the time the object was uploaded, and rclone uploads files whose local modification time is newer than the time it was last uploaded. Files created with timestamps in the past will be missed by the sync.

rclone sync --update --use-server-modtime /path/to/source s3:bucket

  3. Use --size-only, which only checks the size of files; if a file doesn’t change size then rclone won’t detect that it has changed.

rclone sync --size-only /path/to/source s3:bucket
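For completeness, option 1 would look roughly like this with the aws cli (bucket name as in the job settings above; the endpoint URL is a placeholder for the provider's S3 endpoint):

aws s3 sync /mnt/data-pool1/testing s3://data-pool1-testing --endpoint-url https://<s4-endpoint>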


About the technical explanation for this:

rclone sync in S3 compares file modification times (mtime), sizes, and MD5 hashes (for objects below the --s3-upload-cutoff value) between the source and destination to determine which files need updating.

Since S3 does not natively store mtime, rclone preserves it using user-defined object metadata (X-Amz-Meta-Mtime).

User-defined metadata allows users to attach custom key-value pairs to objects when uploading them.

These metadata entries are stored with the object but are not processed by S3, meaning they can only be retrieved when explicitly requested.
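As a side note, that metadata can be inspected on an object rclone has uploaded, for example with the aws cli (bucket and key from the test job above, endpoint again a placeholder):

aws s3api head-object --bucket data-pool1-testing --key test2.txt --endpoint-url https://<s4-endpoint>

If the Metadata section of the response comes back without the mtime entry, the provider is not storing X-Amz-Meta-Mtime, which would explain objects being re-copied.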

With option 2, the difference is that rclone compares the object’s upload time in S3 with the local file’s modification time (mtime), instead of relying on user-defined metadata (X-Amz-Meta-Mtime).

Currently, since this is not a core feature of S3, it is not supported by S4. However, it is included in our backlog, and we are considering its implementation in the future.

Kind regards,
MEGA S4 Support

Hi!

Facing similar issues with TrueNAS Core’s Cloud Sync Task re-copying all files to MS OneDrive Personal over and over again instead of skipping existing (unchanged) files. :grumpycat:

Still working on it, but I fear that I need to update rclone to 1.69.0 or 1.69.1, while I am using rclone 1.68.1 as of today.

See https://rclone.org/changelog/ :

Bug Fixes

    • accounting
        • Fix global error acounting (Benjamin Legrand)
        • Fix debug printing when debug wasn’t set (Nick Craig-Wood)
        • Fix race stopping/starting the stats counter (Nick Craig-Wood)
    • rc/job: Use mutex for adding listeners thread safety (hayden.pan)
    • serve docker: Fix incorrect GID assignment (TAKEI Yuya)
    • serve nfs: Fix missing inode numbers which was messing up ls -laR (Nick Craig-Wood)
    • serve s3: Fix Last-Modified timestamp (Nick Craig-Wood)
    • serve sftp: Fix loading of authorized keys file with comment on last line (albertony)

Hi there,

there’s a new release due soon (Fangtooth), maybe it has a newer version of rclone?

Also, I suggest you open a new thread. :slight_smile:

Hi Felix,

this was just meant as information for you.

Btw: I manually upgraded rclone from 1.68.1 to 1.69.1, but it didn’t change anything.
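In case it helps anyone else: a quick check of which rclone binary and version the shell actually picks up after a manual upgrade (the Cloud Sync task may use its own bundled copy, so treat this only as a rough sanity check):

which rclone
rclone version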

Hello, a bit off-topic. I am using AWS directly, as for pure archive purposes it seems to be the cheapest option. It also works quite nicely and is uncomplicated.

I don’t think it was the cheapest (or the easiest pricing to understand!) when I looked at it, but I also preferred a European provider.

I agree it’s not the easiest to understand, but it is the cheapest by far. Here is an example of mine from last month. Most costly is the one-time bulk upload and, in case of recovery, the one-time bulk download. Incremental uploads are less expensive, but the most important thing is that the storage itself on Glacier Deep Archive is very cheap compared to others.

The larger the files, the “cheaper” the upload, as it charges by PUT/GET requests. I uploaded approx. 75k pictures in this example.