Trying to back up to an external USB drive via Replication

(Reposted from the TrueNAS subreddit)

I am currently running a 12TB TrueNAS system, primarily as a media server but also to back up other systems in the house. I found a good deal on a 14TB external USB drive, so I decided a proper backup for the NAS was a good idea. I set it up according to the recommendations in a Reddit thread (specifically konzty’s recommendation to use a single-disk pool on the backup drive, and then use Replication to regularly snapshot the changes and copy them to the backup). Everything seems to be working, I get notifications that the replication tasks have succeeded, etc. However…
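
If I understand the mechanism right, each scheduled run is roughly equivalent to taking a snapshot and piping a send stream into the backup pool, something like this (pool, dataset, and snapshot names below are just placeholders, not my actual setup):

# take a recursive snapshot of the source pool and its child datasets
zfs snapshot -r tank@auto-2024-06-01_00-00

# send it (with children and properties) into the single-disk backup pool
zfs send -R tank@auto-2024-06-01_00-00 | zfs recv -F backuppool/tank-backup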

My dashboard for the backup pool says that there are over 12 TB free, and I’ve effectively used 0% of available space. That definitely doesn’t seem right! If I go look at the Pools page, for instance, it shows similar info. Any ideas? Is TrueNAS somehow not reporting the state of the backup drive correctly? It seems more likely that I’ve set up something incorrectly and it never copied the initial image, but the setup is relatively simple and I’m not sure what I could have done wrong (I am in no way an expert on this stuff, however!).

I’m not even sure how to check the content on the backup drive outside of TrueNAS, since I don’t think any of my other machines can read a ZFS disk.

Any advice appreciated!

You probably replicated the top-level root dataset without enabling “Recursive” or “Full Filesystem Backup” in the Replication Task’s options.

Without seeing any more information, we can only speculate…

I think you can share the backup pool over SMB (or similar) to check its contents.
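
Or check it from the TrueNAS Shell; something like this should do it (replace backuppool with whatever you named your backup pool):

# list the datasets that actually landed on the backup pool and their space usage
zfs list -r -o name,used,available,mountpoint backuppool

# pools are mounted under /mnt on TrueNAS, so you can browse any mounted datasets directly
ls /mnt/backuppool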

Output from this command can be helpful:
zpool list
Please post the output in CODE tags.
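
A per-dataset view of both pools can also help narrow it down, e.g.:

# vdev-level capacity for every pool
zpool list -v

# space accounting for every dataset (data vs. snapshots vs. children)
zfs list -o space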

So zpool list shows all space free on the backup drive. But I think the suggestion above, that neither Recursive nor Full Filesystem Backup was checked, is correct; I just checked my task, and indeed both are unchecked.

Full Filesystem Backup seems pretty self-explanatory, but I’m not clear on what Recursive does. Is that something I should check for a backup replication task?

ā€œRecursiveā€ will include all child datasets that live underneath the selected source dataset. (Since you chose the root dataset as the ā€œsourceā€, you basically selected an empty ā€œplaceholderā€ dataset. Your data lives on children dataset.)

“Full Filesystem Replication” implies recursive (above) + all properties, volumes, and clones.
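
In plain zfs terms, as far as I know that GUI option maps roughly to sending a full replication stream (pool names below are placeholders):

# -R sends the named snapshot plus all child datasets, their properties,
# snapshots, and clones in one replication stream
zfs send -R sourcepool@snapname | zfs recv -F backuppool/sourcepool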

OK, sorry for the newb questions… I checked Full Filesystem Replication, re-ran the task, and quickly got the following error notification (note: “deadpool” is the main data pool I’m trying to back up):

Replication “deadpool - NAS Backup Pool” failed:
skipping snapshot deadpool@auto-2024-06-04_00-00 because it was created after the destination snapshot (auto-2024-06-03_00-00)
skipping snapshot deadpool@auto-2024-06-05_00-00 because it was created after the destination snapshot (auto-2024-06-03_00-00)
cannot send deadpool@auto-2024-06-03_00-00 recursively: snapshot deadpool/iocage@auto-2024-06-03_00-00 does not exist
warning: cannot send ‘deadpool@auto-2024-06-03_00-00’: backup failed
cannot receive: failed to read from stream…

2024-06-05 12:16:59 AM (America/New_York)
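
The middle of that error suggests the recursive send is tripping over a missing child snapshot. I’m guessing I can confirm that from the Shell with something like:

# check whether the iocage child dataset actually has the 2024-06-03 snapshot
zfs list -t snapshot -r deadpool/iocage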

I have had some problems after editing an existing task… I don’t know if there is a better way, but I just:

  • delete the task
  • delete the snapshot
  • and create a new task from scratch
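
If you would rather clean up from the shell, something like this should also work (snapshot name copied from your error; replace backuppool with your backup pool’s name, and be careful: destroy is permanent):

# see which snapshots already exist on the destination
zfs list -t snapshot -r backuppool

# remove the stale destination snapshot (and same-named child snapshots) so the next run starts clean
zfs destroy -r backuppool@auto-2024-06-03_00-00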

Update: I recreated the whole thing from scratch, and it’s definitely creating snapshots every day as scheduled; the disk is now showing around 70% full, which is what I would expect from backing up the whole system. However, it’s telling me that the backup pool is “degraded.” It’s a brand-new drive, so that seems odd. Not really sure what that means.

I still need to try to poke around on the backup drive and see if the files seem correct and uncorrupted.

Could you give us info about your backup pool’s layout? Please post the output of zpool status.

To me, it sounds like you used a VDEV type that expects multiple disks (e.g. a mirror or RAIDZ1), which is why it is reported as DEGRADED with only a single disk.
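
For a single-disk backup pool you would expect to see exactly one disk listed under the pool name, e.g. (backuppool here is a placeholder for your backup pool’s name):

# shows the vdev layout and the state of every member disk
zpool status -v backuppool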
