Hello
I have an issue with replication of a recursive dataset.
The problem happens every time in the same scenario; otherwise everything’s fine.
My setup is two TrueNAS CORE servers, one replicating to the other each night.
I have a dataset named “EN COURS” containing multiple datasets (one level deep only), one per project we work on.
All of this is snapshotted locally each morning at 4 am and replicated to the other server.
Yesterday we deleted some datasets as those projects were over, but “BUSSANG” cannot be deleted.
First of all it gives an error: “Error deleting dataset BUSSANG.
[EFAULT] Failed to delete dataset: cannot unmount ‘/mnt/RZ_11X3_1S/EN COURS/BUSSANG’: pool or dataset is busy
Some other datasets were deleted without issue yesterday, but this one is impossible to delete. Maybe some file inside is busy. It’s always a big amount of data, so I delete the dataset directly without emptying it, as deleting the files manually would take ages.
Meanwhile, all snapshots and shares seem to have been deleted: there is no snapshot left for this dataset and the share has disappeared.
First: is there a way to force delete a dataset? What is this error about?
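In case it matters, here is roughly what I would guess at trying from a shell on the source server, though I haven’t dared yet and I don’t know if the middleware would object to this being done behind its back. The pool/dataset names are my real ones; the rest is guesswork:

```shell
# Find which process is holding the mountpoint busy
# (FreeBSD fuser, -c treats the argument as a mountpoint)
fuser -c "/mnt/RZ_11X3_1S/EN COURS/BUSSANG"

# Force-unmount the dataset, then destroy it recursively
# (-f on destroy also forces unmount of any mounted children)
zfs umount -f "RZ_11X3_1S/EN COURS/BUSSANG"
zfs destroy -r -f "RZ_11X3_1S/EN COURS/BUSSANG"
```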
After that, when the replication of “EN COURS” starts with all its sub-datasets, it fails and nothing is replicated. The other datasets with no errors are not replicated to the other server either, even though the local snapshots were taken correctly (I have a “Feb 18 4am” snapshot for each dataset locally, but on the replicated server they stop at “Feb 17 4am”).
Error in the notification tray :
CRITICAL
Replication “RZ_11X3_1S/EN COURS - RZ2_1X14_1S/MAGENTA_BACKUP/EN COURS” failed: No incremental base on dataset ‘RZ_11X3_1S/EN COURS/BUSSANG’ and replication from scratch is not allowed..
2026-02-18 11:14:13 (Europe/Paris)
If I try to run the replication manually now, the log shows:
[2026/02/18 11:14:09] INFO [Thread-1949] [zettarepl.paramiko.replication_task__task_2] Connected (version 2.0, client OpenSSH_8.8-hpn14v15)
[2026/02/18 11:14:09] INFO [Thread-1949] [zettarepl.paramiko.replication_task__task_2] Authentication (publickey) successful!
[2026/02/18 11:14:13] INFO [replication_task__task_2] [zettarepl.retention.calculate] Not destroying ‘auto-2025-05-14_04-00’ as it is the only snapshot left for naming schema ‘auto-%Y-%m-%d_%H-%M’
[2026/02/18 11:14:13] INFO [replication_task__task_2] [zettarepl.retention.calculate] Not destroying ‘auto-2025-05-14_04-00’ as it is the only snapshot left for naming schema ‘auto-%Y-%m-%d_%H-%M’
[2026/02/18 11:14:13] INFO [replication_task__task_2] [zettarepl.retention.calculate] Not destroying ‘auto-2025-05-14_04-00’ as it is the only snapshot left for naming schema ‘auto-%Y-%m-%d_%H-%M’
[2026/02/18 11:14:13] INFO [replication_task__task_2] [zettarepl.retention.calculate] Not destroying ‘auto-2025-05-14_04-00’ as it is the only snapshot left for naming schema ‘auto-%Y-%m-%d_%H-%M’
[2026/02/18 11:14:13] INFO [replication_task__task_2] [zettarepl.retention.calculate] Not destroying ‘auto-2025-05-14_04-00’ as it is the only snapshot left for naming schema ‘auto-%Y-%m-%d_%H-%M’
[2026/02/18 11:14:13] INFO [replication_task__task_2] [zettarepl.replication.pre_retention] Pre-retention destroying snapshots:
[2026/02/18 11:14:13] ERROR [replication_task__task_2] [zettarepl.replication.run] For task ‘task_2’ non-recoverable replication error NoIncrementalBaseReplicationError(“No incremental base on dataset ‘RZ_11X3_1S/EN COURS/BUSSANG’ and replication from scratch is not allowed”)
The problem now is that replication is broken. Last time this happened I had to modify the replication task to back up from scratch, but that is hundreds of terabytes in one run …
Does that mean that as long as we have this undeletable dataset around, replication is broken?
Is there a force-delete method for a dataset?
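For what it’s worth, I guess “no incremental base” means the source and destination no longer share any common snapshot for that child. Something like this (the dataset names are mine, the rest is guesswork) should show whether a common snapshot is left on both sides:

```shell
# On the source: list remaining snapshots of the problem dataset,
# oldest first
zfs list -t snapshot -o name -s creation -r "RZ_11X3_1S/EN COURS/BUSSANG"

# On the destination: same check on the replicated copy
zfs list -t snapshot -o name -s creation -r "RZ2_1X14_1S/MAGENTA_BACKUP/EN COURS/BUSSANG"
```

If the two lists share no snapshot, I wonder whether destroying only that child on the destination would let the rest of “EN COURS” replicate incrementally again, so only BUSSANG (and not hundreds of terabytes) would need a full re-send.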
Thanks

