ZFS Replication with pre- and post-scripts

Thanks!

Since I'd then be using a more official pipeline: if I start the replication via midclt, will it then just use the normal error-handling pipeline and send me an email / text message if it errors?

I just saw that my receiving folders weren't read-only. Could this be why the task failed? And is it normal to get a GUI error along the lines of

[EFAULT] Failed retrieving GROUP quotas for backup

when you click on said read-only dataset while the backup is running?

yes!

I know nothing about your setup; the job as-is should of course "work".

Yeah, it does work. Although it wouldn't start at first because said datasets weren't read-only. At the time I didn't know you could change the policy to "set read-only after backup".
But is there a way to avoid that GUI error?

Edit: Ah! After the backup completed, the error went away. 🙂
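
Edit 2: For anyone finding this later, it looks like the same policy can also be flipped over the API. A minimal sketch, assuming the task id is 1 and that the replication schema's "readonly" field is the right knob:

# Sketch: flip the destination read-only policy on an existing task.
# Task id 1 is a placeholder; per the replication schema (as far as
# I can tell) the "readonly" field accepts SET, REQUIRE, or IGNORE.
midclt call replication.update 1 '{"readonly": "SET"}'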

I have set up the first part just fine now. Your scripts have been super helpful for understanding how this API works.

Sadly, I'm stuck on how to tell whether the replication was successful or not.
I get a status message via mail or Telegram, but I wanted to react to the result in the script as well.

I discovered that you can use

midclt subscribe replication.query

to catch every event concerning the replication. But I just can't figure out how to process that JSON stream. The most I've achieved so far is piping the output to jq or writing it to a temp file, but it only shows up / gets written after I stop the listener (my closest attempt is sketched below).

What would be the correct approach to do this? I suspect what I'm currently trying fails because something in the pipe is buffering its output.
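
The closest I've come is forcing everything in the pipe unbuffered, along these lines. This is only a sketch: PYTHONUNBUFFERED=1 assumes midclt is the Python middleware client (whose stdout would be block-buffered into a pipe), and the .fields.state.state path is a guess at the event shape, so adjust it to whatever the stream actually prints:

# Hedged sketch: react to replication events as they arrive.
# PYTHONUNBUFFERED=1 is assumed to stop the Python client from
# block-buffering into the pipe; --unbuffered does the same for jq.
# ".fields.state.state" is a guess at the event payload layout.
PYTHONUNBUFFERED=1 midclt subscribe replication.query |
  jq --unbuffered -r '.fields.state.state? // empty' |
  while read -r state; do
    case "$state" in
      FINISHED) echo "replication finished" ;;
      ERROR)    echo "replication failed" ;;
    esac
  done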

Using the event stream seems more elegant than polling replication.query every couple of minutes, but I'll probably end up polling anyway, since getting the event stream right is above my shell skills.

One of my snippets shows the "state" of each task: running / pending / failed / finished, IIRC.

midclt call replication.query |
  jq -r '
    .[] |
      [
        .id,
        (.source_datasets | join(" ")),
        .enabled,
        .state.state?,
        # ."$date" is a millisecond epoch; convert to seconds for todate.
        # Guard against tasks that have never run (no time_finished yet).
        (if .job.time_finished? then
           (.job.time_finished."$date" / 1000 | floor | todate)
         else null end),
        .job.progress.description?
      ] | @tsv'

Made it readable!

It might return something like this:

1       somepool-hdd/data/smb     true    FINISHED        2024-12-25T20:01:42Z
6       p1-25-6d-z2/esx/nfs     true    FINISHED        2024-12-25T17:45:09Z    Sending 1 of 1: p1-25-6d-z2/esx/nfs@auto-2024-12-25_18-40-4W-12H [total 121.36 GiB of 123.29 GiB]

This doesn't use the event stream, but if you call it every 15 minutes or so I think it should be enough.
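
A minimal poll loop around that idea might look like this. Again just a sketch: task id 1 and the 15-minute sleep are placeholders, and the filter uses the standard query-filters syntax that query methods accept:

# Hedged sketch: poll one replication task until it leaves the
# running/pending states, then react. Task id 1 is a placeholder.
while true; do
  state=$(midclt call replication.query '[["id","=",1]]' |
            jq -r '.[0].state.state // empty')
  case "$state" in
    FINISHED) echo "replication succeeded"; break ;;
    ERROR)    echo "replication failed"; break ;;
    *)        sleep 900 ;;   # still running/pending: retry in 15 min
  esac
done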