CLI commands of pool import and export

Hi
I’m trying to build a semi-automated (restic) backup process. Is there a way to use CLI to replicate the GUI steps of importing a pool from a connected drive and then export and disconnect it when the backup is done?
zpool import backup_pool
does import the pool, but it comes up read-only and gives a warning on import:

cannot mount ‘/backup_pool’: failed to create mountpoint: Read-only file system
Import was successful, but unable to mount some datasets

Exporting via zpool export backup_pool does export the pool, but the GUI still shows leftovers - it offers the pool as ready for import, yet some tiles remain as if the pool were still live. Maybe it's not fully disconnected?

I guess I need to pass some additional parameters on import as well as on export. Any tips?

First of all, yes, you are missing some flags in your zpool command (-R /mnt at least). But using zpool directly is not supported anyway.

Any GUI action can be done via the API. The GUI is an API client in itself. In the command line you would use midclt to call the API.

Importing:

Works via the pool GUID:

  • sudo midclt call pool.import_pool '{"guid": "POOLGUID"}'
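For scripting, it may be safer to let jq build the JSON argument instead of hand-quoting it. A minimal sketch - the GUID value here is just a placeholder:

```shell
# Hypothetical GUID, purely for illustration
guid="9876543210987654321"

# Let jq construct the {"guid": ...} payload so the quoting stays correct
args=$(jq -nc --arg g "$guid" '{guid: $g}')
echo "$args"   # → {"guid":"9876543210987654321"}

# The actual call would then be:
# sudo midclt call pool.import_pool "$args"
```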

Exporting

I think this requires at least two steps, because the export call takes the pool id, NOT the pool GUID.

  • sudo midclt call pool.query to find the id of the pool (different from the pool GUID)
  • sudo midclt call pool.export POOLID
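The two steps can be wired together with jq. The pool.query response below is a trimmed, made-up sample just to show the id extraction; on a real system you would capture `midclt call pool.query` instead:

```shell
# Made-up, trimmed pool.query response (real output has many more fields);
# on TrueNAS you would use: response=$(midclt call pool.query)
response='[{"id": 2, "name": "backup_pool", "guid": "1234567890123456789"}]'

# Extract the numeric id for the pool we want to export
pool_id=$(echo "$response" | jq -r '.[] | select(.name == "backup_pool") | .id')
echo "$pool_id"   # → 2

# The export itself would then be:
# sudo midclt call pool.export "$pool_id"
```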

Disclaimer: I don’t use these commands; use them at your own risk.


To import, you first have to start the pool.import_find job so the middleware runs a plain zpool import scan in the background.

The results of that can be found a bit later (depending on your system) via core.get_jobs

Best to set a filter on that, like the following:

midclt call core.get_jobs | jq '.[] | select(.method == "pool.import_find")'
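Once the job shows up, the interesting fields are its state and result. Here is an invented core.get_jobs entry (values are placeholders) and how to pull those fields out with jq:

```shell
# Invented example of one core.get_jobs entry for a finished
# pool.import_find job (all values are placeholders)
job='{"id": 42, "method": "pool.import_find", "state": "SUCCESS", "result": [{"name": "backup_pool", "guid": "9876543210987654321"}]}'

# The job state tells you when polling can stop ...
state=$(echo "$job" | jq -r '.state')
# ... and the result carries the discovered pools with their GUIDs
guid=$(echo "$job" | jq -r '.result[0].guid')
echo "$state $guid"   # → SUCCESS 9876543210987654321
```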

Okay. Once you gave me the hint to go with the API calls, I sat down with ChatGPT and here are the two scripts we ended up with:
import_pool.sh

#!/bin/bash

if [ $# -ne 1 ]; then
    echo "Usage: $0 <pool_name>"
    exit 1
fi

pool_name=$1

# Function to check if a pool is already imported
check_pool_imported() {
    local name=$1
    # Query by name and let jq's exit status signal whether a match exists
    # (more robust than grepping the raw JSON)
    midclt call pool.query '[["name", "=", "'"${name}"'"]]' | jq -e '.[0]' > /dev/null
}

# Check if the pool is already imported
if check_pool_imported "$pool_name"; then
    echo "Pool '${pool_name}' is already imported and available."
    exit 0
fi

# Start the import_find job and get the job ID
job_id=$(midclt call pool.import_find)
#echo "Started import_find job with ID: $job_id"

# Poll until the job is complete
while true; do
    echo "Checking import job status..."
    job_status=$(midclt call core.get_jobs '[["id", "=", '"$job_id"']]' | jq -r '.[0]')
    state=$(echo "$job_status" | jq -r '.state')
    echo "Import job state: $state"
    if [ "$state" == "SUCCESS" ]; then
        echo "Import job succeeded."
        break
    elif [ "$state" == "FAILED" ]; then
        echo "Import job failed"
        exit 1
    fi
    sleep 1
done

# Extract the job result once it's complete
pools=$(echo "$job_status" | jq -r '.result')

# Find and import the pool with the specified name.
# Process substitution keeps the loop in the current shell, so an
# `exit` on failure actually terminates the script (a plain pipe
# into `while read` would run the loop in a subshell).
while read -r pool; do
    name=$(echo "$pool" | jq -r '.name')
    guid=$(echo "$pool" | jq -r '.guid')
    if [ "$name" == "$pool_name" ]; then
        import_job_id=$(midclt call pool.import_pool "{\"guid\": \"$guid\"}")

        # Poll until the import job is complete
        while true; do
            import_job_status=$(midclt call core.get_jobs '[["id", "=", '"$import_job_id"']]' | jq -r '.[0]')
            import_state=$(echo "$import_job_status" | jq -r '.state')
            if [ "$import_state" == "SUCCESS" ]; then
                break
            elif [ "$import_state" == "FAILED" ]; then
                echo "Import job for pool '$pool_name' failed"
                exit 1
            fi
            sleep 1
        done
    fi
done < <(echo "$pools" | jq -c '.[]')

# Final check if the pool is imported
if check_pool_imported "$pool_name"; then
    echo "Successfully imported pool: $pool_name"
    exit 0
else
    echo "Pool with name $pool_name not found or not imported"
    exit 1
fi

and export/disconnect with export_pool.sh

#!/bin/bash

if [ $# -ne 1 ]; then
    echo "Usage: $0 <pool_name>"
    exit 1
fi

pool_name=$1

# Function to check if a pool is imported and available
check_pool_imported() {
    local name=$1
    # Query by name and let jq's exit status signal whether a match exists
    # (more robust than grepping the raw JSON)
    midclt call pool.query '[["name", "=", "'"${name}"'"]]' | jq -e '.[0]' > /dev/null
}

# Function to get the pool ID by name
get_pool_id() {
    local name=$1
    midclt call pool.query '[["name", "=", "'"${name}"'"]]' | jq -r '.[0].id'
}

# Check if the pool is imported and available
if ! check_pool_imported "$pool_name"; then
    echo "Pool '${pool_name}' is not currently imported."
    exit 1
fi

echo "Starting export of pool '${pool_name}'..."

# Get pool ID for export
pool_id=$(get_pool_id "$pool_name")

# Start the pool export job
export_job_id=$(midclt call pool.export "$pool_id")
echo "Started pool export job with ID: $export_job_id"

# Poll until the export job is complete
while true; do
    echo "Checking export job status..."
    export_job_status=$(midclt call core.get_jobs "[[\"id\", \"=\", ${export_job_id}]]" | jq -r '.[0]')
    export_state=$(echo "$export_job_status" | jq -r '.state')
    echo "Export job state: $export_state"
    
    if [ "$export_state" == "SUCCESS" ]; then
        echo "Export job succeeded."
        break
    elif [ "$export_state" == "FAILED" ]; then
        echo "Export job failed"
        exit 1
    fi
    sleep 1
done

# After export, double-check if the pool is no longer available
if check_pool_imported "$pool_name"; then
    echo "Pool '${pool_name}' still found after export attempt. Error!"
    exit 1
else
    echo "Pool '${pool_name}' successfully exported and no longer available."
    exit 0
fi

I’m by no means an expert, but these two are working for my needs.
Thanks for pointing me to the API.

You’re welcome. More people should know about that anyway!
It’s also available on your TrueNAS-System via → https://$NAS/api/docs/websocket

I even use it to look up some (but not all) of the calls with a tmux-fzf script.

Not a fan of ChatGPT (only CatGPT), but now that it’s there I will at least take a look later :-P

Does this work on SCALE?

TrueNAS Scale 24.10.0.2 is the only version I’ve tested it on. I have no idea whether it works on any other train or version, tbh.


Thanks, got it working! But I also have another problem: my apps are not showing up. They are the official apps and the ix-application dataset also exists, so I don’t know what the problem is.

It has nothing to do with the original post, so it makes no sense to explore it in this thread. I suggest you ask for help in a new/different topic. I hope someone will be able to help you out.
