Move Incus to new pool - 25.04-RC1

I am trying out Fangtooth and the instances feature. However, I selected a temporary pool as Incus storage and would now like to move the data over to another pool. I tried stopping all instances, replicating /.ix-virt to the new pool, and switching Instances to the new pool. The instances start, but unfortunately they still seem to reference the old pool, and if I remove .ix-virt from the old pool, the instances fail.
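
For reference, the replication step was roughly along these lines (pool names here are placeholders for my temporary and new pools):

zfs snapshot -r temp-pool/.ix-virt@migrate
zfs send -R temp-pool/.ix-virt@migrate | zfs receive -u new-pool/.ix-virt
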
Any suggestions?

Tried messing with this for a while but ended up exporting the instances, changing the Incus config to the new pool, and then importing all instances again.

Shut down all instances.
My VMs had the system drive as a separate zvol on the correct pool already so I just deleted the disk from the VM first.
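
For anyone following along, those two steps from the CLI are roughly the following (the data-disk device name is whatever you called it when you attached the zvol):

incus stop <instance_name>
incus config device remove <instance_name> <data_disk_device>
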
Export each instance with

incus export <instance_name> /mnt/vm/<instance_name>.tgz

Reconfigure incus to use the new pool
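
(In 25.04 that is the global pool setting under Instances. From the shell, the equivalent middleware call appears to be something like the line below, but treat the exact API name and payload as my assumption rather than documented usage.)

midclt call virt.global.update '{"pool": "<new_pool>"}'
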
Import each instance with

incus import /mnt/vm/<instance_name>.tgz

Re-attach the zvols to each VM.
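
Re-attaching from the CLI is roughly the following (the device name and zvol path are examples from my setup):

incus config device add <instance_name> disk0 disk source=/dev/zvol/<pool>/<zvol_name>
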
I had to set boot priority manually. Otherwise the machine would first fail to boot from its default disk and then time out on PXE and HTTP boot on both IPv4 and IPv6 before trying the added disk.

incus config device set <instance_name> disk0 boot.priority=10

I hope this doesn’t break the next upgrade 🙂

Curious if you upgraded to the 25.04 release and ran into any issues.

I think you could technically create another storage pool and then incus move vmname -s newpool

A quick question: if the global pool setting is changed, will it mess with existing VMs?

Yes, I upgraded and have not seen any issues yet.
When I tried to change the global pool setting, it restarted “Instances” and everything was blank. The existing VMs did not show up, though they were not deleted from disk.

Hi, I am interested in defining the boot priority between some instances (since with 25.04 you cannot do it via the web UI yet), but I am not sure it is a good idea to keep user.autostart next to boot.autostart. Did you remove the autostart flag from the web UI? (That actually sets user.autostart to false.)
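
For context, the native Incus keys involved can be set with something like this (whether the middleware’s user.autostart handling interferes with them is exactly what I’m unsure about):

incus config set <instance_name> boot.autostart=true
incus config set <instance_name> boot.priority=10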

I’m also trying to migrate instances from one pool to another. I’m sure I’m doing something unnecessary, but I just can’t figure out a way to change the root device pool to the new one. (I also have the rest of the script, which migrates all datasets and shares; after I’m done with instances, I’m moving on to apps.)

@nasplz was right, we can use incus move. Just a slight change in the syntax:

sudo incus move <vmname> <vmname> -s <new_pool>

If you don’t give the second vmname, the instance gets renamed to a random name. If you give a different vmname, the instance is also renamed in the same move.

Of course, this doesn’t migrate other disks, just the “root” disk, which is pretty much source-less.
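
For the other disks that live as custom Incus volumes, a per-volume copy plus re-pointing the device seems to be needed, roughly:

sudo incus storage volume copy <old_pool>/<volume_name> <new_pool>/<volume_name>
sudo incus config device set <vmname> <device_name> pool=<new_pool>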

EDIT: Working version of the script is in the next post. If anyone wants to contribute or improve it further, let me know. I’ll share a GitHub link in the future… hopefully. I’m sure there are some things in the script that are redundant, but it works, so any feedback on streamlining it is appreciated.

Posting my final version of the script. I will probably upload it to GitHub to make future updates simpler, but anyway, here you go.

#!/usr/bin/env bash
set -euo pipefail

# ──────────────────────────────────────────────────────────────
# 1) Resolve CLI paths
# ──────────────────────────────────────────────────────────────
declare -A CMD
for bin in incus zpool jq awk sed date sleep; do
  path="$(command -v "$bin" 2>/dev/null || true)"
  for alt in /usr/sbin/$bin /usr/bin/$bin /sbin/$bin /bin/$bin; do
    [[ -x "$alt" ]] && path="$alt" && break
  done
  [[ -x "$path" ]] || { echo "❌ '$bin' not found"; exit 1; }
  CMD[$bin]="$path"
done

# ──────────────────────────────────────────────────────────────
# 2) Prompt for Source & Destination Pools
# ──────────────────────────────────────────────────────────────
read -p "Source ZFS pool (e.g. SSD-Mirror): " SRC_POOL
read -p "Destination ZFS pool (e.g. SSD-Pool): " DST_POOL

sudo "${CMD[zpool]}" list "$SRC_POOL" &>/dev/null || { echo "❌ Source pool not found"; exit 1; }
sudo "${CMD[zpool]}" list "$DST_POOL" &>/dev/null || { echo "❌ Destination pool not found"; exit 1; }

# ──────────────────────────────────────────────────────────────
# 3) Detect all Incus volumes in source pool
# ──────────────────────────────────────────────────────────────
mapfile -t INCUS_VOLUMES < <(
  sudo "${CMD[incus]}" storage volume list "$SRC_POOL" --format json |
    "${CMD[jq]}" -r '.[] | select(.content_type == "block") | .name'
)

# ──────────────────────────────────────────────────────────────
# 4) Build VM:device β†’ volume map
# ──────────────────────────────────────────────────────────────
declare -A INSTANCE_DEVICE_MAP ORPHAN_MAP
declare -a INSTANCE_LIST RUNNING_INSTANCES

mapfile -t INSTANCE_LIST < <(sudo "${CMD[incus]}" list --format json | "${CMD[jq]}" -r '.[].name')
for vm in "${INSTANCE_LIST[@]}"; do
  [[ "$(sudo "${CMD[incus]}" info "$vm" | awk '/^Status/ {print $2}')" == "Running" ]] &&
    RUNNING_INSTANCES+=("$vm")
  cfg=$(sudo "${CMD[incus]}" config show "$vm" --expanded)
  for vol in "${INCUS_VOLUMES[@]}"; do
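    # For this volume, find the device(s) on this VM that reference it:
    # look for a 4-space-indented "source: <volume>" line in the expanded
    # config, then walk back up to the nearest 2-space-indented "<device>:"
    # header to recover the device name.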
    mapfile -t devs < <(
      echo "$cfg" | awk -v V="$vol" '
        $0 ~ "^[ ]{4}source: " V "$" {
          for (i = NR; i > 0; i--) {
            if (match(lines[i], /^[ ]{2}[A-Za-z0-9_-]+:$/)) {
              d = substr(lines[i], 3, length(lines[i]) - 2); print d; exit
            }
          }
        }
        { lines[NR] = $0 }
      '
    )
    for dev in "${devs[@]}"; do
      INSTANCE_DEVICE_MAP["$vm:$dev"]="$vol"
    done
  done

  # Scan for orphaned root
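  # ("orphaned" = a root disk device whose pool is still the source pool but
  # which has no explicit source entry, i.e. it lives in the pool's instance
  # dataset and can be relocated with `incus move`)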
  cfg_block=$(awk -v d="root" '
    $0 ~ "^[ ]{2}"d":$" { inb=1; next }
    inb && match($0,/^    /) { print; next }
    inb && match($0,/^[ ]{2}[^ ]/) { exit }
  ' <<<"$cfg")
  pool_now=$(awk '/^[ ]*pool:/ {print $2; exit}' <<<"$cfg_block")
  src_now=$(awk '/^[ ]*source:/ {print $2; exit}' <<<"$cfg_block")
  type_now=$(awk '/^[ ]*type:/ {print $2; exit}' <<<"$cfg_block")
  if [[ "$type_now" == "disk" && "$pool_now" == "$SRC_POOL" && -z "$src_now" ]]; then
    ORPHAN_MAP["$vm"]=1
  fi
done

# ──────────────────────────────────────────────────────────────
# 5) Stop running VMs
# ──────────────────────────────────────────────────────────────
for vm in "${RUNNING_INSTANCES[@]}"; do
  echo "πŸ›‘ Stopping $vm…"; sudo "${CMD[incus]}" stop "$vm" || true
done

# ──────────────────────────────────────────────────────────────
# 6) Register destination pool if missing
# ──────────────────────────────────────────────────────────────
if ! sudo "${CMD[incus]}" storage list --format json |
     "${CMD[jq]}" -e ".[] | select(.name==\"$DST_POOL\")" &>/dev/null; then
  echo "πŸ”§ Registering '$DST_POOL'…"
  sudo "${CMD[incus]}" storage create "$DST_POOL" zfs source="$DST_POOL"
fi

# ──────────────────────────────────────────────────────────────
# 7) Update default pool if needed
# ──────────────────────────────────────────────────────────────
current_default_pool=$(sudo "${CMD[incus]}" config get storage.default_pool)
[[ "$current_default_pool" == "$SRC_POOL" ]] && {
  echo "πŸ”§ Changing default_pool from '$SRC_POOL' to '$DST_POOL'"
  sudo "${CMD[incus]}" config set storage.default_pool "$DST_POOL"
}

# ──────────────────────────────────────────────────────────────
# 8) Confirm overwrites
# ──────────────────────────────────────────────────────────────
read -p "Overwrite existing volumes in '$DST_POOL'? (yes/no): " OVERWRITE
OVERWRITE="${OVERWRITE,,}"

# ──────────────────────────────────────────────────────────────
# 9) Migrate instances and volumes
# ──────────────────────────────────────────────────────────────
declare -A TEMP_SHARED_FLAG

for vm in "${INSTANCE_LIST[@]}"; do
  echo "🚚 Handling instance '$vm'…"

  echo "⏳ Ensuring $vm is stopped…"
  while [[ "$(sudo "${CMD[incus]}" info "$vm" | awk '/^Status/ {print $2}')" == "Running" ]]; do
    sleep 5
  done

  if [[ -n "${ORPHAN_MAP[$vm]:-}" ]]; then
    echo "♻️ Detected orphaned root β†’ Using incus move"

    # πŸ” Pre-check shared status on all attached volumes
    for entry in "${!INSTANCE_DEVICE_MAP[@]}"; do
      IFS=':' read -r vm2 dev <<<"$entry"
      [[ "$vm2" != "$vm" ]] && continue
      vol=${INSTANCE_DEVICE_MAP[$entry]}
      [[ "$dev" == "root" ]] && continue

      shared_flag=$(sudo "${CMD[incus]}" storage volume get "$SRC_POOL" "$vol" security.shared || echo "false")
      if [[ "$shared_flag" != "true" ]]; then
        echo "πŸ” Temporarily enabling security.shared on '$vol'"
        sudo "${CMD[incus]}" storage volume set "$SRC_POOL" "$vol" security.shared=true
        TEMP_SHARED_FLAG["$SRC_POOL:$vol"]=1
      fi
    done

    # 🚚 Attempt move
    if sudo "${CMD[incus]}" move "$vm" -s "$DST_POOL"; then
      echo "βœ… '$vm' moved via incus move"
    else
      echo "⚠️ Move failed for '$vm'. Falling back to export/import"
      backup_path="/tmp/incus_export_${vm}.tgz"
      sudo "${CMD[incus]}" export "$vm" "$backup_path"
      sudo "${CMD[incus]}" delete "$vm"
      sudo "${CMD[incus]}" import "$backup_path"
    fi

    # πŸ” Restore shared flag after move
    for key in "${!TEMP_SHARED_FLAG[@]}"; do
      IFS=':' read -r pool vol <<<"$key"
      echo "πŸ”’ Restoring security.shared=false on '$pool/$vol'"
      sudo "${CMD[incus]}" storage volume set "$pool" "$vol" security.shared=false
    done
  fi

  for entry in "${!INSTANCE_DEVICE_MAP[@]}"; do
    IFS=':' read -r vm2 dev <<<"$entry"
    [[ "$vm2" != "$vm" ]] && continue
    vol=${INSTANCE_DEVICE_MAP[$entry]}
    [[ "$dev" == "root" ]] && continue

    was_detached="no"

    echo "πŸ“¦ Checking device '$dev' for volume '$vol'"

    # πŸ•΅οΈ Get current pool and existence
    device_pool=$(sudo "${CMD[incus]}" config show "$vm" --expanded |
      awk -v dev="$dev" '
        $0 ~ "^  " dev ":$" { inb = 1; next }
        inb && $0 ~ "^    pool: " { print $2; exit }
        inb && $0 !~ "^    "      { inb = 0 }
      ')
    device_exists=$(sudo "${CMD[incus]}" config show "$vm" --expanded |
      awk -v dev="$dev" '$0 ~ "^  " dev ":$" { print "yes"; exit }')

    if sudo "${CMD[incus]}" storage volume show "$DST_POOL" "$vol" &>/dev/null; then
      if [[ "$OVERWRITE" == "yes" ]]; then
        if [[ "$device_exists" == "yes" && "$device_pool" == "$DST_POOL" ]]; then
          echo "πŸ”Œ Detaching '$dev' (on '$DST_POOL')"
          sudo "${CMD[incus]}" config device remove "$vm" "$dev" || true
          was_detached="yes"
        fi
        echo "🧹 Deleting volume '$vol' in '$DST_POOL'"
        sudo "${CMD[incus]}" storage volume delete "$DST_POOL" "$vol"
      else
        echo "πŸ“ Updating '$dev' to use pool '$DST_POOL'"
        sudo "${CMD[incus]}" config device set "$vm" "$dev" pool "$DST_POOL"
        continue
      fi
    fi

    echo "πŸ“€ Copying '$SRC_POOL/$vol' β†’ '$DST_POOL/$vol'"
    sudo "${CMD[incus]}" storage volume copy "$SRC_POOL/$vol" "$DST_POOL/$vol" --volume-only

    if [[ "$was_detached" == "yes" ]]; then
      echo "🧷 Reattaching '$vol' to '$vm' as '$dev'"
      sudo "${CMD[incus]}" config device add "$vm" "$dev" disk \
        pool="$DST_POOL" source="$vol" path="/mnt/$dev"
    else
      echo "πŸ“ Updating '$dev' to use pool '$DST_POOL'"
      sudo "${CMD[incus]}" config device set "$vm" "$dev" pool "$DST_POOL"
    fi

    attached=$(sudo "${CMD[incus]}" list --format json |
      "${CMD[jq]}" -r ".[] | select(.devices[]?.source==\"$vol\") | .name")
    [[ -z "$attached" ]] && sudo "${CMD[incus]}" storage volume delete "$SRC_POOL" "$vol"
  done
done


# ──────────────────────────────────────────────────────────────
# 10) Restart instances
# ──────────────────────────────────────────────────────────────
for vm in "${RUNNING_INSTANCES[@]}"; do
  echo "πŸ” Restarting $vm…"
  sudo "${CMD[incus]}" start "$vm"
done

echo "πŸŽ‰ All done! Migration complete."