I am trying out Fangtooth and the Instances feature. However, I selected a temporary pool as the Incus storage pool, and now I would like to move the data over to another pool. I tried stopping all instances, replicating /.ix-virt to the new pool, and switching Instances to the new pool. The instances start, but unfortunately they still seem to reference the old pool. If I remove .ix-virt from the old pool, the instances fail.
Any suggestions?
I tried messing with this for a while but ended up exporting the instances, changing the config to a new pool, and then importing all instances again.
1. Shut down all instances.
2. My VMs already had the system drive as a separate zvol on the correct pool, so I just deleted that disk from each VM first.
3. Export each instance with:
incus export <instance_name> /mnt/vm/<instance_name>.tgz
4. Reconfigure Incus to use the new pool.
5. Import each instance with:
incus import /mnt/vm/<instance_name>.tgz
6. Re-attach the zvols to each VM. I had to set the boot priority manually; otherwise the machine would first fail to boot its default disk and then time out on PXE and HTTP boot (on both IPv4 and IPv6) before trying the added disk:
incus config device set <instance_name> disk0 boot.priority=10
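Before starting each VM again, you can sanity-check the imported config. Both of these are stock incus commands (instance and device names are placeholders):
incus config device show <instance_name>
incus config device get <instance_name> disk0 boot.priority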
I hope this doesn't break the next upgrade.
Curious if you upgraded to the 25.04 release and had any issues.
I think you could technically create another storage pool and then run incus move vmname -s newpool.
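Registering the new pool with incus would be something like this (untested on my side; the dataset name is a guess based on what the middleware creates, and the pool name is a placeholder):
sudo incus storage create newpool zfs source=<new_zfs_pool>/.ix-virt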
A quick question: any idea whether changing the global pool setting will mess with existing VMs?
Yes, I upgraded and have not seen any issues yet.
When I tried to change the global pool setting, it restarted 'Instances' and everything was blank. The existing VMs did not show up. They were not deleted from disk, though.
Hi, I am interested in defining the boot priority between some instances (since with 25.04 you cannot do this via the web UI yet), but I am not sure it is a good idea to keep user.autostart next to boot.autostart. Did you remove the autostart flag from the web UI? (This will actually set user.autostart to false.)
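From the CLI I assume it would be something like this, using incus's instance-level autostart keys (untested on my side, and I don't know how the middleware treats keys it didn't set itself):
sudo incus config set <instance_name> boot.autostart.priority=10
sudo incus config set <instance_name> boot.autostart.delay=15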
I'm also trying to migrate instances from one pool to another. I'm sure I'm doing something unnecessary, but I just can't figure out a way to change the root device pool to the new one. (I also have the rest of the script, which migrates all datasets and shares; after I'm done with instances, I'm moving on to apps.)
@nasplz was right: we can use incus move, just with a slight change in the syntax:
sudo incus move <vmname> <vmname> -s <new_pool>
If you don't put the second vmname, it will randomly rename the instance. If you put a different vmname, it can also be renamed in a single move.
Of course, this doesn't migrate other disks, just the 'root' disk, which is pretty much source-less.
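For the other disks, the pattern (the same one the script below automates) is to copy the custom volume and then re-point the device, e.g.:
sudo incus storage volume copy <old_pool>/<volume> <new_pool>/<volume> --volume-only
sudo incus config device set <vmname> <device> pool <new_pool>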
EDIT: A working version of the script is in the next post. If anyone wants to contribute/improve it further, let me know. I'll share a link on GitHub in the future... hopefully. I'm sure there are some things in the script that are redundant, but it still works, so any feedback on making it more streamlined is appreciated.
Posting my final version of the script. I will probably upload it to GitHub to make future updates simpler, but anyway, here you go.
#!/usr/bin/env bash
set -euo pipefail
# --------------------------------------------------------------
# 1) Resolve CLI paths
# --------------------------------------------------------------
declare -A CMD
for bin in incus zpool jq awk sed date sleep; do
  path="$(command -v "$bin" 2>/dev/null || true)"
  for alt in /usr/sbin/$bin /usr/bin/$bin /sbin/$bin /bin/$bin; do
    [[ -x "$alt" ]] && path="$alt" && break
  done
  [[ -x "$path" ]] || { echo "ERROR: '$bin' not found"; exit 1; }
  CMD[$bin]="$path"
done
# --------------------------------------------------------------
# 2) Prompt for Source & Destination Pools
# --------------------------------------------------------------
read -p "Source ZFS pool (e.g. SSD-Mirror): " SRC_POOL
read -p "Destination ZFS pool (e.g. SSD-Pool): " DST_POOL
sudo "${CMD[zpool]}" list "$SRC_POOL" &>/dev/null || { echo "ERROR: source pool not found"; exit 1; }
sudo "${CMD[zpool]}" list "$DST_POOL" &>/dev/null || { echo "ERROR: destination pool not found"; exit 1; }
# --------------------------------------------------------------
# 3) Detect all Incus volumes in source pool
# --------------------------------------------------------------
mapfile -t INCUS_VOLUMES < <(
  sudo "${CMD[incus]}" storage volume list "$SRC_POOL" --format json |
    "${CMD[jq]}" -r '.[] | select(.content_type == "block") | .name'
)
# --------------------------------------------------------------
# 4) Build VM:device -> volume map
# --------------------------------------------------------------
declare -A INSTANCE_DEVICE_MAP ORPHAN_MAP
declare -a INSTANCE_LIST RUNNING_INSTANCES
mapfile -t INSTANCE_LIST < <(sudo "${CMD[incus]}" list --format json | "${CMD[jq]}" -r '.[].name')
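# For each instance: record whether it is running, then walk its expanded
# config to map every attached block volume to the device that owns it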
for vm in "${INSTANCE_LIST[@]}"; do
  [[ "$(sudo "${CMD[incus]}" info "$vm" | awk '/^Status/ {print $2}')" == "Running" ]] &&
    RUNNING_INSTANCES+=("$vm")
  cfg=$(sudo "${CMD[incus]}" config show "$vm" --expanded)
  for vol in "${INCUS_VOLUMES[@]}"; do
    mapfile -t devs < <(
      echo "$cfg" | awk -v V="$vol" '
        $0 ~ "^[ ]{4}source: " V "$" {
          # Walk back up the buffered lines to the device header
          # (two-space indent) that owns this source: entry
          for (i = NR; i > 0; i--) {
            if (match(lines[i], /^[ ]{2}[A-Za-z0-9_-]+:$/)) {
              d = substr(lines[i], 3, length(lines[i]) - 2); print d; exit
            }
          }
        }
        { lines[NR] = $0 }
      '
    )
    for dev in "${devs[@]}"; do
      INSTANCE_DEVICE_MAP["$vm:$dev"]="$vol"
    done
  done
  # Scan for an orphaned root device: a disk on the source pool with no
  # explicit source. The terminator rule must come before the print rule,
  # or the next device header (two-space indent) is swallowed into the block.
  cfg_block=$(awk -v d="root" '
    $0 ~ "^[ ]{2}"d":$" { inb=1; next }
    inb && $0 ~ /^[ ]{2}[^ ]/ { exit }
    inb && $0 ~ /^[ ]{4}/ { print }
  ' <<<"$cfg")
  pool_now=$(awk '/^[ ]*pool:/ {print $2; exit}' <<<"$cfg_block")
  src_now=$(awk '/^[ ]*source:/ {print $2; exit}' <<<"$cfg_block")
  type_now=$(awk '/^[ ]*type:/ {print $2; exit}' <<<"$cfg_block")
  if [[ "$type_now" == "disk" && "$pool_now" == "$SRC_POOL" && -z "$src_now" ]]; then
    ORPHAN_MAP["$vm"]=1
  fi
done
# --------------------------------------------------------------
# 5) Stop running VMs
# --------------------------------------------------------------
for vm in "${RUNNING_INSTANCES[@]}"; do
  echo "Stopping $vm..."; sudo "${CMD[incus]}" stop "$vm" || true
done
# --------------------------------------------------------------
# 6) Register destination pool if missing
# --------------------------------------------------------------
if ! sudo "${CMD[incus]}" storage list --format json |
    "${CMD[jq]}" -e ".[] | select(.name==\"$DST_POOL\")" &>/dev/null; then
  echo "Registering '$DST_POOL'..."
  sudo "${CMD[incus]}" storage create "$DST_POOL" zfs source="$DST_POOL"
fi
# --------------------------------------------------------------
# 7) Update default pool if needed
# --------------------------------------------------------------
current_default_pool=$(sudo "${CMD[incus]}" config get storage.default_pool)
[[ "$current_default_pool" == "$SRC_POOL" ]] && {
  echo "Changing default_pool from '$SRC_POOL' to '$DST_POOL'"
  sudo "${CMD[incus]}" config set storage.default_pool "$DST_POOL"
}
# --------------------------------------------------------------
# 8) Confirm overwrites
# --------------------------------------------------------------
read -p "Overwrite existing volumes in '$DST_POOL'? (yes/no): " OVERWRITE
OVERWRITE="${OVERWRITE,,}"
# --------------------------------------------------------------
# 9) Migrate instances and volumes
# --------------------------------------------------------------
declare -A TEMP_SHARED_FLAG
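# TEMP_SHARED_FLAG records volumes whose security.shared we enable
# temporarily, so the flag can be reverted after the move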
for vm in "${INSTANCE_LIST[@]}"; do
  echo "Handling instance '$vm'..."
  echo "Ensuring $vm is stopped..."
  while [[ "$(sudo "${CMD[incus]}" info "$vm" | awk '/^Status/ {print $2}')" == "Running" ]]; do
    sleep 5
  done
  if [[ -n "${ORPHAN_MAP[$vm]:-}" ]]; then
    echo "Detected orphaned root -> using incus move"
    # Pre-check the shared status on all attached volumes
    for entry in "${!INSTANCE_DEVICE_MAP[@]}"; do
      IFS=':' read -r vm2 dev <<<"$entry"
      [[ "$vm2" != "$vm" ]] && continue
      vol=${INSTANCE_DEVICE_MAP[$entry]}
      [[ "$dev" == "root" ]] && continue
      shared_flag=$(sudo "${CMD[incus]}" storage volume get "$SRC_POOL" "$vol" security.shared || echo "false")
      if [[ "$shared_flag" != "true" ]]; then
        echo "Temporarily enabling security.shared on '$vol'"
        sudo "${CMD[incus]}" storage volume set "$SRC_POOL" "$vol" security.shared=true
        TEMP_SHARED_FLAG["$SRC_POOL:$vol"]=1
      fi
    done
    # Attempt the move; pass the name twice so incus keeps it instead of
    # generating a random new one
    if sudo "${CMD[incus]}" move "$vm" "$vm" -s "$DST_POOL"; then
      echo "'$vm' moved via incus move"
    else
      echo "WARNING: move failed for '$vm'. Falling back to export/import"
      backup_path="/tmp/incus_export_${vm}.tgz"
      sudo "${CMD[incus]}" export "$vm" "$backup_path"
      sudo "${CMD[incus]}" delete "$vm"
      sudo "${CMD[incus]}" import "$backup_path"
    fi
    # Restore the shared flag, clearing each entry so it is only reverted once
    for key in "${!TEMP_SHARED_FLAG[@]}"; do
      IFS=':' read -r pool vol <<<"$key"
      echo "Restoring security.shared=false on '$pool/$vol'"
      sudo "${CMD[incus]}" storage volume set "$pool" "$vol" security.shared=false
      unset "TEMP_SHARED_FLAG[$key]"
    done
  fi
  for entry in "${!INSTANCE_DEVICE_MAP[@]}"; do
    IFS=':' read -r vm2 dev <<<"$entry"
    [[ "$vm2" != "$vm" ]] && continue
    vol=${INSTANCE_DEVICE_MAP[$entry]}
    [[ "$dev" == "root" ]] && continue
    was_detached="no"
    echo "Checking device '$dev' for volume '$vol'"
    # Get the device's current pool and whether the device exists at all
    device_pool=$(sudo "${CMD[incus]}" config show "$vm" --expanded |
      awk -v dev="$dev" '
        $0 ~ "^[ ]{2}" dev ":$" { inb = 1; next }
        inb && $0 ~ "^[ ]{4}pool: " { print $2; exit }
        inb && $0 !~ "^[ ]{4}" { inb = 0 }
      ')
    device_exists=$(sudo "${CMD[incus]}" config show "$vm" --expanded |
      awk -v dev="$dev" '$0 ~ "^[ ]{2}" dev ":$" { print "yes"; exit }')
    if sudo "${CMD[incus]}" storage volume show "$DST_POOL" "$vol" &>/dev/null; then
      if [[ "$OVERWRITE" == "yes" ]]; then
        if [[ "$device_exists" == "yes" && "$device_pool" == "$DST_POOL" ]]; then
          echo "Detaching '$dev' (on '$DST_POOL')"
          sudo "${CMD[incus]}" config device remove "$vm" "$dev" || true
          was_detached="yes"
        fi
        echo "Deleting volume '$vol' in '$DST_POOL'"
        sudo "${CMD[incus]}" storage volume delete "$DST_POOL" "$vol"
      else
        echo "Updating '$dev' to use pool '$DST_POOL'"
        sudo "${CMD[incus]}" config device set "$vm" "$dev" pool "$DST_POOL"
        continue
      fi
    fi
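    # Volume is absent on the destination (or was just deleted above):
    # copy it across and point the device at the new pool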
echo "π€ Copying '$SRC_POOL/$vol' β '$DST_POOL/$vol'"
sudo "${CMD[incus]}" storage volume copy "$SRC_POOL/$vol" "$DST_POOL/$vol" --volume-only
if [[ "$was_detached" == "yes" ]]; then
echo "π§· Reattaching '$vol' to '$vm' as '$dev'"
sudo "${CMD[incus]}" config device add "$vm" "$dev" disk \
pool="$DST_POOL" source="$vol" path="/mnt/$dev"
else
echo "π Updating '$dev' to use pool '$DST_POOL'"
sudo "${CMD[incus]}" config device set "$vm" "$dev" pool "$DST_POOL"
fi
attached=$(sudo "${CMD[incus]}" list --format json |
"${CMD[jq]}" -r ".[] | select(.devices[]?.source==\"$vol\") | .name")
[[ -z "$attached" ]] && sudo "${CMD[incus]}" storage volume delete "$SRC_POOL" "$vol"
done
done
# --------------------------------------------------------------
# 10) Restart instances
# --------------------------------------------------------------
for vm in "${RUNNING_INSTANCES[@]}"; do
  echo "Restarting $vm..."
  sudo "${CMD[incus]}" start "$vm"
done
echo "All done! Migration complete."