I just did the upgrade from 25.04.0 to 25.04.1, and after I logged back in all of my instances are gone, i.e. nothing shows up in the instances frame. I suspect there is a setting or something that got clobbered along the way, as all of my storage and datasets look fine.
For some additional info: I am running two pools, a set of NVMe drives for apps etc. (fast_pool) and a bunch of spinners (85 TB) in another pool, data_pool.
I haven’t tried rolling back (not sure how). I’m not using any NVIDIA products. The board is an ASUS Z790-P, using on-board graphics, the on-board NIC, standard RAM, etc.
Do your instances reside on an encrypted pool? If so, is the root path encrypted? Possibly with a passphrase?
With 25.04.1, under the menu item “Instances”: what does the status field just to the left of the config tab say?
A colleague of mine dared to do the same thing you did with a base-encrypted pool (admittedly, it was poorly designed) and apparently lost his instances. 25.04.1 was simply unable to decrypt or mount things during the initialization stage. zfs list showed that everything was still there, but no CLI wizardry (ZFS key loading) worked. He finally got it working when he ditched the passphrase encryption and converted his pool’s encryption to key-based.
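In case it helps narrow that down: the relevant ZFS properties are encryption, keyformat and keystatus. A minimal sketch of how one might spot passphrase-keyed datasets — the pool name tank and the sample output below are hypothetical; on the NAS you would pipe the real zfs get output through the same filter:

```shell
# On the NAS, the real command would be something like:
#   zfs get -r -t filesystem keyformat <pool> | awk '$3 == "passphrase" { print $1 }'
# Here the filter runs over hypothetical sample output so the shape is clear.
zfs_get_output='NAME        PROPERTY   VALUE       SOURCE
tank        keyformat  passphrase  -
tank/enc    keyformat  passphrase  -
tank/plain  keyformat  none        -
tank/raw    keyformat  raw         -'

# Print only datasets whose key is passphrase-based (the header is skipped
# because its second field is "PROPERTY", not "keyformat").
echo "$zfs_get_output" | awk '$2 == "keyformat" && $3 == "passphrase" { print $1 }'
```

Converting a passphrase key to a key file is done with zfs change-key (for example zfs change-key -o keyformat=raw -o keylocation=file:///path/to/keyfile tank — the path and pool here are placeholders), though on TrueNAS you would normally do this through the dataset encryption options in the UI, and you definitely want backups first.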
OK, thanks. I did roll back and grabbed a debug file, then went back to 25.04.1 (where the instances for sure do not show up) and created a debug file from there as well. I will create a ticket to see what they can find out.
I had the same thing. All my instances were missing after the upgrade, which caused a minor panic.
It seems that 25.04.1 adds the capability to select the pool(s) for the images. As part of the upgrade process, it picked up multiple pools (including my backup pool) and then errored out because of a “Duplicate name”. The easy fix for me was to go to Instances -> Configuration -> Global Settings and select only a single pool. My instances list then went from empty to all my old instances, which started correctly as before.
Anyway, this is just my experience, but it might help.
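For anyone who wants to sanity-check the same condition, the “Duplicate name” symptom boils down to the same pool name appearing more than once in the selected list. A trivial sketch with hypothetical pool names (on the NAS you could compare this against the pools shown under Instances -> Configuration -> Global Settings):

```shell
# Hypothetical list of pools the upgrade selected for instance storage.
selected_pools='fast_pool
data_pool
fast_pool'

# Any name printed here occurs more than once -- the likely trigger for
# the "Duplicate name" error seen during the upgrade.
echo "$selected_pools" | sort | uniq -d
```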
Another update: I logged into the shell and started digging through the log files.
The only thing I can actually deduce is that there seems to be a difference in networking once the incus service comes up. Here is a grep of the syslog for incus in 25.04.0:
25.04.0
root@nas[/var/log]# grep incus syslog
Jun 20 14:48:46 nas systemd[1]: Starting incus-user.socket - Incus - Daemon (user unix socket)...
Jun 20 14:48:46 nas systemd[1]: Starting incus.socket - Incus - Daemon (unix socket)...
Jun 20 14:48:46 nas systemd[1]: Listening on incus-user.socket - Incus - Daemon (user unix socket).
Jun 20 14:48:46 nas systemd[1]: Listening on incus.socket - Incus - Daemon (unix socket).
Jun 20 14:48:46 nas systemd[1]: Starting incus-startup.service - Incus - Startup check...
Jun 20 14:48:46 nas systemd[1]: Starting incus.service - Incus - Main daemon...
Jun 20 14:48:48 nas dnsmasq-dhcp[4422]: DHCPv6 stateless on incusbr0
Jun 20 14:48:48 nas dnsmasq-dhcp[4422]: DHCPv4-derived IPv6 names on incusbr0
Jun 20 14:48:48 nas dnsmasq-dhcp[4422]: router advertisement on incusbr0
Jun 20 14:48:48 nas dnsmasq-dhcp[4422]: DHCPv6 stateless on fd42:d1bf:b283:faa6::, constructed for incusbr0
Jun 20 14:48:48 nas dnsmasq-dhcp[4422]: DHCPv4-derived IPv6 names on fd42:d1bf:b283:faa6::, constructed for incusbr0
Jun 20 14:48:48 nas dnsmasq-dhcp[4422]: router advertisement on fd42:d1bf:b283:faa6::, constructed for incusbr0
Jun 20 14:48:48 nas dnsmasq-dhcp[4422]: DHCP, sockets bound exclusively to interface incusbr0
Jun 20 14:48:48 nas dnsmasq[4422]: using only locally-known addresses for incus
Jun 20 14:48:48 nas dnsmasq[4422]: using only locally-known addresses for incus
Jun 20 14:48:49 nas systemd[1]: Finished incus-startup.service - Incus - Startup check.
Jun 20 14:48:49 nas systemd[1]: Started incus.service - Incus - Main daemon.
Jun 20 14:48:50 nas systemd[1]: var-lib-incus-storage\x2dpools-fast_pool-containers-PiHole.mount: Deactivated successfully.
Jun 20 14:48:51 nas systemd[1]: var-lib-incus-storage\x2dpools-fast_pool-containers-Unifi.mount: Deactivated successfully.
Jun 20 14:48:51 nas systemd[1]: var-lib-incus-storage\x2dpools-fast_pool-containers-nginx.mount: Deactivated successfully.
Jun 20 14:49:19 nas incusd[4249]: time="2025-06-20T14:49:19-07:00" level=warning msg="Failed to update instance types: Get \"https://images.linuxcontainers.org/meta/instance-types/.yaml\": lookup images.linuxcontainers.org on 192.168.20.25:53: read udp 192.168.20.10:54339->192.168.20.25:53: i/o timeout"
Jun 20 14:49:19 nas incusd[4249]: time="2025-06-20T14:49:19-07:00" level=error msg="Failed updating instance types" err="Get \"https://images.linuxcontainers.org/meta/instance-types/.yaml\": lookup images.linuxcontainers.org on 192.168.20.25:53: read udp 192.168.20.10:54339->192.168.20.25:53: i/o timeout"
root@nas[/var/log/incus]# ls -ltr
total 39
-rw-r--r-- 1 root incus-admin 0 Jun 20 15:13 dnsmasq.incusbr0.log
drwxr-xr-x 2 root incus-admin 6 Jun 20 15:13 nextcloud
drwxr-xr-x 2 root incus-admin 6 Jun 20 15:13 PiHole
drwxr-xr-x 2 root incus-admin 6 Jun 20 15:13 Unifi
drwxr-xr-x 2 root incus-admin 6 Jun 20 15:13 nginx
-rw------- 1 root incus-admin 1088 Jun 20 15:14 incus.log
Here is the same from the 25.04.1 boot:
25.04.1
root@nas[/var/log]# grep incus syslog
Jun 20 14:59:21 nas systemd[1]: Starting incus-user.socket - Incus - Daemon (user unix socket)...
Jun 20 14:59:21 nas systemd[1]: Starting incus.socket - Incus - Daemon (unix socket)...
Jun 20 14:59:21 nas systemd[1]: Listening on incus-user.socket - Incus - Daemon (user unix socket).
Jun 20 14:59:21 nas systemd[1]: Listening on incus.socket - Incus - Daemon (unix socket).
Jun 20 14:59:21 nas systemd[1]: Starting incus-startup.service - Incus - Startup check...
Jun 20 14:59:21 nas systemd[1]: Finished incus-startup.service - Incus - Startup check.
Jun 20 14:59:24 nas systemd[1]: Starting incus.service - Incus - Main daemon...
Jun 20 14:59:25 nas systemd[1]: Started incus.service - Incus - Main daemon.
Jun 20 14:59:26 nas systemd[1]: var-lib-incus-storage\x2dpools-fast_pool-containers-Unifi.mount: Deactivated successfully.
Jun 20 14:59:55 nas incusd[4595]: time="2025-06-20T14:59:55-07:00" level=warning msg="Failed to update instance types: Get \"https://images.linuxcontainers.org/meta/instance-types/.yaml\": lookup images.linuxcontainers.org on 192.168.20.25:53: read udp 192.168.20.10:51632->192.168.20.25:53: i/o timeout"
Jun 20 14:59:55 nas incusd[4595]: time="2025-06-20T14:59:55-07:00" level=error msg="Failed updating instance types" err="Get \"https://images.linuxcontainers.org/meta/instance-types/.yaml\": lookup images.linuxcontainers.org on 192.168.20.25:53: read udp 192.168.20.10:51632->192.168.20.25:53: i/o timeout"
root@nas[/var/log/incus]# ls -ltr
total 5
-rw------- 1 root root 544 Jun 20 14:59 incus.log
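One difference that stands out between the two boots: in 25.04.0 three container mounts (PiHole, Unifi, nginx) are deactivated after incus.service starts, while in 25.04.1 only Unifi appears. A quick way to surface such differences is to diff the incus-related lines with the timestamps stripped. A minimal sketch on two hypothetical sample files (on the NAS you would grep the real saved syslogs from each boot):

```shell
# Hypothetical, pre-normalized extracts of each boot's incus log lines;
# on the NAS you would build these with something like:
#   grep incus syslog | cut -d' ' -f6- > /tmp/boot-25.04.0.log
printf '%s\n' \
  'Starting incus-startup.service - Incus - Startup check...' \
  'Started incus.service - Incus - Main daemon.' \
  'PiHole.mount: Deactivated successfully.' \
  'Unifi.mount: Deactivated successfully.' \
  > /tmp/boot-25.04.0.log
printf '%s\n' \
  'Starting incus-startup.service - Incus - Startup check...' \
  'Started incus.service - Incus - Main daemon.' \
  'Unifi.mount: Deactivated successfully.' \
  > /tmp/boot-25.04.1.log

# diff exits 1 when the files differ, so don't treat that as a failure.
diff /tmp/boot-25.04.0.log /tmp/boot-25.04.1.log || true
```

For what it's worth, the DNS timeout reaching images.linuxcontainers.org appears identically in both boots, so it is probably unrelated to the missing instances.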