[RESOLVED] Boot issues caused by Incus creating dozens of virtual interfaces

System: TrueNAS SCALE 25.04
Issue Started: After enabling a VM “instance” using Incus on 25.04 RC

Summary:
I attempted to create a VM instance via the new virtualization UI. Unfortunately I don't remember exactly what happened, but something clearly went wrong and the system hung, forcing me to reboot. From that point on, every reboot caused severe networking issues:

  • SSH access and the web UI would fail after reboot
  • The server would appear on the network briefly, then become unreachable
  • Physical access with monitor/keyboard showed high packet loss and massive interface bloat; ip addr listed 20–40+ interfaces like br-xxxx, veth-xxxx, incusbr0, docker0, etc.
  • Even after stopping Incus, these interfaces return on each reboot

This happened after attempting to enable a VM, and it persisted even after disabling/deleting the VM.
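
For anyone hitting the same symptoms, a quick way to see the scale of the sprawl from the local console (plain iproute2 commands, nothing TrueNAS-specific) is something like:

ip -br link (one line per interface; the healthy baseline here is just lo, enp1s0, and docker0)
ip -br link | grep -cE 'veth|^br-|incusbr0' (rough count of the bridge/veth clutter)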

Troubleshooting steps:

  1. SSH / Web UI Unreachable
  • Could not ping the known IP reliably
  • Found the hostname resolving to a different IP
  • Eventually connected via physical monitor & keyboard
  2. Inspected Interfaces
  • ip addr showed dozens of veth, br-, and docker0 interfaces
  • Many were created by Incus even with no VM running (incus list was empty)
  3. Disabled Incus

systemctl stop incus incus.socket
systemctl disable incus incus.socket
systemctl mask incus incus.socket

  4. Tried to Delete incusbr0
  • Initially I was blocked because it was “in use”
  • Used ip link show master incusbr0 to identify the attached veth-* interfaces
  • Deleted each of them with: ip link delete <veth-name>
  • Removed the bridge with: ip link delete incusbr0
  (A scripted version of this cleanup is sketched just after this list.)
  5. Also Stopped Docker (temporarily)
  • Docker was running apps I need, but for isolation/testing I stopped and disabled it
  • After this the system returned to normal: I could SSH in and use the Web UI again, and only the enp1s0, lo, and docker0 interfaces were present.
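
For reference, here is the step 4 cleanup as a single loop instead of deleting interfaces one by one. This is just a sketch of what I did by hand; the awk/cut parsing is my own shorthand, so sanity-check the output of ip link show master incusbr0 before deleting anything:

# delete every veth still attached to the Incus bridge, then remove the bridge itself
for veth in $(ip -o link show master incusbr0 | awk -F': ' '{print $2}' | cut -d@ -f1); do
    ip link delete "$veth"
done
ip link delete incusbr0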

Unfortunately the problem returns after reboot.

  • Huge list of interfaces recreated
  • System unresponsive via network until logging in locally
  • Had to stop both Incus and Docker manually again to regain control

I need to prevent Incus and/or Docker from recreating these interfaces at boot. Disabling and masking the services does not appear to persist cleanly across upgrades or reboots.

  • Is there a way to fully remove Incus if I’m not using VM instances?
  • Can Docker be limited so it doesn’t create veth-* interfaces unless needed?
  • Why is Incus re-injecting networking configs even when nothing is running?

Would love help cleaning this up for good. I want to keep Docker for apps but avoid this massive interface mess on every boot, and I would rather not do a fresh install unless I can keep everything in my datasets.

In case you're curious, here is the Validation Error I get when going into the instances tab:

Validation Error:
Cannot connect to unix socket /var/lib/incus/unix.socket ssl:default [Connection refused]

Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/aiohttp/connector.py", line 1545, in _create_connection
_, proto = await self._loop.create_unix_connection(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/asyncio/unix_events.py", line 259, in create_unix_connection
await self.sock_connect(sock, path)
File "/usr/lib/python3.11/asyncio/selector_events.py", line 638, in sock_connect
return await fut
^^^^^^^^^
File "/usr/lib/python3.11/asyncio/selector_events.py", line 646, in _sock_connect
sock.connect(address)
ConnectionRefusedError: [Errno 111] Connection refused

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/middlewared/api/base/server/ws_handler/rpc.py", line 323, in process_method_call
result = await method.call(app, params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/middlewared/api/base/server/method.py", line 40, in call
result = await self.middleware.call_with_audit(self.name, self.serviceobj, methodobj, params, app)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/middlewared/main.py", line 883, in call_with_audit
result = await self._call(method, serviceobj, methodobj, params, app=app,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/middlewared/main.py", line 692, in _call
return await methodobj(*prepared_call.args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/middlewared/api/base/decorator.py", line 88, in wrapped
result = await func(*args)
^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/middlewared/plugins/virt/instance.py", line 54, in query
results = (await incus_call('1.0/instances?filter=&recursion=2', 'get'))['metadata']
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/middlewared/plugins/virt/utils.py", line 105, in incus_call
r = await methodobj(f'{HTTP_URI}/{path}', **(request_kwargs or {}))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/aiohttp/client.py", line 663, in _request
conn = await self._connector.connect(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/aiohttp/connector.py", line 563, in connect
proto = await self._create_connection(req, traces, timeout)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/aiohttp/connector.py", line 1551, in _create_connection
raise UnixClientConnectorError(self.path, req.connection_key, exc) from exc
aiohttp.client_exceptions.UnixClientConnectorError: Cannot connect to unix socket /var/lib/incus/unix.socket ssl:default [Connection refused]

Just wanted to follow up and mark this as resolved in case it helps anyone else running into this after experimenting with Incus in TrueNAS SCALE 25.04.

Root Cause Summary
After enabling a VM “Instance” via the new Virtualization UI, the system entered a broken state where:

  • Dozens of virtual interfaces (br-, veth-, incusbr0, docker0) would be created on every boot
  • SSH and Web UI would fail shortly after boot
  • The system appeared on the network briefly, then vanished
  • Logging in locally showed high interface bloat and network instability

Even with no VMs running (incus list was empty), these interfaces would return after every reboot.
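
If you want to confirm the same state on your own system (assuming the incus CLI still responds, which it did not always do for me while the socket was broken), the combination that made it obvious was roughly:

incus list (no instances at all)
incus network list (incusbr0 still defined as a managed network)
ip -br link (the bridge and a pile of veth-* interfaces still present on the host)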

Fix

  1. Stopped and Disabled Incus Completely

systemctl stop incus incus.socket
systemctl disable incus incus.socket
systemctl mask incus incus.socket
(Important: masking prevented it from starting, but it still tried to inject network interfaces until I cleaned up more thoroughly.)

  2. Removed All Residual Interfaces

ip link show master incusbr0 (list the attached veth-* interfaces)
ip link delete <veth-name> (repeat for each attached veth-* interface)
ip link delete incusbr0 (remove incusbr0 itself)

  3. Temporarily Disabled Docker
    To isolate variables, I also stopped Docker, since it was contributing to interface sprawl:

systemctl stop docker
systemctl disable docker

  4. Persistence Fix
  • Cleaned up systemd unit overrides if any were present
  • Ensured both Incus and Docker were fully disabled
  • Docker can later be selectively enabled once you’re confident it’s not clashing with Incus bridges
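
A quick way to verify that state (plain systemd, nothing SCALE-specific) is something like:

systemctl is-enabled incus incus.socket docker (should report masked / masked / disabled)
systemctl cat incus (shows the unit plus any drop-in overrides that could re-enable it)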

Final State
After reboot:

  • Only expected interfaces (enp1s0, lo, docker0) remain
  • SSH and Web UI are stable
  • No reappearance of Incus bridges or veth-* interfaces

Just as an FYI…

  • Incus retains bridge/network configs even when VMs are deleted
  • Disabling the service isn’t always enough; manual interface cleanup helps
  • Always back up before enabling experimental VM features on SCALE (I’m an idiot)
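
One last note on the Docker question I asked above: stock Docker can be told not to create its default docker0 bridge by setting "bridge": "none" in /etc/docker/daemon.json and restarting the daemon. Containers that use bridge networking still get per-container veth pairs, and I have not tested whether SCALE's middleware preserves or regenerates that file, so treat this purely as an untested pointer:

{
  "bridge": "none"
}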