Fangtooth: imported zvol and newly created VM results in ValidationError

Hi guys,

I upgraded my TrueNAS from 24 to 25.
Then I cloned my zvol into a custom storage volume.
After that I created a new VM, but I cannot see the VM in the new “Instances” area.
If I go to the Instances page I get this error:

Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/middlewared/api/base/server/ws_handler/rpc.py", line 323, in process_method_call
    result = await method.call(app, params)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/api/base/server/method.py", line 49, in call
    return await self._dump_result(app, methodobj, result)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/api/base/server/method.py", line 52, in _dump_result
    return self.middleware.dump_result(self.serviceobj, methodobj, app, result)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 791, in dump_result
    return serialize_result(new_style_returns_model, result, expose_secrets)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/api/base/handler/result.py", line 13, in serialize_result
    return model(result=result).model_dump(
           ^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/pydantic/main.py", line 212, in __init__
    validated_self = self.__pydantic_validator__.validate_python(data, self_instance=self)
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
pydantic_core._pydantic_core.ValidationError: 3 validation errors for VirtInstanceQueryResult
result.list[VirtInstanceQueryResultItem].0.storage_pool
  Input should be a valid string [type=string_type, input_value=None, input_type=NoneType]
    For further information visit https://errors.pydantic.dev/2.9/v/string_type
result.VirtInstanceQueryResultItem
  Input should be a valid dictionary or instance of VirtInstanceQueryResultItem [type=model_type, input_value=[{'id': 'Homeassistant', ..., 'memory': 2147483648}], input_type=list]
    For further information visit https://errors.pydantic.dev/2.9/v/model_type
result.int
  Input should be a valid integer [type=int_type, input_value=[{'id': 'Homeassistant', ..., 'memory': 2147483648}], input_type=list]
    For further information visit https://errors.pydantic.dev/2.9/v/int_type

I have the same issue here. Any pointers on how to troubleshoot this?

I’m not sure what’s happening with the TrueNAS UI, but here is more info. Using the incus command-line utility, I can see that the two VMs I migrated after the upgrade are actually running, and I can connect to them using VNC. I had to change the network configuration inside the VMs, as the NICs are named differently than before, but after that I can at least get the VMs to run. It would be nice to get the web UI working again, though.
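
For reference, the incus commands I mean are roughly these (the instance name is just a placeholder):

incus list                          # shows the migrated instances and whether they are RUNNING
incus info <instance-name>          # PID, CPU/memory usage, network addresses
incus config show <instance-name>   # full config, including the user.ix_vnc_config port/password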

To clarify, this issue occurs immediately after creating a VM using a migrated zvol; no changes are made outside of the web UI. I found a similar issue reported here: [NAS-135338] LXC instance not booting or showing up - iXsystems TrueNAS Jira. I wanted to clarify the problem here, given the comments from iXsystems in that bug report.

I cannot pinpoint which configuration parameter breaks the UI, so for debugging purposes here is the VM configuration from incus config show:

architecture: x86_64
config:
  boot.autostart: "false"
  limits.cpu: "8"
  limits.memory: 32768MiB
  raw.idmap: |-
    uid XXX XXX
    gid XXX XXX
  raw.qemu: -object secret,id=vnc0,file=/var/run/middleware/incus/passwords/XXXXXXXXXXX
    -vnc :0,password-secret=vnc0
  security.secureboot: "false"
  user.autostart: "true"
  user.ix_vnc_config: '{"vnc_enabled": true, "vnc_port": 5900, "vnc_password": "XXXXXXXXXXX"}'
  volatile.cloud-init.instance-id: XXXXXXXXXXX-XXXXXXXXXXX-XXXXXXXXXXX-XXXXXXXXXXX
  volatile.eth0.host_name: XXXXXXXXX
  volatile.eth0.hwaddr: XX:XX:XX:XX:XX:XX
  volatile.last_state.power: RUNNING
  volatile.uuid: XXXXXXXXXXX-XXXXXXXXXXX-XXXXXXXXXXX-XXXXXXXXXXX
  volatile.uuid.generation: XXXXXXXXXXX-XXXXXXXXXXX-XXXXXXXXXXX-XXXXXXXXXXX
  volatile.vsock_id: "XXXXXXXXXXX"
devices:
  XXXXXXXXXXX-XXXXX:
    boot.priority: "1"
    io.bus: virtio-blk
    pool: default
    source: XXXXXXXXXXX-XXXXX
    type: disk
  eth0:
    name: eth0
    nictype: bridged
    parent: br1
    type: nic
  root:
    io.bus: virtio-blk
    path: /
    pool: default
    size: "10737418240"
    type: disk
ephemeral: false
profiles:
- default
stateful: false
description: ""
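
Since the ValidationError above complains that storage_pool is None, the disk devices and the pool they reference seem like the place to look. If anyone wants to compare, these commands show that side of things (the instance name is a placeholder, and "default" is simply the pool name used in my config above):

incus config device show <instance-name>   # lists the disk/nic devices with their pool and source keys
incus storage volume list default          # lists the custom volumes (the imported zvols) in that pool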

I was lucky enough to be able to export my Home Assistant backup out of the container, create a new Docker app, and import the backup there.
Maybe this helps you :slight_smile:

I’ve got the same problem.

I have imported the zvols and created my first instance, but it is not running properly. In VNC all I see is “failed to load…”.

When I go back to the instance interface I get the ValidationError.

How can I manage the instances on the command line to delete any/all and start again? I want to retain the zvols I imported.

Thanks

So I found some Incus instructions online, and after a few attempts I deleted the instance and recreated it with different configuration options, and now my old VM is booting.
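
In case it helps anyone else, the Incus side of that looks roughly like this (instance and volume names are placeholders, and the CPU/memory values are just examples; as far as I can tell, deleting the instance does not remove the custom storage volume the imported zvol lives in):

incus stop <old-instance> --force
incus delete <old-instance>                 # the custom volume stays in the pool
incus init --empty --vm <new-instance> -c limits.cpu=4 -c limits.memory=8GiB
incus config device add <new-instance> <volume-name> disk pool=default source=<volume-name> boot.priority=1
incus start <new-instance>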

Fingers crossed I can get this working

I then hit the issue of having no virtio devices, so I didn’t have a network adapter.

I had to edit the config for the instance and add a -drive option to attach the virtio-win.iso to the instance in order to install the drivers.
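
For anyone who needs the same, the edit was along these lines (the ISO path and the if= bus are just examples; on TrueNAS the raw.qemu key already carries the VNC options, so append to the existing value rather than replacing it):

incus config edit <instance-name>

and then extend the existing raw.qemu entry, e.g.:

  raw.qemu: -object secret,id=vnc0,file=/var/run/middleware/incus/passwords/XXXXXXXXXXX
    -vnc :0,password-secret=vnc0 -drive file=/mnt/tank/isos/virtio-win.iso,media=cdrom,if=ide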

Now my VM is (almost) back and working.

Just an SMB file share issue to troubleshoot now!