Dataset error 'path' after migrating from CORE to SCALE, Datasets not showing in GUI

I’ve finally taken the step of migrating my beloved TrueNAS CORE 13.0-U6.4 to SCALE ElectricEel-24.10.1. Almost everything is working as intended and as it did before: pools are showing up, my SMB share is working, even users and permissions seem to be as they were. But when I try to access the Datasets tab in the GUI, I get this error: Failed to load datasets ‘path’

I verified with zfs list that all of the datasets are actually there and mounted; they’re just not showing in the GUI.
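For anyone wanting to do the same check, something like this wraps zfs list and flags anything that isn’t mounted. It’s just a sketch and only looks at what ZFS itself reports, not what the GUI shows:

# Rough sketch: list every dataset with its mountpoint and mounted state,
# and flag anything ZFS does not report as mounted.
import subprocess

out = subprocess.run(
    ["zfs", "list", "-H", "-o", "name,mountpoint,mounted"],
    capture_output=True, text=True, check=True,
).stdout

for line in out.splitlines():
    name, mountpoint, mounted = line.split("\t")
    if mounted == "no":  # zvols report "-" here, so only flag an explicit "no"
        print(f"NOT MOUNTED: {name} (mountpoint={mountpoint})")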

So far I’ve tried the following:

  • Exporting and re-importing the pool using the GUI
  • Upgrading the pool using the button in the GUI
  • Changing the system dataset from my data pool ‘Frog’ (where it lived on my old system) to the boot-pool and rebooting
  • Comparing the mount points for the system dataset on my old CORE system with those on the SCALE system: on CORE they actually say /mnt/Frog/.system, whereas on SCALE they are mounted simply as legacy (see the sketch after this list)
  • Manually changing those mount points, which failed, first because I hadn’t stopped the necessary services and second because any change was undone after a reboot
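For anyone who wants to reproduce that mount-point comparison, something along these lines works. It’s only a sketch: ‘Frog/.system’ is my dataset, so substitute your own pool name, and run it as root so midclt works:

# Sketch: where does the middleware think the system dataset lives, and what
# mountpoint does ZFS actually report for it and its children? Run as root.
import json
import subprocess

def run(*cmd):
    # Helper: run a command and return its stdout (raises if the command fails).
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

# What the middleware believes, dumped verbatim so this doesn't depend on
# any particular field names in the systemdataset.config output.
print(json.dumps(json.loads(run("midclt", "call", "systemdataset.config")), indent=2))

# What ZFS reports; in my case these all came back as "legacy" on SCALE.
print(run("zfs", "get", "-H", "-r", "-o", "name,value", "mountpoint", "Frog/.system"))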

I’m pretty sure it has to do with the system dataset, because /var/log/middlewared.log shows a few warnings about an unexpected mount point, plus an exception that matches the error in the GUI:

[2024/12/28 12:35:12] (WARNING) SystemDatasetService.sysdataset_path():116 - Unexpected dataset mounted at /var/db/system, 'boot-pool/.system' present, but 'Frog/.system' expected. fsid: 7585016133716344
[2024/12/28 12:35:12] (WARNING) SystemDatasetService.sysdataset_path():116 - Unexpected dataset mounted at /var/db/system, 'boot-pool/.system' present, but 'Frog/.system' expected. fsid: 7585016133716344
[2024/12/28 12:35:16] (INFO) SystemDatasetService.__post_mount_actions():599 - Successfully ran post mount action 'nfs.setup_directories' endpoint for 'Frog/.system/nfs' dataset
[2024/12/28 12:35:16] (INFO) SystemDatasetService.__post_mount_actions():599 - Successfully ran post mount action 'reporting.post_dataset_mount_action' endpoint for 'Frog/.system/netdata-daeb7be0eae547028f28998beacf9023' dataset
[2024/12/28 12:37:17] (WARNING) application.call_method():247 - Exception while calling pool.dataset.details(*[])
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 211, in call_method
    result = await self.middleware.call_with_audit(message['method'], serviceobj, methodobj, params, self)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1529, in call_with_audit
    result = await self._call(method, serviceobj, methodobj, params, app=app,
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1471, in _call
    return await self.run_in_executor(prepared_call.executor, methodobj, *prepared_call.args)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1364, in run_in_executor
    return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.11/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/schema/processor.py", line 183, in nf
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/schema/processor.py", line 55, in nf
    res = f(*args, **kwargs)
          ^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/plugins/pool_/dataset_details.py", line 207, in details
    info = self.build_details(mnt_info)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/plugins/pool_/dataset_details.py", line 321, in build_details
    vm['zvol'] = zvol_path_to_name(vm['attributes']['path'])
                                   ~~~~~~~~~~~~~~~~^^^^^^^^
KeyError: 'path'
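Since the traceback dies inside pool.dataset.details, the same failure can presumably be reproduced straight from a shell, without the browser. A rough sketch (run as root):

# Sketch: call the same middleware method the Datasets page uses, to confirm
# the failure is in the middleware itself and not in the web UI.
import json
import subprocess

proc = subprocess.run(
    ["midclt", "call", "pool.dataset.details"],
    capture_output=True, text=True,
)
if proc.returncode != 0:
    # On failure midclt prints the middleware exception, e.g. the KeyError: 'path'.
    print("pool.dataset.details failed:")
    print(proc.stderr.strip())
else:
    print(f"pool.dataset.details returned {len(json.loads(proc.stdout))} entries")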

Some advice on this would be greatly appreciated, as I’ve sadly reached the point where my expertise doesn’t seem to get me any closer to a solution.

Just a quick update: I’ve now exported my data pool again, completely reset the config, and re-imported my data pool, and everything, including my datasets, is working nicely. So I guess it must have just been something to do with my config file…

I’m still leaving this here in case anyone has any ideas or runs into similar problems, since for some people ditching their configuration and reconfiguring everything from scratch might not be as easy an option.

I know this is a very old thread, but I had this exact same issue and couldn’t find any better solution than ‘start over’, as you did here.

However, I was able to figure out how to solve it without having to reset the config.

The issue boiled down to a bad virtual machine carryover. My conversion from CORE to SCALE was a while ago, so I don’t remember exactly what I did, but basically there was a DISK device in the vm_devices table that had an empty attributes column. The middleware code references a path key in that attributes JSON blob, as you can see in the error you posted, so it crashes and you can’t see anything in the UI.
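In plain terms, it’s just an unguarded dictionary lookup hitting an empty JSON blob. A stripped-down illustration of the failure mode (the function here is a stand-in, not the actual middleware code):

# Stripped-down illustration; not the real middleware code. dataset_details.py
# does roughly vm['attributes']['path'] for each DISK device, so one device row
# whose attributes JSON is empty blows up the whole dataset listing.

def zvol_path_to_name_stub(path: str) -> str:
    # Stand-in for the real zvol_path_to_name helper.
    return path.removeprefix("/dev/zvol/")

healthy_device = {"attributes": {"path": "/dev/zvol/Frog/vm-disk0"}}  # made-up example
orphaned_device = {"attributes": {}}  # what the bad CORE carryover row looked like

print(zvol_path_to_name_stub(healthy_device["attributes"]["path"]))  # Frog/vm-disk0

try:
    zvol_path_to_name_stub(orphaned_device["attributes"]["path"])
except KeyError as err:
    print(f"KeyError: {err}")  # KeyError: 'path' -- the error shown in the GUI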

I verified this by looking at VM devices:

sudo midclt call vm.device.query

It was showing me that VM #2 had a DISK device with attributes={}. I then checked which VM that actually was:

sudo midclt call vm.query '[["id", "=", 2]]'

This showed me some metadata about the VM (e.g. its name), and I saw that VM #2 was a throwaway VM I didn’t care about, so I just tried to delete it:

sudo midclt call vm.delete 2

That command threw an error, though, because it couldn’t determine the status of the VM (of course). I was close to just manually deleting the offending rows from the SQLite config DB (from the vm_devices and vm_vm tables), but thought to check the web UI first. Sure enough, the VM was there, and I was able to delete it successfully by checking the Force Delete checkbox.
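If anyone wants to check their own box for the same kind of orphaned device before touching anything, something like this should flag any VM device whose attributes blob came through the migration empty. Just a sketch, read-only, run as root:

# Sketch: look for VM devices whose attributes blob is empty, which is what the
# orphaned CORE carryover looked like on my system.
import json
import subprocess

devices = json.loads(subprocess.run(
    ["midclt", "call", "vm.device.query"],
    capture_output=True, text=True, check=True,
).stdout)

for dev in devices:
    if not dev.get("attributes"):
        # Print the whole entry so you can see which VM it belongs to.
        print(f"Suspect device id={dev.get('id')}: {dev}")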

Hope this helps anyone coming across this issue in the future