Unable to set Instances Storage Pool

I seem to have gotten myself into a weird state after trying to delete a dataset that was attached to a new LXC container.

I was following this guide to set up a Proxmox Backup Server container and needed to make some changes to the dataset. I tried to delete it and start over, but when I did and went back to the Instances page, my LXC container was no longer there and the Instances configuration for the pool setting was broken. When I try to re-add the pool, it complains about the missing dataset, and if I try a different pool it still won't work.

Is there any way to delete the existing LXC container’s config and start over with it?

So no one knows where the Incus configs are stored? The container no longer shows up in the GUI, and it won't let me set up the Instances config because one already exists that can't find its resources.

I’m in the same boat. I don’t have any fear of losing anything; I just need to get the new pool set. I had Incus pointed to a pool called ‘twos’, and that pool is now gone. I just need to point Incus to a new pool called ‘ssd’, but when I try to set it I get this error:

```
Error: Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/middlewared/job.py", line 515, in run
    await self.future
  File "/usr/lib/python3/dist-packages/middlewared/job.py", line 560, in __run_body
    rv = await self.method(*args)
         ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/api/base/decorator.py", line 88, in wrapped
    result = await func(*args)
             ^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/plugins/virt/global.py", line 158, in do_update
    verrors.check()
  File "/usr/lib/python3/dist-packages/middlewared/service_exception.py", line 72, in check
    raise self
middlewared.service_exception.ValidationErrors: [EINVAL] virt_global_update.storage_pools.0: twos: pool is not available for incus storage
```

I did the same thing. I just ended up re-creating the pool with the original name, and that is working for now. But yeah, it seems to be a bug: the original pool name is being stored somewhere and can't be reset.

I’m experiencing the exact same thing. This is absolutely horrible. I deleted the dataset that was associated with Instances, and at that point I could not assign a new pool to Instances. The software kept looking for the one I had deleted, the ghost dataset. I then recreated it and disassociated it from Instances. I thought that would do it; however, even after getting back to the “Pool is not selected” state, when I go to associate a new pool the damn software is still looking for the ghost dataset. The developers need to chime in on this and fix it. I recently migrated from Core to SCALE 25.04, and the experience has been a POS.

@fonze98 and @dison4linux, I’ve found the solution at Instance to a new pool name - #2 by awalkerix. SSH into your TrueNAS instance, `su` to root, and then execute the following:

midclt call virt.global.update '{"pool": null, "storage_pools": []}' -j
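For anyone finding this later, here is a sketch of the full recovery sequence. The `virt.global.update` call is the one from the fix above; `virt.global.config` as the read-side method is my assumption, inferred from the `virt/global.py` plugin shown in the traceback, so verify it against your SCALE release before relying on it:

```shell
# Become root first; the middleware update call needs admin rights
su root

# Clear the stale pool references so Instances stops looking for the deleted dataset
midclt call virt.global.update '{"pool": null, "storage_pools": []}' -j

# Optionally inspect the stored config to confirm "pool" is now null
# (assumes the read method is virt.global.config, matching the plugin in the traceback)
midclt call virt.global.config
```

After this, the new pool can be selected normally from the Instances configuration screen in the GUI.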