Cannot Import Pool

I cannot import the pool:
root@truenas-1[/var/empty]# zpool import
pool: AMainSSD
id: 6308779876765907170
state: ONLINE
status: One or more devices were being resilvered.
action: The pool can be imported using its name or numeric identifier.
config:

AMainSSD                                    ONLINE
  raidz1-0                                  ONLINE
    replacing-0                             ONLINE
      96419374-33f9-4feb-a46f-2ff530110062  ONLINE
      84dc1d4f-7e6d-42bc-ab3a-6919a81750ca  ONLINE
    replacing-1                             ONLINE
      ab09bfad-d085-4263-b54d-956890eab2d6  ONLINE
      788667a2-dd94-4ce7-8697-572a610a9579  ONLINE
    replacing-2                             ONLINE
      7925e138-f516-48e4-a634-065f300475bd  ONLINE
      68dc5a9d-962a-41ac-aec7-f044463710bf  ONLINE
    replacing-3                             ONLINE
      908de61f-59eb-4158-9c00-7aa0b3f971ec  ONLINE
      cc228bce-4f6d-49fa-a59f-e2e0c5da524c  ONLINE

root@truenas-1[/var/empty]# zpool import -f AMainSSD
cannot import 'AMainSSD': insufficient replicas
Destroy and re-create the pool from
a backup source.

When I try to import through the GUI:
Error: concurrent.futures.process._RemoteTraceback:
"""
Traceback (most recent call last):
  File "/usr/lib/python3.11/concurrent/futures/process.py", line 261, in _process_worker
    r = call_item.fn(*call_item.args, **call_item.kwargs)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/worker.py", line 112, in main_worker
    res = MIDDLEWARE._run(*call_args)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/worker.py", line 46, in _run
    return self._call(name, serviceobj, methodobj, args, job=job)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/worker.py", line 34, in _call
    with Client(f'ws+unix://{MIDDLEWARE_RUN_DIR}/middlewared-internal.sock', py_exceptions=True) as c:
  File "/usr/lib/python3/dist-packages/middlewared/worker.py", line 40, in _call
    return methodobj(*params)
           ^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/schema/processor.py", line 183, in nf
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/plugins/zfs_/pool_actions.py", line 211, in import_pool
    with libzfs.ZFS() as zfs:
  File "libzfs.pyx", line 534, in libzfs.ZFS.__exit__
  File "/usr/lib/python3/dist-packages/middlewared/plugins/zfs_/pool_actions.py", line 231, in import_pool
    zfs.import_pool(found, pool_name, properties, missing_log=missing_log, any_host=any_host)
  File "libzfs.pyx", line 1374, in libzfs.ZFS.import_pool
  File "libzfs.pyx", line 1402, in libzfs.ZFS.__import_pool
  File "libzfs.pyx", line 663, in libzfs.ZFS.get_error
  File "/usr/lib/python3.11/enum.py", line 714, in __call__
    return cls.__new__(cls, value)
           ^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.11/enum.py", line 1137, in __new__
    raise ve_exc
ValueError: 2095 is not a valid Error
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/middlewared/job.py", line 509, in run
    await self.future
  File "/usr/lib/python3/dist-packages/middlewared/job.py", line 554, in _run_body
    rv = await self.method(*args)
         ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/schema/processor.py", line 179, in nf
    return await func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/schema/processor.py", line 49, in nf
    res = await f(*args, **kwargs)
          ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/plugins/pool_/import_pool.py", line 114, in import_pool
    await self.middleware.call('zfs.pool.import_pool', guid, opts, any_host, use_cachefile, new_name)
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1629, in call
    return await self._call(
           ^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1468, in _call
    return await self._call_worker(name, *prepared_call.args)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1474, in _call_worker
    return await self.run_in_proc(main_worker, name, args, job)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1380, in run_in_proc
    return await self.run_in_executor(self.__procpool, method, *args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1364, in run_in_executor
    return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ValueError: 2095 is not a valid Error

First of all, Welcome to the TrueNAS Forums.

Second, read the Joe's Rules link in my signature so you can give us the information we need to help you. I'm certain that when you wrote the post, you assumed we would all know what version of TrueNAS you have, what drives you have, and what transpired between the last time your pool was operational and now.

I'm not a pool expert, but it looks like you are replacing a lot of drives at once and thus have no redundancy left. That is my uneducated guess. Someone will chime in and provide a much better answer, but first read the simple rules (they are there for everyone's benefit) and post the additional information.

Again, Welcome!

Gotcha, sorry. I was able to fix this issue after fighting with it for three hours. I had a RAIDZ1 pool with four 2 TB SSDs and was replacing the drives with 4 TB drives. I rebooted the system with about 45 minutes left on the resilver, not realizing it would cause issues, and then compounded the problem by exporting the pool. I was able to fix it by removing the 4 TB drives from the system and importing the pool using only the 2 TB drives.
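For anyone who hits something similar, this is roughly the shape of that recovery, sketched from memory; the device directory is illustrative, not the exact commands I ran. With the new 4 TB disks physically removed, scan only the devices that hold the original pool members and import by name:

zpool import -d /dev/disk/by-partuuid            # list pools visible from just these devices
zpool import -d /dev/disk/by-partuuid AMainSSD   # import using the surviving original members
zpool status AMainSSD                            # confirm the pool is back and let any resilver finish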

My system is the following:
TrueNAS SCALE ElectricEel 24.10, virtualized in Proxmox
I am passing through the following:
2x Intel(R) Xeon(R) Silver 4210R CPU @ 2.40GHz, 15 cores each (30 total)
512 GB DDR4 2400 MT/s
LSI 9300-8e card connected to 3 disk shelves with
31x 22 TB hard drives
Data VDEVs: 3x RAIDZ2 | 10 wide
Spare VDEVs: 1x | 1 drive
PERC H740P
4x 4 TB SSD
4x 2 TB SSD

Yikes! Sounds like a mess. In the future, complete one thing at a time: replace one drive, wait until it has resilvered, then replace the next. You will save yourself a lot of grief, and a 4 TB SSD will resilver quickly. Something like the sketch below, one disk at a time (device names are placeholders, not from your system):
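zpool replace AMainSSD <old-device-or-partuuid> <new-device>   # start resilvering onto one new disk
zpool status AMainSSD                                          # wait here until it reports the resilver completed
# only then repeat for the next disk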

Glad you are working again.

I rebooted the system with about 45 minutes left on the resilver, not realizing it would cause issues.

You, sir, seem to like living dangerously. :rofl: