Old TrueNAS Core 13 died; installed SCALE 24.10 and could only import 1 of 2 pools

On May 19, 2025 my TrueNAS became unresponsive, and when I tried to reboot, the boot would fail and retry in an endless boot loop.
I don't have any evidence of a power outage on 5/19/2025.

Originally installed as FreeNAS, which was upgraded to TrueNAS Core 13.0-U6.1 in March 2024.

I downloaded TrueNAS SCALE 24.10.2.1 and was able to install it. I had two pools, Pool1 and Pool2, and both showed up as available to import. The old 2 TB pool (Pool1) imported with no problems; Pool2 gave the following error:

error
FAILED
[EZFS_BADDEV] Failed to import 'Pool2' pool: cannot import 'Pool2' as 'Pool2': one or more devices is currently unavailable

I have the detailed error report if needed.

Pool2 consists of two (2) Seagate IronWolf drives, model ST12000VN0008-2YS101.
Neither drive has any history of S.M.A.R.T. errors, and each has approximately 24,400 hours powered on, which lines up with when I bought them in July 2022.

Additional info from the commands I have tried:

zpool import
  pool: Pool2
    id: 14223652181009904092
 state: ONLINE
status: One or more devices were being resilvered.
action: The pool can be imported using its name or numeric identifier.
config:
        Pool2     ONLINE
          mirror-0  ONLINE
            sdd2    ONLINE
            sdc2    ONLINE

zpool import Pool2
cannot import 'Pool2': pool was previously in use from another system.

Last accessed by JGTNAS.local (hostid=5101ad73) at Mon May 19 01:10:29 2025
The pool can be imported, use 'zpool import -f' to import the pool.

zpool import -f  Pool2
cannot import 'Pool2': pool was previously in use from another system.
Last accessed by JGTNAS.local (hostid=5101ad73) at Mon May 19 01:10:29 2025
The pool can be imported, use 'zpool import -f' to import the pool.
zsh: command not found: cannot
zsh: unknown file attribute: h
zsh: command not found: The
cannot import 'Pool2': one or more devices is currently unavailable

zpool import -F Pool2
cannot import 'Pool2': pool was previously in use from another system.
Last accessed by JGTNAS.local (hostid=5101ad73) at Mon May 19 01:10:29 2025
The pool can be imported, use 'zpool import -f' to import the pool.
root@truenas[/home/truenas_admin]#

I have searched many threads for information and have not found anything helpful.

I would like to get my pool back and don't know what else to try.

Was the pool GELI-encrypted?

No, it was never encrypted.

I have looked at many of the threads about pools that cannot be imported and have tried many of the suggestions. I am curious whether the line "One or more devices were being resilvered" in the zpool import output might be what is making that pool unavailable. If that is the case, is there any way to fix it?

My advice is that you should NOT be trying commands from threads about other, unique issues in the hope that they will fix yours.

It is just as possible that running commands that are right for those circumstances but wrong for yours will make your pool's corruption worse and reduce the chances of ever getting it back online.

Then you may try:
zpool import -fFn -R /mnt Pool2
If the output looks promising, remove the -n option:
zpool import -fF -R /mnt Pool2
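
Another non-destructive variant that is sometimes tried first (just a suggestion, not a guaranteed fix for this situation) is a read-only import:

zpool import -f -o readonly=on -R /mnt Pool2

A read-only import does not write anything to the pool, so if it succeeds the data can at least be copied off before anything riskier is attempted.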

When I ran with the -n option there was no output, so I ran without the -n and got the following:
cannot import 'Pool2': I/O error
Destroy and re-create the pool from
a backup source.

Can you share the device, partition, and label info? I'm not at my computer; it involves the lsblk command. Someone else can post the full syntax and parameters.

Here is what I have:

lsblk -o NAME,LABEL,MAJ:MIN,TRAN,ROTA,ZONED,VENDOR,MODEL,SERIAL,PARTUUID,START,SIZE,PARTTYPENAME
NAME   LABEL     MAJ:MIN TRAN   ROTA ZONED VENDOR MODEL SERIAL PARTUUID                               START   SIZE PARTTYPENAME
sdc              8:32    sata      1 none  ATA    ST120 ZV709L                                                10.9T
├─sdc1           8:33              1 none                      d8b6c724-ffd5-11ec-b16d-002590d59089     128     2G FreeBSD swap
└─sdc2 Pool2     8:34              1 none                      d8dba0c2-ffd5-11ec-b16d-002590d59089 4194432  10.9T FreeBSD ZFS
sdd              8:48    sata      1 none  ATA    ST120 ZRT05E                                                10.9T
├─sdd1           8:49              1 none                      d8a593a0-ffd5-11ec-b16d-002590d59089     128     2G FreeBSD swap
└─sdd2 Pool2     8:50              1 none                      d8c9a8d5-ffd5-11ec-b16d-002590d59089 4194432  10.9T FreeBSD ZFS
       boot-pool 8:67              0 none                      d24a47c3-a035-42f6-8508-6a8e1770a509 1054720 111.3G Solaris /usr & Apple ZFS
sr0              11:0    sata      1 none  ASUS   ASUS  D7D0CL                                                1024M

Looks to be in order.

What about this?

zdb -l /dev/sdc2

zdb -l /dev/sdd2

SMART error and selftest logs:

smartctl -l error /dev/sdc

smartctl -l error /dev/sdd

smartctl -l selftest /dev/sdc

smartctl -l selftest /dev/sdd
zdb -l /dev/sdc2
------------------------------------
LABEL 0
------------------------------------
    version: 5000
    name: 'Pool2'
    state: 0
    txg: 17236783
    pool_guid: 14223652181009904092
    errata: 0
    hostid: 1359064435
    hostname: 'JGTNAS.local'
    top_guid: 8072051247587683063
    guid: 5515015067761482804
    vdev_children: 1
    vdev_tree:
        type: 'mirror'
        id: 0
        guid: 8072051247587683063
        metaslab_array: 132
        metaslab_shift: 34
        ashift: 12
        asize: 11997986160640
        is_log: 0
        create_txg: 4
        children[0]:
            type: 'disk'
            id: 0
            guid: 9582221612937590903
            path: '/dev/gptid/d8c9a8d5-ffd5-11ec-b16d-002590d59089'
            DTL: 12203
            create_txg: 4
            expansion_time: 1747642066
        children[1]:
            type: 'disk'
            id: 1
            guid: 5515015067761482804
            path: '/dev/gptid/d8dba0c2-ffd5-11ec-b16d-002590d59089'
            DTL: 12202
            create_txg: 4
            expansion_time: 1747642160
    features_for_read:
        com.delphix:hole_birth
        com.delphix:embedded_data
    labels = 0 1 2 3
zdb -l /dev/sdd2
------------------------------------
LABEL 0
------------------------------------
    version: 5000
    name: 'Pool2'
    state: 0
    txg: 17236778
    pool_guid: 14223652181009904092
    errata: 0
    hostid: 1359064435
    hostname: 'JGTNAS.local'
    top_guid: 8072051247587683063
    guid: 9582221612937590903
    vdev_children: 1
    vdev_tree:
        type: 'mirror'
        id: 0
        guid: 8072051247587683063
        metaslab_array: 132
        metaslab_shift: 34
        ashift: 12
        asize: 11997986160640
        is_log: 0
        create_txg: 4
        children[0]:
            type: 'disk'
            id: 0
            guid: 9582221612937590903
            path: '/dev/gptid/d8c9a8d5-ffd5-11ec-b16d-002590d59089'
            DTL: 12203
            create_txg: 4
            expansion_time: 1747642066
        children[1]:
            type: 'disk'
            id: 1
            guid: 5515015067761482804
            path: '/dev/gptid/d8dba0c2-ffd5-11ec-b16d-002590d59089'
            DTL: 12202
            create_txg: 4
            expansion_time: 1747642097
    features_for_read:
        com.delphix:hole_birth
        com.delphix:embedded_data
    labels = 0 1 2 3
smartctl -l error /dev/sdc
smartctl 7.4 2023-08-01 r5530 [x86_64-linux-6.6.44-production+truenas] (local build)
Copyright (C) 2002-23, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF READ SMART DATA SECTION ===
SMART Error Log Version: 1
No Errors Logged
smartctl -l error /dev/sdd
smartctl 7.4 2023-08-01 r5530 [x86_64-linux-6.6.44-production+truenas] (local build)
Copyright (C) 2002-23, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF READ SMART DATA SECTION ===
SMART Error Log Version: 1
No Errors Logged
smartctl -l selftest /dev/sdc
smartctl 7.4 2023-08-01 r5530 [x86_64-linux-6.6.44-production+truenas] (local build)
Copyright (C) 2002-23, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF READ SMART DATA SECTION ===
SMART Self-test log structure revision number 1
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Short offline       Completed without error       00%     24425         -
# 2  Short offline       Completed without error       00%     24319         -
# 3  Short offline       Completed without error       00%     24295         -
# 4  Short offline       Completed without error       00%     24271         -
# 5  Short offline       Completed without error       00%     24247         -
# 6  Short offline       Completed without error       00%     24223         -
# 7  Short offline       Completed without error       00%     24199         -
# 8  Short offline       Completed without error       00%     24175         -
# 9  Short offline       Completed without error       00%     24151         -
#10  Short offline       Completed without error       00%     24127         -
#11  Short offline       Completed without error       00%     24103         -
#12  Short offline       Completed without error       00%     24079         -
#13  Short offline       Completed without error       00%     24055         -
#14  Short offline       Completed without error       00%     24031         -
#15  Short offline       Completed without error       00%     24007         -
#16  Short offline       Completed without error       00%     23983         -
#17  Short offline       Completed without error       00%     23959         -
#18  Short offline       Completed without error       00%     23935         -
#19  Short offline       Completed without error       00%     23911         -
#20  Short offline       Completed without error       00%     23887         -
#21  Short offline       Completed without error       00%     23863         -
smartctl -l selftest /dev/sdd
smartctl 7.4 2023-08-01 r5530 [x86_64-linux-6.6.44-production+truenas] (local build)
Copyright (C) 2002-23, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF READ SMART DATA SECTION ===
SMART Self-test log structure revision number 1
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Short offline       Completed without error       00%     24425         -
# 2  Extended offline    Completed without error       00%     24416         -
# 3  Extended offline    Aborted by host               80%     24401         -
# 4  Short offline       Completed without error       00%     24319         -
# 5  Short offline       Completed without error       00%     24295         -
# 6  Short offline       Completed without error       00%     24271         -
# 7  Short offline       Completed without error       00%     24247         -
# 8  Short offline       Completed without error       00%     24223         -
# 9  Short offline       Completed without error       00%     24199         -
#10  Short offline       Completed without error       00%     24175         -
#11  Short offline       Completed without error       00%     24151         -
#12  Short offline       Completed without error       00%     24127         -
#13  Short offline       Completed without error       00%     24103         -
#14  Short offline       Completed without error       00%     24079         -
#15  Short offline       Completed without error       00%     24055         -
#16  Short offline       Completed without error       00%     24031         -
#17  Short offline       Completed without error       00%     24007         -
#18  Short offline       Completed without error       00%     23983         -
#19  Short offline       Completed without error       00%     23959         -
#20  Short offline       Completed without error       00%     23935         -
#21  Short offline       Completed without error       00%     23911         -

ZFS labels found on both partitions in the mirror.

No SMART errors have been logged.

All SMART selftests have passed.

The only other thing I could advise is to run a long selftest on both drives, to truly rule out a drive failure. Otherwise, I don’t know why you’re getting an I/O error on attempting to import.
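
If it helps, the long tests can be started from the shell with plain smartctl (standard usage, nothing specific to this pool):

smartctl -t long /dev/sdc
smartctl -t long /dev/sdd

Each command returns immediately and prints an estimated completion time; the results will show up later in the smartctl -l selftest output.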

Someone else with more knowledge might have other ideas.

Another thing you can try is to import the pool in Core 13.3, to see if it’s importable.

In the meantime, don't do that. If you find something online, first ask in here whether it's safe to run.

I started the long selftest on both drives.
When they finish I will post the results here (it may be quite a while).
I won't run any commands from the internet.
I am not in a hurry to recreate the pool.

If it hits an error in an early LBA, it will end sooner, since the test aborts once it finds a single error.
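
While you wait, you can check progress at any time; smartctl reports the percentage of the test remaining:

smartctl -c /dev/sdc | grep -A 1 'Self-test execution status'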

Selftests finished with no errors

smartctl -l error /dev/sdc
smartctl 7.4 2023-08-01 r5530 [x86_64-linux-6.6.44-production+truenas] (local build)
Copyright (C) 2002-23, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF READ SMART DATA SECTION ===
SMART Error Log Version: 1
No Errors Logged

smartctl -l error /dev/sdd
smartctl 7.4 2023-08-01 r5530 [x86_64-linux-6.6.44-production+truenas] (local build)
Copyright (C) 2002-23, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF READ SMART DATA SECTION ===
SMART Error Log Version: 1
No Errors Logged

smartctl -l selftest /dev/sdc
smartctl 7.4 2023-08-01 r5530 [x86_64-linux-6.6.44-production+truenas] (local build)
Copyright (C) 2002-23, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF READ SMART DATA SECTION ===
SMART Self-test log structure revision number 1
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Extended offline    Completed without error       00%     24485         -
# 2  Short offline       Completed without error       00%     24425         -
# 3  Short offline       Completed without error       00%     24319         -
# 4  Short offline       Completed without error       00%     24295         -
# 5  Short offline       Completed without error       00%     24271         -
# 6  Short offline       Completed without error       00%     24247         -
# 7  Short offline       Completed without error       00%     24223         -
# 8  Short offline       Completed without error       00%     24199         -
# 9  Short offline       Completed without error       00%     24175         -
#10  Short offline       Completed without error       00%     24151         -
#11  Short offline       Completed without error       00%     24127         -
#12  Short offline       Completed without error       00%     24103         -
#13  Short offline       Completed without error       00%     24079         -
#14  Short offline       Completed without error       00%     24055         -
#15  Short offline       Completed without error       00%     24031         -
#16  Short offline       Completed without error       00%     24007         -
#17  Short offline       Completed without error       00%     23983         -
#18  Short offline       Completed without error       00%     23959         -
#19  Short offline       Completed without error       00%     23935         -
#20  Short offline       Completed without error       00%     23911         -
#21  Short offline       Completed without error       00%     23887         -

smartctl -l selftest /dev/sdd
smartctl 7.4 2023-08-01 r5530 [x86_64-linux-6.6.44-production+truenas] (local build)
Copyright (C) 2002-23, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF READ SMART DATA SECTION ===
SMART Self-test log structure revision number 1
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Extended offline    Completed without error       00%     24485         -
# 2  Short offline       Completed without error       00%     24425         -
# 3  Extended offline    Completed without error       00%     24416         -
# 4  Extended offline    Aborted by host               80%     24401         -
# 5  Short offline       Completed without error       00%     24319         -
# 6  Short offline       Completed without error       00%     24295         -
# 7  Short offline       Completed without error       00%     24271         -
# 8  Short offline       Completed without error       00%     24247         -
# 9  Short offline       Completed without error       00%     24223         -
#10  Short offline       Completed without error       00%     24199         -
#11  Short offline       Completed without error       00%     24175         -
#12  Short offline       Completed without error       00%     24151         -
#13  Short offline       Completed without error       00%     24127         -
#14  Short offline       Completed without error       00%     24103         -
#15  Short offline       Completed without error       00%     24079         -
#16  Short offline       Completed without error       00%     24055         -
#17  Short offline       Completed without error       00%     24031         -
#18  Short offline       Completed without error       00%     24007         -
#19  Short offline       Completed without error       00%     23983         -
#20  Short offline       Completed without error       00%     23959         -
#21  Short offline       Completed without error       00%     23935         -

No errors, passed long selftests, partitions and labels are fine.

Can you try importing it in Core, at least?

From there, if it imports, someone might be able to figure out why you’re getting an I/O error on SCALE.

If you get the same I/O error on Core, then it looks like a much lower level issue.
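
One more non-destructive check that might narrow things down (it only reads the kernel log, so it is safe): right after a failed import attempt, look in dmesg for ATA/SCSI or I/O errors that the drives themselves would not record as SMART errors, e.g. from a cable, backplane, or controller problem:

dmesg | grep -iE 'sd[cd]|ata[0-9]+|i/o error'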

I can try to import it into Core 13.3; the only way I can do that is to install Core 13.3 on the same machine (I don't have any spare hardware). Before I do that, below is the full error message I received when I initially tried to import the pool. It has more detailed information, including module and line numbers showing where the error came from. Maybe this can give the experts who understand the code a better idea of what is happening.

[EZFS_BADDEV] Failed to import 'Pool2' pool: cannot import 'Pool2' as 'Pool2': one or more devices is currently unavailable
 Error: concurrent.futures.process._RemoteTraceback: 
"""
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/middlewared/plugins/zfs_/pool_actions.py", line 231, in import_pool
    zfs.import_pool(found, pool_name, properties, missing_log=missing_log, any_host=any_host)
  File "libzfs.pyx", line 1374, in libzfs.ZFS.import_pool
  File "libzfs.pyx", line 1402, in libzfs.ZFS.__import_pool
libzfs.ZFSException: cannot import 'Pool2' as 'Pool2': one or more devices is currently unavailable

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3.11/concurrent/futures/process.py", line 261, in _process_worker
    r = call_item.fn(*call_item.args, **call_item.kwargs)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/worker.py", line 112, in main_worker
    res = MIDDLEWARE._run(*call_args)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/worker.py", line 46, in _run
    return self._call(name, serviceobj, methodobj, args, job=job)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/worker.py", line 34, in _call
    with Client(f'ws+unix://{MIDDLEWARE_RUN_DIR}/middlewared-internal.sock', py_exceptions=True) as c:
  File "/usr/lib/python3/dist-packages/middlewared/worker.py", line 40, in _call
    return methodobj(*params)
           ^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/schema/processor.py", line 183, in nf
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/plugins/zfs_/pool_actions.py", line 211, in import_pool
    with libzfs.ZFS() as zfs:
  File "libzfs.pyx", line 534, in libzfs.ZFS.__exit__
  File "/usr/lib/python3/dist-packages/middlewared/plugins/zfs_/pool_actions.py", line 235, in import_pool
    raise CallError(f'Failed to import {pool_name!r} pool: {e}', e.code)
middlewared.service_exception.CallError: [EZFS_BADDEV] Failed to import 'Pool2' pool: cannot import 'Pool2' as 'Pool2': one or more devices is currently unavailable
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/middlewared/job.py", line 509, in run
    await self.future
  File "/usr/lib/python3/dist-packages/middlewared/job.py", line 554, in __run_body
    rv = await self.method(*args)
         ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/schema/processor.py", line 179, in nf
    return await func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/schema/processor.py", line 49, in nf
    res = await f(*args, **kwargs)
          ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/plugins/pool_/import_pool.py", line 114, in import_pool
    await self.middleware.call('zfs.pool.import_pool', guid, opts, any_host, use_cachefile, new_name)
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1629, in call
    return await self._call(
           ^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1468, in _call
    return await self._call_worker(name, *prepared_call.args)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1474, in _call_worker
    return await self.run_in_proc(main_worker, name, args, job)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1380, in run_in_proc
    return await self.run_in_executor(self.__procpool, method, *args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1364, in run_in_executor
    return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
middlewared.service_exception.CallError: [EZFS_BADDEV] Failed to import 'Pool2' pool: cannot import 'Pool2' as 'Pool2': one or more devices is currently unavailable
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/middlewared/job.py", line 509, in run
    await self.future
  File "/usr/lib/python3/dist-packages/middlewared/job.py", line 554, in __run_body
    rv = await self.method(*args)
         ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/schema/processor.py", line 179, in nf
    return await func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/schema/processor.py", line 49, in nf
    res = await f(*args, **kwargs)
          ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/plugins/pool_/import_pool.py", line 114, in import_pool
    await self.middleware.call('zfs.pool.import_pool', guid, opts, any_host, use_cachefile, new_name)
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1629, in call
    return await self._call(
           ^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1468, in _call
    return await self._call_worker(name, *prepared_call.args)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1474, in _call_worker
    return await self.run_in_proc(main_worker, name, args, job)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1380, in run_in_proc
    return await self.run_in_executor(self.__procpool, method, *args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1364, in run_in_executor
    return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
middlewared.service_exception.CallError: [EZFS_BADDEV] Failed to import 'AriseLD' pool: cannot import 'AriseLD' as 'AriseLD': one or more devices is currently unavailable`

It will be tomorrow before I can try installing Core 13.3.
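
One general ZFS point worth keeping in mind (not specific advice for this particular failure): if the pool does import under Core, cleanly exporting it before rebooting back into SCALE clears the "last accessed by another system" state, so a later import would not need -f:

zpool export Pool2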

That's concerning. :worried: There seems to be a pattern lately of pools that cannot be imported following an upgrade, sidegrade, or new installation.

That's why I posted the error log (which I only got when I tried to import from the web UI). I have not studied the code, but the error log seems to point to the source files (file name and line number) where the errors occurred. It seems like, if there is a pattern, these error logs might point to where in the code to look for the problem. That's just my thought; it has been a great many years since I did any programming. I am hoping that someone who is familiar with the source code can trace it out.
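
For what it's worth, on SCALE the middleware also keeps its own log, which may carry more context around the failed import than the web UI dialog shows; on recent SCALE versions it should be at /var/log/middlewared.log (path assumed from memory, please verify on your system):

tail -n 100 /var/log/middlewared.log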

OK, I installed Core 13.3-U1.2 and it shows all the disks, although it does not show any importable pools the way SCALE 24.10.2.1 did. The web UI does show the SMART test history. Under the Import Disk option you can only select a single partition of the available disks. It does not show any available pools to import.
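
If I recall correctly, the Import Disk screen in Core is only for single non-ZFS disks (UFS, NTFS, etc.); ZFS pools are imported through the pool wizard instead. Either way, it may be worth repeating the earlier command-line check from the Core shell, since the web UI and the CLI do not always show the same thing:

zpool import

If the pool shows up there the way it did on SCALE, that by itself is useful information.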