Pool Won't Import

It doesn’t look like I successfully replaced an NVMe drive like I thought I did. TrueNAS recognized both old NVMe drives in the pool. I’m going to try disconnecting the ZR706A74 spinning drive that keeps reporting the uncorrectable error, in the hope that it is the cause of the I/O error.

Your mirror vdev is integral to the pool. Without it, you have no pool.

If only one device is missing from a two-way mirror vdev, your pool is still functional. If two devices are missing from the two-way mirror vdev, your pool is toast.

Even though your last remaining NVMe still exists, its TXG is so far behind that it cannot be used during the import. In other words, it’s no different than if your two-way mirror were completely gone.

The 3 uncorrectable errors you’re seeing from the SMART data of one of your spinning HDDs might be a red herring.

EDIT: List out all the NVMe drives that were involved in this pool, and the timeline of when each was added and removed.

The NVMe that reports a TXG of 23720432 is too far behind. This is the one that has a partition with a PARTUUID of 840bed05-be73-4e33-95bf-7d0374c1a70e. (Consider it useless for importing your pool.)

Where is the other NVMe that used to be in the mirror vdev? It’s the one that had a partition with a PARTUUID of a2b880e0-d3a5-4d15-99c0-bec7e5436d12.
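For reference, the TXG and GUIDs above can be read straight off the device label with `zdb`; a minimal sketch, assuming the partition appears under `/dev/disk/by-partuuid/`:

```shell
# Dump the ZFS label of the lagging NVMe's partition and pull out the
# fields that matter for import viability (txg, guids).
# The by-partuuid path is an assumption -- adjust to however the
# partition actually appears on your system.
zdb -l /dev/disk/by-partuuid/840bed05-be73-4e33-95bf-7d0374c1a70e \
    | grep -E 'txg|guid'
```

If the `txg` it prints is far behind the rest of the pool, that label can’t be used for the import.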


Both are in the machine; the missing drive is the spinning one with the 3 errors. I just pulled its power cable.

root@truenas[~]# zpool import
  pool: FusionPool
    id: 7105278952023681001
 state: FAULTED
status: One or more devices contains corrupted data.
action: The pool cannot be imported due to damaged devices or data.
        The pool may be active on another system, but can be imported using
        the '-f' flag.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-5E
config:

        FusionPool                                FAULTED  corrupted data
          raidz2-0                                DEGRADED
            f9a95cd1-86c4-4c2d-bef5-01e020f29abd  ONLINE
            a88e61ec-82c2-4255-92b0-298de1335798  ONLINE
            bebea15b-3341-4c5a-8cb5-2a04b726e64e  ONLINE
            8d3a4f69-a526-4bf0-ae6c-efc851d43bfe  ONLINE
            5b712775-ed2d-46cf-81cb-1936a6f27936  UNAVAIL
            a34b923d-b7ff-4aca-98e9-ef7ef22d28c7  ONLINE
          mirror-1                                ONLINE
            840bed05-be73-4e33-95bf-7d0374c1a70e  ONLINE
            a2b880e0-d3a5-4d15-99c0-bec7e5436d12  ONLINE
          raidz2-2                                ONLINE
            ef1bf023-d6db-438a-b9d1-7d2bdbb2ed83  ONLINE
            8e9793b2-8d5f-4fcb-9aef-2012536184e8  ONLINE
            63599f5d-7f6a-4f42-99ce-64517c6984ef  ONLINE
            4cb91005-5dc1-453e-942f-5ad6ecdddefe  ONLINE

I just ran zpool import -fFX FusionPool, which seems to hang on pool.import_find at 0.00%.

Any last suggestions you have before I stop bothering you and trash the pool to start over?

That wasn’t necessary and you might cause further issues. You’re jumping ahead!

What I was going to ask for is to run a zdb -l against the other NVMe (a2b880e0-d3a5-4d15-99c0-bec7e5436d12) to check if it’s still viable.

root@truenas[~]# zdb -l a2b880e0-d3a5-4d15-99c0-bec7e5436d12
------------------------------------
LABEL 0 
------------------------------------
    version: 5000
    name: 'FusionPool'
    state: 0
    txg: 23791957
    pool_guid: 7105278952023681001
    errata: 0
    hostid: 785381139
    hostname: 'truenas'
    top_guid: 8454115423432795979
    guid: 6333384889662361424
    vdev_children: 3
    vdev_tree:
        type: 'mirror'
        id: 1
        guid: 8454115423432795979
        whole_disk: 0
        metaslab_array: 256
        metaslab_shift: 32
        ashift: 12
        asize: 512105381888
        is_log: 0
        create_txg: 4
        children[0]:
            type: 'disk'
            id: 0
            guid: 6333384889662361424
            path: '/dev/disk/by-partuuid/a2b880e0-d3a5-4d15-99c0-bec7e5436d12'
            DTL: 395
            create_txg: 4
        children[1]:
            type: 'disk'
            id: 1
            guid: 11240343081374754018
            path: '/dev/disk/by-partuuid/d7411367-c719-4bfc-b938-477518f30b4e'
            whole_disk: 0
            DTL: 1301
            create_txg: 4
    features_for_read:
        com.delphix:hole_birth
        com.delphix:embedded_data
        com.klarasystems:vdev_zaps_v2
        org.openzfs:raidz_expansion
    labels = 0 1 2 3 
root@truenas[~]# 

It’s not viable to use for import either. :confused:


Why is d7411367-c719-4bfc-b938-477518f30b4e being referenced by it? The only other place this NVMe is mentioned in this thread was when you ran lsblk on all your storage devices.

What is d7411367-c719-4bfc-b938-477518f30b4e then?

Probably the 1TB drive I tried migrating to. I’ll put that one in and run the same command before I blow up the pool and start over.

Putting in the new NVMe drive did the trick; the pool imported directly from the GUI.

Now I just need to fix the mirror by adding the new NVMe drive to the metadata vdev.
Rather than taking any chances with trial and error, what’s the safest way to do that?


That’s good news! I’m actually surprised.

What does this show:

zpool status FusionPool
root@truenas[~]# zpool status FusionPool
  pool: FusionPool
 state: DEGRADED
status: One or more devices could not be used because the label is missing or
        invalid.  Sufficient replicas exist for the pool to continue
        functioning in a degraded state.
action: Replace the device using 'zpool replace'.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-4J
  scan: resilvered 292G in 04:31:23 with 0 errors on Fri Oct 24 00:50:49 2025
expand: expanded raidz2-0 copied 34.9T in 23:00:20, on Wed Apr 30 09:19:13 2025
config:

        NAME                                      STATE     READ WRITE CKSUM
        FusionPool                                DEGRADED     0     0     0
          raidz2-0                                ONLINE       0     0     0
            f9a95cd1-86c4-4c2d-bef5-01e020f29abd  ONLINE       0     0     0
            a88e61ec-82c2-4255-92b0-298de1335798  ONLINE       0     0     0
            bebea15b-3341-4c5a-8cb5-2a04b726e64e  ONLINE       0     0     0
            8d3a4f69-a526-4bf0-ae6c-efc851d43bfe  ONLINE       0     0     0
            5b712775-ed2d-46cf-81cb-1936a6f27936  ONLINE       0     0     0
            a34b923d-b7ff-4aca-98e9-ef7ef22d28c7  ONLINE       0     0     0
          raidz2-2                                ONLINE       0     0     0
            ef1bf023-d6db-438a-b9d1-7d2bdbb2ed83  ONLINE       0     0     0
            8e9793b2-8d5f-4fcb-9aef-2012536184e8  ONLINE       0     0     0
            63599f5d-7f6a-4f42-99ce-64517c6984ef  ONLINE       0     0     0
            4cb91005-5dc1-453e-942f-5ad6ecdddefe  ONLINE       0     0     0
        special
          mirror-1                                DEGRADED     0     0     0
            6333384889662361424                   UNAVAIL      0     0     0  was /dev/disk/by-partuuid/a2b880e0-d3a5-4d15-99c0-bec7e5436d12
            d7411367-c719-4bfc-b938-477518f30b4e  ONLINE       0     0     0

How on earth? Now it shows that you do in fact have a “special” vdev. Previously, the output was implying a “data” vdev for the mirror.

You need to take a fresh NVMe that you’re willing to wipe. Use it, as long as it’s large enough, to “replace” the missing device of mirror-1.[1] Hopefully it resilvers fast, because it’s an NVMe and a simple mirror.

After that, you need to run a full scrub and try not to use the pool. Let it finish the scrub completely for the entire pool.

I highly recommend you consider eventually making the “special” vdev a 3-way mirror, to at least match the redundancy of the other vdevs. This will also safeguard against a failing NVMe that might be in your possession.
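Once the mirror is healthy again, growing it to a 3-way mirror is a single attach; a sketch, where /dev/nvme3n1 is a placeholder for the third NVMe:

```shell
# Attach a third device to mirror-1 by naming an existing healthy member
# (d7411367-... is the ONLINE member from zpool status). ZFS resilvers
# the newcomer and the mirror becomes three-way.
zpool attach FusionPool d7411367-c719-4bfc-b938-477518f30b4e /dev/nvme3n1
```

On TrueNAS, extending the vdev from the GUI is the safer route, since it handles partitioning for you.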


  1. The missing device is what a2b880e0-d3a5-4d15-99c0-bec7e5436d12 used to be. If that NVMe is still good, then a quick wipe should make it usable again to “replace” the mirror’s “missing device”. TrueNAS will partition it and generate a new PARTUUID for it.
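The replace-then-scrub sequence above can be sketched as commands (device name is a placeholder; doing the replace from the TrueNAS GUI is safer than the raw CLI):

```shell
# Replace the missing mirror member, identified by the GUID that
# zpool status reports for the UNAVAIL device. /dev/nvme2n1 is a
# placeholder for the blank replacement NVMe.
zpool replace FusionPool 6333384889662361424 /dev/nvme2n1

# Watch the resilver progress.
zpool status -v FusionPool

# After the resilver completes, scrub the entire pool and let it finish.
zpool scrub FusionPool
```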


It’s resilvering now; I’ll run a scrub once it’s done.
I’ll order a third NVMe; I just need to confirm the BIOS can bifurcate that PCIe slot for a multi-NVMe card, and at what speed. I’ll also look at replacing the drive that was throwing the 3 uncorrectable errors, if it continues to be a problem.

root@truenas[~]# zpool status FusionPool
 pool: FusionPool
state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
       continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scan: resilver in progress since Sun Oct 26 21:38:59 2025
       1.63T / 75.7T scanned at 17.5G/s, 0B / 74.1T issued
       0B resilvered, 0.00% done, no estimated completion time
expand: expanded raidz2-0 copied 34.9T in 23:00:20, on Wed Apr 30 09:19:13 2025
config:

       NAME                                        STATE     READ WRITE CKSUM
       FusionPool                                  DEGRADED     0     0     0
         raidz2-0                                  ONLINE       0     0     0
           f9a95cd1-86c4-4c2d-bef5-01e020f29abd    ONLINE       0     0     0
           a88e61ec-82c2-4255-92b0-298de1335798    ONLINE       0     0     0
           bebea15b-3341-4c5a-8cb5-2a04b726e64e    ONLINE       0     0     0
           8d3a4f69-a526-4bf0-ae6c-efc851d43bfe    ONLINE       0     0     0
           5b712775-ed2d-46cf-81cb-1936a6f27936    ONLINE       0     0     0
           a34b923d-b7ff-4aca-98e9-ef7ef22d28c7    ONLINE       0     0     0
         raidz2-2                                  ONLINE       0     0     0
           ef1bf023-d6db-438a-b9d1-7d2bdbb2ed83    ONLINE       0     0     0
           8e9793b2-8d5f-4fcb-9aef-2012536184e8    ONLINE       0     0     0
           63599f5d-7f6a-4f42-99ce-64517c6984ef    ONLINE       0     0     0
           4cb91005-5dc1-453e-942f-5ad6ecdddefe    ONLINE       0     0     0
       special
         mirror-1                                  DEGRADED     0     0     0
           replacing-0                             DEGRADED     0     0     0
             6333384889662361424                   UNAVAIL      0     0     0  was /dev/disk/by-partuuid/a2b880e0-d3a5-4d15-99c0-bec7e5436d12
             69069478-6cb0-4e3e-83c3-0c5980cf4ecf  ONLINE       0     0     0
           d7411367-c719-4bfc-b938-477518f30b4e    ONLINE       0     0     0

I really appreciate your help this weekend. I never would’ve resolved it without your help and patience. I only know enough to get myself in trouble.


Yes, that is me

Glad you were able to get your pool going again.