Can't find the pool, please help me

root@nas[~]# zpool import
pool: Pool32TB
id: 11087683370020324377
state: UNAVAIL
status: The pool was last accessed by another system.
action: The pool cannot be imported due to damaged devices or data.
see: Message ID: ZFS-8000-EY — OpenZFS documentation
config:

Pool32TB    UNAVAIL  insufficient replicas
  raidz2-0  ONLINE
    sdg2    ONLINE
    sdb2    ONLINE
    sdc2    ONLINE
    sdh2    ONLINE
    sde2    ONLINE
    sdd2    ONLINE
    sda2    ONLINE
    sdf2    ONLINE
  nvme0n1   FAULTED  corrupted data

root@nas[~]# zpool import Pool32Tb -f
cannot import 'Pool32Tb': no such pool available
root@nas[~]# zpool import Pool32Tb -fF
cannot import 'Pool32Tb': no such pool available

Hello everyone, this is the result of my own mistake. After upgrading from Core to Scale, the NVMe cache disk started reporting errors. I removed the cache disk and then added it back as a dedup vdev, but that produced an error. I then exported the Pool32TB pool, and now it cannot be imported again.
Please help me. I have tried various things, including a fresh reinstall of Scale, but the pool still cannot be imported.

Why? Why? What is going on? Why would you do that?

Furthermore, you’re not even using the same pool name. Check your “case”. (Lowercase vs uppercase.) Not that it matters too much, considering the above actions you took.
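For the record, the name as reported by zpool import is Pool32TB, so the exact command would be:

zpool import -f Pool32TB

(Not that I would expect it to succeed with that FAULTED nvme0n1 vdev, but at least the name would match.)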

root@nas[~]# sudo ZPOOL_IMPORT_PATH="/dev/disk/by-id" zpool import -a
cannot import 'Pool32TB': pool was previously in use from another system.
Last accessed by nas (hostid=36633733) at Thu Jan  1 08:00:00 1970
The pool can be imported, use 'zpool import -f' to import the pool.
root@nas[~]# zdb -l /dev/sda2
------------------------------------
LABEL 0 
------------------------------------
    version: 5000
    name: 'Pool32TB'
    state: 0
    txg: 12841188
    pool_guid: 11087683370020324377
    errata: 0
    hostid: 912471859
    hostname: 'nas'
    top_guid: 16709771434219341007
    guid: 18073560941974425900
    vdev_children: 2
    vdev_tree:
        type: 'raidz'
        id: 0
        guid: 16709771434219341007
        nparity: 2
        metaslab_array: 256
        metaslab_shift: 34
        ashift: 12
        asize: 31989077901312
        is_log: 0
        create_txg: 4
        children[0]:
            type: 'disk'
            id: 0
            guid: 17678588834249919388
            path: '/dev/sdf2'
            phys_path: 'id1,enc@n3061686369656d31/type@0/slot@3/elmdesc@Slot_02/p2'
            DTL: 101862
            create_txg: 4
        children[1]:
            type: 'disk'
            id: 1
            guid: 750809547831537342
            path: '/dev/sda2'
            phys_path: 'id1,enc@n3061686369656d30/type@0/slot@1/elmdesc@Slot_00/p2'
            DTL: 101861
            create_txg: 4
        children[2]:
            type: 'disk'
            id: 2
            guid: 12560954956952091476
            path: '/dev/sdd2'
            phys_path: 'id1,enc@n3061686369656d30/type@0/slot@2/elmdesc@Slot_01/p2'
            DTL: 101860
            create_txg: 4
        children[3]:
            type: 'disk'
            id: 3
            guid: 3292963007683430410
            path: '/dev/sdh2'
            phys_path: 'id1,enc@n3061686369656d31/type@0/slot@4/elmdesc@Slot_03/p2'
            DTL: 101859
            create_txg: 4
        children[4]:
            type: 'disk'
            id: 4
            guid: 13886919577704762301
            path: '/dev/sde2'
            phys_path: 'id1,enc@n3061686369656d31/type@0/slot@1/elmdesc@Slot_00/p2'
            DTL: 101858
            create_txg: 4
        children[5]:
            type: 'disk'
            id: 5
            guid: 12872069886951889765
            path: '/dev/sdc2'
            phys_path: 'id1,enc@n3061686369656d30/type@0/slot@4/elmdesc@Slot_03/p2'
            DTL: 101857
            create_txg: 4
        children[6]:
            type: 'disk'
            id: 6
            guid: 18073560941974425900
            path: '/dev/sdb2'
            phys_path: 'id1,enc@n3061686369656d30/type@0/slot@3/elmdesc@Slot_02/p2'
            DTL: 101856
            create_txg: 4
        children[7]:
            type: 'disk'
            id: 7
            guid: 2571007088336562502
            path: '/dev/sdg2'
            phys_path: 'id1,enc@n3061686369656d31/type@0/slot@2/elmdesc@Slot_01/p2'
            DTL: 101855
            create_txg: 4
    features_for_read:
        com.delphix:hole_birth
        com.delphix:embedded_data
        com.klarasystems:vdev_zaps_v2
    labels = 0 1 2 3 

I ran zdb -l on each of /dev/sd[a-h]2 and all the labels look normal.
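Roughly like this, just grepping the key fields out of each label (a quick sketch; device names taken from the import listing above):

for d in /dev/sd[a-h]2; do
    echo "== $d"
    zdb -l "$d" | grep -E 'name:|state:|txg:|pool_guid:'
done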

After upgrading to Scale, the NVMe cache disk reported an error, so I wanted to remove it and re-add it to the Pool32TB pool as a cache vdev.

It doesn’t look like you added the NVMe as a cache vdev; it looks like it was added to the pool itself as a regular top-level vdev.

I think all the other errors stemmed from that.
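At the command line the difference is a single word, which is part of why it is so easy to get wrong. Roughly (illustrative only, not something to run now; I obviously don't know the exact command or GUI action that was used):

# adds the NVMe as an L2ARC cache device: removable, and the pool survives without it
zpool add Pool32TB cache nvme0n1

# adds the NVMe as a dedup (or plain data) top-level vdev: the pool then depends on it,
# and with no redundancy behind it, losing it leaves the pool UNAVAIL
zpool add Pool32TB dedup nvme0n1

Your zpool import output shows nvme0n1 sitting next to raidz2-0 as its own top-level vdev, which matches the second case.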

IMO, you now need help from a ZFS expert if you want any hope of recovering the pool to a working state.

My advice: Do not take ANY further actions until you are sure that they will make the situation better rather than worse.


Exactly. You have a RAIDZ2 vdev and a second, single-disk top-level vdev. If that second one is not available for any reason, the pool is toast.
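If you want to see how far gone that second vdev is, the label on the NVMe itself would tell you. Reading labels is read-only and safe; the device name is assumed from the import output (if SCALE put the data on a partition, it may be nvme0n1p1 or similar instead):

zdb -l /dev/nvme0n1

Four valid labels carrying the Pool32TB pool_guid would mean there is still something to work with; "failed to unpack label" on all of them would mean the vdev really is gone as far as ZFS is concerned.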


I agree that the pool is currently toast; however, it may still be (mostly) recoverable.

Most of the data will be on the raidz2 vdev. Any files added or changed after the nvme vdev was added are probably lost.

The metadata may be irretrievably lost too, in which case the entire pool is toast.

BUT…

IF you can recover the metadata back to the last checkpoint BEFORE you added the NVMe disk, then (especially if you have a recent snapshot) you may be able to recover the pool to that point in time, minus any files deleted or replaced after it, and those may in turn be recoverable from a snapshot.

But as I said before, this is expert-level stuff, and you should get a ZFS expert to help you attempt the recovery.
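For what it is worth, the non-destructive way to find out whether such a rewind point even exists is a dry run. The -n flag means nothing is written, -o readonly=on is extra caution, and whether it can get past the missing top-level vdev at all is another question entirely:

zpool import -o readonly=on -fFn Pool32TB

If that ever reports a point in time it could return the pool to, that is the moment to bring in someone who does this for a living rather than experimenting further.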