Old TrueNAS Core 13 died; installed Scale 24.10 and could only import 1 of 2 pools

Did not work!

# zpool import -fd /dev/gptid -R /mnt 14223652181009904092
cannot import 'Pool2': I/O error
        Destroy and re-create the pool from
        a backup source.

Should I try using the other disk that I had disconnected?

I’d wait until @HoneyBadger replies.

I’m not too familiar with how finicky ZFS can be when (intentionally) attempting to import a degraded pool. (The only time I ever intentionally imported a degraded array was with mdadm on Linux, and that was a while ago.)

Here’s another case of something similar happening.

My hunch is that this same problem is what affected two other users as well, but they had since moved on and wiped the drives, so it’s too late to know what the underlying problem was with them.

We saw some success with the other user.

Can you try again with the same ZFS member as before (with the same drive unplugged), but use -F instead of -f?

zpool import -F -d /dev/gptid -R /mnt 14223652181009904092

Same result

# zpool import -F -d /dev/gptid -R /mnt 14223652181009904092
cannot import 'Pool2': I/O error
        Destroy and re-create the pool from
        a backup source.

Let’s wait for @HoneyBadger then.

The other case had success, and their pool is back to normal. A notable difference is that their TXGs were only off by 3. Yours are off by 5.

Either way, I don’t know what caused this in the first place. Mr. Badger might have a better idea.

EDIT: Just to make sure: this is when you attempted the import with the “newer” TXG device (17236783) unplugged, correct?
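
(For reference, the txg recorded in each disk’s label can be re-checked with zdb; something like the line below, with the Pool2 member’s gptid substituted in, should print it. This is just a sanity-check sketch in case anyone wants to re-verify the numbers, and the gptid placeholder is hypothetical.)

zdb -l /dev/gptid/<gptid-of-Pool2-member> | grep txg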

No, @HoneyBadger recommended the older txg (17236778) as preferable.
That is the only disk of Pool2 currently connected, and it is the one I reported results for with both the -f and then the -F import options.

We’re saying the same thing. I wanted to make sure that the newer one of 17236783 is unplugged. :wink:

Yes, I missed the “unplugged”, so we are saying the same thing.


You’re welcome to try with the other drive, but I think we’re dealing with some other oddity here rather than just a pool with mismatched txg numbers.

After trying an import, can you capture the contents of the file /proc/spl/kstat/zfs/dbgmsg and paste it in codeblocks?
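
(On SCALE that should just be a matter of, for example:

cat /proc/spl/kstat/zfs/dbgmsg

and copying the output into the reply.)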


Thank you for your help. I will try that this afternoon and report the results!

Didn’t work with other drive either

# zpool import -f -d /dev/gptid /mnt 14223652181009904092
cannot import '/mnt': no such pool available
# zpool import
   pool: Pool1
     id: 7263808314950553810
  state: ONLINE
status: The pool was last accessed by another system.
 action: The pool can be imported using its name or numeric identifier and
        the '-f' flag.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-EY
 config:

        Pool1                                            ONLINE
          mirror-0                                      ONLINE
            gptid/49fc0b49-2885-11e7-9614-002590d59089  ONLINE
            gptid/4add9034-2885-11e7-9614-002590d59089  ONLINE

   pool: Pool2
     id: 14223652181009904092
  state: FAULTED
status: One or more devices are missing from the system.
 action: The pool cannot be imported. Attach the missing
        devices and try again.
        The pool may be active on another system, but can be imported using
        the '-f' flag.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-3C
 config:

        Pool2                                         FAULTED  corrupted data
          mirror-0                                      DEGRADED
            gptid/d8c9a8d5-ffd5-11ec-b16d-002590d59089  ONLINE
            gptid/d8dba0c2-ffd5-11ec-b16d-002590d59089  UNAVAIL  cannot open

You typed the command with the wrong syntax.

You forgot the -R before /mnt:

zpool import -f -d /dev/gptid -R /mnt 14223652181009904092

Whether you should try both ways (-f or -F, or even -fF together) I’m not sure.

-F seems relevant, since, as I understand it, -f only forces the import when the pool looks “active” or last used by another system, whereas -F attempts recovery by rewinding to an earlier transaction group.

# zpool import -f -d /dev/gptid -R /mnt 14223652181009904092
cannot import 'Pool2': I/O error
        Destroy and re-create the pool from
        a backup source.

Same result as with the other disk.
The zpool import command above showed the pool as FAULTED with corrupted data.

Where do you see this?


Did you do this?

EDIT: That only applies to Linux. Not sure what the FreeBSD equivalent is. @HoneyBadger ?

Bah. I’m so used to fishing the kernel debug messages out from that path that I’ll need to check what it is under CORE.
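
(If it helps: on FreeBSD-based CORE, OpenZFS kstats are generally exposed through sysctl rather than /proc, so something along the lines of the command below may be the equivalent. That’s an educated guess rather than a verified path, and since this system is now on SCALE the /proc path above should apply anyway.)

sysctl kstat.zfs.misc.dbgmsg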

Both together please, @JGordonT, so that would be:

zpool import -fF -d /dev/gptid -R /mnt 14223652181009904092

We can also try -fFX for the “more aggressive rewind”
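
That is, keeping the rest of the command exactly the same:

zpool import -fFX -d /dev/gptid -R /mnt 14223652181009904092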

Under System → Advanced there is a Save Debug button that saved a lot of files.
I have no idea which file would be helpful, but I have them.

I will now try with the -fF option

# zpool import -fF -d /dev/gptid -R /mnt 14223652181009904092
cannot import 'Pool2': insufficient replicas
        Destroy and re-create the pool from
        a backup source.

Can you make sure the label is still there? Not trying to change the subject. For sanity’s sake:

zdb -l /dev/gptid/d8c9a8d5-ffd5-11ec-b16d-002590d59089
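
(If the label is intact, the output should include fields such as name, state, txg, and pool_guid for that device; piping it through grep, as in the line below, makes those easy to spot. The exact field list here is from memory, so don’t worry if the formatting differs slightly.)

zdb -l /dev/gptid/d8c9a8d5-ffd5-11ec-b16d-002590d59089 | grep -E 'name:|state:|txg:|guid'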

I started with the -fFX option and so far there has been no response. All the previous attempts gave an almost instant answer. I assume it is still running.
