HELP: Can't import pool! (OFFLINED drive)

I can't import my pool after ONE drive failed, and another drive was OFFLINED.

(The issue is, I wanted to replace a drive, but I offlined the wrong one and removed the other. It is Z1, so it can't tolerate two drive failures, but there weren't two drive failures: the removed one is erased, but the other one is functioning, just marked as OFFLINED. I tried everything in the terminal, but nothing worked…)

Any ideas how to fix this? Basically, I am unable to unmark the OFFLINED drive as OFFLINED, even though it is working and connected to the system. It is spinning me in circles: it can't change the drive status because the pool is exported, but it also can't import the pool because a drive is missing…

:roll_eyes:

I'd try `zpool status` to find the offlined drive's name, then `zpool online <poolname> <device>`.

Does that do the needful? If not, the output of `zpool status -v` might be of assistance to us.
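
Something like this, as a rough sketch; the pool name `tank` and device `sdb` are placeholders, not taken from your system, so substitute whatever `zpool status` actually reports:

```
# Show pool health and per-device state; -v also lists file-level errors
zpool status -v

# Clear the administrative OFFLINE state on the disk
# (replace "tank" and "sdb" with your real pool and device names)
zpool online tank sdb
```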

I don't see my ZFS pool in the list.

Here's the list of bad steps I took to get here:

  1. OFFLINED the wrong drive
  2. REMOVED the other drive
  3. The ZFS Z1 pool became unstable, giving me “unable to access I/O” errors
  4. EXPORTED the ZFS pool

So now my ZFS pool is exported, and has one drive marked as OFFLINED inside it… It won't import again, since that drive is offlined…?

In the UI it is recognised under drives with exported pools (showing 3 drives), but the UI won't import it…

Z1 tolerates one drive failure, and essentially this pool has one drive failure, but it also has another drive marked as OFFLINED, which it refuses to change…!

Now that I think about it, I wonder if I messed up when I removed the second drive while the first one was offlined. This, in a way, is two drive failures as far as the pool is concerned…

But at the same time, I don't think any data was written in the meantime, so shouldn't it be able to at least reconstruct whatever the state was one minute before I ejected the drive? (There was only about one minute of pool operation between offlining the wrong drive and ejecting the other one.)

Should this destroy the pool, or is it salvageable?

I don't really need this pool, as it was junk data; I was basically testing ZFS and TrueNAS SCALE. But since I did this on purpose, for testing, I do care about recovering from the problem, as a means of learning how to do it. If some event like this ever comes again (hopefully it won't), I can be ready and have experience on how to fix it…

Run the commands in Fleshmauler's post above and post the results using Preformatted Text (Ctrl+E, or the </> button on the post toolbar).
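
If the pool still refuses to import from the UI, a forced or rewind import from the shell may be worth a try. This is only a sketch: `tank` and `<disk-id>` are placeholders, not names taken from your system, so substitute your own:

```
# List the pools ZFS can find on the attached disks;
# -d points the scan at the stable by-id device names
zpool import -d /dev/disk/by-id

# Plain import attempt
zpool import tank

# Forced import, in case ZFS thinks the pool is still in use
zpool import -f tank

# Recovery-mode import: -F rewinds to the last importable transaction
# group, discarding the final moments before the pool went down
zpool import -F tank

# Once the pool is imported, clear the OFFLINE state on the healthy disk
zpool online tank /dev/disk/by-id/<disk-id>
```

A rewind import (`-F`) is also the mechanism behind your “reconstruct the state from one minute earlier” idea: if nothing was written in that window, there is little for it to discard.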
