HELP! Can't import a pool, what do I do?

Ok, here’s all the info I can provide:

I did a motherboard swap, backed up my config, added some drives.

I originally had a 16 TB drive in my server to start, put a lot of data on it, and decided to add 2x8 TB drives as a mirror later, once I had moved everything. I was really careful for a year until I was able to get the drives installed.

But it wouldn’t let me mirror it, you needed a new pool! OK, so I export the current pool, wipe the 2x8 TB drives I added, aaand… the exported pool is gone, but my drives list shows the 16 TB drive as having an exported pool…

What the hell do I do? I even restored a previous config, and the pool is back, but the drive is offline. When I try to export the pool it vanishes, with no option to import.

I seriously need some help. My data is there, but I can’t access it! I can’t even back it up to a cold drive. I’ve been doing that for a while, but it’s just too big!

Paste the results back using Preformatted text (</> or Ctrl+e)

These should help us know your current pool status and the import status.

    sudo zpool import
    sudo zpool status -v

OK, so it says that the data is corrupted and damaged, so it’s unavailable. The 16 TB drive is online, but the two 8 TB drives are missing, since I wiped them… labeled as “insufficient replicas”…

Wait, did I corrupt my main storage drive by wiping the drives I added to the pool? How do I repair the drive? The data is still intact, the pool is just corrupted, so how do I get it back?

and how do I upload images to posts?

It seems like you

  • first added both drives to the pool as separate vdevs without redundancy
  • then wiped the drives

This means your pool is toast. Sorry. Recreate and restore from backup.
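For reference, this is roughly the difference at the command line (a sketch; `tank`, `sda`, `sdb`, `sdc` are placeholder pool and device names, not from this system):

```shell
# Adding disks creates new top-level vdevs and extends the stripe:
# NO redundancy, and losing any one vdev destroys the whole pool.
sudo zpool add tank sdb sdc      # what effectively happened here

# Attaching a disk to an existing single-disk vdev turns that vdev
# into a mirror instead:
sudo zpool attach tank sda sdb   # what was intended
```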

Kind regards,
Patrick

As @pmh points out, this looks bad.

But if the option is to wipe everything and start over (from your backup, which presumably doesn’t exist?), then you may have a few other options to try first.

Yeah, I just realized that… I went too fast and didn’t read the full instructions on exporting. I wasn’t completely awake when I finished my server stuff yesterday…

But I only wiped the added drives; the pool is still saved to the main 16 TB drive, it’s just in a corrupted state.

Thank you, I’m already looking into file recovery. That’s probably my best option.

I’ll look into whether it’s possible to allow missing devices.

Thanks for all the help. I’ve been so careful not to mess with any server-related stuff since I set this NAS up, I don’t know what happened to me yesterday. I just remember feeling hazy for a couple of hours leading up to doing this.

I need to slow down next time.

@joeschmuck can you help importing the pool with a transaction group prior to addition of the two new disks - if possible?

I already tried to help you on Reddit.

As I said there, I suspected you had created a stripe of single drives rather than a mirror and the screenshot confirms that this was the case. A stripe of single drives is a non-redundant pool where losing any one of the disks means that you lose the entire pool.

Then you overwrote the partitions on the 8 TB drives (without attempting to remove the vdevs) and you broke the pool. I suspect it is unlikely that your data is recoverable without using expensive software like Klennet or employing an expensive data recovery company.

@pmh I would love to help but I just stopped to get fuel and eat lunch. Then I have another 7 hours to drive. I will be offline most of the time for the next few days while helping my daughter move. She needs to find a few strong guys with a truck.

This might be a job for @Arwen @HoneyBadger @etorix , sorry.

Yeah, I just realized this when I saw the pool list in the shell… I went too fast and corrupted the pool.

I’m looking into recovery software already. I’m going to try ReclaiMe before attempting to save up for Klennet.

This is what I get for not being careful, and forgetting my experiences a year ago on blank drives…

Just following along here:

  1. You had a single 16T disk in a pool
  2. You added two more 8T disks as additional stripe members
  3. You exported the pool and then wiped the two 8T disks (which were stripe members)

Did you create a new mirror pool out of the 2x8T drives again, or have they only been “wiped”?

We’re way, way into the weeds here.

I didn’t export the pool on its own before. Even loading a previous config with the old pool doesn’t detect it.

It’s alright, I’m only mad about my storage drive, since I have no idea what I had on there.

If I can see the folder paths in some recovery software, then I should be able to get a majority of the data back.

Here we go again.

I have one quick question:

The reason I was doing all this was to add a mirror to an already existing pool.

How would I do that in the future? I don’t have my server on much, but I want redundancy at least.

If you have a pool consisting of a single drive and then add another mirrored vdev to the pool, you will still have the single unmirrored drive and lose the pool if that one fails.

Now if the question is rather how to add a mirror to a single disk vdev - that would be the attach operation.
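As a rough CLI sketch of that attach operation (pool and device names here are placeholders, not from this system):

```shell
# Attach a second disk (of at least the same size) to the existing
# single-disk vdev; ZFS converts it into a two-way mirror and
# resilvering starts automatically.
sudo zpool attach tank sda sdb

# Watch the resilver progress.
sudo zpool status tank
```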

I think it’s “expand” in the UI, but I am not entirely sure.

Correct - notably, you have to “expand” with the original single disk highlighted. But in this case, that would require an identically sized 16 TB drive, not a pair of 8 TB drives.

@Phatoume if you have not attempted to create a new pool from the new 2x8T disks you may have a slim chance at recovery if the labels were somehow not completely overwritten by the wipe operation (but it’s likely they were) - and you may also be able to use the zfs_max_missing_tvds tunable in combination with a read-only mount, but none of these are guaranteed measures.
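A hedged sketch of the `zfs_max_missing_tvds` route on Linux (no guarantees it will import at all, and read-only so nothing is made worse; `tank` is a placeholder pool name):

```shell
# Allow import with missing top-level vdevs; set to the number of
# missing vdevs (two wiped 8 TB drives here). Path is the
# OpenZFS-on-Linux module parameter location.
echo 2 | sudo tee /sys/module/zfs/parameters/zfs_max_missing_tvds

# Attempt a forced, strictly read-only import.
sudo zpool import -f -o readonly=on tank
```

Note that any data that lived on the missing vdevs is unrecoverable this way; at best you get read access to what happens to reside entirely on the 16 TB disk.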

You can run the Klennet recovery in read-only mode without cost, to see if it can identify anything from your single 16T disk.


Good News:

I didn’t lose anything important.

Bad News: I have no idea what I lost in my storage folder, or how I organized any of it…

The only thing I know I need to recover that would be difficult is a graphic novel series, but I can find that again myself.

Lesson learned: once it’s set in TrueNAS, there is no undoing it. Move the data somewhere else, delete the old pool, and then you have your drives back.

Never again, and no touching the servers when you’re not completely lucid.

You can set a checkpoint before doing major changes and in many cases this allows you to undo the change - like reverting a Snapshot on steroids.
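A sketch of the checkpoint workflow (`tank` is a placeholder pool name):

```shell
# Take a pool-wide checkpoint before risky changes.
sudo zpool checkpoint tank

# If things go wrong, export and rewind the whole pool to it.
sudo zpool export tank
sudo zpool import --rewind-to-checkpoint tank

# If all went well, discard it; a checkpoint pins old blocks and
# blocks some operations (e.g. device removal) while it exists.
sudo zpool checkpoint -d tank
```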