Drive dropped out of pool, reappeared as unassigned disk. No SMART failures. How do I add it back?

Is it possible to add a disk back to an existing pool without wiping all of the data? The drive shows up on the unassigned disks list, but the pool column says “N/A”. When I go to the existing pool on my storage tab, it shows “VDEVs not assigned”. If I try to add a VDEV to that pool, the name of the pool can’t be changed, but when I click Next, it tells me “Name not added”. It lets me select the old disk on the data tab, and when I click Next there, it says “At least 1 data VDEV is required”.

It’s acting as if the pool doesn’t actually exist any longer. But if I try to add the disk to a new pool, it wants to erase it.

Any ideas? Is the only way around this to erase the disk and restore from backups?

FYI - I know it isn’t recommended…but this disk was a single-disk stripe. I’m probably paying for that configuration right now, given the way it’s acting.

You are indeed paying for your lack of resiliency.

What happens if you type “zpool import” at the command line?
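That should list any pool whose on-disk labels are still readable. A few variants worth knowing, as a sketch (the by-id path and “poolname” are placeholders for your setup):

    sudo zpool import                      # scan attached disks and list any importable pools
    sudo zpool import -d /dev/disk/by-id   # scan a specific directory of device nodes for pool labels
    sudo zpool import -f poolname          # force-import a pool that appears in the list above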

I get “No pools available to import”.

The thing I don’t quite understand: if I run “zpool list”, the pool that the drive was originally a part of doesn’t show up at all. But in the web UI Storage tab, the pool is there; it just shows all of the VDEV types as “VDEVs not assigned”, and Manage Devices is blank.
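For anyone following along, these are the two views I’m comparing. If I understand the tooling correctly, the last command queries the middleware database that the web UI reads from, which would explain the mismatch:

    sudo zpool list               # ZFS itself – the pool is missing entirely
    sudo zpool status             # no trace of it here either
    sudo midclt call pool.query   # TrueNAS middleware – the pool record still exists here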

I’m fairly certain I’m going to have to wipe the drive to get anywhere with this; I was just hoping not to have to spend all the time needed to restore it from backup.

Did you export any config for TrueNAS before the drive dropped out? And did you suffer any kind of power interruption or system crash? I’m curious what caused it. If the drive’s metadata is corrupted and TrueNAS no longer recognizes it as a pool or VDEV, I think the chance of recovery is fairly low, especially since you’re running a stripe; but if it’s something wrong with TrueNAS itself, then it’s worth trying a reinstall, or importing the pool on another TrueNAS install.

Also, I think a single-disk stripe is basically just a single disk, not even RAID 0. Worst case, I think you can still pull the data out, since the data isn’t striped and split across multiple disks.

I have a fairly recent configuration file for TrueNAS, but I wasn’t sure it would help. Plus, I didn’t want to risk the rest of the pools I have configured, since they’re working perfectly fine.

Even though the pool reports don’t show any warnings, just out of curiosity I ran a SMART test from the Disks screen. The drive failed it, so obviously it’s dying.
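For anyone curious, this is roughly how to pull the full details from the shell (sdX is a placeholder for the actual device):

    sudo smartctl -a /dev/sdX             # full SMART health, attributes, and self-test log
    sudo smartctl -l selftest /dev/sdX    # just the self-test history, including the failed test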

The thing I still don’t get is that the system is seeing the drive, but it’s treating it as if it’s blank. I’m not sure how, or even if, it’s possible to determine whether there’s any data on that drive at all at this point. I’m still learning a lot about TrueNAS and Linux both, so I’m not sure exactly what to try that wouldn’t be destructive to the data (if there is any).

Oh, and to finish answering your questions: there haven’t been any power interruptions (it’s on a UPS), unexpected reboots, crashes, or anything else unusual going on. I just looked at my dashboard one day, and it said I had an unused disk, with that pool being non-functional.

Small follow-up…I was doing some research into Linux commands to figure out whether the system was actually reading that drive at all. I ran “sudo parted -l” to get a list of all of the disk partitions in the system. When it cycled through the drives and reached the one in question, I got an I/O error with retry/ignore/cancel. So…that drive is dead, even though it’s showing up as an empty “new” disk on the storage screen. At least that’s the theory I’m working with right now.
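For reference, a couple of other read-only checks along the same lines (sdX is a placeholder; none of these write to the disk):

    sudo dd if=/dev/sdX of=/dev/null bs=1M count=100 status=progress   # raw read test; errors here point at the drive itself
    sudo dmesg | tail -n 50                                            # kernel log usually shows the underlying ATA/SCSI errors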

Thanks for the update about the SMART failure. If the drive is dying and it’s the only drive in the pool, then it makes perfect sense that the system can see the drive but not the VDEV or pool: the metadata that tells TrueNAS “hey, I have a VDEV/pool on me” is likely corrupted or inaccessible. The drive’s controller still reports “hey, I’m a hard drive,” but it either presents no valid pool/VDEV info or simply fails TrueNAS’s reads and writes. That’s the most likely case. There’s also a small possibility the issue is in TrueNAS itself, which you could test by passing the undetected drive into another TrueNAS system or VM. If it’s not a catastrophic hardware failure, the data is likely recoverable; TrueNAS just no longer detects a valid pool/VDEV on the disk.
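If the drive can still be read at all, you can check whether the ZFS labels survived (sdX1 is a placeholder for the data partition; on TrueNAS the pool usually sits on a partition rather than the whole disk):

    sudo zdb -l /dev/sdX1   # dump the ZFS labels; readable labels mean the pool metadata is still intact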

One thing you can try is a bit-by-bit forensic clone of your dying/dead drive, then see whether TrueNAS can detect the cloned drive (it should, if the original’s problem was I/O errors like you mentioned). If it can’t because the metadata is corrupted, you can still attempt to pull the data out via other methods, for instance using zdb to ignore errors and extract as much data as possible. Or you can attempt to repair the metadata and get TrueNAS to recognize the pool again.
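A rough sketch of the clone-then-import route, assuming GNU ddrescue is available and you have a spare disk at least as large (sdX = dying disk, sdY = spare, “poolname” = your pool; all placeholders):

    sudo ddrescue -f -r3 /dev/sdX /dev/sdY rescue.map   # bit-for-bit clone, retrying bad sectors 3 times, resumable via rescue.map
    sudo zpool import -d /dev/disk/by-id                # see whether ZFS now finds the pool on the clone
    sudo zpool import -o readonly=on -F poolname        # read-only recovery import that rewinds to the last good transaction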