Did I just lose all my data?!

Hello!

So, I added a new drive to my setup and was greeted with this when I restarted it.

Here are the hardware specs of my PC:

i5 4690k
16GB RAM
1 M2 240gb drive
1 SSD PNY 240gb drive (boot pool I believe)
1 4tb WD drive
1 3tb WD Drive
AMD r920x

I found a few forum posts online about using the Shell to run zpool import. Here is a screenshot of that.

[Screenshot of the zpool import output: image_2024-09-19_214233993]

Also a screenshot from zpool status

[Screenshot of the zpool status output: image_2024-09-19_214327653]

Did I just lose all my data? I know I made the mistake of setting up my NAS as a striped setup, but I wasn’t aware that adding a single drive could kill my whole setup. What do I do?!

Thank you!

1 Like

What the heck did you do? :flushed: HOW did you manage that?

You created a pool from four drives… in a striped, non-redundant vdev?

Did the GUI not try to stop you?

I doubt it, but did you happen to create a checkpoint before you “added” the new drive?

Either way, your “Storage” pool is a hodgepodge of different-sized drives slapped together with no redundancy.

What exactly is this “UNAVAIL” drive that is missing from your storage pool?

Surely you didn’t borrow (“re-use”) one of these four? :point_down:


EDIT: There are actually 5 drives, not 4, as corrected in a later post:

:rofl:

I don’t know what I did! All I did was install a drive, then go into the Storage tab and add the drive to the current pool, restart the server and BAM - this happened!

All of the drives installed to the PC are listed. I just noticed I mistyped the quantity of them though.

x1 M2 240gb drive
x1 SSD PNY 240gb drive (boot pool I believe)
x1 4tb WD drive
x2 3tb WD Drive

It sounds like you may have made a pool. It sounds like you filled that pool with all different random sized drives. It sounds like you put them all together in a single VDEV with no redundancy. This is bad.

The good news is that ZFS is telling us exactly what the problem is. It looks like one of the original drives in that very bad pool got unplugged: we can see one drive listed as UNAVAIL. You may have bumped a cable or something when you installed the new drive.

Maybe read this and start over from scratch
ZFS 101—Understanding ZFS storage and performance | Ars Technica

EDIT: also RIP
End of the Road: An AnandTech Farewell

2 Likes

Thank you for the response. I’ll check the cables again in the morning. I know having a single vdev with no redundancy is stupid. If I can recover my pool, I think this will be a lesson for me. I’m going to back up everything and add redundant drives. I’m lying in bed right now and all I can think about is ALL OF MY FAMILY PHOTOS (uploaded from old phones and synced daily with Nextcloud from my phone and my wife’s!) and important information just gone!

I’m going to toss and turn all night!

1 Like

I’m pretty sure the GUI gives you a scary warning when you try that. Did you just dismiss the warning and continue?

Your “M2” drive may in fact be keyed for SATA, and by installing it in an m.2 slot, it “knocks off” another SATA port.

You need to check your motherboard’s manual. You might find a note that says something to the effect of “Using an m.2 drive keyed for SATA in the slots M2_0 or M2_1 will disable SATA_3 or SATA_5”.

3 Likes

So. Uninstall the new drive…

1 Like

That still doesn’t line up with four single-drive vdevs in the pool (so FOUR vdevs), unless you threw the 240 GB drive in—but then it didn’t disable another drive.
camcontrol devlist and/or gpart list to know what’s what.
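A rough sketch of what that looks like on the shell of a FreeBSD-based TrueNAS (the device name ada0 is just an example; yours will differ):

```shell
# List every disk the FreeBSD kernel currently sees,
# with model and serial -- good for matching physical drives to device names
camcontrol devlist

# Show the partition layout and GUIDs of one drive
gpart list ada0

# Rough equivalents on TrueNAS SCALE (Linux), if that's what you run:
#   lsblk -o NAME,SIZE,MODEL,SERIAL
#   fdisk -l /dev/sda
```

Comparing that listing against the drives zpool import expects should tell you whether a disk has dropped off the bus entirely or is present but unreadable.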

Not true - I think you have already learned the lesson. Any important data should be stored somewhere safe, and that means redundancy, as others have said. Hopefully you have some sort of backup sitting around. The big question is: if you do recover your data, will you build a new TrueNAS machine with the proper hardware specs and some redundancy?

The folks on the interweb are wrong to suggest you can slap together any old computer and voilà, you have a NAS. Well, it isn’t a very safe NAS if you do it wrong.

Best of luck to you and I hope you are able to recover your data.

2 Likes

Thank you all for the replies. I understand a single striped setup wasn’t smart. I initially built this NAS with the intention of upgrading it and properly setting up redundancy later. I installed x2 3TB drives and added them to the current pool. I restarted the machine a few days later and well… both of them failed. Totally my fault, I suppose. These were used HDDs from a presumably good source, but I guess not. I should have verified the drive health, and I should have added them as redundant drives instead of expanding the pool.

I exported/disconnected my current pool in hopes of recovering it, but no amount of zpool commands are working. I guess I’m screwed!

What would you guys try next? Is the data really gone? I only had the new drives in there for a few days… does adding new drives immediately start moving old data from the other drives to the new ones? I thought maybe only new data would get striped onto the new drives? I’m sorry, I’m a novice at this sort of thing!

Exactly which commands have you tried, and what was the result?

Adding new drives does not move the data. Data added before expansion is still fully on the old drives; data added after expansion is striped all over, but probably most on the new drives—and is lost if any drive is irrecoverable.

That depends on what the situation is. What is the hardware condition of each drive?
Potentially, it might be possible to roll back to a state where the pool can be imported, discarding some data. Else, you could attach the drives to a Windows machine and try Klennet ZFS Recovery—scanning is free, but if it finds something to recover it will cost you $500 for the license.
In any case: Buy some good drives, to clone the drives before attempting some potentially destructive operations, and/or to copy any data you might recover. And to make a new pool that is well-designed and safer from the start.
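As a hedged sketch, cloning a drive block-for-block from the shell might look like this. The device names /dev/ada1 (source) and /dev/ada5 (target) are placeholders - verify yours with camcontrol devlist first, because dd pointed at the wrong target destroys data:

```shell
# Raw clone of the whole source drive onto an equal-or-larger target.
# conv=noerror,sync keeps going past read errors, padding bad sectors with zeros.
dd if=/dev/ada1 of=/dev/ada5 bs=1M conv=noerror,sync status=progress

# ddrescue (if available) is the better tool for a failing drive: it skips
# bad areas first, retries them later, and can resume via the map file.
#   ddrescue -f /dev/ada1 /dev/ada5 rescue.map
```

Only work on the clones afterwards; keep the originals untouched on a shelf.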

Exactly which commands have you tried, and what was the result?

I tried zpool import (“pool name/id”) with various arguments like -f and -F - no dice. Basically the ones I linked in the original post up there.
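Concretely, the invocations were along these lines (the pool name “Storage” is a placeholder for whatever yours is called; the read-only variant is one I’ve seen suggested as the safest first attempt):

```shell
# List pools visible for import, with per-vdev status
zpool import

# Normal import by name (or by the numeric pool id)
zpool import Storage

# Force import if the pool looks in-use by another system
zpool import -f Storage

# Recovery-mode import: discard the last few transactions if they are damaged
zpool import -F Storage

# Read-only import, which avoids writing anything to a damaged pool
zpool import -o readonly=on -f Storage
```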

Adding new drives does not move the data. Data added before expansion is still fully on the old drives; data added after expansion is striped all over, but probably most on the new drives—and is lost if any drive is irrecoverable.

Okay well this is great news because I added the drives and they essentially died in a day or two without any new data being added to the server. (maybe a few nextcloud photos, but who cares. They are still on my phone and can be added later if this issue is fixed)

That depends on what the situation is. What is the hardware condition of each drive?
Potentially, it might be possible to roll back to a state where the pool can be imported, discarding some data. Else, you could attach the drives to a Windows machine and try Klennet ZFS Recovery—scanning is free, but if it finds something to recover it will cost you $500 for the license.

All of them work except the two new ones I added. Both were apparently sold to me DOA. One is recognized by my mobo but doesn’t actually work; the other is completely dead - not recognized by the mobo, and it makes horrible sounds. I know, this is my fault. I should have checked the drives, but I trusted the seller.

So, if I remove the SSD and the 4TB drive (the ones that were in my NAS originally, with ALL of my important data on them) and install them in my PC, I should be able to just clone them onto new drives for backups? I assume I can use Macrium or something?

In any case: Buy some good drives, to clone the drives before attempting some potentially destructive operations, and/or to copy any data you might recover. And to make a new pool that is well-designed and safer from the start.

So I essentially just clone the data from my old drives to new ones (that way I have a backup in case something gets messed up). How do I go about re-importing the pool? If I disconnect the faulty drive, it still shows up as UNAVAIL in the zpool import output even though it isn’t connected to the NAS. Also, when I try zpool import (enter name of pool), it says “pool does not exist” or something along those lines. Is that because I exported it?

I advise you against doing that. Put those drives away somewhere safe until you can reinstall them in a known-good-functioning TrueNAS system and attempt an import.
“I can think about is ALL OF MY FAMILY PHOTOS (uploaded from old phones and synced daily with Nextcloud from my phone and my wife’s!) and important information just gone!” Don’t experiment with those drives!

1 Like

I’ve tried MANY things thus far. The last thing I did in a desperate attempt was to Export/Disconnect the pool. I did that but now I am unable to import it. The exported pool still shows up under zpool import but when trying to actually import it or force an import, it just says the pool doesn’t exist. When you export a pool, where does it go?

Sorry that you are still having trouble.

Did you put those two saved drives into a known-good-functioning TrueNAS system and attempt an import, or did you do something else with them?

I only have one PC with TrueNAS set up. I exported the pool, then tried to import it again into the same system, hoping to be able to ‘zpool remove’ the two dead drives (they died within 24hr of install. They have no new data on them, so IDC about them at all. I just want my old setup back!)

Nowhere. Exporting just marks the pool as “not in use” so it can be safely imported into another system (no ‘-f’ option).
You’ve learned the hard way that redundancy is a must, as well as backups. And now the enterprise roots of ZFS are working against you: ZFS will NOT mount a pool with missing vdevs and will NOT return any partial or partially corrupted data. It’s complete-and-known-good or nothing at all - “get that tape archive…”

I don’t know if it is possible to revert the pool to a lower number of drives by forcing import at a specific transaction group.
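For anyone who wants to research that route: the last-resort form would look something like the sketch below. The -T option is undocumented, the txg number shown is a placeholder you would have to dig out with zdb, and none of this is guaranteed to work when a whole vdev is physically gone:

```shell
# Inspect the labels and uberblocks on one member disk to find
# candidate transaction group (txg) numbers (partition name is an example)
zdb -ul /dev/ada1p2

# Attempt a read-only import rewound to a specific txg (1234567 is a placeholder)
zpool import -o readonly=on -f -T 1234567 Storage

# -F with -X ("extreme rewind") is even more aggressive and equally unsupported:
#   zpool import -FX Storage
```

Work only on cloned copies of the disks if you try any of this.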

Your best option is to mount the drives in a Windows system (do NOT accept the “friendly” suggestion to format the “unknown” drives) and have Klennet ZFS Recovery scan them. It should be able to retrieve all the data that was there before the last two single-drive vdevs were added. Scanning is free, but actual recovery will cost you a $399 licence - and the price of new drives.

Else, you’d have to perform successful necromancy on the two dead drives.
As known to students of Miskatonic University, cold can maintain life in the dying. I once managed to recover data from a defective drive in my sister’s iMac. The process involved taking the drive out (hello, suction cups…), plugging it into a USB adapter, and running the bare drive on the windowsill (it was winter, outside temperature just above freezing; the summer alternative is to wrap the drive in a bag and put it in the fridge, squeezing the adapter cable through the edge of the door). Even then, the drive only worked for a few minutes at a time before overheating into agony again. Over a few runs that was enough to copy the most important data to my MacBook Pro, but it would not have allowed making a raw copy of the full drive to a new device with dd, as you would possibly need here.

5 Likes