How to recover from rm -rf /* ? (lost everything)

Yes, I’m hoping very little time passed between the erasure and the shutdown, and that very little data was written to the pool post-erasure, too.

I guess the best thing to do would be to pull the boot disk and boot from a cleanly installed TrueNAS boot disk, so that it won’t mount the pool automatically, and then try the import from there with the -Tn option.
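As a rough sketch of what such a rewind attempt could look like (the pool name `tank` and the txg number `123456` are placeholders, and the commands are only printed here rather than executed, since they only make sense against the real pool devices):

```shell
#!/bin/sh
# Sketch only: commands are printed, not run, because they must be issued
# against the actual pool. "tank" and "123456" are placeholder values.

# List importable pools without importing anything:
echo 'zpool import'

# Dry-run rewind import: -n means nothing is written, -T rewinds to an
# older txg, and readonly=on guards against writes if -n is later dropped.
echo 'zpool import -f -o readonly=on -T 123456 -n tank'
```

The key safety properties are -n (dry run) and readonly=on; neither should allow anything to be written to the pool.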

2 Likes

AFAIK, a TXG is written every 5 seconds, and each of these writes an uberblock. So it should take c. 160 seconds to rotate through 32 uberblocks. Let’s say that 150s, or 2.5 minutes, after the rm command it might be too late.
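For anyone checking the arithmetic behind that estimate: 32 uberblock slots rotated at roughly one TXG every 5 seconds.

```shell
# 32 uberblock slots x ~5 seconds per TXG = time until the oldest
# (pre-rm) uberblock gets overwritten.
echo $((32 * 5))   # seconds, i.e. about 2.5 minutes
```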

There are no timestamps in the shell output, but my guess is that the rm command was running for less than that.

P.S. Once, when I was working for an outsourcing company, I refused to take on a production system which had zero operational documentation. The developers (a very well known US company) were thus forced into running it using manual operating instructions scribbled on a crumpled piece of paper (I kid you not). They made literally exactly this mistake: they deleted the entire directory tree, including the root directory and kernel, rebooted, and then found that they had no backups either. So it can happen in large enterprises and not just on small family servers.

I would love to say that I have never personally made this sort of mistake myself, but I can’t. One time I made a similar mistake removing all the ACLs on a Netware system drive. Apparently I was too stupid to learn from that, and a few years later I did something similar on Windows.

Today, I am much more cautious. But I still can’t promise with absolute certainty that it won’t happen again in the future.

3 Likes

A few days ago I did a fresh install of TrueNAS on that same disk (which is the reason I didn’t take the time to set a snapshot). When I plugged the drives back in after the install, TrueNAS mounted them as if they had always been part of the fresh install. It seems that TrueNAS SCALE EE loads them automatically.

Ideally, nothing was written at all. TXGs do not advance when there are no writes, do they?

So there’s a free command-line way to a possible recovery, but @Berboo must be very, very cautious with the commands, and possibly check every step here before proceeding. (zpool import commands with -n are a “dry run”: nothing will be done, so no damage will happen.)
Burned once…

And there’s an automated, but not free, way out with Klennet. Buy a large drive.
Unplug the pool drives. Install Windows somewhere; format the large drive. Then, and only then, plug in the pool drives, do not accept the “gentle suggestion” from Windows to format these unknown devices, and run Klennet. If it can recover, buy the licence and proceed.

Either way should actually begin by buying enough storage space: A backup is needed.

2 Likes

I hope you meant “with the same boot drive” and did not touch or format your pool drives. But if you imported the pool into EE and spent some time there looking around, you might have burnt those precious 160 seconds.
Then Klennet is your last hope.

1 Like

I made this mistake a few hours ago, maybe 30 minutes before posting here.

I just wanted to point out that even with a fresh install, it seems that TrueNAS mounts pools automatically.

So if I want to try this solution, I will at some point have to plug those drives in to check them (I will not do that without your help, guys).

What is the safe way to plug those drives in, check the command lines, and then go to the Klennet solution if this one is unsuccessful?

Thanks for your support and your understanding. It helps a lot.

I think it took me around 2 minutes before shutting down the server. Maybe more, maybe less. In these situations 2 minutes can go by really fast…

If you still have the drives installed, with Electric Eel on the boot drive and the NAS powered off, wait for @HoneyBadger to confirm the commands to list the txgs and import the pool. Then power on the NAS and act calmly and carefully… :crossed_fingers:
(And have that backup storage at hand if all goes well.)

Important to note that while I do know a little bit about ZFS, I’m not the panacea or ultimate authority here.

@Berboo I would suggest using Klennet first from a Windows PC, as the “scanning” functionality is free. But you will need additional separate drives to recover your files to.

If you are wanting to try this from within TrueNAS, the very first thing you would want to do is export the pool (without destroying data) if it does get mounted at boot-time. You would need to be comfortable with the command line, connect over SSH, and run the sudo zdb -eul command against each of the physical pool devices - paste the results here as text file attachments.
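As a sketch, the per-device step might look like the following. The device names /dev/sda through /dev/sdc are placeholders for the actual pool members, and the commands are only printed here, not executed:

```shell
#!/bin/sh
# Print one zdb invocation per pool member. -e examines an unimported pool,
# -u dumps the uberblocks, -l dumps the device label. Device names are
# placeholders; substitute the real pool disks.
for dev in /dev/sda /dev/sdb /dev/sdc; do
    echo "sudo zdb -eul $dev > zdb_$(basename "$dev").txt"
done
```

Redirecting each run into its own text file makes it easy to attach the results to a forum post, as suggested above.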

We would start with the uberblocks - the “entry point” of the ZFS tree - and see if there is one that is dated prior to the unfortunate rm -rf command.
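To illustrate how one might scan that output for a usable entry point, here is a hedged sketch that pulls txg/timestamp pairs out of a hypothetical, heavily abridged uberblock dump (real `zdb -eul` output has many more fields per uberblock):

```shell
#!/bin/sh
# Hypothetical, abridged uberblock dump used as sample input.
cat > /tmp/zdb_sample.txt <<'EOF'
Uberblock[0]
        txg = 1200
        timestamp = 1733000000 UTC
Uberblock[1]
        txg = 1231
        timestamp = 1733000155 UTC
EOF

# Pair each txg with the timestamp that follows it, sorted by txg.
# The goal: spot the newest txg whose timestamp predates the rm command.
awk '/txg = / {txg = $3} /timestamp = / {print txg, $3}' /tmp/zdb_sample.txt | sort -n
```

The txg chosen this way would then be the candidate value for a read-only, dry-run rewind import.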

2 Likes

@etorix I’d prefer to test the TrueNAS solution first; it’s going to cost less in drives and software. But if the HoneyBadger solution doesn’t work, I will of course have no choice left.

@HoneyBadger I will install Windows on that 60 GB SSD and have the pool drives scanned by Klennet, to at least confirm that it can see something.

After doing that, I will install TrueNAS and export the pool if mounted.
But what scares me is that something could happen between the first boot of TrueNAS and the moment I execute the zpool export poolname command.

Anyway, since this is time-consuming, I will probably do that this weekend.

Thanks a lot for the time you guys are taking trying to help. :pray:

1 Like

This is your data, your money, and your choice. Nobody is going to blame you for going one way over the other.
But in my opinion the cost for drives is the same irrespective of the path: you’ve had a big scare, and you do need extra drive(s) for a backup.

1 Like

If you have a controller and case that supports hotplug, you can connect the drives after the initial boot - but if you’re unsure, then you will just have to boot up and run the export command as soon as possible.
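A minimal sketch of that first-moments sequence (commands printed rather than executed; `tank` is a placeholder pool name):

```shell
#!/bin/sh
# Sketch only: printed, not executed. Export the pool as soon as possible
# after boot so nothing else writes to it. "tank" is a placeholder.
echo 'sudo zpool export tank'
# Then confirm the pool is no longer imported:
echo 'zpool list'
```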

1 Like

Well, I would buy a large drive and take an image of 2 of the drives, store it there, and then boot TN with 2 of the 3 old drives.

You get a degraded pool but who cares?

Hello everyone,

I wanted to provide some updates on where I am at.

After installing Windows on the 60GB SSD, I scanned the three drives using Klennet Recovery. The scan took approximately 10 hours. I’m not entirely sure how Klennet operates, but the results seem concerning :cold_sweat:. Here are two screenshots of the recovered JPEG files:

Green files: [screenshot]

Red files: [screenshot]

I have numerous files which are red; I assume that signifies corruption.

I hope this doesn’t bode ill for the upcoming TrueNAS attempt. I plan to install it in the next few days and will come back here to seek some help.

I’m really scared…

Now decide if the money for Klennet Recovery is worth recovering the green files: a little over 1/3 of your data.

It appears that Klennet (at least in this case) is unable to retrieve the filesystem metadata needed to restore the original paths, filenames, and modification times.

@winnielinnie Oh, so that is not the normal output of Klennet? Indeed, I found it strange that there were no paths, no file names, etc.

@SmallBarky even the green files are weird: very low resolution; I can’t even recognize what some of them are.

Not only that: did you notice how every JPEG file supposedly has a modification date of November 24? That’s obviously not true, but the filesystem metadata would be required to restore the original modification times as well.

1 Like

Maybe this is Klennet’s “preview”, and not indicative of the file itself? (Unless this is the file itself, and it’s perhaps from a software catalog that stores “preview thumbnails” of your photos.)

The “4 KiB” file size seems to suggest that these are preview thumbnails or EXIF thumbnails.

3 Likes