Accidental "Reset configuration to defaults" has locked the zpool

Good day all,
I recently moved and was having difficulty getting my TrueNAS SCALE box to connect to the network properly. I figured there must be some sort of configuration issue, so the most logical conclusion I could draw was option #5 on the local console menu, “Reset configuration to defaults”.
Big mistake on my part and I am dreadfully embarrassed.
When I got everything back up I saw that the main pool was missing. I used the GUI to import the pool, and everything seemed to check out, but the pool was now locked.
I figured, “Ok, no problem, I’ve got the key saved around here somewhere”. Dug around in my other computers and laptops and found 2 of them, both named “dataset_share_key.json”. The Dataset is named Share, so this made sense, but when I tried to unlock it, neither of them worked.
My next step was to look to the internet. Previous help articles have all basically led to the same place, but none of them covered my issue of the keys not working.
Another poor soul didn’t even have the key backed up, so his solution involved finding the .bak files for the freenas database and the pwenc_secret and taring them together to make a working config backup.
This was also unsuccessful, just brought me back to the unconfigured state.
Finally, I tried to use GRUB to boot into a previous installation. I have an upgrade environment and a post-setup one (where the key file was originally generated). I tried booting into them, but strangely ran into the same problem: the Share pool was not present, and it showed as locked after importing it.
I’m at my wit’s end at 10:41 PM on a Tuesday, because I may have just irrecoverably toasted my main document storage, and I could really use some guidance on how to recover from this debacle.
Any helpful advice would be a blessing.

This sounds like a whole lot of pain. Have you tried importing the backup config? It should have exported your keys. I believe that if you don’t find the keys, your experience will be anything but pleasant.


Open up this .json file in a text editor.

Do you see a 64-character hex string? If so, copy and paste that string (only the 64 characters) into the unlock prompt when you import your pool. Make sure to deselect the “keyfile” option, since you want to paste the string manually.
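If it helps, here is a quick sanity check you can run from a shell. The filename and the JSON layout (an object mapping a dataset name to a hex key, which is how SCALE exports keys) are assumptions; the sample content below is a dummy stand-in for your real keyfile:

```shell
# Dummy stand-in for the exported keyfile (real layout is assumed to be
# {"pool/dataset": "<64 hex chars>"}).
cat > /tmp/dataset_share_key.json <<'EOF'
{"Share/share": "0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef"}
EOF

# Print only the candidate 64-character key, ready to paste into the
# unlock dialog:
grep -oE '[0-9a-f]{64}' /tmp/dataset_share_key.json
```

If `grep` prints nothing, the file doesn’t contain a hex key in that form, which would explain why pasting from it fails.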

Can you elaborate? Do you mean it outright said “incorrect key”?


I saved the key but didn’t back up the config :man_shrugging: I did back it up on the last upgrade, but the computer I saved it to has since been wiped.

I’ll give that a shot, thx for the idea.
Yes, when I go to unlock with the key file, the error detail says “Incorrect Key”.



Encrypted pools and datasets turn into self-inflicted ransomware a million times more often than they protect against anything.

Unfortunately, your case seems to be another proof of that…

The second way-too-common reality is that people do not have proper backups of their data… Is that your case as well? If you do have backups, now would be a great time to use them…


And manually with the HEX string?

No dice, looks like they’re both wrong, even when manually input.

Do you have a backup of pwenc_secret in your /data/ directory? (Usually named pwenc_secret.bak.)

ls -l /data | grep pwenc

Do you have a history of your auto-generated config files?

ls -l /var/db/system/configs-*/

If you’re lucky, you might be able to combine either pwenc_secret or pwenc_secret.bak with a config file from around the time you remember last accessing (successfully) your encrypted dataset(s).

The combination would be a simple tar file that contains both files. (If you use pwenc_secret.bak, you’ll need to rename it to pwenc_secret before creating the tarball.)
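For reference, the tarball itself is trivial to build. The dummy file contents below stand in for the real pieces (a database saved under /var/db/system/configs-*/ and the /data/pwenc_secret.bak copy); the member names assume the usual layout of an exported config (freenas-v1.db plus pwenc_secret):

```shell
# Build a config tarball from the two recovered files.
mkdir -p /tmp/rebuild && cd /tmp/rebuild

# Dummy stand-ins; in reality you would copy in:
#   a .db file from /var/db/system/configs-*/  -> freenas-v1.db
#   /data/pwenc_secret.bak (renamed)           -> pwenc_secret
printf 'dummy-db' > freenas-v1.db
printf 'dummy-secret' > pwenc_secret

# The upload dialog takes a plain (uncompressed) tar of both files:
tar -cf restored-config.tar freenas-v1.db pwenc_secret

# Verify the two required members are present:
tar -tf restored-config.tar
```

The resulting restored-config.tar is what you would then feed to the “Upload Config” dialog.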

I have to disagree with that statement. Encryption is a tool that is generally very useful. Has data been lost due to misplaced passwords, encryption keys, etc.? Sure, but that is what backups are for. It is no different from losing a pool due to ignoring SMART alerts, followed by disk failures, etc.

Encryption means I never have to worry about sending a drive back for a warranty replacement even if it was in a pool and I couldn’t wipe it before returning it due to its malfunction.

If I replicate to another NAS, the data on the remote server remains in an encrypted state, ensuring that nothing “over there” contains useful information.

Encryption also imposes no throughput overhead on modern hardware.

Because use case varies, I would not broadly dismiss encryption, just as I also secure the WebGUI on my NAS thanks to the excellent help of @dan.

My only gripe here is that such a devastating system wipe sits right at the front of the options menu at boot. Even if I had the key, setting all my configuration back up would take a while. I 100% take responsibility here, but I think it would be prudent to put that option somewhere else, or to put a bigger confirmation in front of it. In the GUI, if I want to delete a pool, I have to type out the whole pool name. In the current model, a cat could jump on a keyboard and cause a system wipe.


You should not leave your server with a keyboard attached. You shouldn’t need a keyboard or a monitor, since server-grade hardware has IPMI, which lets you operate it remotely. You should use server-grade hardware. You should keep backups of your system’s configuration so you can upload it and have everything ready with a few clicks. You should keep backups of your passphrases/seeds if you use encryption. And you should have backups of your data.
But I agree that a warning should be inserted.


Having the console open with an attached keyboard is not the default operation. Typically you would get there after logging in via IPMI or the likes.

Additionally, the system is not wiped, just the configuration. A boot drive can fail at any time; it’s the user’s responsibility to have a current config backup. In that case it’s nothing more than a slight inconvenience to reinstall and restore the config.

When you used keys to encrypt the pool, you were shown a message telling you to store the keys securely.

I don’t want to rub salt in the wound or anything, just to clarify that this is, in my opinion, no major oversight in the way things work.


Yes, I 100% agree, and I don’t normally leave keyboards attached. There is a warning, but it’s not very verbose and you just have to hit y and Enter. Big config wipes shouldn’t be that easy, imo.

Then, a more verbose warning should be inserted.


You have to distinguish between encryption in general and pool/dataset encryption. I use encryption too, but at the file level instead. The data also remains encrypted when replicated. The difference is that my pool will never get locked like this one.

Also, your worry about a broken disk being sent back before wiping is beyond paranoia level and closer to insanity level…
1- The disk is broken and not readable as it is.
2- The manufacturer has neither the time, money, nor interest to extract the content of every broken drive they receive from all of their clients.
3- The disk being part of a ZFS pool, there is no complete data on it.
4- There is a reason why there are no file-recovery tools for ZFS: it is already difficult enough to extract data from a damaged system without any kind of encryption.

In all cases, should your data be that critical and that valuable, know that the proper handling in such a situation is to physically destroy the drive instead. The data is worth millions more than a regular drive, and the RMA value is insignificant.

By posting something like this, you encourage people to use something they do not control and do not understand, something that threatens their data directly, all of that for no serious gain.

So, sure, you can have your opinion, but I would suggest you consider its impact on new and inexperienced users. What will benefit them the most? Pool/dataset encryption that goes above and beyond paranoia, or file-level/no encryption that reduces the risk of losing it all like here?


In my opinion that does not make a difference. An encrypted file needs to be decrypted; personally, it doesn’t matter to me whether I lose an encrypted dataset or keep the dataset with all its file contents encrypted. The result is the same.

Why do I use encryption? In case the server gets physically stolen; in that scenario the password for the GUI can be reset, and unencrypted data would be at risk.
Even if the chances of that are low. It’s the same reason I use VeraCrypt on my other machines: in case of physical theft, I don’t need to think about any data possibly being accessible to a third party.
The chance of a compromised system is probably higher.

Countermeasures against getting locked out? Given my reason for encrypting, keys are not an option. I use passphrases, which also lets me store them in my password manager and keep a printed copy of the most important passwords stored somewhere else. Aside from the possibility that I could also simply remember my password.

I completely agree here, but I think if you properly think through the encryption and its possible downsides, I wouldn’t steer away from encryption at the dataset/pool level.

I’m gladly corrected / called out.


I’ve been using encryption for all my data for a very long time now (from file-level, to LUKS, to TrueCrypt, to VeraCrypt, to GELI, and now with ZFS encryption.)

I’ve never once locked myself out of my own data.

Not to sound insensitive, but it comes down to the user’s responsibility, just like with anything else. This is not the fault of encryption itself.


Indeed, but that is the same logic the NRA keeps repeating to ensure a maximum number of guns keep circulating everywhere. After each and every tragedy, they repeat it loudly and proudly: guns do not kill, it is the people mishandling them who do. But in the end, what good can a gun do?

So I prefer, a million times over, a system where things that dangerous are restricted and discouraged, rather than trivialized or encouraged even for people without the proper knowledge and experience.

So there should be no encryption whatsoever until proper and complete backups are in place and tested regularly. That already rules out encryption for 99% of users.

One should get used to key management and recovery with a file-level solution first, like @chuck32 using VeraCrypt, before going to the pool/dataset level.

Once there, these tools can keep data encrypted in the system even while the system is live, as opposed to pool/dataset encryption. So they may offer even more and better security.

So only after all of that should pool / dataset encryption be considered. At the end, that leaves an almost non-existent use case.
