Accidental "Reset configuration to defaults" has locked the zpool

I don’t see giving fellow users the pros and cons of encryption as any different from giving forum advice about whether a Z2 is better than a Z3, whether to implement a sVDEV vs. an L2ARC, or whatever topic. ZFS is a Swiss Army knife: it can do a lot of things, and some of them may cut you the wrong way if proper care is not taken. That’s why we make backups. :grinning:

Your opinion and mine on the benefit vs. the cost of encryption are very different, and we’ll simply have to agree to disagree on that point. That said, I consider the casual use of “paranoid” to describe someone simply because you don’t agree with their opinion to be unnecessarily inflammatory.

Bringing guns into the discussion is not only inappropriate, it also further ratchets up the rhetoric and creates a straw-man argument that does not apply to the question of whether file-level, data-set level, or even pool level encryption should be used by themselves, in combination, or whatever. Guns serve a very different purpose than a NAS, encrypted or not.

A good password manager should be standard issue with every type of encryption. Ditto a plan to get to said passwords if a catastrophe affects one physical location. No different from having off-site, and ideally offline + off-site, backups for the same reason. The minute some data needs to be encrypted, the need for a password manager and the careful management of its access / backups / alternative sources / etc. arises.

For example, how many among us store their password-manager keys offsite in a way that their estate executors and/or kids can access them, if necessary? There is a lot to think about when it comes to encryption, and going through what it would take to recover a locked pool/dataset/VeraCrypt volume/archive should be part of every recovery plan.

Quite a lot, really, but that’s getting wildly off-topic.

You’re right that encryption carries a not-insignificant risk of data loss, as we’ve seen (once again) in this very topic. It’s less fragile than it was before native ZFS encryption, but it’s still pretty easy to give yourself unintended ransomware, as you’ve said.

But it remains a tool for which people perceive a need. And just as other people don’t get to tell you how to use your NAS, neither do you get to tell them how to use theirs.

LET’S DO THIS! @kris will be SO THRILLED to create a “Politics” category for the forum! He’s been waiting for the right opportunity! :smiling_face_with_three_hearts:

Let me reach for my paddle…

Full disclosure: I use VeraCrypt’s full-disk encryption on my Windows machines, so that’s comparable to pool-level encryption.

My TrueNAS backups are replication tasks that preserve the dataset encryption.

My B2 Cloud Backup is comparable to file-level encryption. And for the offline copy (encrypted with VeraCrypt) I use the same passphrase I use daily to unlock my main PC.

Agreed! I prefer the comfort of at rest encryption for now.

As long as you don’t lose your encryption key / passphrase, I consider it okay. But I agree, one should only dive into that once the proper groundwork is laid. Proper backups should be in place before introducing another point of failure.

Oh God, I just wanted some advice about an issue and now I’ve started WWIII!

I stumbled upon this post; I have no idea whether it may help in recovering the keys from the files you found:

I’ll have to give this a try later today, thanks!

Nope, no dice. The method requires a .db file that has the encrypted key in it, but somehow I don’t have one.

Not even in here? :point_down:

nada, zero files of any kind in either directory

Do you have an unmounted, residual System Dataset by chance?

Compare these two outputs:

zfs list -r -t filesystem | grep -e "\.system"
zfs mount | grep -e "\.system"

And what about the “backup” secret seed? :point_down:

Sorry for the late reply on this one, I appreciate the help but I had to step away from it for a little while and clear my head.
Firstly, the output of the two commands are as follows:

root@truenas [~]# zfs list -r -t filesystem | grep -e "\.system"
boot-pool/.system                                            1.37G   203G     1.35G  legacy
boot-pool/.system/configs-e31904440bf4429a9f451dd56dc297b9     96K   203G       96K  /mnt/configs/
boot-pool/.system/configs-f1f5036a6e4448d09a9ddb3c45165866     96K   203G       96K  legacy
boot-pool/.system/cores                                        96K  1024M       96K  legacy
boot-pool/.system/ctdb_shared_vol                              96K   203G       96K  legacy
boot-pool/.system/glusterd                                     96K   203G       96K  legacy
boot-pool/.system/rrd-e31904440bf4429a9f451dd56dc297b9       9.17M   203G     9.17M  /mnt/rrd
boot-pool/.system/rrd-f1f5036a6e4448d09a9ddb3c45165866       13.3M   203G     13.3M  legacy
boot-pool/.system/samba4                                      312K   203G      148K  legacy
boot-pool/.system/services                                     96K   203G       96K  legacy
boot-pool/.system/syslog-e31904440bf4429a9f451dd56dc297b9     476K   203G      476K  /mnt/syslog/
boot-pool/.system/syslog-f1f5036a6e4448d09a9ddb3c45165866    2.02M   203G     2.02M  legacy
boot-pool/.system/webui                                        96K   203G       96K  legacy
share/.system                                                1.63G  1.15T     1.35G  /mnt/share/.system
share/.system/configs-f1f5036a6e4448d09a9ddb3c45165866       22.1M  1.15T     21.0M  /mnt/share/.system/configs-f1f5036a6e4448d09a9ddb3c45165866
share/.system/cores                                           352K  1024M      256K  /mnt/share/.system/cores
share/.system/ctdb_shared_vol                                 245K  1.15T      234K  /mnt/share/.system/ctdb_shared_vol
share/.system/glusterd                                        378K  1.15T      282K  /mnt/share/.system/glusterd
share/.system/rrd-f1f5036a6e4448d09a9ddb3c45165866            196M  1.15T     22.1M  /mnt/share/.system/rrd-f1f5036a6e4448d09a9ddb3c45165866
share/.system/samba4                                         3.04M  1.15T      682K  /mnt/share/.system/samba4
share/.system/services                                        234K  1.15T      234K  /mnt/share/.system/services
share/.system/syslog-f1f5036a6e4448d09a9ddb3c45165866        68.2M  1.15T     21.2M  /mnt/share/.system/syslog-f1f5036a6e4448d09a9ddb3c45165866
share/.system/webui                                           234K  1.15T      234K  /mnt/share/.system/webui
root@truenas [~]# zfs mount | grep -e "\.system"
boot-pool/.system               /var/db/system
boot-pool/.system/cores         /var/db/system/cores
boot-pool/.system/samba4        /var/db/system/samba4
boot-pool/.system/syslog-f1f5036a6e4448d09a9ddb3c45165866  /var/db/system/syslog-f1f5036a6e4448d09a9ddb3c45165866
boot-pool/.system/rrd-f1f5036a6e4448d09a9ddb3c45165866  /var/db/system/rrd-f1f5036a6e4448d09a9ddb3c45165866
boot-pool/.system/configs-f1f5036a6e4448d09a9ddb3c45165866  /var/db/system/configs-f1f5036a6e4448d09a9ddb3c45165866
boot-pool/.system/webui         /var/db/system/webui
boot-pool/.system/services      /var/db/system/services
boot-pool/.system/glusterd      /var/db/system/glusterd
boot-pool/.system/ctdb_shared_vol  /var/db/system/ctdb_shared_vol
boot-pool/.system/cores         /var/lib/systemd/coredump

On a hunch, I had previously made new mount points and mounted the configs/rrd/syslog for those vols labeled with e31904440bf4429a9f451dd56dc297b9 to no avail, nothing in them.

Secondly, yes there is a pwenc_secret.bak in the /data/ directory.

Lastly, the boot-pool HDD started throwing SMART test failures a while back, so I replaced the drive with a new one. I found the old drive squirreled away somewhere, and it appears to still have the partitions on it, but I can’t figure out how to import its boot-pool.
I tried to do it on a laptop with Ubuntu 23.10 first. I could get zdb -l to show the label, but nothing would show up when I tried to import or otherwise mount it. The GNOME Disk Utility and fdisk both show the proper drive structure, which matches the fdisk output from the TrueNAS box, but if I try to mount it with the utility, it says the kernel doesn’t have the ZFS module loaded.
I then tried plugging it back into the TrueNAS SCALE box in the same SATA slot as the current boot-pool disk, but it doesn’t seem to be read as a bootable disk; I just get the “Insert Boot Medium” prompt.
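If it’s useful, here’s a rough sketch of how the Ubuntu attempt could be retried. The package name, device path, and mount points here are assumptions, not verified against your setup; the zpool commands are left as comments since they need root and the actual disk:

```shell
# Check whether the ZFS kernel module is currently loaded (prints one line either way)
lsmod 2>/dev/null | grep -q zfs && echo "zfs module loaded" || echo "zfs module missing"

# If missing: Ubuntu kernels ship the module, but the userland tools come from
# the zfsutils-linux package, and installing it also gets the module loaded:
#   sudo apt install zfsutils-linux

# Scan the old disk for importable pools, read-only, under an alternate root,
# so nothing on the laptop is touched (device name /dev/sdb is a guess):
#   sudo zpool import -d /dev/sdb -o readonly=on -R /mnt/oldboot boot-pool

# If a pool with that name is already imported, import by numeric ID and rename
# (the ID is printed by a bare "sudo zpool import -d /dev/sdb"):
#   sudo zpool import -o readonly=on <pool-id> oldboot
```

If the import succeeds, the old config databases would then sit under /mnt/oldboot/var/db/system/configs-*/, assuming the usual System Dataset layout.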

Your output suggests that you have a non-mounted (old?) System Dataset that resides on your share pool.

You might be able to check within here for old configs:

# CREATE TEMP SNAPSHOT
zfs snap share/.system/configs-f1f5036a6e4448d09a9ddb3c45165866@temporary-hold

# PROTECT TEMP SNAPSHOT / DATASET
zfs hold temporary share/.system/configs-f1f5036a6e4448d09a9ddb3c45165866@temporary-hold

# MOUNT THE RESIDUAL DATASET
zfs mount share/.system/configs-f1f5036a6e4448d09a9ddb3c45165866

# LIST THE CONTENTS
ls -l /mnt/share/.system/configs-f1f5036a6e4448d09a9ddb3c45165866

I have a hunch your old configs live inside there; at over 20 MiB, it very likely holds old configs.

If so, you might be able to tarball one of the “old-ish” configs together with pwenc_secret.bak or pwenc_secret, to upload and reboot with an unlocked pool.

Or, alternatively, you can decode the HEX for the encrypted dataset(s) by using the method in this post by @chuck32, this time with a relevant config file that you found in this residual dataset.
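As a sanity check before attempting an unlock: a raw ZFS key in hex form should be exactly 64 hex characters (32 bytes), which is what keyformat=hex expects. A small sketch, with a placeholder value standing in for whatever you recover from the config:

```shell
# Placeholder: substitute the hex string recovered from the config
KEY="0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef"

# A plausible raw ZFS key in hex form is exactly 64 hex digits
if printf '%s' "$KEY" | grep -Eq '^[0-9a-fA-F]{64}$'; then
  echo "plausible 32-byte hex key"
else
  echo "not a valid hex key"
fi

# If it checks out, it can be pasted into the GUI unlock dialog, or loaded at
# the prompt of (dataset name is an example):
#   zfs load-key share
```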

No such luck, unfortunately; it looks like that config is locked inside the encrypted pool:

root@truenas [~]# zfs mount  share/.system/configs-f1f5036a6e4448d09a9ddb3c45165866
cannot mount 'share/.system/configs-f1f5036a6e4448d09a9ddb3c45165866': encryption key not loaded

I don’t know what else to say at this point. You may have indeed ransomed your own data, with no means of retrieving it.

Your final options are:

  1. Try to locate the exported config file. Maybe you had saved it at some point in time? The filetype is either .tar or .db, with truenas somewhere in the filename. Maybe you’ll get lucky and remember “I did make a backup of this config!”
  2. Try to locate the exported keys file. Maybe you had saved it at some point in time? The filetype is .json, with your pool’s name (or dataset’s name) somewhere in the filename. Maybe you’ll get lucky and remember “I did make a backup of the encryption keys!”
  3. Restore from backup.
  4. If no backups exist, then you know what’s implied…
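For options 1 and 2, a quick sweep of the usual places can be sketched like this. The directories and name patterns are guesses; widen them to wherever you might plausibly have saved files:

```shell
# Directories to sweep; override SEARCH_DIRS to look elsewhere
SEARCH_DIRS="${SEARCH_DIRS:-$HOME /mnt}"

# Look for config exports (*.db / *.tar with "truenas" in the name)
# and key exports (*.json with "key" in the name)
find $SEARCH_DIRS -maxdepth 5 \
  \( -iname '*truenas*.db' -o -iname '*truenas*.tar' -o -iname '*key*.json' \) \
  2>/dev/null || true
```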

EDIT: I’m pretty sure the GUI prompts you to save your keyfile upon creating a pool with an encrypted root dataset. Did you ignore it? Click cancel? Save it somewhere that you forgot about?


On a somewhat related note, this is why I wish they implemented “multi keyslots”[1] for encrypted ZFS datasets, in the same way that LUKS manages it. You would be able to store multiple user keys to access the encrypted data; including a combination of keys and passphrases. If you lose or forget one, you can still access your data with the other(s).


  1. Don’t you just love it when there’s a feature request from 7 years ago, and it’s still open and still being discussed? ↩︎
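For anyone unfamiliar with how LUKS pulls this off: each keyslot stores the same master key, wrapped independently under a different passphrase, so any single passphrase unlocks the data and losing one is survivable. A toy sketch of the idea using openssl (not a ZFS or LUKS command, just the concept):

```shell
# One master key protects the data...
MASTER=$(openssl rand -hex 32)

# ...but it is wrapped twice, once per passphrase ("keyslots" 1 and 2)
printf '%s' "$MASTER" | openssl enc -aes-256-cbc -pbkdf2 \
  -pass pass:first-passphrase -out /tmp/slot1.bin
printf '%s' "$MASTER" | openssl enc -aes-256-cbc -pbkdf2 \
  -pass pass:second-passphrase -out /tmp/slot2.bin

# Forget the first passphrase? The second still recovers the same master key:
openssl enc -d -aes-256-cbc -pbkdf2 \
  -pass pass:second-passphrase -in /tmp/slot2.bin
```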

But encryption landed like yesterday, the ticket is from 2017, that’s only… oh…
