25.10 corrupted encrypted dataset?

Decided to try 25.10 today. Everything went smoothly, then I went to unlock an encrypted dataset and was greeted with some ZFS errors and couldn’t access the dataset (I encrypted/decrypted this same dataset shortly before the upgrade with no issues). zpool status shows corrupted files in the encrypted dataset only, including all of its snapshots.

I rolled back the boot environment to 25.04.2.4 and am now running a scrub, and my data is accessible again. Was this just a coincidence, or is it somehow related to the update?
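For reference, the ZFS side of it boiled down to something like the commands below (a sketch only; the boot-environment rollback itself isn’t shown, and the pool/dataset names are the ones from the status output further down):

```
# Scrub the pool and check its state
zpool scrub BIGNAS
zpool status -v BIGNAS

# Unlock and mount the encrypted dataset again (a passphrase key is assumed)
zfs load-key BIGNAS/personal
zfs mount BIGNAS/personal
```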

I have several datasets and had no issues going from 24 to 25.04.2.5. I have yet to move to 25.10, but please do keep us updated with your findings on this issue if possible.


What were the errors?


Without more information, we can’t say either way.

Below is the zpool status information (I removed a lot of the snapshots it listed). The data was not actually corrupted, but zpool status lists everything on that specific encrypted dataset, and only that dataset, under “Permanent errors have been detected in the following files”.

I had a backup of this dataset, so I deleted it and the pool showed healthy again. I went back to 25.10 and am recreating the dataset.

```
root@truenas:~# zpool status -v BIGNAS
  pool: BIGNAS
 state: ONLINE
status: One or more devices has experienced an error resulting in data
        corruption. Applications may be affected.
action: Restore the file in question if possible. Otherwise restore the
        entire pool from backup.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-8A
  scan: scrub repaired 0B in 16:10:03 with 0 errors on Mon Nov 3 10:03:50 2025
expand: expanded raidz1-0 copied 48.8T in 2 days 10:11:43, on Mon Oct 14 10:27:00 2024
config:

        NAME                                      STATE     READ WRITE CKSUM
        BIGNAS                                    ONLINE       0     0     0
          raidz1-0                                ONLINE       0     0     0
            40f69a95-4d44-470c-b864-5e04326a6182  ONLINE       0     0     0
            2f5d77b5-d325-4b8e-98e8-beb700cbf6fe  ONLINE       0     0     0
            e1efd761-c5ce-4e6b-b2b7-61a6895efb86  ONLINE       0     0     0
            f5bb2315-0bb4-4903-9ea8-ed04cd3ab4dc  ONLINE       0     0     0
            8d314d1b-745a-42d9-adad-9dd2db3126c1  ONLINE       0     0     0
          raidz1-1                                ONLINE       0     0     0
            f139e068-854a-43d1-ac15-96dd45f6a071  ONLINE       0     0     0
            8f0a078a-0209-4567-b75e-e0fe4673d59f  ONLINE       0     0     0
            9f154144-62c4-4b0c-b9d0-9af480c38455  ONLINE       0     0     0
            a7e9709b-02f5-4d4e-b40d-fc5d16fa5e39  ONLINE       0     0     0
            41b39341-f876-4203-86e4-2882aea90c25  ONLINE       0     0     0

errors: Permanent errors have been detected in the following files:

        BIGNAS/personal@auto-2025-10-15_00-00:<0x1>
        BIGNAS/personal@auto-2025-10-25_00-00:<0x1>
        BIGNAS/personal@auto-2025-09-18_00-00:<0x1>
        BIGNAS/personal@auto-2025-09-11_00-00:<0x1>
        BIGNAS/personal:<0x1>
        BIGNAS/personal@auto-2025-09-17_00-00:<0x1>
        BIGNAS/personal@auto-2025-09-08_00-00:<0x1>
        BIGNAS/personal@auto-2025-09-05_00-00:<0x1>
```
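For the curious, recreating the encrypted dataset and pulling the data back from the backup amounts to roughly the following (a sketch only: a passphrase key is assumed, and /mnt/BACKUP/personal is a placeholder for wherever the backup copy actually lives):

```
# Recreate the encrypted dataset (passphrase key assumed; key-file setups differ)
zfs create -o encryption=aes-256-gcm -o keyformat=passphrase -o keylocation=prompt BIGNAS/personal

# Copy the contents back from the backup (placeholder source path).
# A raw replication stream (zfs send -w ... | zfs recv ...) would be an
# alternative that also preserves snapshots.
rsync -aHAX /mnt/BACKUP/personal/ /mnt/BIGNAS/personal/
```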

Those are likely metadata errors.

How are your drives connected?

They are connected via SATA, and all the connections are seated well. I’ve had checksum errors in the past due to a bad connection, but this seems different: there were no checksum errors, and all the affected files were specific to the one dataset.
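For what it’s worth, the SMART CRC counters are a quick way to tell cabling problems apart from something dataset-specific (assuming smartmontools is installed; /dev/sdX is a placeholder for each member disk):

```
# UDMA_CRC_Error_Count climbs on SATA link/cable problems, while
# reallocated/pending sectors point at the disk itself
smartctl -a /dev/sdX | grep -i -E 'crc|reallocated|pending'
```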

I would keep a close eye on this.

It’s concerning when you see ZFS metadata errors. Not only does the vdev itself (mirror, RAIDZ) provide redundancy, but metadata is also stored in multiple copies for extra safety, since protecting its integrity matters more than protecting file data.

Theoretically, metadata corruption should be less common than file data corruption.
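The relevant knobs are visible per dataset; checking them, and refreshing the error list once the affected snapshots are gone, looks roughly like this (BIGNAS/personal being the dataset in question):

```
# redundant_metadata and copies govern how many copies of metadata ZFS keeps
zfs get redundant_metadata,copies BIGNAS/personal

# Once the affected dataset/snapshots have been removed, a fresh scrub re-walks
# the pool and should drop the stale entries from the permanent-error list
zpool scrub BIGNAS
zpool status -v BIGNAS
```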
