Checksum errors not showing affected files

I have a raidz2 pool that has been experiencing checksum errors, but when I run zpool status -v, it does not list any affected files.

I have run zpool clear and zpool scrub multiple times; each scrub ends with 18 CKSUM errors on every disk and “repaired 0B … with 9 errors”.

Despite these errors, zpool status -v does not name any specific files. Here are my pool configuration and error status:

zpool status -v home-pool
  pool: home-pool
 state: ONLINE
status: One or more devices has experienced an error resulting in data
        corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
        entire pool from backup.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-8A
  scan: scrub repaired 0B in 1 days 16:20:56 with 9 errors on Fri Mar 14 15:02:37 2025
config:

        NAME                                      STATE     READ WRITE CKSUM
        home-pool                                 ONLINE       0     0     0
          raidz2-0                                ONLINE       0     0     0
            db91f778-e537-46dc-95be-bb0c1d327831  ONLINE       0     0    18
            b3902de3-6f48-4214-be96-736b4b498b61  ONLINE       0     0    18
            3e6f9c7e-bf9a-41d1-b37c-a1deb4b9e776  ONLINE       0     0    18
            295cd467-cce3-4a81-9b0a-0db1f992bf37  ONLINE       0     0    18
            984d0225-0f8e-4286-ab07-f8f108a6a0ce  ONLINE       0     0    18
            f70d7e08-8810-4428-a96c-feb26b3d5e96  ONLINE       0     0    18
        cache
          748a0c72-51ea-473b-b719-f937895370f4    ONLINE       0     0     0

errors: Permanent errors have been detected in the following files:

Sometimes I instead get an “errors: No known data errors” output, but still with 18 CKSUM errors per disk:

zpool status -v home-pool
  pool: home-pool
 state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
        attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-9P
  scan: scrub repaired 0B in 1 days 16:20:56 with 9 errors on Fri Mar 14 15:02:37 2025
config:

        NAME                                      STATE     READ WRITE CKSUM
        home-pool                                 ONLINE       0     0     0
          raidz2-0                                ONLINE       0     0     0
            db91f778-e537-46dc-95be-bb0c1d327831  ONLINE       0     0    18
            b3902de3-6f48-4214-be96-736b4b498b61  ONLINE       0     0    18
            3e6f9c7e-bf9a-41d1-b37c-a1deb4b9e776  ONLINE       0     0    18
            295cd467-cce3-4a81-9b0a-0db1f992bf37  ONLINE       0     0    18
            984d0225-0f8e-4286-ab07-f8f108a6a0ce  ONLINE       0     0    18
            f70d7e08-8810-4428-a96c-feb26b3d5e96  ONLINE       0     0    18
        cache
          748a0c72-51ea-473b-b719-f937895370f4    ONLINE       0     0     0

errors: No known data errors

And when I run zpool events -v, I can find a number of “ereport.fs.zfs.checksum” reports:

Mar 11 2025 16:32:28.610303588 ereport.fs.zfs.checksum
        class = "ereport.fs.zfs.checksum"
        ena = 0x8bc9037aabb07001
        detector = (embedded nvlist)
                version = 0x0
                scheme = "zfs"
                pool = 0xb85e01d1d3ace3bb
                vdev = 0x6d1d5a4549645764
        (end detector)
        pool = "home-pool"
        pool_guid = 0xb85e01d1d3ace3bb
        pool_state = 0x0
        pool_context = 0x0
        pool_failmode = "continue"
        vdev_guid = 0x6d1d5a4549645764
        vdev_type = "disk"
        vdev_path = "/dev/disk/by-partuuid/295cd467-cce3-4a81-9b0a-0db1f992bf37"
        vdev_ashift = 0x9
        vdev_complete_ts = 0x348bc903872f2
        vdev_delta_ts = 0x1a38cd4
        vdev_read_errors = 0x0
        vdev_write_errors = 0x0
        vdev_cksum_errors = 0x4
        vdev_delays = 0x0
        dio_verify_errors = 0x0
        parent_guid = 0xbe381bdf1550a88
        parent_type = "raidz"
        vdev_spare_paths = 
        vdev_spare_guids = 
        zio_err = 0x34
        zio_flags = 0x2000b0 [SCRUB SCAN_THREAD CANFAIL DONT_PROPAGATE]
        zio_stage = 0x400000 [VDEV_IO_DONE]
        zio_pipeline = 0x5e00000 [VDEV_IO_START VDEV_IO_DONE VDEV_IO_ASSESS CHECKSUM_VERIFY DONE]
        zio_delay = 0x0
        zio_timestamp = 0x0
        zio_delta = 0x0
        zio_priority = 0x4 [SCRUB]
        zio_offset = 0xc2727307000
        zio_size = 0x8000
        zio_objset = 0xc30
        zio_object = 0x6
        zio_level = 0x0
        zio_blkid = 0x1f2526
        time = 0x67cff51c 0x24607e64 
        eid = 0x9c68

Mar 11 2025 16:32:28.610303588 ereport.fs.zfs.checksum
        class = "ereport.fs.zfs.checksum"
        ena = 0x8bc9037aabb07001
        detector = (embedded nvlist)
                version = 0x0
                scheme = "zfs"
                pool = 0xb85e01d1d3ace3bb
                vdev = 0x661aa750e3992e00
        (end detector)
        pool = "home-pool"
        pool_guid = 0xb85e01d1d3ace3bb
        pool_state = 0x0
        pool_context = 0x0
        pool_failmode = "continue"
        vdev_guid = 0x661aa750e3992e00
        vdev_type = "disk"
        vdev_path = "/dev/disk/by-partuuid/3e6f9c7e-bf9a-41d1-b37c-a1deb4b9e776"
        vdev_ashift = 0x9
        vdev_complete_ts = 0x348bc90106906
        vdev_delta_ts = 0x5aef730
        vdev_read_errors = 0x0
        vdev_write_errors = 0x0
        vdev_cksum_errors = 0x4
        vdev_delays = 0x0
        dio_verify_errors = 0x0
        parent_guid = 0xbe381bdf1550a88
        parent_type = "raidz"
        vdev_spare_paths = 
        vdev_spare_guids = 
        zio_err = 0x34
        zio_flags = 0x2000b0 [SCRUB SCAN_THREAD CANFAIL DONT_PROPAGATE]
        zio_stage = 0x400000 [VDEV_IO_DONE]
        zio_pipeline = 0x5e00000 [VDEV_IO_START VDEV_IO_DONE VDEV_IO_ASSESS CHECKSUM_VERIFY DONE]
        zio_delay = 0x0
        zio_timestamp = 0x0
        zio_delta = 0x0
        zio_priority = 0x4 [SCRUB]
        zio_offset = 0xc2727307000
        zio_size = 0x8000
        zio_objset = 0xc30
        zio_object = 0x6
        zio_level = 0x0
        zio_blkid = 0x1f2526
        time = 0x67cff51c 0x24607e64 
        eid = 0x9c69

Mar 11 2025 16:32:28.610303588 ereport.fs.zfs.checksum
        class = "ereport.fs.zfs.checksum"
        ena = 0x8bc9037aabb07001
        detector = (embedded nvlist)
                version = 0x0
                scheme = "zfs"
                pool = 0xb85e01d1d3ace3bb
                vdev = 0x27addaa7620a5f3e
        (end detector)
        pool = "home-pool"
        pool_guid = 0xb85e01d1d3ace3bb
        pool_state = 0x0
        pool_context = 0x0
        pool_failmode = "continue"
        vdev_guid = 0x27addaa7620a5f3e
        vdev_type = "disk"
        vdev_path = "/dev/disk/by-partuuid/b3902de3-6f48-4214-be96-736b4b498b61"
        vdev_ashift = 0x9
        vdev_complete_ts = 0x348bc8f9f5e17
        vdev_delta_ts = 0x42d97
        vdev_read_errors = 0x0
        vdev_write_errors = 0x0
        vdev_cksum_errors = 0x4
        vdev_delays = 0x0
        dio_verify_errors = 0x0
        parent_guid = 0xbe381bdf1550a88
        parent_type = "raidz"
        vdev_spare_paths = 
        vdev_spare_guids = 
        zio_err = 0x34
        zio_flags = 0x2000b0 [SCRUB SCAN_THREAD CANFAIL DONT_PROPAGATE]
        zio_stage = 0x400000 [VDEV_IO_DONE]
        zio_pipeline = 0x5e00000 [VDEV_IO_START VDEV_IO_DONE VDEV_IO_ASSESS CHECKSUM_VERIFY DONE]
        zio_delay = 0x0
        zio_timestamp = 0x0
        zio_delta = 0x0
        zio_priority = 0x4 [SCRUB]
        zio_offset = 0xc2727307000
        zio_size = 0x8000
        zio_objset = 0xc30
        zio_object = 0x6
        zio_level = 0x0
        zio_blkid = 0x1f2526
        time = 0x67cff51c 0x24607e64 
        eid = 0x9c6a

Mar 11 2025 16:32:28.610303588 ereport.fs.zfs.checksum
        class = "ereport.fs.zfs.checksum"
        ena = 0x8bc9037aabb07001
        detector = (embedded nvlist)
                version = 0x0
                scheme = "zfs"
                pool = 0xb85e01d1d3ace3bb
                vdev = 0x32f2d10d0eb7e000
        (end detector)
        pool = "home-pool"
        pool_guid = 0xb85e01d1d3ace3bb
        pool_state = 0x0
        pool_context = 0x0
        pool_failmode = "continue"
        vdev_guid = 0x32f2d10d0eb7e000
        vdev_type = "disk"
        vdev_path = "/dev/disk/by-partuuid/db91f778-e537-46dc-95be-bb0c1d327831"
        vdev_ashift = 0x9
        vdev_complete_ts = 0x348bc8f9f763b
        vdev_delta_ts = 0x343c3
        vdev_read_errors = 0x0
        vdev_write_errors = 0x0
        vdev_cksum_errors = 0x4
        vdev_delays = 0x0
        dio_verify_errors = 0x0
        parent_guid = 0xbe381bdf1550a88
        parent_type = "raidz"
        vdev_spare_paths = 
        vdev_spare_guids = 
        zio_err = 0x34
        zio_flags = 0x2000b0 [SCRUB SCAN_THREAD CANFAIL DONT_PROPAGATE]
        zio_stage = 0x400000 [VDEV_IO_DONE]
        zio_pipeline = 0x5e00000 [VDEV_IO_START VDEV_IO_DONE VDEV_IO_ASSESS CHECKSUM_VERIFY DONE]
        zio_delay = 0x0
        zio_timestamp = 0x0
        zio_delta = 0x0
        zio_priority = 0x4 [SCRUB]
        zio_offset = 0xc2727307000
        zio_size = 0x8000
        zio_objset = 0xc30
        zio_object = 0x6
        zio_level = 0x0
        zio_blkid = 0x1f2526
        time = 0x67cff51c 0x24607e64 
        eid = 0x9c6b

Mar 11 2025 16:32:28.610303588 ereport.fs.zfs.checksum
        class = "ereport.fs.zfs.checksum"
        ena = 0x8bc9037aabb07001
        detector = (embedded nvlist)
                version = 0x0
                scheme = "zfs"
                pool = 0xb85e01d1d3ace3bb
                vdev = 0x4e86f9eec21f5e19
        (end detector)
        pool = "home-pool"
        pool_guid = 0xb85e01d1d3ace3bb
        pool_state = 0x0
        pool_context = 0x0
        pool_failmode = "continue"
        vdev_guid = 0x4e86f9eec21f5e19
        vdev_type = "disk"
        vdev_path = "/dev/disk/by-partuuid/f70d7e08-8810-4428-a96c-feb26b3d5e96"
        vdev_ashift = 0x9
        vdev_complete_ts = 0x348bc902e5afa
        vdev_delta_ts = 0x7523e
        vdev_read_errors = 0x0
        vdev_write_errors = 0x0
        vdev_cksum_errors = 0x4
        vdev_delays = 0x0
        dio_verify_errors = 0x0
        parent_guid = 0xbe381bdf1550a88
        parent_type = "raidz"
        vdev_spare_paths = 
        vdev_spare_guids = 
        zio_err = 0x34
        zio_flags = 0x2000b0 [SCRUB SCAN_THREAD CANFAIL DONT_PROPAGATE]
        zio_stage = 0x400000 [VDEV_IO_DONE]
        zio_pipeline = 0x5e00000 [VDEV_IO_START VDEV_IO_DONE VDEV_IO_ASSESS CHECKSUM_VERIFY DONE]
        zio_delay = 0x0
        zio_timestamp = 0x0
        zio_delta = 0x0
        zio_priority = 0x4 [SCRUB]
        zio_offset = 0xc2727306000
        zio_size = 0x8000
        zio_objset = 0xc30
        zio_object = 0x6
        zio_level = 0x0
        zio_blkid = 0x1f2526
        time = 0x67cff51c 0x24607e64 
        eid = 0x9c6c

Mar 11 2025 16:32:28.610303588 ereport.fs.zfs.checksum
        class = "ereport.fs.zfs.checksum"
        ena = 0x8bc9037aabb07001
        detector = (embedded nvlist)
                version = 0x0
                scheme = "zfs"
                pool = 0xb85e01d1d3ace3bb
                vdev = 0x164dd4545a3f6709
        (end detector)
        pool = "home-pool"
        pool_guid = 0xb85e01d1d3ace3bb
        pool_state = 0x0
        pool_context = 0x0
        pool_failmode = "continue"
        vdev_guid = 0x164dd4545a3f6709
        vdev_type = "disk"
        vdev_path = "/dev/disk/by-partuuid/984d0225-0f8e-4286-ab07-f8f108a6a0ce"
        vdev_ashift = 0x9
        vdev_complete_ts = 0x348bc8faabb1e
        vdev_delta_ts = 0x1ae37
        vdev_read_errors = 0x0
        vdev_write_errors = 0x0
        vdev_cksum_errors = 0x4
        vdev_delays = 0x0
        dio_verify_errors = 0x0
        parent_guid = 0xbe381bdf1550a88
        parent_type = "raidz"
        vdev_spare_paths = 
        vdev_spare_guids = 
        zio_err = 0x34
        zio_flags = 0x2000b0 [SCRUB SCAN_THREAD CANFAIL DONT_PROPAGATE]
        zio_stage = 0x400000 [VDEV_IO_DONE]
        zio_pipeline = 0x5e00000 [VDEV_IO_START VDEV_IO_DONE VDEV_IO_ASSESS CHECKSUM_VERIFY DONE]
        zio_delay = 0x0
        zio_timestamp = 0x0
        zio_delta = 0x0
        zio_priority = 0x4 [SCRUB]
        zio_offset = 0xc2727306000
        zio_size = 0x8000
        zio_objset = 0xc30
        zio_object = 0x6
        zio_level = 0x0
        zio_blkid = 0x1f2526
        time = 0x67cff51c 0x24607e64 
        eid = 0x9c6d

How can I determine which file is affected, and how can I fix these errors? Or should I just leave the 18 errors as they are?
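For reference, the relevant fields in those events are hex-encoded. A minimal sketch of what they decode to (Python used only for the arithmetic; the field names and values are copied from the events above):

```python
# Decode the hex fields from the "ereport.fs.zfs.checksum" events above into
# the decimal values that tools like zdb expect on the command line.
event = {
    "zio_objset": 0xC30,     # objset (dataset) ID the bad block belongs to
    "zio_object": 0x6,       # object number within that dataset
    "zio_blkid":  0x1F2526,  # block ID within the object
    "zio_size":   0x8000,    # I/O size in bytes
}

for key, value in event.items():
    # Prints e.g. "zio_objset = 3120"
    print(f"{key} = {value}")
```

If I understand correctly, the decimal objset ID should identify a dataset (zdb -d home-pool lists datasets with their objset IDs), and zdb -dddd on that dataset with the object number should then dump the object, including its path if it is a plain file — though I have not yet confirmed that this maps cleanly in my case.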

What are your hardware specs, please?

I have six ST16000NM000J-2TW103 16TB disks in the raidz2 and one ZHITAI Ti600 as the cache device.

Connected directly to the motherboard via SATA cables?

Yes, and I don’t think it’s a connection issue: I have run the scrub multiple times and it reported exactly the same 18 errors each time. The pool was also serving other reads and writes during the scrubs, and no other read or write errors were reported.