Hit an error while expanding a raidz1 pool, but my disk is OK

Hello.
I ran into this error:

One or more devices could not be used because the label is missing or
        invalid.  Sufficient replicas exist for the pool to continue
        functioning in a degraded state.

https://zfsonlinux.org/msg/ZFS-8000-4J/

I was trying to attach a new disk to my raidz1 pool using TrueNAS SCALE’s extend button, and for the first 7 days it worked fine (it said it would take about 17 days to expand).
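
For reference, as far as I understand the SCALE extend button ends up calling OpenZFS raidz expansion, which on the command line would look roughly like this (just my guess from the OpenZFS docs, not something I ran myself; the PARTUUID below is the new WD disk’s partition):

sudo zpool attach RAID5 raidz1-0 /dev/disk/by-partuuid/0da0b4a6-16fd-4a86-9141-821f846a3134
sudo zpool status RAID5   # the "expand:" line shows the reshape progress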
Then a scrub task was triggered and my pool entered a degraded state:

  pool: RAID5
 state: DEGRADED
status: One or more devices could not be used because the label is missing or
        invalid.  Sufficient replicas exist for the pool to continue
        functioning in a degraded state.
action: Replace the device using 'zpool replace'.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-4J
  scan: scrub in progress since Sun Dec  1 00:42:17 2024
        19.5T / 25.0T scanned at 263M/s, 18.5T / 25.0T issued at 249M/s
        0B repaired, 73.81% done, 07:39:25 to go
expand: expansion of raidz1-0 in progress since Sat Nov 23 12:59:50 2024
        10.4T / 25.0T copied at 15.1M/s, 41.56% done, paused for resilver or clear
config:

        NAME                                      STATE     READ WRITE CKSUM
        RAID5                                     DEGRADED     0     0     0
          raidz1-0                                DEGRADED     0     0     0
            be64f67f-81d8-459c-a563-daffe3078fed  ONLINE       0     0     0
            259a0b75-6c49-4561-8319-c920dbd33bbc  ONLINE       0     0     0
            59048391-9c54-4d6b-8c7d-54f4fe8b5bdf  ONLINE       0     0     0
            0da0b4a6-16fd-4a86-9141-821f846a3134  UNAVAIL      0     0     0

I expected that after the scrub task finished, the expansion would continue.
But it didn’t.

  pool: RAID5
 state: DEGRADED
status: One or more devices could not be used because the label is missing or
        invalid.  Sufficient replicas exist for the pool to continue
        functioning in a degraded state.
action: Replace the device using 'zpool replace'.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-4J
  scan: scrub repaired 0B in 1 days 02:15:56 with 0 errors on Mon Dec  2 02:58:13 2024
expand: expansion of raidz1-0 in progress since Sat Nov 23 12:59:50 2024
        10.4T / 25.0T copied at 14.2M/s, 41.56% done, paused for resilver or clear
config:

        NAME                                      STATE     READ WRITE CKSUM
        RAID5                                     DEGRADED     0     0     0
          raidz1-0                                DEGRADED     0     0     0
            be64f67f-81d8-459c-a563-daffe3078fed  ONLINE       0     0     0
            259a0b75-6c49-4561-8319-c920dbd33bbc  ONLINE       0     0     0
            59048391-9c54-4d6b-8c7d-54f4fe8b5bdf  ONLINE       0     0     0
            0da0b4a6-16fd-4a86-9141-821f846a3134  UNAVAIL      0     0     0

I checked the device (the UNAVAIL one) in several ways:

Since it is located at /dev/sdb1 and /dev/disk/by-id/ata-WDC_WD140EDGZ-11B2DA2_3WJGG38K, I ran smartctl -x /dev/disk/by-id/ata-WDC_WD140EDGZ-11B2DA2_3WJGG38K and it returned no errors.
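
Concretely, I checked it roughly like this (the grep is just my way of pulling out the overall health line and the usual failure counters):

sudo smartctl -x /dev/disk/by-id/ata-WDC_WD140EDGZ-11B2DA2_3WJGG38K | grep -iE 'overall-health|reallocated|pending|uncorrect'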
Still, zpool online says:

$ zpool online RAID5 0da0b4a6-16fd-4a86-9141-821f846a3134
warning: device '0da0b4a6-16fd-4a86-9141-821f846a3134' onlined, but remains in faulted state
use 'zpool replace' to replace devices that are no longer present

The manual (ZFS Message ID: ZFS-8000-4J) only covers the zpool import and zpool export cases. Since I started the expansion task from the GUI (dashboard), I don’t know whether this counts as a zpool import.

Since I don’t want to lose my new disk, I want to try zpool clear RAID5 0da0b4a6-16fd-4a86-9141-821f846a3134.
Will it help? Are there other alternatives I could try without resorting to zpool replace (buying more disks for the NAS is too hard right now)? Or is this just a bug?
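
In case it matters, this is roughly what I plan to run (just a sketch, I have not executed it yet):

sudo zpool clear RAID5 0da0b4a6-16fd-4a86-9141-821f846a3134   # clear the error state on that member
sudo zpool status -v RAID5                                    # see whether it comes back ONLINE and the expansion resumes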

Thank you.


Intel(R) N100 CPU
16 GiB RAM (Samsung, DDR5)
SK Hynix P31 gold 500G - boot
SK Hynix P31 gold 500G - ix-applications pool
Seagate Ironwolf NAS 12TB x 2, Toshiba N300 7200/256M (HDWG21C, 12TB) - RAIDZ1

Can you post the output of the following commands:

  • sudo lsblk -o NAME,SERIAL,MODEL,TYPE,UUID,PARTUUID,LABEL
  • sudo zdb -l /dev/disk/by-id/ata-WDC_WD140EDGZ-11B2DA2_3WJGG38K

Here are the results:

sudo lsblk ...

NAME        SERIAL            MODEL                 TYPE UUID                                 PARTUUID                             LABEL
sda         21R0A012FP8G      TOSHIBA HDWG21C       disk
└─sda1                                              part 7774998829693339681                  be64f67f-81d8-459c-a563-daffe3078fed RAID5
sdb         3WJGG38K          WDC WD140EDGZ-11B2DA2 disk
└─sdb1                                              part
sdc         ZTN09NQ5          ST12000VN0008-2PH103  disk
└─sdc1                                              part 7774998829693339681                  259a0b75-6c49-4561-8319-c920dbd33bbc RAID5
sdd         ZL2G3DZW          ST12000VN0008-2PH103  disk
└─sdd1                                              part 7774998829693339681                  59048391-9c54-4d6b-8c7d-54f4fe8b5bdf RAID5
nvme1n1     FJD4N526311905D0K SHGP31-500GM          disk
└─nvme1n1p1                                         part 12671844538716595072                 160079d5-e1f6-419f-bf09-a638629bf052 k8s engine
nvme0n1     FJD4N526312905D3O SHGP31-500GM          disk
├─nvme0n1p1                                         part                                      f0c5003c-678f-4c25-8c38-5dc6bef11ec8
├─nvme0n1p2                                         part 8242-AA78                            a72994ef-da12-44a0-beac-0ee566c111d9 EFI
├─nvme0n1p3                                         part 15921543888243381634                 e58f1410-1215-407a-988e-d050f8e2a963 boot-pool
└─nvme0n1p4                                         part                                      072560bc-e4c4-4c2d-892d-7a92bdbe4e78

sudo zdb ...

failed to unpack label 0
failed to unpack label 1
failed to unpack label 2
failed to unpack label 3

The second output is due to my mistake; I should have asked for the partition instead of the whole disk.
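
In other words, something along these lines (assuming, as your lsblk output suggests, that sdb1 is the ZFS member partition):

sudo zdb -l /dev/sdb    # whole disk: the ZFS labels are not here, so unpacking fails
sudo zdb -l /dev/sdb1   # partition: this is where the four ZFS labels should be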

From your lsblk output it looks like you might be a victim of TrueNAS - Issues - iXsystems TrueNAS Jira.

Please post the output of the following commands (note that wipefs without options only lists detected signatures, it does not erase anything):
sudo blkid --probe /dev/sdb1
sudo wipefs /dev/sdb1

Oh… so there is an open Jira dashboard that shows the current issues. Thank you. If I can find enough time to dig into those issues (not only my specific case), I will try.

Anyway, here are the results.

sudo blkid ...

sudo blkid --probe /dev/sdb1
/dev/sdb1: VERSION="5000" LABEL="RAID5" UUID="7774998829693339681" UUID_SUB="7857615370036839650" BLOCK_SIZE="4096" TYPE="zfs_member" USAGE="filesystem" PART_ENTRY_SCHEME="gpt" PART_ENTRY_NAME="data" PART_ENTRY_UUID="0da0b4a6-16fd-4a86-9141-821f846a3134" PART_ENTRY_TYPE="6a898cc3-1dd2-11b2-99a6-080020736631" PART_ENTRY_NUMBER="1" PART_ENTRY_OFFSET="2048" PART_ENTRY_SIZE="27344760832" PART_ENTRY_DISK="8:16"

sudo wipefs /dev/sdb1

DEVICE OFFSET        TYPE       UUID                LABEL
sdb1   0x3f000       zfs_member 7774998829693339681 RAID5
sdb1   0x3e000       zfs_member 7774998829693339681 RAID5
sdb1   0x3d000       zfs_member 7774998829693339681 RAID5
sdb1   0x3c000       zfs_member 7774998829693339681 RAID5
sdb1   0x3b000       zfs_member 7774998829693339681 RAID5
sdb1   0x3a000       zfs_member 7774998829693339681 RAID5
sdb1   0x39000       zfs_member 7774998829693339681 RAID5
sdb1   0x38000       zfs_member 7774998829693339681 RAID5
sdb1   0x37000       zfs_member 7774998829693339681 RAID5
sdb1   0x36000       zfs_member 7774998829693339681 RAID5
sdb1   0x35000       zfs_member 7774998829693339681 RAID5
sdb1   0x34000       zfs_member 7774998829693339681 RAID5
sdb1   0x33000       zfs_member 7774998829693339681 RAID5
sdb1   0x32000       zfs_member 7774998829693339681 RAID5
sdb1   0x31000       zfs_member 7774998829693339681 RAID5
sdb1   0x30000       zfs_member 7774998829693339681 RAID5
sdb1   0x2f000       zfs_member 7774998829693339681 RAID5
sdb1   0x2e000       zfs_member 7774998829693339681 RAID5
sdb1   0x2d000       zfs_member 7774998829693339681 RAID5
sdb1   0x2c000       zfs_member 7774998829693339681 RAID5
sdb1   0x2b000       zfs_member 7774998829693339681 RAID5
sdb1   0x2a000       zfs_member 7774998829693339681 RAID5
sdb1   0x29000       zfs_member 7774998829693339681 RAID5
sdb1   0x28000       zfs_member 7774998829693339681 RAID5
sdb1   0x27000       zfs_member 7774998829693339681 RAID5
sdb1   0x26000       zfs_member 7774998829693339681 RAID5
sdb1   0x25000       zfs_member 7774998829693339681 RAID5
sdb1   0x24000       zfs_member 7774998829693339681 RAID5
sdb1   0x23000       zfs_member 7774998829693339681 RAID5
sdb1   0x7f000       zfs_member 7774998829693339681 RAID5
sdb1   0x7e000       zfs_member 7774998829693339681 RAID5
sdb1   0x7d000       zfs_member 7774998829693339681 RAID5
sdb1   0x7c000       zfs_member 7774998829693339681 RAID5
sdb1   0x7b000       zfs_member 7774998829693339681 RAID5
sdb1   0x7a000       zfs_member 7774998829693339681 RAID5
sdb1   0x79000       zfs_member 7774998829693339681 RAID5
sdb1   0x78000       zfs_member 7774998829693339681 RAID5
sdb1   0x77000       zfs_member 7774998829693339681 RAID5
sdb1   0x76000       zfs_member 7774998829693339681 RAID5
sdb1   0x75000       zfs_member 7774998829693339681 RAID5
sdb1   0x74000       zfs_member 7774998829693339681 RAID5
sdb1   0x73000       zfs_member 7774998829693339681 RAID5
sdb1   0x72000       zfs_member 7774998829693339681 RAID5
sdb1   0x71000       zfs_member 7774998829693339681 RAID5
sdb1   0x70000       zfs_member 7774998829693339681 RAID5
sdb1   0x6f000       zfs_member 7774998829693339681 RAID5
sdb1   0x6e000       zfs_member 7774998829693339681 RAID5
sdb1   0x6d000       zfs_member 7774998829693339681 RAID5
sdb1   0x6c000       zfs_member 7774998829693339681 RAID5
sdb1   0x6b000       zfs_member 7774998829693339681 RAID5
sdb1   0x6a000       zfs_member 7774998829693339681 RAID5
sdb1   0x69000       zfs_member 7774998829693339681 RAID5
sdb1   0x68000       zfs_member 7774998829693339681 RAID5
sdb1   0x67000       zfs_member 7774998829693339681 RAID5
sdb1   0x66000       zfs_member 7774998829693339681 RAID5
sdb1   0x65000       zfs_member 7774998829693339681 RAID5
sdb1   0x64000       zfs_member 7774998829693339681 RAID5
sdb1   0x63000       zfs_member 7774998829693339681 RAID5
sdb1   0x62000       zfs_member 7774998829693339681 RAID5
sdb1   0x61000       zfs_member 7774998829693339681 RAID5
sdb1   0x60000       zfs_member 7774998829693339681 RAID5
sdb1   0xcbbbfdbf000 zfs_member 7774998829693339681 RAID5
sdb1   0xcbbbfdbe000 zfs_member 7774998829693339681 RAID5
sdb1   0xcbbbfdbd000 zfs_member 7774998829693339681 RAID5
sdb1   0xcbbbfdbc000 zfs_member 7774998829693339681 RAID5
sdb1   0xcbbbfdbb000 zfs_member 7774998829693339681 RAID5
sdb1   0xcbbbfdba000 zfs_member 7774998829693339681 RAID5
sdb1   0xcbbbfdb9000 zfs_member 7774998829693339681 RAID5
sdb1   0xcbbbfdb8000 zfs_member 7774998829693339681 RAID5
sdb1   0xcbbbfdb7000 zfs_member 7774998829693339681 RAID5
sdb1   0xcbbbfdb6000 zfs_member 7774998829693339681 RAID5
sdb1   0xcbbbfdb5000 zfs_member 7774998829693339681 RAID5
sdb1   0xcbbbfdb4000 zfs_member 7774998829693339681 RAID5
sdb1   0xcbbbfdb3000 zfs_member 7774998829693339681 RAID5
sdb1   0xcbbbfdb2000 zfs_member 7774998829693339681 RAID5
sdb1   0xcbbbfdb1000 zfs_member 7774998829693339681 RAID5
sdb1   0xcbbbfdb0000 zfs_member 7774998829693339681 RAID5
sdb1   0xcbbbfdaf000 zfs_member 7774998829693339681 RAID5
sdb1   0xcbbbfdae000 zfs_member 7774998829693339681 RAID5
sdb1   0xcbbbfdad000 zfs_member 7774998829693339681 RAID5
sdb1   0xcbbbfdac000 zfs_member 7774998829693339681 RAID5
sdb1   0xcbbbfdab000 zfs_member 7774998829693339681 RAID5
sdb1   0xcbbbfdaa000 zfs_member 7774998829693339681 RAID5
sdb1   0xcbbbfda9000 zfs_member 7774998829693339681 RAID5
sdb1   0xcbbbfda8000 zfs_member 7774998829693339681 RAID5
sdb1   0xcbbbfda7000 zfs_member 7774998829693339681 RAID5
sdb1   0xcbbbfda6000 zfs_member 7774998829693339681 RAID5
sdb1   0xcbbbfda5000 zfs_member 7774998829693339681 RAID5
sdb1   0xcbbbfda4000 zfs_member 7774998829693339681 RAID5
sdb1   0xcbbbfda3000 zfs_member 7774998829693339681 RAID5
sdb1   0xcbbbfda2000 zfs_member 7774998829693339681 RAID5
sdb1   0xcbbbfda1000 zfs_member 7774998829693339681 RAID5
sdb1   0xcbbbfda0000 zfs_member 7774998829693339681 RAID5
sdb1   0xcbbbfdff000 zfs_member 7774998829693339681 RAID5
sdb1   0xcbbbfdfe000 zfs_member 7774998829693339681 RAID5
sdb1   0xcbbbfdfd000 zfs_member 7774998829693339681 RAID5
sdb1   0xcbbbfdfc000 zfs_member 7774998829693339681 RAID5
sdb1   0xcbbbfdfb000 zfs_member 7774998829693339681 RAID5
sdb1   0xcbbbfdfa000 zfs_member 7774998829693339681 RAID5
sdb1   0xcbbbfdf9000 zfs_member 7774998829693339681 RAID5
sdb1   0xcbbbfdf8000 zfs_member 7774998829693339681 RAID5
sdb1   0xcbbbfdf7000 zfs_member 7774998829693339681 RAID5
sdb1   0xcbbbfdf6000 zfs_member 7774998829693339681 RAID5
sdb1   0xcbbbfdf5000 zfs_member 7774998829693339681 RAID5
sdb1   0xcbbbfdf4000 zfs_member 7774998829693339681 RAID5
sdb1   0xcbbbfdf3000 zfs_member 7774998829693339681 RAID5
sdb1   0xcbbbfdf2000 zfs_member 7774998829693339681 RAID5
sdb1   0xcbbbfdf1000 zfs_member 7774998829693339681 RAID5
sdb1   0xcbbbfdf0000 zfs_member 7774998829693339681 RAID5
sdb1   0xcbbbfdef000 zfs_member 7774998829693339681 RAID5
sdb1   0xcbbbfdee000 zfs_member 7774998829693339681 RAID5
sdb1   0xcbbbfded000 zfs_member 7774998829693339681 RAID5
sdb1   0xcbbbfdec000 zfs_member 7774998829693339681 RAID5
sdb1   0xcbbbfdeb000 zfs_member 7774998829693339681 RAID5
sdb1   0xcbbbfdea000 zfs_member 7774998829693339681 RAID5
sdb1   0xcbbbfde9000 zfs_member 7774998829693339681 RAID5
sdb1   0xcbbbfde8000 zfs_member 7774998829693339681 RAID5
sdb1   0xcbbbfde7000 zfs_member 7774998829693339681 RAID5
sdb1   0xcbbbfde6000 zfs_member 7774998829693339681 RAID5
sdb1   0xcbbbfde5000 zfs_member 7774998829693339681 RAID5
sdb1   0xcbbbfde4000 zfs_member 7774998829693339681 RAID5
sdb1   0xcbbbfde3000 zfs_member 7774998829693339681 RAID5
sdb1   0xcbbbfde2000 zfs_member 7774998829693339681 RAID5
sdb1   0xcbbbfde1000 zfs_member 7774998829693339681 RAID5
sdb1   0xcbbbfde0000 zfs_member 7774998829693339681 RAID5

Sorry, you can ignore that issue. It is strange that the lsblk output shows no ZFS metadata for sdb1.

Just to check, can you post the following:

  • sudo blkid
  • sudo blkid -c /dev/null

Also now the correct zdb command:

  • sudo zdb -l /dev/sdb1

When was the last reboot of the device? Especially, was there a reboot during the expansion/resilver?
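
If you are not sure, something like this should show it (standard tools, nothing ZFS-specific):

uptime -s            # timestamp the system has been up since
last reboot | head   # recent reboot history from wtmp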

Thanks. Regardless of its relevance, that issue dashboard is new to me and somewhere I can gather more information.

sudo blkid: it shows sdb1 with the label.

/dev/nvme0n1p3: LABEL="boot-pool" UUID="15921543888243381634" UUID_SUB="2192066460917774291" BLOCK_SIZE="4096" TYPE="zfs_member" PARTUUID="e58f1410-1215-407a-988e-d050f8e2a963"
/dev/nvme0n1p2: LABEL_FATBOOT="EFI" LABEL="EFI" UUID="8242-AA78" BLOCK_SIZE="512" TYPE="vfat" PARTUUID="a72994ef-da12-44a0-beac-0ee566c111d9"
/dev/sdd1: LABEL="RAID5" UUID="7774998829693339681" UUID_SUB="3727794912456345434" BLOCK_SIZE="4096" TYPE="zfs_member" PARTUUID="59048391-9c54-4d6b-8c7d-54f4fe8b5bdf"
/dev/sdb1: LABEL="RAID5" UUID="7774998829693339681" UUID_SUB="7857615370036839650" BLOCK_SIZE="4096" TYPE="zfs_member" PARTLABEL="data" PARTUUID="0da0b4a6-16fd-4a86-9141-821f846a3134"
/dev/sdc1: LABEL="RAID5" UUID="7774998829693339681" UUID_SUB="10185011408681292326" BLOCK_SIZE="4096" TYPE="zfs_member" PARTUUID="259a0b75-6c49-4561-8319-c920dbd33bbc"
/dev/nvme1n1p1: LABEL="k8s engine" UUID="12671844538716595072" UUID_SUB="13595563673830217021" BLOCK_SIZE="4096" TYPE="zfs_member" PARTUUID="160079d5-e1f6-419f-bf09-a638629bf052"
/dev/sda1: LABEL="RAID5" UUID="7774998829693339681" UUID_SUB="383533775857449246" BLOCK_SIZE="4096" TYPE="zfs_member" PARTUUID="be64f67f-81d8-459c-a563-daffe3078fed"
/dev/nvme0n1p1: PARTUUID="f0c5003c-678f-4c25-8c38-5dc6bef11ec8"
/dev/nvme0n1p4: PARTUUID="072560bc-e4c4-4c2d-892d-7a92bdbe4e78"

sudo blkid -c /dev/null

/dev/nvme0n1p3: LABEL="boot-pool" UUID="15921543888243381634" UUID_SUB="2192066460917774291" BLOCK_SIZE="4096" TYPE="zfs_member" PARTUUID="e58f1410-1215-407a-988e-d050f8e2a963"
/dev/nvme0n1p1: PARTUUID="f0c5003c-678f-4c25-8c38-5dc6bef11ec8"
/dev/nvme0n1p4: PARTUUID="072560bc-e4c4-4c2d-892d-7a92bdbe4e78"
/dev/nvme0n1p2: LABEL_FATBOOT="EFI" LABEL="EFI" UUID="8242-AA78" BLOCK_SIZE="512" TYPE="vfat" PARTUUID="a72994ef-da12-44a0-beac-0ee566c111d9"
/dev/sdd1: LABEL="RAID5" UUID="7774998829693339681" UUID_SUB="3727794912456345434" BLOCK_SIZE="4096" TYPE="zfs_member" PARTUUID="59048391-9c54-4d6b-8c7d-54f4fe8b5bdf"
/dev/sdb1: LABEL="RAID5" UUID="7774998829693339681" UUID_SUB="7857615370036839650" BLOCK_SIZE="4096" TYPE="zfs_member" PARTLABEL="data" PARTUUID="0da0b4a6-16fd-4a86-9141-821f846a3134"
/dev/sdc1: LABEL="RAID5" UUID="7774998829693339681" UUID_SUB="10185011408681292326" BLOCK_SIZE="4096" TYPE="zfs_member" PARTUUID="259a0b75-6c49-4561-8319-c920dbd33bbc"
/dev/nvme1n1p1: LABEL="k8s engine" UUID="12671844538716595072" UUID_SUB="13595563673830217021" BLOCK_SIZE="4096" TYPE="zfs_member" PARTUUID="160079d5-e1f6-419f-bf09-a638629bf052"
/dev/sda1: LABEL="RAID5" UUID="7774998829693339681" UUID_SUB="383533775857449246" BLOCK_SIZE="4096" TYPE="zfs_member" PARTUUID="be64f67f-81d8-459c-a563-daffe3078fed"

sudo zdb -l /dev/sdb1

------------------------------------
LABEL 0
------------------------------------
    version: 5000
    name: 'RAID5'
    state: 0
    txg: 2451580
    pool_guid: 7774998829693339681
    errata: 0
    hostid: 313658624
    hostname: (REDACTED)
    top_guid: 11643318824247454892
    guid: 7857615370036839650
    vdev_children: 1
    vdev_tree:
        type: 'raidz'
        id: 0
        guid: 11643318824247454892
        nparity: 1
        raidz_expanding
        metaslab_array: 256
        metaslab_shift: 34
        ashift: 12
        asize: 36000392282112
        is_log: 0
        create_txg: 4
        children[0]:
            type: 'disk'
            id: 0
            guid: 383533775857449246
            path: '/dev/disk/by-partuuid/be64f67f-81d8-459c-a563-daffe3078fed'
            whole_disk: 0
            DTL: 393
            create_txg: 4
        children[1]:
            type: 'disk'
            id: 1
            guid: 10185011408681292326
            path: '/dev/disk/by-partuuid/259a0b75-6c49-4561-8319-c920dbd33bbc'
            whole_disk: 0
            DTL: 392
            create_txg: 4
        children[2]:
            type: 'disk'
            id: 2
            guid: 3727794912456345434
            path: '/dev/disk/by-partuuid/59048391-9c54-4d6b-8c7d-54f4fe8b5bdf'
            whole_disk: 0
            DTL: 391
            create_txg: 4
        children[3]:
            type: 'disk'
            id: 3
            guid: 7857615370036839650
            path: '/dev/disk/by-partuuid/0da0b4a6-16fd-4a86-9141-821f846a3134'
            whole_disk: 0
            DTL: 18
            create_txg: 4
    features_for_read:
        com.delphix:hole_birth
        com.delphix:embedded_data
        com.klarasystems:vdev_zaps_v2
        org.openzfs:raidz_expansion
    labels = 0 1 2 3

When was the last reboot? Was there a reboot during expansion/resilver?

The current uptime is 9 days 4 hours 15 minutes (as of 17:10).
Since I started the expansion 8 days ago (11/24), there has been no reboot from the start of the expansion until now.

Looks like the lsblk thing might just be a red herring.

If it had been a simple drive disconnect, your zpool online should have worked. But we can still check the log files and look for hardware-related errors (see the rough filter example after this list):

  • sudo dmesg (look for I/O related errors)
  • sudo zpool events -v RAID5 (look at the latest few entries)
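
For example, something along these lines (just a rough filter, adjust the patterns to your device names):

sudo dmesg -T | grep -iE 'ata[0-9]|sd[abcd]|i/o error|reset|offline'   # hardware/link errors with readable timestamps
sudo zpool events -v RAID5 | tail -n 300                               # the most recent ZFS events for the pool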

Okay.

sudo dmesg (before timestamp 127013, the entries look mostly identical)

[127013.588345] br-9683c841433a: port 1(veth85cddb3) entered blocking state
[127013.588350] br-9683c841433a: port 1(veth85cddb3) entered disabled state
[127013.588436] veth85cddb3: entered allmulticast mode
[127013.589323] veth85cddb3: entered promiscuous mode
[127013.590448] br-9683c841433a: port 1(veth85cddb3) entered blocking state
[127013.590454] br-9683c841433a: port 1(veth85cddb3) entered forwarding state
[127013.590504] br-9683c841433a: port 1(veth85cddb3) entered disabled state
[127013.923856] eth0: renamed from veth220061a
[127013.951416] br-9683c841433a: port 1(veth85cddb3) entered blocking state
[127013.951422] br-9683c841433a: port 1(veth85cddb3) entered forwarding state
[142588.676359] igc 0000:01:00.0 enp1s0: NIC Link is Down
[142591.660664] igc 0000:01:00.0 enp1s0: NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX/TX
[142592.692499] igc 0000:01:00.0 enp1s0: NIC Link is Down
[142595.804574] igc 0000:01:00.0 enp1s0: NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX/TX
[142600.196286] igc 0000:01:00.0 enp1s0: NIC Link is Down
[142602.428699] igc 0000:01:00.0 enp1s0: NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX/TX
[155765.017114] systemd-journald[564]: Data hash table of /var/log/journal/894c6df826e847e690de87951cc94728/system.journal has a fill level at 75.0 (8534 of 11377 items, 6553600 file size, 767 bytes per hash table item), suggesting rotation.
[155765.017125] systemd-journald[564]: /var/log/journal/894c6df826e847e690de87951cc94728/system.journal: Journal header limits reached or header out-of-date, rotating.
[186091.392476] systemd-journald[564]: Data hash table of /var/log/journal/894c6df826e847e690de87951cc94728/system.journal has a fill level at 75.0 (8534 of 11377 items, 6553600 file size, 767 bytes per hash table item), suggesting rotation.
[186091.392488] systemd-journald[564]: /var/log/journal/894c6df826e847e690de87951cc94728/system.journal: Journal header limits reached or header out-of-date, rotating.
[210930.462250] systemd-journald[564]: Data hash table of /var/log/journal/894c6df826e847e690de87951cc94728/system.journal has a fill level at 75.0 (8535 of 11377 items, 6553600 file size, 767 bytes per hash table item), suggesting rotation.
[210930.462258] systemd-journald[564]: /var/log/journal/894c6df826e847e690de87951cc94728/system.journal: Journal header limits reached or header out-of-date, rotating.
[228986.816041] igc 0000:01:00.0 enp1s0: NIC Link is Down
[228989.800514] igc 0000:01:00.0 enp1s0: NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX/TX
[228990.828201] igc 0000:01:00.0 enp1s0: NIC Link is Down
[228993.776269] igc 0000:01:00.0 enp1s0: NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX/TX
[228998.332094] igc 0000:01:00.0 enp1s0: NIC Link is Down
[229000.564405] igc 0000:01:00.0 enp1s0: NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX/TX
[239476.687109] systemd-journald[564]: Data hash table of /var/log/journal/894c6df826e847e690de87951cc94728/system.journal has a fill level at 75.0 (8535 of 11377 items, 6553600 file size, 767 bytes per hash table item), suggesting rotation.
[239476.687133] systemd-journald[564]: /var/log/journal/894c6df826e847e690de87951cc94728/system.journal: Journal header limits reached or header out-of-date, rotating.
[268297.521689] systemd-journald[564]: Data hash table of /var/log/journal/894c6df826e847e690de87951cc94728/system.journal has a fill level at 75.0 (8534 of 11377 items, 6553600 file size, 767 bytes per hash table item), suggesting rotation.
[268297.521710] systemd-journald[564]: /var/log/journal/894c6df826e847e690de87951cc94728/system.journal: Journal header limits reached or header out-of-date, rotating.
[293523.686298] systemd-journald[564]: Data hash table of /var/log/journal/894c6df826e847e690de87951cc94728/system.journal has a fill level at 75.0 (8534 of 11377 items, 6553600 file size, 767 bytes per hash table item), suggesting rotation.
[293523.686307] systemd-journald[564]: /var/log/journal/894c6df826e847e690de87951cc94728/system.journal: Journal header limits reached or header out-of-date, rotating.
[315386.761365] igc 0000:01:00.0 enp1s0: NIC Link is Down
[315389.221517] igc 0000:01:00.0 enp1s0: NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX/TX
[315390.781144] igc 0000:01:00.0 enp1s0: NIC Link is Down
[315393.725407] igc 0000:01:00.0 enp1s0: NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX/TX
[315398.289027] igc 0000:01:00.0 enp1s0: NIC Link is Down
[315400.569337] igc 0000:01:00.0 enp1s0: NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX/TX
[328375.119237] systemd-journald[564]: Data hash table of /var/log/journal/894c6df826e847e690de87951cc94728/system.journal has a fill level at 75.0 (8535 of 11377 items, 6553600 file size, 767 bytes per hash table item), suggesting rotation.
[328375.119260] systemd-journald[564]: /var/log/journal/894c6df826e847e690de87951cc94728/system.journal: Journal header limits reached or header out-of-date, rotating.
[367346.664083] systemd-journald[564]: Data hash table of /var/log/journal/894c6df826e847e690de87951cc94728/system.journal has a fill level at 75.0 (8534 of 11377 items, 6553600 file size, 767 bytes per hash table item), suggesting rotation.
[367346.664105] systemd-journald[564]: /var/log/journal/894c6df826e847e690de87951cc94728/system.journal: Journal header limits reached or header out-of-date, rotating.
[399792.105820] systemd-journald[564]: Data hash table of /var/log/journal/894c6df826e847e690de87951cc94728/system.journal has a fill level at 75.0 (8533 of 11377 items, 6553600 file size, 768 bytes per hash table item), suggesting rotation.
[399792.105835] systemd-journald[564]: /var/log/journal/894c6df826e847e690de87951cc94728/system.journal: Journal header limits reached or header out-of-date, rotating.
[401785.701125] igc 0000:01:00.0 enp1s0: NIC Link is Down
[401788.161397] igc 0000:01:00.0 enp1s0: NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX/TX
[401789.717274] igc 0000:01:00.0 enp1s0: NIC Link is Down
[401792.217383] igc 0000:01:00.0 enp1s0: NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX/TX
[401797.221194] igc 0000:01:00.0 enp1s0: NIC Link is Down
[401799.457483] igc 0000:01:00.0 enp1s0: NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX/TX
[434226.578378] systemd-journald[564]: Data hash table of /var/log/journal/894c6df826e847e690de87951cc94728/system.journal has a fill level at 75.0 (8533 of 11377 items, 6553600 file size, 768 bytes per hash table item), suggesting rotation.
[434226.578400] systemd-journald[564]: /var/log/journal/894c6df826e847e690de87951cc94728/system.journal: Journal header limits reached or header out-of-date, rotating.
[468293.376402] br-835017b8585c: port 1(veth8d19067) entered blocking state
[468293.376408] br-835017b8585c: port 1(veth8d19067) entered disabled state
[468293.376418] veth8d19067: entered allmulticast mode
[468293.376464] veth8d19067: entered promiscuous mode
[468293.558233] eth0: renamed from veth1ac2392
[468293.585330] br-835017b8585c: port 1(veth8d19067) entered blocking state
[468293.585347] br-835017b8585c: port 1(veth8d19067) entered forwarding state
[469803.390326] systemd-journald[564]: Data hash table of /var/log/journal/894c6df826e847e690de87951cc94728/system.journal has a fill level at 75.0 (8533 of 11377 items, 6553600 file size, 768 bytes per hash table item), suggesting rotation.
[469803.390350] systemd-journald[564]: /var/log/journal/894c6df826e847e690de87951cc94728/system.journal: Journal header limits reached or header out-of-date, rotating.
[488183.878830] igc 0000:01:00.0 enp1s0: NIC Link is Down
[488186.822876] igc 0000:01:00.0 enp1s0: NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX/TX
[488187.894593] igc 0000:01:00.0 enp1s0: NIC Link is Down
[488191.014856] igc 0000:01:00.0 enp1s0: NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX/TX
[488195.402500] igc 0000:01:00.0 enp1s0: NIC Link is Down
[488200.942928] igc 0000:01:00.0 enp1s0: NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX/TX
[495906.103641] systemd-journald[564]: Data hash table of /var/log/journal/894c6df826e847e690de87951cc94728/system.journal has a fill level at 75.0 (8535 of 11377 items, 6553600 file size, 767 bytes per hash table item), suggesting rotation.
[495906.103651] systemd-journald[564]: /var/log/journal/894c6df826e847e690de87951cc94728/system.journal: Journal header limits reached or header out-of-date, rotating.
[522946.982319] systemd-journald[564]: Data hash table of /var/log/journal/894c6df826e847e690de87951cc94728/system.journal has a fill level at 75.0 (8533 of 11377 items, 6553600 file size, 768 bytes per hash table item), suggesting rotation.
[522946.982335] systemd-journald[564]: /var/log/journal/894c6df826e847e690de87951cc94728/system.journal: Journal header limits reached or header out-of-date, rotating.
[549938.343797] systemd-journald[564]: Data hash table of /var/log/journal/894c6df826e847e690de87951cc94728/system.journal has a fill level at 75.0 (8534 of 11377 items, 6553600 file size, 767 bytes per hash table item), suggesting rotation.
[549938.343819] systemd-journald[564]: /var/log/journal/894c6df826e847e690de87951cc94728/system.journal: Journal header limits reached or header out-of-date, rotating.
[574584.072575] igc 0000:01:00.0 enp1s0: NIC Link is Down
[574586.529020] igc 0000:01:00.0 enp1s0: NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX/TX
[574588.088698] igc 0000:01:00.0 enp1s0: NIC Link is Down
[574591.032782] igc 0000:01:00.0 enp1s0: NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX/TX
[574595.588427] igc 0000:01:00.0 enp1s0: NIC Link is Down
[574597.876690] igc 0000:01:00.0 enp1s0: NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX/TX
[576485.564268] systemd-journald[564]: Data hash table of /var/log/journal/894c6df826e847e690de87951cc94728/system.journal has a fill level at 75.0 (8533 of 11377 items, 6553600 file size, 768 bytes per hash table item), suggesting rotation.
[576485.564290] systemd-journald[564]: /var/log/journal/894c6df826e847e690de87951cc94728/system.journal: Journal header limits reached or header out-of-date, rotating.
[604328.017568] systemd-journald[564]: Data hash table of /var/log/journal/894c6df826e847e690de87951cc94728/system.journal has a fill level at 75.0 (8533 of 11377 items, 6553600 file size, 768 bytes per hash table item), suggesting rotation.
[604328.017590] systemd-journald[564]: /var/log/journal/894c6df826e847e690de87951cc94728/system.journal: Journal header limits reached or header out-of-date, rotating.
[630030.616874] systemd-journald[564]: Data hash table of /var/log/journal/894c6df826e847e690de87951cc94728/system.journal has a fill level at 75.0 (8534 of 11377 items, 6553600 file size, 767 bytes per hash table item), suggesting rotation.
[630030.616898] systemd-journald[564]: /var/log/journal/894c6df826e847e690de87951cc94728/system.journal: Journal header limits reached or header out-of-date, rotating.
[658471.398578] systemd-journald[564]: Data hash table of /var/log/journal/894c6df826e847e690de87951cc94728/system.journal has a fill level at 75.0 (8534 of 11377 items, 6553600 file size, 767 bytes per hash table item), suggesting rotation.
[658471.398600] systemd-journald[564]: /var/log/journal/894c6df826e847e690de87951cc94728/system.journal: Journal header limits reached or header out-of-date, rotating.
[660984.646138] igc 0000:01:00.0 enp1s0: NIC Link is Down
[660987.126220] igc 0000:01:00.0 enp1s0: NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX/TX
[660988.661916] igc 0000:01:00.0 enp1s0: NIC Link is Down
[660991.650267] igc 0000:01:00.0 enp1s0: NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX/TX
[660996.165876] igc 0000:01:00.0 enp1s0: NIC Link is Down
[660998.398116] igc 0000:01:00.0 enp1s0: NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX/TX
[690790.292215] systemd-journald[564]: Data hash table of /var/log/journal/894c6df826e847e690de87951cc94728/system.journal has a fill level at 75.0 (8534 of 11377 items, 6553600 file size, 767 bytes per hash table item), suggesting rotation.
[690790.292236] systemd-journald[564]: /var/log/journal/894c6df826e847e690de87951cc94728/system.journal: Journal header limits reached or header out-of-date, rotating.
[720251.626490] systemd-journald[564]: Data hash table of /var/log/journal/894c6df826e847e690de87951cc94728/system.journal has a fill level at 75.0 (8536 of 11377 items, 6553600 file size, 767 bytes per hash table item), suggesting rotation.
[720251.626507] systemd-journald[564]: /var/log/journal/894c6df826e847e690de87951cc94728/system.journal: Journal header limits reached or header out-of-date, rotating.
[744229.443862] systemd-journald[564]: Data hash table of /var/log/journal/894c6df826e847e690de87951cc94728/system.journal has a fill level at 75.0 (8536 of 11377 items, 6553600 file size, 767 bytes per hash table item), suggesting rotation.
[744229.443883] systemd-journald[564]: /var/log/journal/894c6df826e847e690de87951cc94728/system.journal: Journal header limits reached or header out-of-date, rotating.
[747381.051316] igc 0000:01:00.0 enp1s0: NIC Link is Down
[747384.019596] igc 0000:01:00.0 enp1s0: NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX/TX
[747385.067258] igc 0000:01:00.0 enp1s0: NIC Link is Down
[747387.519550] igc 0000:01:00.0 enp1s0: NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX/TX
[747392.571196] igc 0000:01:00.0 enp1s0: NIC Link is Down
[747395.239457] igc 0000:01:00.0 enp1s0: NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX/TX
[767231.201539] systemd-journald[564]: Data hash table of /var/log/journal/894c6df826e847e690de87951cc94728/system.journal has a fill level at 75.0 (8534 of 11377 items, 6553600 file size, 767 bytes per hash table item), suggesting rotation.
[767231.201560] systemd-journald[564]: /var/log/journal/894c6df826e847e690de87951cc94728/system.journal: Journal header limits reached or header out-of-date, rotating.
[774144.398972] loop0: detected capacity change from 0 to 2349400
[791164.960150] systemd-journald[564]: Data hash table of /var/log/journal/894c6df826e847e690de87951cc94728/system.journal has a fill level at 75.0 (8536 of 11377 items, 6553600 file size, 767 bytes per hash table item), suggesting rotation.
[791164.960171] systemd-journald[564]: /var/log/journal/894c6df826e847e690de87951cc94728/system.journal: Journal header limits reached or header out-of-date, rotating.

sudo zpool events -v: since the scrub started at 00:00 KST yesterday and the read/write speed dropped to 0 around 00:30, here is the log starting from around 00:00 KST on Dec 1st.

Nov 30 2024 23:58:12.514916675 ereport.fs.zfs.deadman
        class = "ereport.fs.zfs.deadman"
        ena = 0xa41405dc1e900001
        detector = (embedded nvlist)
                version = 0x0
                scheme = "zfs"
                pool = 0x6be6583decf30821
                vdev = 0x55295efe558f91e
        (end detector)
        pool = "RAID5"
        pool_guid = 0x6be6583decf30821
        pool_state = 0x0
        pool_context = 0x0
        pool_failmode = "continue"
        vdev_guid = 0x55295efe558f91e
        vdev_type = "disk"
        vdev_path = "/dev/disk/by-partuuid/be64f67f-81d8-459c-a563-daffe3078fed"
        vdev_ashift = 0x9
        vdev_complete_ts = 0x24a413d206246
        vdev_delta_ts = 0x4202e5a656
        vdev_read_errors = 0x0
        vdev_write_errors = 0x0
        vdev_cksum_errors = 0x0
        vdev_delays = 0x0
        parent_guid = 0xa19560c81de36cac
        parent_type = "raidz"
        vdev_spare_paths =
        vdev_spare_guids =
        zio_err = 0x0
        zio_flags = 0x180080 [CANFAIL DONT_QUEUE DONT_PROPAGATE]
        zio_stage = 0x200000 [VDEV_IO_START]
        zio_pipeline = 0x2e00000 [VDEV_IO_START VDEV_IO_DONE VDEV_IO_ASSESS DONE]
        zio_delay = 0x0
        zio_timestamp = 0x249ffd1149b6a
        zio_delta = 0x0
        zio_priority = 0x5 [REMOVAL]
        zio_offset = 0x3603ec56000
        zio_size = 0x1000
        time = 0x674b2804 0x1eb10143
        eid = 0x13ea

Dec  1 2024 00:03:23.810271466 ereport.fs.zfs.deadman
        class = "ereport.fs.zfs.deadman"
        ena = 0xa89bada572b00c01
        detector = (embedded nvlist)
                version = 0x0
                scheme = "zfs"
                pool = 0x6be6583decf30821
                vdev = 0x55295efe558f91e
        (end detector)
        pool = "RAID5"
        pool_guid = 0x6be6583decf30821
        pool_state = 0x0
        pool_context = 0x0
        pool_failmode = "continue"
        vdev_guid = 0x55295efe558f91e
        vdev_type = "disk"
        vdev_path = "/dev/disk/by-partuuid/be64f67f-81d8-459c-a563-daffe3078fed"
        vdev_ashift = 0x9
        vdev_complete_ts = 0x24a89b8051e5b
        vdev_delta_ts = 0x3e5a6aec2f
        vdev_read_errors = 0x0
        vdev_write_errors = 0x0
        vdev_cksum_errors = 0x0
        vdev_delays = 0x0
        parent_guid = 0xa19560c81de36cac
        parent_type = "raidz"
        vdev_spare_paths =
        vdev_spare_guids =
        zio_err = 0x0
        zio_flags = 0x180080 [CANFAIL DONT_QUEUE DONT_PROPAGATE]
        zio_stage = 0x200000 [VDEV_IO_START]
        zio_pipeline = 0x2e00000 [VDEV_IO_START VDEV_IO_DONE VDEV_IO_ASSESS DONE]
        zio_delay = 0x0
        zio_timestamp = 0x24a81133a6c68
        zio_delta = 0x0
        zio_priority = 0x5 [REMOVAL]
        zio_offset = 0x360d2dba000
        zio_size = 0x1000
        time = 0x674b293b 0x304bc2ea
        eid = 0x13eb

Dec  1 2024 00:04:25.250933603 ereport.fs.zfs.deadman
        class = "ereport.fs.zfs.deadman"
        ena = 0xa9808ee630500c01
        detector = (embedded nvlist)
                version = 0x0
                scheme = "zfs"
                pool = 0x6be6583decf30821
                vdev = 0x55295efe558f91e
        (end detector)
        pool = "RAID5"
        pool_guid = 0x6be6583decf30821
        pool_state = 0x0
        pool_context = 0x0
        pool_failmode = "continue"
        vdev_guid = 0x55295efe558f91e
        vdev_type = "disk"
        vdev_path = "/dev/disk/by-partuuid/be64f67f-81d8-459c-a563-daffe3078fed"
        vdev_ashift = 0x9
        vdev_complete_ts = 0x24a9808ede3c3
        vdev_delta_ts = 0x3df72dd92f
        vdev_read_errors = 0x0
        vdev_write_errors = 0x0
        vdev_cksum_errors = 0x0
        vdev_delays = 0x0
        parent_guid = 0xa19560c81de36cac
        parent_type = "raidz"
        vdev_spare_paths =
        vdev_spare_guids =
        zio_err = 0x0
        zio_flags = 0x180080 [CANFAIL DONT_QUEUE DONT_PROPAGATE]
        zio_stage = 0x200000 [VDEV_IO_START]
        zio_pipeline = 0x2e00000 [VDEV_IO_START VDEV_IO_DONE VDEV_IO_ASSESS DONE]
        zio_delay = 0x0
        zio_timestamp = 0x24a81133a6c68
        zio_delta = 0x0
        zio_priority = 0x5 [REMOVAL]
        zio_offset = 0x360d2dba000
        zio_size = 0x1000
        time = 0x674b2979 0xef4f163
        eid = 0x13ec

Dec  1 2024 00:05:26.691595740 ereport.fs.zfs.deadman
        class = "ereport.fs.zfs.deadman"
        ena = 0xaa656fde6cd00c01
        detector = (embedded nvlist)
                version = 0x0
                scheme = "zfs"
                pool = 0x6be6583decf30821
                vdev = 0x55295efe558f91e
        (end detector)
        pool = "RAID5"
        pool_guid = 0x6be6583decf30821
        pool_state = 0x0
        pool_context = 0x0
        pool_failmode = "continue"
        vdev_guid = 0x55295efe558f91e
        vdev_type = "disk"
        vdev_path = "/dev/disk/by-partuuid/be64f67f-81d8-459c-a563-daffe3078fed"
        vdev_ashift = 0x9
        vdev_complete_ts = 0x24aa64cdb679f
        vdev_delta_ts = 0x3edd26dc7f
        vdev_read_errors = 0x0
        vdev_write_errors = 0x0
        vdev_cksum_errors = 0x0
        vdev_delays = 0x0
        parent_guid = 0xa19560c81de36cac
        parent_type = "raidz"
        vdev_spare_paths =
        vdev_spare_guids =
        zio_err = 0x0
        zio_flags = 0x180080 [CANFAIL DONT_QUEUE DONT_PROPAGATE]
        zio_stage = 0x200000 [VDEV_IO_START]
        zio_pipeline = 0x2e00000 [VDEV_IO_START VDEV_IO_DONE VDEV_IO_ASSESS DONE]
        zio_delay = 0x0
        zio_timestamp = 0x24a81133a6c68
        zio_delta = 0x0
        zio_priority = 0x5 [REMOVAL]
        zio_offset = 0x360d2dba000
        zio_size = 0x1000
        time = 0x674b29b6 0x2938e9dc
        eid = 0x13ed

Dec  1 2024 00:06:28.124257789 ereport.fs.zfs.deadman
        class = "ereport.fs.zfs.deadman"
        ena = 0xab4a4d2c6dd00c01
        detector = (embedded nvlist)
                version = 0x0
                scheme = "zfs"
                pool = 0x6be6583decf30821
                vdev = 0x55295efe558f91e
        (end detector)
        pool = "RAID5"
        pool_guid = 0x6be6583decf30821
        pool_state = 0x0
        pool_context = 0x0
        pool_failmode = "continue"
        vdev_guid = 0x55295efe558f91e
        vdev_type = "disk"
        vdev_path = "/dev/disk/by-partuuid/be64f67f-81d8-459c-a563-daffe3078fed"
        vdev_ashift = 0x9
        vdev_complete_ts = 0x24ab4a0d902fa
        vdev_delta_ts = 0x3f7dc17c4e
        vdev_read_errors = 0x0
        vdev_write_errors = 0x0
        vdev_cksum_errors = 0x0
        vdev_delays = 0x0
        parent_guid = 0xa19560c81de36cac
        parent_type = "raidz"
        vdev_spare_paths =
        vdev_spare_guids =
        zio_err = 0x0
        zio_flags = 0x180080 [CANFAIL DONT_QUEUE DONT_PROPAGATE]
        zio_stage = 0x200000 [VDEV_IO_START]
        zio_pipeline = 0x2e00000 [VDEV_IO_START VDEV_IO_DONE VDEV_IO_ASSESS DONE]
        zio_delay = 0x0
        zio_timestamp = 0x24a81133a6c68
        zio_delta = 0x0
        zio_priority = 0x5 [REMOVAL]
        zio_offset = 0x360d2dba000
        zio_size = 0x1000
        time = 0x674b29f4 0x76805fd
        eid = 0x13ee

Dec  1 2024 00:12:24.480061850 ereport.fs.zfs.deadman
        class = "ereport.fs.zfs.deadman"
        ena = 0xb079cd4badb00801
        detector = (embedded nvlist)
                version = 0x0
                scheme = "zfs"
                pool = 0x6be6583decf30821
                vdev = 0x55295efe558f91e
        (end detector)
        pool = "RAID5"
        pool_guid = 0x6be6583decf30821
        pool_state = 0x0
        pool_context = 0x0
        pool_failmode = "continue"
        vdev_guid = 0x55295efe558f91e
        vdev_type = "disk"
        vdev_path = "/dev/disk/by-partuuid/be64f67f-81d8-459c-a563-daffe3078fed"
        vdev_ashift = 0x9
        vdev_complete_ts = 0x24b07989a9fe4
        vdev_delta_ts = 0x41de3efd4f
        vdev_read_errors = 0x0
        vdev_write_errors = 0x0
        vdev_cksum_errors = 0x0
        vdev_delays = 0x0
        parent_guid = 0xa19560c81de36cac
        parent_type = "raidz"
        vdev_spare_paths =
        vdev_spare_guids =
        zio_err = 0x0
        zio_flags = 0x180080 [CANFAIL DONT_QUEUE DONT_PROPAGATE]
        zio_stage = 0x200000 [VDEV_IO_START]
        zio_pipeline = 0x2e00000 [VDEV_IO_START VDEV_IO_DONE VDEV_IO_ASSESS DONE]
        zio_delay = 0x0
        zio_timestamp = 0x24b0300a0864f
        zio_delta = 0x0
        zio_priority = 0x5 [REMOVAL]
        zio_offset = 0x361a88af000
        zio_size = 0x1000
        time = 0x674b2b58 0x1c9d299a
        eid = 0x13ef

Dec  1 2024 00:13:25.916716709 ereport.fs.zfs.deadman
        class = "ereport.fs.zfs.deadman"
        ena = 0xb15eae6cfbc00801
        detector = (embedded nvlist)
                version = 0x0
                scheme = "zfs"
                pool = 0x6be6583decf30821
                vdev = 0x55295efe558f91e
        (end detector)
        pool = "RAID5"
        pool_guid = 0x6be6583decf30821
        pool_state = 0x0
        pool_context = 0x0
        pool_failmode = "continue"
        vdev_guid = 0x55295efe558f91e
        vdev_type = "disk"
        vdev_path = "/dev/disk/by-partuuid/be64f67f-81d8-459c-a563-daffe3078fed"
        vdev_ashift = 0x9
        vdev_complete_ts = 0x24b15e74d02a4
        vdev_delta_ts = 0x42b4175dbb
        vdev_read_errors = 0x0
        vdev_write_errors = 0x0
        vdev_cksum_errors = 0x0
        vdev_delays = 0x0
        parent_guid = 0xa19560c81de36cac
        parent_type = "raidz"
        vdev_spare_paths =
        vdev_spare_guids =
        zio_err = 0x0
        zio_flags = 0x180080 [CANFAIL DONT_QUEUE DONT_PROPAGATE]
        zio_stage = 0x200000 [VDEV_IO_START]
        zio_pipeline = 0x2e00000 [VDEV_IO_START VDEV_IO_DONE VDEV_IO_ASSESS DONE]
        zio_delay = 0x0
        zio_timestamp = 0x24b0300a0864f
        zio_delta = 0x0
        zio_priority = 0x5 [REMOVAL]
        zio_offset = 0x361a88af000
        zio_size = 0x1000
        time = 0x674b2b95 0x36a3fca5
        eid = 0x13f0

Dec  1 2024 00:14:27.357371608 ereport.fs.zfs.deadman
        class = "ereport.fs.zfs.deadman"
        ena = 0xb2438f8370f00801
        detector = (embedded nvlist)
                version = 0x0
                scheme = "zfs"
                pool = 0x6be6583decf30821
                vdev = 0x55295efe558f91e
        (end detector)
        pool = "RAID5"
        pool_guid = 0x6be6583decf30821
        pool_state = 0x0
        pool_context = 0x0
        pool_failmode = "continue"
        vdev_guid = 0x55295efe558f91e
        vdev_type = "disk"
        vdev_path = "/dev/disk/by-partuuid/be64f67f-81d8-459c-a563-daffe3078fed"
        vdev_ashift = 0x9
        vdev_complete_ts = 0x24b2438f7bc05
        vdev_delta_ts = 0x625160
        vdev_read_errors = 0x0
        vdev_write_errors = 0x0
        vdev_cksum_errors = 0x0
        vdev_delays = 0x0
        parent_guid = 0xa19560c81de36cac
        parent_type = "raidz"
        vdev_spare_paths =
        vdev_spare_guids =
        zio_err = 0x0
        zio_flags = 0x180080 [CANFAIL DONT_QUEUE DONT_PROPAGATE]
        zio_stage = 0x200000 [VDEV_IO_START]
        zio_pipeline = 0x2e00000 [VDEV_IO_START VDEV_IO_DONE VDEV_IO_ASSESS DONE]
        zio_delay = 0x0
        zio_timestamp = 0x24b0300a0864f
        zio_delta = 0x0
        zio_priority = 0x5 [REMOVAL]
        zio_offset = 0x361a88af000
        zio_size = 0x1000
        time = 0x674b2bd3 0x154d0ed8
        eid = 0x13f1

Dec  1 2024 00:15:28.798026504 ereport.fs.zfs.deadman
        class = "ereport.fs.zfs.deadman"
        ena = 0xb32870a4aeb00c01
        detector = (embedded nvlist)
                version = 0x0
                scheme = "zfs"
                pool = 0x6be6583decf30821
                vdev = 0x55295efe558f91e
        (end detector)
        pool = "RAID5"
        pool_guid = 0x6be6583decf30821
        pool_state = 0x0
        pool_context = 0x0
        pool_failmode = "continue"
        vdev_guid = 0x55295efe558f91e
        vdev_type = "disk"
        vdev_path = "/dev/disk/by-partuuid/be64f67f-81d8-459c-a563-daffe3078fed"
        vdev_ashift = 0x9
        vdev_complete_ts = 0x24b3286843f99
        vdev_delta_ts = 0x4358710599
        vdev_read_errors = 0x0
        vdev_write_errors = 0x0
        vdev_cksum_errors = 0x0
        vdev_delays = 0x0
        parent_guid = 0xa19560c81de36cac
        parent_type = "raidz"
        vdev_spare_paths =
        vdev_spare_guids =
        zio_err = 0x0
        zio_flags = 0x180080 [CANFAIL DONT_QUEUE DONT_PROPAGATE]
        zio_stage = 0x200000 [VDEV_IO_START]
        zio_pipeline = 0x2e00000 [VDEV_IO_START VDEV_IO_DONE VDEV_IO_ASSESS DONE]
        zio_delay = 0x0
        zio_timestamp = 0x24b0300a0864f
        zio_delta = 0x0
        zio_priority = 0x5 [REMOVAL]
        zio_offset = 0x361a88af000
        zio_size = 0x1000
        time = 0x674b2c10 0x2f90eb08
        eid = 0x13f2

Dec  1 2024 00:16:30.238681401 ereport.fs.zfs.deadman
        class = "ereport.fs.zfs.deadman"
        ena = 0xb40d51c87d200c01
        detector = (embedded nvlist)
                version = 0x0
                scheme = "zfs"
                pool = 0x6be6583decf30821
                vdev = 0x55295efe558f91e
        (end detector)
        pool = "RAID5"
        pool_guid = 0x6be6583decf30821
        pool_state = 0x0
        pool_context = 0x0
        pool_failmode = "continue"
        vdev_guid = 0x55295efe558f91e
        vdev_type = "disk"
        vdev_path = "/dev/disk/by-partuuid/be64f67f-81d8-459c-a563-daffe3078fed"
        vdev_ashift = 0x9
        vdev_complete_ts = 0x24b40d393dc3c
        vdev_delta_ts = 0x46cc71853e
        vdev_read_errors = 0x0
        vdev_write_errors = 0x0
        vdev_cksum_errors = 0x0
        vdev_delays = 0x0
        parent_guid = 0xa19560c81de36cac
        parent_type = "raidz"
        vdev_spare_paths =
        vdev_spare_guids =
        zio_err = 0x0
        zio_flags = 0x180080 [CANFAIL DONT_QUEUE DONT_PROPAGATE]
        zio_stage = 0x200000 [VDEV_IO_START]
        zio_pipeline = 0x2e00000 [VDEV_IO_START VDEV_IO_DONE VDEV_IO_ASSESS DONE]
        zio_delay = 0x0
        zio_timestamp = 0x24b0300a0864f
        zio_delta = 0x0
        zio_priority = 0x5 [REMOVAL]
        zio_offset = 0x361a88af000
        zio_size = 0x1000
        time = 0x674b2c4e 0xe39fd39
        eid = 0x13f3

Dec  1 2024 00:22:14.302348787 ereport.fs.zfs.deadman
        class = "ereport.fs.zfs.deadman"
        ena = 0xb90f0b6c2e500c01
        detector = (embedded nvlist)
                version = 0x0
                scheme = "zfs"
                pool = 0x6be6583decf30821
                vdev = 0x55295efe558f91e
        (end detector)
        pool = "RAID5"
        pool_guid = 0x6be6583decf30821
        pool_state = 0x0
        pool_context = 0x0
        pool_failmode = "continue"
        vdev_guid = 0x55295efe558f91e
        vdev_type = "disk"
        vdev_path = "/dev/disk/by-partuuid/be64f67f-81d8-459c-a563-daffe3078fed"
        vdev_ashift = 0x9
        vdev_complete_ts = 0x24b90f0a0668c
        vdev_delta_ts = 0x41e07a4dec
        vdev_read_errors = 0x0
        vdev_write_errors = 0x0
        vdev_cksum_errors = 0x0
        vdev_delays = 0x0
        parent_guid = 0xa19560c81de36cac
        parent_type = "raidz"
        vdev_spare_paths =
        vdev_spare_guids =
        zio_err = 0x0
        zio_flags = 0x180080 [CANFAIL DONT_QUEUE DONT_PROPAGATE]
        zio_stage = 0x200000 [VDEV_IO_START]
        zio_pipeline = 0x2e00000 [VDEV_IO_START VDEV_IO_DONE VDEV_IO_ASSESS DONE]
        zio_delay = 0x0
        zio_timestamp = 0x24b8b9ca79e46
        zio_delta = 0x0
        zio_priority = 0x5 [REMOVAL]
        zio_offset = 0x362814dc000
        zio_size = 0x1000
        time = 0x674b2da6 0x120579f3
        eid = 0x13f4

Dec  1 2024 00:23:15.747003724 ereport.fs.zfs.deadman
        class = "ereport.fs.zfs.deadman"
        ena = 0xb9f3f0393a700c01
        detector = (embedded nvlist)
                version = 0x0
                scheme = "zfs"
                pool = 0x6be6583decf30821
                vdev = 0x55295efe558f91e
        (end detector)
        pool = "RAID5"
        pool_guid = 0x6be6583decf30821
        pool_state = 0x0
        pool_context = 0x0
        pool_failmode = "continue"
        vdev_guid = 0x55295efe558f91e
        vdev_type = "disk"
        vdev_path = "/dev/disk/by-partuuid/be64f67f-81d8-459c-a563-daffe3078fed"
        vdev_ashift = 0x9
        vdev_complete_ts = 0x24b9f3f035d6b
        vdev_delta_ts = 0x41e4328db5
        vdev_read_errors = 0x0
        vdev_write_errors = 0x0
        vdev_cksum_errors = 0x0
        vdev_delays = 0x0
        parent_guid = 0xa19560c81de36cac
        parent_type = "raidz"
        vdev_spare_paths =
        vdev_spare_guids =
        zio_err = 0x0
        zio_flags = 0x180080 [CANFAIL DONT_QUEUE DONT_PROPAGATE]
        zio_stage = 0x200000 [VDEV_IO_START]
        zio_pipeline = 0x2e00000 [VDEV_IO_START VDEV_IO_DONE VDEV_IO_ASSESS DONE]
        zio_delay = 0x0
        zio_timestamp = 0x24b8b9ca79e46
        zio_delta = 0x0
        zio_priority = 0x5 [REMOVAL]
        zio_offset = 0x362814dc000
        zio_size = 0x1000
        time = 0x674b2de3 0x2c865f4c
        eid = 0x13f5

Dec  1 2024 00:24:17.183658578 ereport.fs.zfs.deadman
        class = "ereport.fs.zfs.deadman"
        ena = 0xbad8cd947d000c01
        detector = (embedded nvlist)
                version = 0x0
                scheme = "zfs"
                pool = 0x6be6583decf30821
                vdev = 0x55295efe558f91e
        (end detector)
        pool = "RAID5"
        pool_guid = 0x6be6583decf30821
        pool_state = 0x0
        pool_context = 0x0
        pool_failmode = "continue"
        vdev_guid = 0x55295efe558f91e
        vdev_type = "disk"
        vdev_path = "/dev/disk/by-partuuid/be64f67f-81d8-459c-a563-daffe3078fed"
        vdev_ashift = 0x9
        vdev_complete_ts = 0x24bad87fd39a5
        vdev_delta_ts = 0x41956d915e
        vdev_read_errors = 0x0
        vdev_write_errors = 0x0
        vdev_cksum_errors = 0x0
        vdev_delays = 0x0
        parent_guid = 0xa19560c81de36cac
        parent_type = "raidz"
        vdev_spare_paths =
        vdev_spare_guids =
        zio_err = 0x0
        zio_flags = 0x180080 [CANFAIL DONT_QUEUE DONT_PROPAGATE]
        zio_stage = 0x200000 [VDEV_IO_START]
        zio_pipeline = 0x2e00000 [VDEV_IO_START VDEV_IO_DONE VDEV_IO_ASSESS DONE]
        zio_delay = 0x0
        zio_timestamp = 0x24b8b9ca79e46
        zio_delta = 0x0
        zio_priority = 0x5 [REMOVAL]
        zio_offset = 0x362814dc000
        zio_size = 0x1000
        time = 0x674b2e21 0xaf26852
        eid = 0x13f6

Dec  1 2024 00:25:18.620314832 ereport.fs.zfs.deadman
        class = "ereport.fs.zfs.deadman"
        ena = 0xbbbdaea797c00c01
        detector = (embedded nvlist)
                version = 0x0
                scheme = "zfs"
                pool = 0x6be6583decf30821
                vdev = 0x55295efe558f91e
        (end detector)
        pool = "RAID5"
        pool_guid = 0x6be6583decf30821
        pool_state = 0x0
        pool_context = 0x0
        pool_failmode = "continue"
        vdev_guid = 0x55295efe558f91e
        vdev_type = "disk"
        vdev_path = "/dev/disk/by-partuuid/be64f67f-81d8-459c-a563-daffe3078fed"
        vdev_ashift = 0x9
        vdev_complete_ts = 0x24bbbdab4177f
        vdev_delta_ts = 0x42011360f5
        vdev_read_errors = 0x0
        vdev_write_errors = 0x0
        vdev_cksum_errors = 0x0
        vdev_delays = 0x0
        parent_guid = 0xa19560c81de36cac
        parent_type = "raidz"
        vdev_spare_paths =
        vdev_spare_guids =
        zio_err = 0x0
        zio_flags = 0x180080 [CANFAIL DONT_QUEUE DONT_PROPAGATE]
        zio_stage = 0x200000 [VDEV_IO_START]
        zio_pipeline = 0x2e00000 [VDEV_IO_START VDEV_IO_DONE VDEV_IO_ASSESS DONE]
        zio_delay = 0x0
        zio_timestamp = 0x24b8b9ca79e46
        zio_delta = 0x0
        zio_priority = 0x5 [REMOVAL]
        zio_offset = 0x362814dc000
        zio_size = 0x1000
        time = 0x674b2e5e 0x24f940d0
        eid = 0x13f7

Dec  1 2024 00:26:20.060971918 ereport.fs.zfs.deadman
        class = "ereport.fs.zfs.deadman"
        ena = 0xbca28fbfcd500c01
        detector = (embedded nvlist)
                version = 0x0
                scheme = "zfs"
                pool = 0x6be6583decf30821
                vdev = 0x55295efe558f91e
        (end detector)
        pool = "RAID5"
        pool_guid = 0x6be6583decf30821
        pool_state = 0x0
        pool_context = 0x0
        pool_failmode = "continue"
        vdev_guid = 0x55295efe558f91e
        vdev_type = "disk"
        vdev_path = "/dev/disk/by-partuuid/be64f67f-81d8-459c-a563-daffe3078fed"
        vdev_ashift = 0x9
        vdev_complete_ts = 0x24bca26bcfa03
        vdev_delta_ts = 0x40f1e8ffb3
        vdev_read_errors = 0x0
        vdev_write_errors = 0x0
        vdev_cksum_errors = 0x0
        vdev_delays = 0x0
        parent_guid = 0xa19560c81de36cac
        parent_type = "raidz"
        vdev_spare_paths =
        vdev_spare_guids =
        zio_err = 0x0
        zio_flags = 0x180080 [CANFAIL DONT_QUEUE DONT_PROPAGATE]
        zio_stage = 0x200000 [VDEV_IO_START]
        zio_pipeline = 0x2e00000 [VDEV_IO_START VDEV_IO_DONE VDEV_IO_ASSESS DONE]
        zio_delay = 0x0
        zio_timestamp = 0x24b8b9ca79e46
        zio_delta = 0x0
        zio_priority = 0x5 [REMOVAL]
        zio_offset = 0x362814dc000
        zio_size = 0x1000
        time = 0x674b2e9c 0x3a25b8e
        eid = 0x13f8

Dec  1 2024 00:31:47.740476335 ereport.fs.zfs.deadman
        class = "ereport.fs.zfs.deadman"
        ena = 0xc1674070b5600001
        detector = (embedded nvlist)
                version = 0x0
                scheme = "zfs"
                pool = 0x6be6583decf30821
                vdev = 0x55295efe558f91e
        (end detector)
        pool = "RAID5"
        pool_guid = 0x6be6583decf30821
        pool_state = 0x0
        pool_context = 0x0
        pool_failmode = "continue"
        vdev_guid = 0x55295efe558f91e
        vdev_type = "disk"
        vdev_path = "/dev/disk/by-partuuid/be64f67f-81d8-459c-a563-daffe3078fed"
        vdev_ashift = 0x9
        vdev_complete_ts = 0x24c1674016f52
        vdev_delta_ts = 0x40022c13f8
        vdev_read_errors = 0x0
        vdev_write_errors = 0x0
        vdev_cksum_errors = 0x0
        vdev_delays = 0x0
        parent_guid = 0xa19560c81de36cac
        parent_type = "raidz"
        vdev_spare_paths =
        vdev_spare_guids =
        zio_err = 0x0
        zio_flags = 0x180080 [CANFAIL DONT_QUEUE DONT_PROPAGATE]
        zio_stage = 0x200000 [VDEV_IO_START]
        zio_pipeline = 0x2e00000 [VDEV_IO_START VDEV_IO_DONE VDEV_IO_ASSESS DONE]
        zio_delay = 0x0
        zio_timestamp = 0x24c0d74d6dc68
        zio_delta = 0x0
        zio_priority = 0x5 [REMOVAL]
        zio_offset = 0x3631d2ed000
        zio_size = 0x1000
        time = 0x674b2fe3 0x2c22c5af
        eid = 0x13f9

Dec  1 2024 00:32:49.181133420 ereport.fs.zfs.deadman
        class = "ereport.fs.zfs.deadman"
        ena = 0xc24c218423c00001
        detector = (embedded nvlist)
                version = 0x0
                scheme = "zfs"
                pool = 0x6be6583decf30821
                vdev = 0x55295efe558f91e
        (end detector)
        pool = "RAID5"
        pool_guid = 0x6be6583decf30821
        pool_state = 0x0
        pool_context = 0x0
        pool_failmode = "continue"
        vdev_guid = 0x55295efe558f91e
        vdev_type = "disk"
        vdev_path = "/dev/disk/by-partuuid/be64f67f-81d8-459c-a563-daffe3078fed"
        vdev_ashift = 0x9
        vdev_complete_ts = 0x24c24bcc43e8a
        vdev_delta_ts = 0x3f20c8495b
        vdev_read_errors = 0x0
        vdev_write_errors = 0x0
        vdev_cksum_errors = 0x0
        vdev_delays = 0x0
        parent_guid = 0xa19560c81de36cac
        parent_type = "raidz"
        vdev_spare_paths =
        vdev_spare_guids =
        zio_err = 0x0
        zio_flags = 0x180080 [CANFAIL DONT_QUEUE DONT_PROPAGATE]
        zio_stage = 0x200000 [VDEV_IO_START]
        zio_pipeline = 0x2e00000 [VDEV_IO_START VDEV_IO_DONE VDEV_IO_ASSESS DONE]
        zio_delay = 0x0
        zio_timestamp = 0x24c0d74d6dc68
        zio_delta = 0x0
        zio_priority = 0x5 [REMOVAL]
        zio_offset = 0x3631d2ed000
        zio_size = 0x1000
        time = 0x674b3021 0xacbe06c
        eid = 0x13fa

Dec  1 2024 00:33:50.621790503 ereport.fs.zfs.deadman
        class = "ereport.fs.zfs.deadman"
        ena = 0xc33102ad76800001
        detector = (embedded nvlist)
                version = 0x0
                scheme = "zfs"
                pool = 0x6be6583decf30821
                vdev = 0x55295efe558f91e
        (end detector)
        pool = "RAID5"
        pool_guid = 0x6be6583decf30821
        pool_state = 0x0
        pool_context = 0x0
        pool_failmode = "continue"
        vdev_guid = 0x55295efe558f91e
        vdev_type = "disk"
        vdev_path = "/dev/disk/by-partuuid/be64f67f-81d8-459c-a563-daffe3078fed"
        vdev_ashift = 0x9
        vdev_complete_ts = 0x24c330f21ce38
        vdev_delta_ts = 0x3ebe698422
        vdev_read_errors = 0x0
        vdev_write_errors = 0x0
        vdev_cksum_errors = 0x0
        vdev_delays = 0x0
        parent_guid = 0xa19560c81de36cac
        parent_type = "raidz"
        vdev_spare_paths =
        vdev_spare_guids =
        zio_err = 0x0
        zio_flags = 0x180080 [CANFAIL DONT_QUEUE DONT_PROPAGATE]
        zio_stage = 0x200000 [VDEV_IO_START]
        zio_pipeline = 0x2e00000 [VDEV_IO_START VDEV_IO_DONE VDEV_IO_ASSESS DONE]
        zio_delay = 0x0
        zio_timestamp = 0x24c0d74d6dc68
        zio_delta = 0x0
        zio_priority = 0x5 [REMOVAL]
        zio_offset = 0x3631d2ed000
        zio_size = 0x1000
        time = 0x674b305e 0x250fc527
        eid = 0x13fb

Dec  1 2024 00:34:52.062447589 ereport.fs.zfs.deadman
        class = "ereport.fs.zfs.deadman"
        ena = 0xc415e3ca98400001
        detector = (embedded nvlist)
                version = 0x0
                scheme = "zfs"
                pool = 0x6be6583decf30821
                vdev = 0x55295efe558f91e
        (end detector)
        pool = "RAID5"
        pool_guid = 0x6be6583decf30821
        pool_state = 0x0
        pool_context = 0x0
        pool_failmode = "continue"
        vdev_guid = 0x55295efe558f91e
        vdev_type = "disk"
        vdev_path = "/dev/disk/by-partuuid/be64f67f-81d8-459c-a563-daffe3078fed"
        vdev_ashift = 0x9
        vdev_complete_ts = 0x24c415dabf12a
        vdev_delta_ts = 0x3e7646343d
        vdev_read_errors = 0x0
        vdev_write_errors = 0x0
        vdev_cksum_errors = 0x0
        vdev_delays = 0x0
        parent_guid = 0xa19560c81de36cac
        parent_type = "raidz"
        vdev_spare_paths =
        vdev_spare_guids =
        zio_err = 0x0
        zio_flags = 0x180080 [CANFAIL DONT_QUEUE DONT_PROPAGATE]
        zio_stage = 0x200000 [VDEV_IO_START]
        zio_pipeline = 0x2e00000 [VDEV_IO_START VDEV_IO_DONE VDEV_IO_ASSESS DONE]
        zio_delay = 0x0
        zio_timestamp = 0x24c0d74d6dc68
        zio_delta = 0x0
        zio_priority = 0x5 [REMOVAL]
        zio_offset = 0x3631d2ed000
        zio_size = 0x1000
        time = 0x674b309c 0x3b8dfe5
        eid = 0x13fc

Dec  1 2024 00:40:48.414258645 ereport.fs.zfs.deadman
        class = "ereport.fs.zfs.deadman"
        ena = 0xc94563f1bf300001
        detector = (embedded nvlist)
                version = 0x0
                scheme = "zfs"
                pool = 0x6be6583decf30821
                vdev = 0x55295efe558f91e
        (end detector)
        pool = "RAID5"
        pool_guid = 0x6be6583decf30821
        pool_state = 0x0
        pool_context = 0x0
        pool_failmode = "continue"
        vdev_guid = 0x55295efe558f91e
        vdev_type = "disk"
        vdev_path = "/dev/disk/by-partuuid/be64f67f-81d8-459c-a563-daffe3078fed"
        vdev_ashift = 0x9
        vdev_complete_ts = 0x24c94563c0370
        vdev_delta_ts = 0x33ecbb0ee8
        vdev_read_errors = 0x0
        vdev_write_errors = 0x0
        vdev_cksum_errors = 0x0
        vdev_delays = 0x0
        parent_guid = 0xa19560c81de36cac
        parent_type = "raidz"
        vdev_spare_paths =
        vdev_spare_guids =
        zio_err = 0x0
        zio_flags = 0x180080 [CANFAIL DONT_QUEUE DONT_PROPAGATE]
        zio_stage = 0x200000 [VDEV_IO_START]
        zio_pipeline = 0x2e00000 [VDEV_IO_START VDEV_IO_DONE VDEV_IO_ASSESS DONE]
        zio_delay = 0x0
        zio_timestamp = 0x24c8bc42782c8
        zio_delta = 0x0
        zio_priority = 0x5 [REMOVAL]
        zio_offset = 0x363b1be0000
        zio_size = 0x1000
        time = 0x674b3200 0x18b115d5
        eid = 0x13fd

Dec  1 2024 00:41:49.854915733 ereport.fs.zfs.deadman
        class = "ereport.fs.zfs.deadman"
        ena = 0xca2a450932f00401
        detector = (embedded nvlist)
                version = 0x0
                scheme = "zfs"
                pool = 0x6be6583decf30821
                vdev = 0x55295efe558f91e
        (end detector)
        pool = "RAID5"
        pool_guid = 0x6be6583decf30821
        pool_state = 0x0
        pool_context = 0x0
        pool_failmode = "continue"
        vdev_guid = 0x55295efe558f91e
        vdev_type = "disk"
        vdev_path = "/dev/disk/by-partuuid/be64f67f-81d8-459c-a563-daffe3078fed"
        vdev_ashift = 0x9
        vdev_complete_ts = 0x24ca2a450e887
        vdev_delta_ts = 0x2456f6d586
        vdev_read_errors = 0x0
        vdev_write_errors = 0x0
        vdev_cksum_errors = 0x0
        vdev_delays = 0x0
        parent_guid = 0xa19560c81de36cac
        parent_type = "raidz"
        vdev_spare_paths =
        vdev_spare_guids =
        zio_err = 0x0
        zio_flags = 0x180080 [CANFAIL DONT_QUEUE DONT_PROPAGATE]
        zio_stage = 0x200000 [VDEV_IO_START]
        zio_pipeline = 0x2e00000 [VDEV_IO_START VDEV_IO_DONE VDEV_IO_ASSESS DONE]
        zio_delay = 0x0
        zio_timestamp = 0x24c8bc42782c8
        zio_delta = 0x0
        zio_priority = 0x5 [REMOVAL]
        zio_offset = 0x363b1be0000
        zio_size = 0x1000
        time = 0x674b323d 0x32f4fa95
        eid = 0x13fe

Dec  1 2024 00:42:16.827203856 ereport.fs.zfs.vdev.unknown
        class = "ereport.fs.zfs.vdev.unknown"
        ena = 0xca8ebf72acf00801
        detector = (embedded nvlist)
                version = 0x0
                scheme = "zfs"
                pool = 0x6be6583decf30821
                vdev = 0x6d0bdb8c30b204e2
        (end detector)
        pool = "RAID5"
        pool_guid = 0x6be6583decf30821
        pool_state = 0x0
        pool_context = 0x0
        pool_failmode = "continue"
        vdev_guid = 0x6d0bdb8c30b204e2
        vdev_type = "disk"
        vdev_path = "/dev/disk/by-partuuid/0da0b4a6-16fd-4a86-9141-821f846a3134"
        vdev_ashift = 0x9
        vdev_complete_ts = 0x24c8bdb120fcb
        vdev_delta_ts = 0x14ae19
        vdev_read_errors = 0x0
        vdev_write_errors = 0x0
        vdev_cksum_errors = 0x0
        vdev_delays = 0x0
        parent_guid = 0xa19560c81de36cac
        parent_type = "raidz"
        vdev_spare_paths =
        vdev_spare_guids =
        prev_state = 0x1
        time = 0x674b3258 0x314e2110
        eid = 0x13ff

Dec  1 2024 00:42:16.827203856 resource.fs.zfs.statechange
        version = 0x0
        class = "resource.fs.zfs.statechange"
        pool = "RAID5"
        pool_guid = 0x6be6583decf30821
        pool_state = 0x0
        pool_context = 0x0
        vdev_guid = 0x6d0bdb8c30b204e2
        vdev_state = "UNAVAIL" (0x4)
        vdev_path = "/dev/disk/by-partuuid/0da0b4a6-16fd-4a86-9141-821f846a3134"
        vdev_laststate = "ONLINE" (0x7)
        time = 0x674b3258 0x314e2110
        eid = 0x1400

Dec  1 2024 00:42:17.259208469 sysevent.fs.zfs.scrub_start
        version = 0x0
        class = "sysevent.fs.zfs.scrub_start"
        pool = "RAID5"
        pool_guid = 0x6be6583decf30821
        pool_state = 0x0
        pool_context = 0x0
        time = 0x674b3259 0xf733515
        eid = 0x1401

Dec  1 2024 00:42:17.259208469 sysevent.fs.zfs.history_event
        version = 0x0
        class = "sysevent.fs.zfs.history_event"
        pool = "RAID5"
        pool_guid = 0x6be6583decf30821
        pool_state = 0x0
        pool_context = 0x0
        history_hostname = "nigoserver"
        history_internal_str = "func=1 mintxg=0 maxtxg=2453812"
        history_internal_name = "scan setup"
        history_txg = 0x257134
        history_time = 0x674b3259
        time = 0x674b3259 0xf733515
        eid = 0x1402

Dec  1 2024 00:42:41.343465611 sysevent.fs.zfs.history_event
        version = 0x0
        class = "sysevent.fs.zfs.history_event"
        pool = "RAID5"
        pool_guid = 0x6be6583decf30821
        pool_state = 0x0
        pool_context = 0x0
        history_hostname = "nigoserver"
        history_internal_str = "offset=14906861154304 failed_offset=14906860007424"
        history_internal_name = "reflow pause"
        history_txg = 0x257138
        history_time = 0x674b326a
        time = 0x674b3271 0x1478de8b
        eid = 0x1403

Dec  2 2024 02:58:13.710404110 sysevent.fs.zfs.history_event
        version = 0x0
        class = "sysevent.fs.zfs.history_event"
        pool = "RAID5"
        pool_guid = 0x6be6583decf30821
        pool_state = 0x0
        pool_context = 0x0
        history_hostname = "nigoserver"
        history_internal_str = "errors=0"
        history_internal_name = "scan done"
        history_txg = 0x25a9e6
        history_time = 0x674ca3b5
        time = 0x674ca3b5 0x2a57e80e
        eid = 0x1408

Dec  2 2024 02:58:13.734404381 sysevent.fs.zfs.scrub_finish
        version = 0x0
        class = "sysevent.fs.zfs.scrub_finish"
        pool = "RAID5"
        pool_guid = 0x6be6583decf30821
        pool_state = 0x0
        pool_context = 0x0
        time = 0x674ca3b5 0x2bc61f1d
        eid = 0x1409

Dec  2 2024 02:58:22.114498910 sysevent.fs.zfs.history_event
        version = 0x0
        class = "sysevent.fs.zfs.history_event"
        pool = "RAID5"
        pool_guid = 0x6be6583decf30821
        pool_state = 0x0
        pool_context = 0x0
        history_hostname = "nigoserver"
        history_internal_str = "offset=14906883694592 failed_offset=14906860007424"
        history_internal_name = "reflow pause"
        history_txg = 0x25a9ea
        history_time = 0x674ca3b8
        time = 0x674ca3be 0x6d31d5e
        eid = 0x140a

Dec  2 2024 11:23:23.229171634 sysevent.fs.zfs.history_event
        version = 0x0
        class = "sysevent.fs.zfs.history_event"
        pool = "RAID5"
        pool_guid = 0x6be6583decf30821
        pool_state = 0x0
        pool_context = 0x0
        history_hostname = "nigoserver"
        history_internal_str = "offset=14906878033920 failed_offset=14906860007424"
        history_internal_name = "reflow pause"
        history_txg = 0x25c10c
        history_time = 0x674d1a1b
        time = 0x674d1a1b 0xda8e1b2
        eid = 0x140b

Dec  2 2024 11:23:23.429173798 sysevent.fs.zfs.config_sync
        version = 0x0
        class = "sysevent.fs.zfs.config_sync"
        pool = "RAID5"
        pool_guid = 0x6be6583decf30821
        pool_state = 0x0
        pool_context = 0x0
        time = 0x674d1a1b 0x1994ac26
        eid = 0x140c

Dec  2 2024 11:23:23.429173798 sysevent.fs.zfs.config_sync
        version = 0x0
        class = "sysevent.fs.zfs.config_sync"
        pool = "RAID5"
        pool_guid = 0x6be6583decf30821
        pool_state = 0x0
        pool_context = 0x0
        time = 0x674d1a1b 0x1994ac26
        eid = 0x140d

I don’t know what happened there. The ZFS expansion feature is still pretty new and unfamiliar to me.

You are getting a lot of deadman events. I’d be somewhat concerned about the health of the Toshiba disk, but I’m not sure whether that is related to the new disk becoming UNAVAIL.
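
If it were me, I’d at least keep an eye on that drive’s SMART attributes and on how quickly the deadman events accumulate while the expansion runs; something along these lines should do (the device node below is a placeholder, substitute whichever node the Toshiba actually is on your system):

$ smartctl -A /dev/sdX                    # sdX = placeholder for the Toshiba disk
$ zpool events RAID5 | grep -c deadman    # rough count of deadman events logged so far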

Maybe someone else with more in-depth zfs knowledge can chime in.

Thanks for looking into it.
It seems the vdev_path in those deadman events points at the Toshiba’s path simply because that is the first disk of the raidz1 pool.
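
(To double-check which physical drive a given partuuid maps to, something like this should work:)

$ ls -l /dev/disk/by-partuuid/be64f67f-81d8-459c-a563-daffe3078fed
$ lsblk -o NAME,MODEL,SERIAL,PARTUUID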

UPDATE:

Current status: it started resilvering after a reboot.

After the exchange above, I tried zpool offline (which just worked), and one more zpool online (still UNAVAIL).
Also, a SMART extended offline self-test found no errors.
So I decided to reboot the system, and now the status is:

  pool: RAID5
 state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Tue Dec  3 11:31:21 2024
        192M / 25.0T scanned, 81.4M / 25.0T issued at 772K/s
        21.1M resilvered, 0.00% done, no estimated completion time
expand: expansion of raidz1-0 in progress since Sat Nov 23 12:59:50 2024
        10.4T / 25.0T copied at 12.7M/s, 41.57% done, 13 days 23:18:01 to go
config:

        NAME                                      STATE     READ WRITE CKSUM
        RAID5                                     ONLINE       0     0     0
          raidz1-0                                ONLINE       0     0     0
            be64f67f-81d8-459c-a563-daffe3078fed  ONLINE       0     0     0
            259a0b75-6c49-4561-8319-c920dbd33bbc  ONLINE       0     0     0
            59048391-9c54-4d6b-8c7d-54f4fe8b5bdf  ONLINE       0     0     0
            0da0b4a6-16fd-4a86-9141-821f846a3134  ONLINE       0     0     1  (resilvering)

The resilver is much slower than the expansion itself; at this rate it could take 180 days or more (or maybe it will speed up once the expansion finishes).

UPDATE:

The resilver has now finished:

status: One or more devices has experienced an unrecoverable error.  An
        attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-9P
  scan: resilvered 58.9M in 00:04:28 with 0 errors on Tue Dec  3 11:35:49 2024
expand: expansion of raidz1-0 in progress since Sat Nov 23 12:59:50 2024
        10.4T / 25.0T copied at 12.7M/s, 41.58% done, 13 days 23:15:04 to go

It tells me to clear the errors, so I will try that now.
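
If I read the message right, zpool clear can target either the whole pool or a single device, so I’ll probably start with one of these:

$ zpool clear RAID5
$ zpool clear RAID5 0da0b4a6-16fd-4a86-9141-821f846a3134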