LSI SAS 9300-16i error: "Power-on or device reset occurred"

Hello!

I’m getting some ZFS pools marked as unhealthy after connecting them to an LSI SAS 9300-16i board.
Before this I was using NVMe-to-SATA adapters plugged into a PCIe x16 card with PCIe bifurcation. I wanted to upgrade since my motherboard (ASRock X570D4U-2L2T) limits the bifurcation settings when a CPU with an integrated GPU is installed. I needed to connect more drives and these boards were the only way.
FYI: I’m using a setup where each drive (a classic 3.5" SATA drive of about 20 TB) belongs to its own ZFS pool. I don’t care if a drive breaks, so let’s not get into how a stripe setup is a bad idea… I have 24 drives connected to the 9300-16i boards.
The workload is read-only: once I fill up a drive (I do it once) I only need to read from it (and almost all drives are already filled). On a typical day I read about 3-4 TB of data across all the drives connected to the 9300-16i cards.
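For scale, the averaged read load implied above is tiny compared to what these HBAs can sustain. A back-of-the-envelope sketch (assuming the 4 TB/day figure, ignoring burstiness):

```python
# Rough arithmetic with assumed numbers from the post, not measurements.
# 3-4 TB read per day spread over 24 drives is a very light *average*
# load, even if individual reads are sequential bursts.
tb_per_day = 4
drives = 24
seconds_per_day = 24 * 3600

avg_mb_s_total = tb_per_day * 1e6 / seconds_per_day  # TB -> MB, per second
avg_mb_s_per_drive = avg_mb_s_total / drives
print(f"~{avg_mb_s_total:.0f} MB/s total, ~{avg_mb_s_per_drive:.1f} MB/s per drive")
```

So on average the array moves well under 50 MB/s, which makes the resets look like a link/controller problem rather than an overload problem.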

I bought 2 used LSI SAS 9300-16i cards. They look to be in good condition and so do the cables. Still, I don’t know if there is a test I can run to determine whether the cards or cables are faulty… One card is installed in the PCIe x16 slot and the other in the PCIe x8 slot. I also placed a fan right in front of both cards, and by touching them I can confirm the temperature is under control: they are just a bit warm to the touch. I used them for about a day before installing the fan, and back then you couldn’t touch them, that’s how hot they were.

Initially they seemed to run fine, but after some days I noticed a lot of pools going offline or accumulating errors. After a zpool clear I was able to get the device back. ZFS sometimes reported errors on some files, but the data itself was fine: I store the sha512 of all files on a separate (RAIDZ2) pool and the hashes matched. A scrub was able to mark the device healthy again (even though the pool has no redundancy at all, simply because the data itself was not corrupted).
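The sha512 cross-check described above can be scripted along these lines (a minimal sketch: the file path and the stored digest source are hypothetical, this is just the recompute-and-compare step):

```python
# Hedged sketch of the sha512 cross-check: recompute a file's digest in
# chunks (so huge files don't need to fit in RAM) and compare it with
# the digest stored on the redundant pool.
import hashlib

def sha512_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file through sha512 one chunk at a time."""
    h = hashlib.sha512()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()

def verify(path: str, stored_digest: str) -> bool:
    """True if the file on disk still matches the recorded digest."""
    return sha512_of(path) == stored_digest.lower()
```

A match here means the reads that ZFS flagged were transient (link resets), not on-disk corruption, which agrees with the scrub clearing the errors.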

I started looking at the logs in /var/log/messages and I noticed a ton of:

Dec  7 19:36:29 truenas kernel: mpt3sas_cm1: log_info(0x31110e03): originator(PL), code(0x11), sub_code(0x0e03)
Dec  7 19:36:29 truenas kernel: mpt3sas_cm1: log_info(0x31110e03): originator(PL), code(0x11), sub_code(0x0e03)
Dec  7 19:36:29 truenas kernel: mpt3sas_cm1: log_info(0x31110e03): originator(PL), code(0x11), sub_code(0x0e03)
Dec  7 19:36:29 truenas kernel: sd 37:0:1:0: Power-on or device reset occurred
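To see whether these events cluster on one controller (and therefore one card or one cable), it may help to tally them per mpt3sas instance. A minimal sketch, fed with sample lines from the logs in this thread (in practice you would read /var/log/messages instead):

```python
# Count mpt3sas log_info events per controller instance (cm0..cm3).
# If one instance dominates, suspicion falls on that card/port/cable.
import re
from collections import Counter

SAMPLE_LOG = """\
Dec  7 19:36:29 truenas kernel: mpt3sas_cm1: log_info(0x31110e03): originator(PL), code(0x11), sub_code(0x0e03)
Dec  8 05:25:04 truenas kernel: mpt3sas_cm3: log_info(0x31110d00): originator(PL), code(0x11), sub_code(0x0d00)
Dec  8 13:28:24 truenas kernel: mpt3sas_cm3: log_info(0x31110d00): originator(PL), code(0x11), sub_code(0x0d00)
"""

def count_events(text: str) -> Counter:
    """Tally log_info lines per mpt3sas_cmN controller instance."""
    pat = re.compile(r"(mpt3sas_cm\d+): log_info\((0x[0-9a-f]+)\)")
    return Counter(m.group(1) for m in pat.finditer(text))

counts = count_events(SAMPLE_LOG)
print(counts)  # in this sample, mpt3sas_cm3 dominates
```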

Doing some research I found out that these HBAs need a specific firmware version to work well with TrueNAS: https://www.truenas.com/community/resources/lsi-9300-xx-firmware-update.145/.

I was on some old firmware, so I updated as described in the post. Here are the listings for my 2 boards:

root@truenas[/home/admin]# sas3flash -listall  
Avago Technologies SAS3 Flash Utility
Version 16.00.00.00 (2017.05.02) 
Copyright 2008-2017 Avago Technologies. All rights reserved.

        Adapter Selected is a Avago SAS: SAS3008(C0)

Num   Ctlr            FW Ver        NVDATA        x86-BIOS         PCI Addr
----------------------------------------------------------------------------

0  SAS3008(C0)  16.00.12.00    0e.01.00.03    08.15.00.00     00:12:00:00
1  SAS3008(C0)  16.00.12.00    0e.01.00.03    08.15.00.00     00:14:00:00
2  SAS3008(C0)  16.00.12.00    0e.01.00.03    08.15.00.00     00:2f:00:00
3  SAS3008(C0)  16.00.12.00    0e.01.00.03    08.15.00.00     00:31:00:00

        Finished Processing Commands Successfully.
        Exiting SAS3Flash.
root@truenas[/home/admin]# sas3flash -c 0 -list
Avago Technologies SAS3 Flash Utility
Version 16.00.00.00 (2017.05.02) 
Copyright 2008-2017 Avago Technologies. All rights reserved.

        Adapter Selected is a Avago SAS: SAS3008(C0)

        Controller Number              : 0
        Controller                     : SAS3008(C0)
        PCI Address                    : 00:12:00:00
        SAS Address                    : 500062b-2-015d-03c0
        NVDATA Version (Default)       : 0e.01.00.03
        NVDATA Version (Persistent)    : 0e.01.00.03
        Firmware Product ID            : 0x2221 (IT)
        Firmware Version               : 16.00.12.00
        NVDATA Vendor                  : LSI
        NVDATA Product ID              : SAS9300-16i
        BIOS Version                   : 08.15.00.00
        UEFI BSD Version               : 06.00.00.00
        FCODE Version                  : N/A
        Board Name                     : SAS9300-16i
        Board Assembly                 : 03-25600-01B
        Board Tracer Number            : SP62102950

        Finished Processing Commands Successfully.
        Exiting SAS3Flash.
root@truenas[/home/admin]# sas3flash -c 1 -list
Avago Technologies SAS3 Flash Utility
Version 16.00.00.00 (2017.05.02) 
Copyright 2008-2017 Avago Technologies. All rights reserved.

        Adapter Selected is a Avago SAS: SAS3008(C0)

        Controller Number              : 1
        Controller                     : SAS3008(C0)
        PCI Address                    : 00:14:00:00
        SAS Address                    : 500062b-2-015d-2940
        NVDATA Version (Default)       : 0e.01.00.03
        NVDATA Version (Persistent)    : 0e.01.00.03
        Firmware Product ID            : 0x2221 (IT)
        Firmware Version               : 16.00.12.00
        NVDATA Vendor                  : LSI
        NVDATA Product ID              : SAS9300-16i
        BIOS Version                   : 08.15.00.00
        UEFI BSD Version               : 06.00.00.00
        FCODE Version                  : N/A
        Board Name                     : SAS9300-16i
        Board Assembly                 : 03-25600-01B
        Board Tracer Number            : SP62102950

        Finished Processing Commands Successfully.
        Exiting SAS3Flash.
root@truenas[/home/admin]# sas3flash -c 2 -list
Avago Technologies SAS3 Flash Utility
Version 16.00.00.00 (2017.05.02) 
Copyright 2008-2017 Avago Technologies. All rights reserved.

        Adapter Selected is a Avago SAS: SAS3008(C0)

        Controller Number              : 2
        Controller                     : SAS3008(C0)
        PCI Address                    : 00:2f:00:00
        SAS Address                    : 500062b-2-015d-00c0
        NVDATA Version (Default)       : 0e.01.00.03
        NVDATA Version (Persistent)    : 0e.01.00.03
        Firmware Product ID            : 0x2221 (IT)
        Firmware Version               : 16.00.12.00
        NVDATA Vendor                  : LSI
        NVDATA Product ID              : SAS9300-16i
        BIOS Version                   : 08.15.00.00
        UEFI BSD Version               : 06.00.00.00
        FCODE Version                  : N/A
        Board Name                     : SAS9300-16i
        Board Assembly                 : 03-25600-01B
        Board Tracer Number            : SP62102749

        Finished Processing Commands Successfully.
        Exiting SAS3Flash.
root@truenas[/home/admin]# sas3flash -c 3 -list
Avago Technologies SAS3 Flash Utility
Version 16.00.00.00 (2017.05.02) 
Copyright 2008-2017 Avago Technologies. All rights reserved.

        Adapter Selected is a Avago SAS: SAS3008(C0)

        Controller Number              : 3
        Controller                     : SAS3008(C0)
        PCI Address                    : 00:31:00:00
        SAS Address                    : 500062b-2-015d-2640
        NVDATA Version (Default)       : 0e.01.00.03
        NVDATA Version (Persistent)    : 0e.01.00.03
        Firmware Product ID            : 0x2221 (IT)
        Firmware Version               : 16.00.12.00
        NVDATA Vendor                  : LSI
        NVDATA Product ID              : SAS9300-16i
        BIOS Version                   : 08.15.00.00
        UEFI BSD Version               : 06.00.00.00
        FCODE Version                  : N/A
        Board Name                     : SAS9300-16i
        Board Assembly                 : 03-25600-01B
        Board Tracer Number            : SP62102749

        Finished Processing Commands Successfully.
        Exiting SAS3Flash.
root@truenas[/home/admin]# 

The BIOS Version and UEFI BSD Version look old, but I think we only care about the Firmware Version, right?

Then I rebooted TrueNAS and things got better…
I did the firmware update before going to bed, and the next morning I noticed a pool being suspended (as was happening with more drives before). It was a pool under scrub (trying to clear the false errors from the previous faulty reads):

Dec  8 05:25:04 truenas kernel: mpt3sas_cm3: log_info(0x31110d00): originator(PL), code(0x11), sub_code(0x0d00)
Dec  8 05:25:04 truenas kernel: sd 39:0:3:0: [sdo] tag#7214 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=3s
Dec  8 05:25:04 truenas kernel: sd 39:0:3:0: [sdo] tag#7214 Sense Key : Not Ready [current] 
Dec  8 05:25:04 truenas kernel: sd 39:0:3:0: [sdo] tag#7214 Add. Sense: Logical unit not ready, cause not reportable
Dec  8 05:25:04 truenas kernel: sd 39:0:3:0: [sdo] tag#7214 CDB: Read(16) 88 00 00 00 00 01 96 5c ec 60 00 00 01 00 00 00
Dec  8 05:25:04 truenas kernel: zio pool=my_pool_12 vdev=/dev/disk/by-partuuid/169f3144-7ec4-4ecf-9642-af8fced28480 error=5 type=1 offset=3490629337088 size=131072 flags=1572992
Dec  8 05:25:04 truenas kernel: sd 39:0:3:0: [sdo] tag#7215 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=0s
Dec  8 05:25:04 truenas kernel: sd 39:0:3:0: [sdo] tag#7215 Sense Key : Not Ready [current] 
Dec  8 05:25:04 truenas kernel: sd 39:0:3:0: [sdo] tag#7215 Add. Sense: Logical unit not ready, cause not reportable
Dec  8 05:25:04 truenas kernel: sd 39:0:3:0: [sdo] tag#7215 CDB: Read(16) 88 00 00 00 00 00 00 00 12 10 00 00 00 10 00 00
Dec  8 05:25:04 truenas kernel: zio pool=my_pool_12 vdev=/dev/disk/by-partuuid/169f3144-7ec4-4ecf-9642-af8fced28480 error=5 type=1 offset=270336 size=8192 flags=721089
Dec  8 05:25:04 truenas kernel: sd 39:0:3:0: [sdo] tag#7216 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=0s
Dec  8 05:25:04 truenas kernel: sd 39:0:3:0: [sdo] tag#7216 Sense Key : Not Ready [current] 
Dec  8 05:25:04 truenas kernel: sd 39:0:3:0: [sdo] tag#7216 Add. Sense: Logical unit not ready, cause not reportable
Dec  8 05:25:04 truenas kernel: sd 39:0:3:0: [sdo] tag#7216 CDB: Read(16) 88 00 00 00 00 08 2f 7f f4 10 00 00 00 10 00 00
Dec  8 05:25:04 truenas kernel: zio pool=my_pool_12 vdev=/dev/disk/by-partuuid/169f3144-7ec4-4ecf-9642-af8fced28480 error=5 type=1 offset=18000204275712 size=8192 flags=721089
Dec  8 05:25:04 truenas kernel: sd 39:0:3:0: [sdo] tag#7217 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=0s
Dec  8 05:25:04 truenas kernel: sd 39:0:3:0: [sdo] tag#7217 Sense Key : Not Ready [current] 
Dec  8 05:25:04 truenas kernel: sd 39:0:3:0: [sdo] tag#7217 Add. Sense: Logical unit not ready, cause not reportable
Dec  8 05:25:04 truenas kernel: sd 39:0:3:0: [sdo] tag#7217 CDB: Read(16) 88 00 00 00 00 08 2f 7f f6 10 00 00 00 10 00 00
Dec  8 05:25:04 truenas kernel: zio pool=my_pool_12 vdev=/dev/disk/by-partuuid/169f3144-7ec4-4ecf-9642-af8fced28480 error=5 type=1 offset=18000204537856 size=8192 flags=721089
Dec  8 05:25:07 truenas kernel: WARNING: Pool 'my_pool_12' has encountered an uncorrectable I/O failure and has been suspended.

For sure it’s less than before, but if it was only the firmware, why did it happen again?

Now I’ve run zpool clear on the pool again and the scrub is continuing…
No more errors at the time of writing, but it’s only been a few hours.

I found this post all-disks-in-vdev-faulted-all-at-once-but-no-other-drives-on-backplane.110077 which links to this reddit post: scale_drive_resets_with_lsi_93008i_looking_for. It seems I have the same issue, do you agree?
As per the comments, the fix should be to upgrade the firmware to the specific version (which I did) and to blacklist mpt3sas. The last step I have not tried and I wanted to check people’s opinions… It doesn’t seem like something you want to do in TrueNAS, since you would be changing settings in the OS…

Do you have any suggestions on how to proceed?
If one (or both) of the HBA boards is faulty, how can I detect it?

I may also buy a brand new card (an HBA 9600-24i for example)… The goal is to connect as many drives as I can, where each drive needs, I would say, about 220 MB/s since they are 3.5" mechanical drives.
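On the bandwidth side, a quick sanity check suggests the current cards are not the limit. Assumed figures (not measurements): each 9300-16i is a PCIe 3.0 x8 device, and a PCIe 3.0 lane carries roughly 0.985 GB/s after 128b/130b encoding:

```python
# Back-of-the-envelope host-link check with assumed numbers:
# 12 drives per card at ~220 MB/s sequential vs. a PCIe 3.0 x8 link.
PCIE3_GBPS_PER_LANE = 0.985      # ~GB/s per PCIe 3.0 lane after encoding
link_gbps = 8 * PCIE3_GBPS_PER_LANE   # x8 link, ~7.9 GB/s raw

drives_per_card = 12
per_drive_mbps = 220
needed_gbps = drives_per_card * per_drive_mbps / 1000

print(f"need ~{needed_gbps:.2f} GB/s of ~{link_gbps:.2f} GB/s per card")
assert needed_gbps < link_gbps  # plenty of headroom even with all drives streaming
```

So even with every drive streaming at full speed, each card’s host link has roughly 3x headroom; the resets are unlikely to be a throughput problem.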

Any help in troubleshooting this problem is very appreciated!
Thanks!

Another option is to use an expander.

That is also an option… I’m not very familiar with what a good config could be. Do you have some examples of HBA / expander combinations that work well together?
Anyway, if I’m correct the problem would also be there with an expander, right? For example, if one of my 4 controllers (2 cards) is faulty, the expander connected to it will get I/O errors from all the drives connected through it…
It would be nice if it were possible to rule out a software cause (like firmware) first…

You can use a RES3TV360 or a similar SAS3 expander variant. Some server chassis come with built-in expanders. But you’re right that an expander won’t fix your controller issue: if a controller resets, all drives connected through that controller will be affected.

I don’t know why yours reset. They have the recommended firmware, 16.00.12.00. These controllers are notorious for running too hot without adequate cooling, but you mention that you spotted that and now cool them.

As suspected, I got errors from a pool again. I was performing 2 scrubs and one of them was on that pool. Here is some of the log from /var/log/messages:

Dec  8 13:28:21 truenas kernel: sd 39:0:3:0: device_block, handle(0x000e)
Dec  8 13:28:23 truenas kernel: sd 39:0:3:0: device_unblock and setting to running, handle(0x000e)
Dec  8 13:28:24 truenas kernel: mpt3sas_cm3: log_info(0x31110d00): originator(PL), code(0x11), sub_code(0x0d00)
Dec  8 13:28:24 truenas kernel: mpt3sas_cm3: log_info(0x31110d00): originator(PL), code(0x11), sub_code(0x0d00)
Dec  8 13:28:24 truenas kernel: mpt3sas_cm3: log_info(0x31110d00): originator(PL), code(0x11), sub_code(0x0d00)
Dec  8 13:28:24 truenas kernel: sd 39:0:3:0: [sdo] tag#7211 FAILED Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK cmd_age=3s
Dec  8 13:28:24 truenas kernel: sd 39:0:3:0: [sdo] tag#7211 CDB: Read(16) 88 00 00 00 00 00 00 00 10 20 00 00 00 e0 00 00
Dec  8 13:28:24 truenas kernel: mpt3sas_cm3: log_info(0x31110d00): originator(PL), code(0x11), sub_code(0x0d00)
Dec  8 13:28:24 truenas kernel: sd 39:0:3:0: [sdo] tag#7226 FAILED Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK cmd_age=3s
Dec  8 13:28:24 truenas kernel: sd 39:0:3:0: [sdo] tag#7226 CDB: Read(16) 88 00 00 00 00 04 0c bc 74 18 00 00 08 00 00 00
Dec  8 13:28:24 truenas kernel: zio pool=my_pool_12 vdev=/dev/disk/by-partuuid/169f3144-7ec4-4ecf-9642-af8fced28480 error=5 type=1 offset=8905493590016 size=1048576 flags=1074267312
Dec  8 13:28:24 truenas kernel: zio pool=my_pool_12 vdev=/dev/disk/by-partuuid/169f3144-7ec4-4ecf-9642-af8fced28480 error=5 type=1 offset=1920407437312 size=131072 flags=1572992
Dec  8 13:28:24 truenas kernel: zio pool=my_pool_12 vdev=/dev/disk/by-partuuid/169f3144-7ec4-4ecf-9642-af8fced28480 error=5 type=1 offset=17167241535488 size=131072 flags=1572992
Dec  8 13:28:24 truenas kernel: sd 39:0:3:0: [sdo] tag#7222 FAILED Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK cmd_age=3s
Dec  8 13:28:24 truenas kernel: sd 39:0:3:0: [sdo] tag#7221 FAILED Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK cmd_age=3s
Dec  8 13:28:24 truenas kernel: sd 39:0:3:0: [sdo] tag#7222 CDB: Read(16) 88 00 00 00 00 04 0c bc 6c 18 00 00 08 00 00 00
Dec  8 13:28:24 truenas kernel: sd 39:0:3:0: [sdo] tag#7221 CDB: Read(16) 88 00 00 00 00 04 0c bc 64 18 00 00 08 00 00 00
Dec  8 13:28:24 truenas kernel: zio pool=my_pool_12 vdev=/dev/disk/by-partuuid/169f3144-7ec4-4ecf-9642-af8fced28480 error=5 type=1 offset=8905492541440 size=1048576 flags=1074267312
Dec  8 13:28:24 truenas kernel: zio pool=my_pool_12 vdev=/dev/disk/by-partuuid/169f3144-7ec4-4ecf-9642-af8fced28480 error=5 type=1 offset=8905491492864 size=1048576 flags=1074267312
Dec  8 13:28:24 truenas kernel: WARNING: Pool 'my_pool_12' has encountered an uncorrectable I/O failure and has been suspended.
Dec  8 13:28:24 truenas kernel: sd 39:0:3:0: [sdo] Synchronizing SCSI cache
Dec  8 13:28:24 truenas kernel: sd 39:0:3:0: [sdo] Synchronize Cache(10) failed: Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK
Dec  8 13:28:24 truenas kernel: mpt3sas_cm3: mpt3sas_transport_port_remove: removed: sas_addr(0x4433221103000000)
Dec  8 13:28:24 truenas kernel: mpt3sas_cm3: removing handle(0x000e), sas_addr(0x4433221103000000)
Dec  8 13:28:24 truenas kernel: mpt3sas_cm3: enclosure logical id(0x500062b2015d2640), slot(1)
Dec  8 13:28:24 truenas kernel: mpt3sas_cm3: enclosure level(0x0000), connector name(     )
Dec  8 13:28:30 truenas netdata[732525]: CONFIG: cannot load cloud config '/var/lib/netdata/cloud.d/cloud.conf'. Running with internal defaults.
Dec  8 13:28:43 truenas kernel: mpt3sas_cm3: handle(0xe) sas_address(0x4433221103000000) port_type(0x1)
Dec  8 13:28:44 truenas kernel: scsi 39:0:8:0: Direct-Access     ATA      ST18000NM000J-2T SN02 PQ: 0 ANSI: 6
Dec  8 13:28:44 truenas kernel: scsi 39:0:8:0: SATA: handle(0x000e), sas_addr(0x4433221103000000), phy(3), device_name(0x0000000000000000)
Dec  8 13:28:44 truenas kernel: scsi 39:0:8:0: enclosure logical id (0x500062b2015d2640), slot(1) 
Dec  8 13:28:44 truenas kernel: scsi 39:0:8:0: enclosure level(0x0000), connector name(     )
Dec  8 13:28:44 truenas kernel: scsi 39:0:8:0: atapi(n), ncq(y), asyn_notify(n), smart(y), fua(y), sw_preserve(y)
Dec  8 13:28:44 truenas kernel: scsi 39:0:8:0: qdepth(32), tagged(1), scsi_level(7), cmd_que(1)
Dec  8 13:28:44 truenas kernel: sd 39:0:8:0: Attached scsi generic sg17 type 0
Dec  8 13:28:44 truenas kernel: sd 39:0:8:0: Power-on or device reset occurred
Dec  8 13:28:44 truenas kernel:  end_device-39:8: add: handle(0x000e), sas_addr(0x4433221103000000)
Dec  8 13:28:44 truenas kernel: sd 39:0:8:0: [sdag] 35156656128 512-byte logical blocks: (18.0 TB/16.4 TiB)
Dec  8 13:28:44 truenas kernel: sd 39:0:8:0: [sdag] 4096-byte physical blocks
Dec  8 13:28:44 truenas kernel: sd 39:0:8:0: [sdag] Write Protect is off
Dec  8 13:28:44 truenas kernel: sd 39:0:8:0: [sdag] Write cache: enabled, read cache: enabled, supports DPO and FUA
Dec  8 13:28:44 truenas kernel:  sdag: sdag1
Dec  8 13:28:44 truenas kernel: sd 39:0:8:0: [sdag] Attached SCSI disk
Dec  8 13:28:51 truenas netdata[733617]: CONFIG: cannot load cloud config '/var/lib/netdata/cloud.d/cloud.conf'. Running with internal defaults.
Dec  8 13:32:18 truenas kernel: task:txg_sync        state:D stack:0     pid:6643  ppid:2      flags:0x00004000
Dec  8 13:32:18 truenas kernel: Call Trace:
Dec  8 13:32:18 truenas kernel:  <TASK>
Dec  8 13:32:18 truenas kernel:  __schedule+0x349/0x950
Dec  8 13:32:18 truenas kernel:  schedule+0x5b/0xa0
Dec  8 13:32:18 truenas kernel:  schedule_timeout+0x98/0x160
Dec  8 13:32:18 truenas kernel:  ? __pfx_process_timeout+0x10/0x10
Dec  8 13:32:18 truenas kernel:  io_schedule_timeout+0x50/0x80
Dec  8 13:32:18 truenas kernel:  __cv_timedwait_common+0x12a/0x160 [spl]
Dec  8 13:32:18 truenas kernel:  ? __pfx_autoremove_wake_function+0x10/0x10
Dec  8 13:32:18 truenas kernel:  __cv_timedwait_io+0x19/0x20 [spl]
Dec  8 13:32:18 truenas kernel:  zio_wait+0x124/0x240 [zfs]
Dec  8 13:32:18 truenas kernel:  dsl_pool_sync_mos+0x37/0xa0 [zfs]
Dec  8 13:32:18 truenas kernel:  dsl_pool_sync+0x3b9/0x410 [zfs]
Dec  8 13:32:18 truenas kernel:  spa_sync_iterate_to_convergence+0xd8/0x200 [zfs]
Dec  8 13:32:18 truenas kernel:  spa_sync+0x30a/0x600 [zfs]
Dec  8 13:32:18 truenas kernel:  txg_sync_thread+0x1ec/0x270 [zfs]
Dec  8 13:32:18 truenas kernel:  ? __pfx_txg_sync_thread+0x10/0x10 [zfs]
Dec  8 13:32:18 truenas kernel:  ? __pfx_thread_generic_wrapper+0x10/0x10 [spl]
Dec  8 13:32:18 truenas kernel:  thread_generic_wrapper+0x5e/0x70 [spl]
Dec  8 13:32:18 truenas kernel:  kthread+0xe8/0x120
Dec  8 13:32:18 truenas kernel:  ? __pfx_kthread+0x10/0x10
Dec  8 13:32:18 truenas kernel:  ret_from_fork+0x34/0x50
Dec  8 13:32:18 truenas kernel:  ? __pfx_kthread+0x10/0x10
Dec  8 13:32:18 truenas kernel:  ret_from_fork_asm+0x1b/0x30
Dec  8 13:32:18 truenas kernel:  </TASK>
Dec  8 13:32:18 truenas kernel: task:agents          state:D stack:0     pid:9922  ppid:1      flags:0x00004002
Dec  8 13:32:18 truenas kernel: Call Trace:
Dec  8 13:32:18 truenas kernel:  <TASK>
Dec  8 13:32:18 truenas kernel:  __schedule+0x349/0x950
Dec  8 13:32:18 truenas kernel:  schedule+0x5b/0xa0
Dec  8 13:32:18 truenas kernel:  io_schedule+0x46/0x70
Dec  8 13:32:18 truenas kernel:  cv_wait_common+0xaa/0x130 [spl]
Dec  8 13:32:18 truenas kernel:  ? __pfx_autoremove_wake_function+0x10/0x10
Dec  8 13:32:18 truenas kernel:  txg_wait_synced_impl+0xc0/0x110 [zfs]
Dec  8 13:32:18 truenas kernel:  txg_wait_synced+0x10/0x40 [zfs]
Dec  8 13:32:18 truenas kernel:  spa_vdev_state_exit+0x95/0x150 [zfs]
Dec  8 13:32:18 truenas kernel:  zfs_ioc_vdev_set_state+0xea/0x1c0 [zfs]
Dec  8 13:32:18 truenas kernel:  zfsdev_ioctl_common+0x680/0x790 [zfs]
Dec  8 13:32:18 truenas kernel:  ? __kmalloc_node+0xc6/0x150
Dec  8 13:32:18 truenas kernel:  zfsdev_ioctl+0x53/0xe0 [zfs]
Dec  8 13:32:18 truenas kernel:  __x64_sys_ioctl+0x97/0xd0
Dec  8 13:32:18 truenas kernel:  do_syscall_64+0x59/0xb0
Dec  8 13:32:18 truenas kernel:  ? srso_alias_return_thunk+0x5/0xfbef5
Dec  8 13:32:18 truenas kernel:  ? handle_mm_fault+0xa2/0x370
Dec  8 13:32:18 truenas kernel:  ? srso_alias_return_thunk+0x5/0xfbef5
Dec  8 13:32:18 truenas kernel:  ? do_user_addr_fault+0x323/0x630
Dec  8 13:32:18 truenas kernel:  ? srso_alias_return_thunk+0x5/0xfbef5
Dec  8 13:32:18 truenas kernel:  ? exc_page_fault+0x77/0x170
Dec  8 13:32:18 truenas kernel:  entry_SYSCALL_64_after_hwframe+0x78/0xe2
Dec  8 13:32:18 truenas kernel: RIP: 0033:0x7f375ecfdc5b
Dec  8 13:32:18 truenas kernel: RSP: 002b:00007f375dc4ea00 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
Dec  8 13:32:18 truenas kernel: RAX: ffffffffffffffda RBX: 00007f3750035450 RCX: 00007f375ecfdc5b
Dec  8 13:32:18 truenas kernel: RDX: 00007f375dc4ea70 RSI: 0000000000005a0d RDI: 000000000000000a
Dec  8 13:32:18 truenas kernel: RBP: 00007f375dc52460 R08: 0000000000000001 R09: 0000000000000000
Dec  8 13:32:18 truenas kernel: R10: 1d51124cead9259d R11: 0000000000000246 R12: 00007f375dc52020
Dec  8 13:32:18 truenas kernel: R13: 000055f7e35ee7b0 R14: 00007f3750034c20 R15: 00007f375dc4ea70
Dec  8 13:32:18 truenas kernel:  </TASK>
Dec  8 13:34:19 truenas kernel: task:agents          state:D stack:0     pid:9922  ppid:1      flags:0x00004002
Dec  8 13:34:19 truenas kernel: Call Trace:
Dec  8 13:34:19 truenas kernel:  <TASK>
Dec  8 13:34:19 truenas kernel:  __schedule+0x349/0x950
Dec  8 13:34:19 truenas kernel:  schedule+0x5b/0xa0
Dec  8 13:34:19 truenas kernel:  io_schedule+0x46/0x70
Dec  8 13:34:19 truenas kernel:  cv_wait_common+0xaa/0x130 [spl]
Dec  8 13:34:19 truenas kernel:  ? __pfx_autoremove_wake_function+0x10/0x10
Dec  8 13:34:19 truenas kernel:  txg_wait_synced_impl+0xc0/0x110 [zfs]
Dec  8 13:34:19 truenas kernel:  txg_wait_synced+0x10/0x40 [zfs]
Dec  8 13:34:19 truenas kernel:  spa_vdev_state_exit+0x95/0x150 [zfs]
Dec  8 13:34:19 truenas kernel:  zfs_ioc_vdev_set_state+0xea/0x1c0 [zfs]
Dec  8 13:34:19 truenas kernel:  zfsdev_ioctl_common+0x680/0x790 [zfs]
Dec  8 13:34:19 truenas kernel:  ? __kmalloc_node+0xc6/0x150
Dec  8 13:34:19 truenas kernel:  zfsdev_ioctl+0x53/0xe0 [zfs]
Dec  8 13:34:19 truenas kernel:  __x64_sys_ioctl+0x97/0xd0
Dec  8 13:34:19 truenas kernel:  do_syscall_64+0x59/0xb0
Dec  8 13:34:19 truenas kernel:  ? srso_alias_return_thunk+0x5/0xfbef5
Dec  8 13:34:19 truenas kernel:  ? handle_mm_fault+0xa2/0x370
Dec  8 13:34:19 truenas kernel:  ? srso_alias_return_thunk+0x5/0xfbef5
Dec  8 13:34:19 truenas kernel:  ? do_user_addr_fault+0x323/0x630
Dec  8 13:34:19 truenas kernel:  ? srso_alias_return_thunk+0x5/0xfbef5
Dec  8 13:34:19 truenas kernel:  ? exc_page_fault+0x77/0x170
Dec  8 13:34:19 truenas kernel:  entry_SYSCALL_64_after_hwframe+0x78/0xe2
Dec  8 13:34:19 truenas kernel: RIP: 0033:0x7f375ecfdc5b
Dec  8 13:34:19 truenas kernel: RSP: 002b:00007f375dc4ea00 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
Dec  8 13:34:19 truenas kernel: RAX: ffffffffffffffda RBX: 00007f3750035450 RCX: 00007f375ecfdc5b
Dec  8 13:34:19 truenas kernel: RDX: 00007f375dc4ea70 RSI: 0000000000005a0d RDI: 000000000000000a
Dec  8 13:34:19 truenas kernel: RBP: 00007f375dc52460 R08: 0000000000000001 R09: 0000000000000000
Dec  8 13:34:19 truenas kernel: R10: 1d51124cead9259d R11: 0000000000000246 R12: 00007f375dc52020
Dec  8 13:34:19 truenas kernel: R13: 000055f7e35ee7b0 R14: 00007f3750034c20 R15: 00007f375dc4ea70
Dec  8 13:34:19 truenas kernel:  </TASK>
Dec  8 13:36:19 truenas kernel: task:agents          state:D stack:0     pid:9922  ppid:1      flags:0x00004002
Dec  8 13:36:19 truenas kernel: Call Trace:
Dec  8 13:36:19 truenas kernel:  <TASK>
Dec  8 13:36:19 truenas kernel:  __schedule+0x349/0x950
Dec  8 13:36:19 truenas kernel:  schedule+0x5b/0xa0
Dec  8 13:36:19 truenas kernel:  io_schedule+0x46/0x70
Dec  8 13:36:19 truenas kernel:  cv_wait_common+0xaa/0x130 [spl]
Dec  8 13:36:19 truenas kernel:  ? __pfx_autoremove_wake_function+0x10/0x10
Dec  8 13:36:19 truenas kernel:  txg_wait_synced_impl+0xc0/0x110 [zfs]
Dec  8 13:36:19 truenas kernel:  txg_wait_synced+0x10/0x40 [zfs]
Dec  8 13:36:19 truenas kernel:  spa_vdev_state_exit+0x95/0x150 [zfs]
Dec  8 13:36:19 truenas kernel:  zfs_ioc_vdev_set_state+0xea/0x1c0 [zfs]
Dec  8 13:36:19 truenas kernel:  zfsdev_ioctl_common+0x680/0x790 [zfs]
Dec  8 13:36:19 truenas kernel:  ? __kmalloc_node+0xc6/0x150
Dec  8 13:36:19 truenas kernel:  zfsdev_ioctl+0x53/0xe0 [zfs]
Dec  8 13:36:19 truenas kernel:  __x64_sys_ioctl+0x97/0xd0
Dec  8 13:36:19 truenas kernel:  do_syscall_64+0x59/0xb0
Dec  8 13:36:19 truenas kernel:  ? srso_alias_return_thunk+0x5/0xfbef5
Dec  8 13:36:19 truenas kernel:  ? handle_mm_fault+0xa2/0x370
Dec  8 13:36:19 truenas kernel:  ? srso_alias_return_thunk+0x5/0xfbef5
Dec  8 13:36:19 truenas kernel:  ? do_user_addr_fault+0x323/0x630
Dec  8 13:36:19 truenas kernel:  ? srso_alias_return_thunk+0x5/0xfbef5
Dec  8 13:36:19 truenas kernel:  ? exc_page_fault+0x77/0x170
Dec  8 13:36:19 truenas kernel:  entry_SYSCALL_64_after_hwframe+0x78/0xe2
Dec  8 13:36:19 truenas kernel: RIP: 0033:0x7f375ecfdc5b
Dec  8 13:36:19 truenas kernel: RSP: 002b:00007f375dc4ea00 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
Dec  8 13:36:19 truenas kernel: RAX: ffffffffffffffda RBX: 00007f3750035450 RCX: 00007f375ecfdc5b
Dec  8 13:36:19 truenas kernel: RDX: 00007f375dc4ea70 RSI: 0000000000005a0d RDI: 000000000000000a
Dec  8 13:36:19 truenas kernel: RBP: 00007f375dc52460 R08: 0000000000000001 R09: 0000000000000000
Dec  8 13:36:19 truenas kernel: R10: 1d51124cead9259d R11: 0000000000000246 R12: 00007f375dc52020
Dec  8 13:36:19 truenas kernel: R13: 000055f7e35ee7b0 R14: 00007f3750034c20 R15: 00007f375dc4ea70
Dec  8 13:36:19 truenas kernel:  </TASK>
Dec  8 13:38:20 truenas kernel: task:agents          state:D stack:0     pid:9922  ppid:1      flags:0x00004002
Dec  8 13:38:20 truenas kernel: Call Trace:
Dec  8 13:38:20 truenas kernel:  <TASK>
Dec  8 13:38:20 truenas kernel:  __schedule+0x349/0x950
Dec  8 13:38:20 truenas kernel:  schedule+0x5b/0xa0
Dec  8 13:38:20 truenas kernel:  io_schedule+0x46/0x70
Dec  8 13:38:20 truenas kernel:  cv_wait_common+0xaa/0x130 [spl]
Dec  8 13:38:20 truenas kernel:  ? __pfx_autoremove_wake_function+0x10/0x10
Dec  8 13:38:20 truenas kernel:  txg_wait_synced_impl+0xc0/0x110 [zfs]
Dec  8 13:38:20 truenas kernel:  txg_wait_synced+0x10/0x40 [zfs]
Dec  8 13:38:20 truenas kernel:  spa_vdev_state_exit+0x95/0x150 [zfs]
Dec  8 13:38:20 truenas kernel:  zfs_ioc_vdev_set_state+0xea/0x1c0 [zfs]
Dec  8 13:38:20 truenas kernel:  zfsdev_ioctl_common+0x680/0x790 [zfs]
Dec  8 13:38:20 truenas kernel:  ? __kmalloc_node+0xc6/0x150
Dec  8 13:38:20 truenas kernel:  zfsdev_ioctl+0x53/0xe0 [zfs]
Dec  8 13:38:20 truenas kernel:  __x64_sys_ioctl+0x97/0xd0
Dec  8 13:38:20 truenas kernel:  do_syscall_64+0x59/0xb0
Dec  8 13:38:20 truenas kernel:  ? srso_alias_return_thunk+0x5/0xfbef5
Dec  8 13:38:20 truenas kernel:  ? handle_mm_fault+0xa2/0x370
Dec  8 13:38:20 truenas kernel:  ? srso_alias_return_thunk+0x5/0xfbef5
Dec  8 13:38:20 truenas kernel:  ? do_user_addr_fault+0x323/0x630
Dec  8 13:38:20 truenas kernel:  ? srso_alias_return_thunk+0x5/0xfbef5
Dec  8 13:38:20 truenas kernel:  ? exc_page_fault+0x77/0x170
Dec  8 13:38:20 truenas kernel:  entry_SYSCALL_64_after_hwframe+0x78/0xe2
Dec  8 13:38:20 truenas kernel: RIP: 0033:0x7f375ecfdc5b
Dec  8 13:38:20 truenas kernel: RSP: 002b:00007f375dc4ea00 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
Dec  8 13:38:20 truenas kernel: RAX: ffffffffffffffda RBX: 00007f3750035450 RCX: 00007f375ecfdc5b
Dec  8 13:38:20 truenas kernel: RDX: 00007f375dc4ea70 RSI: 0000000000005a0d RDI: 000000000000000a
Dec  8 13:38:20 truenas kernel: RBP: 00007f375dc52460 R08: 0000000000000001 R09: 0000000000000000
Dec  8 13:38:20 truenas kernel: R10: 1d51124cead9259d R11: 0000000000000246 R12: 00007f375dc52020
Dec  8 13:38:20 truenas kernel: R13: 000055f7e35ee7b0 R14: 00007f3750034c20 R15: 00007f375dc4ea70
Dec  8 13:38:20 truenas kernel:  </TASK>
Dec  8 13:40:21 truenas kernel: task:agents          state:D stack:0     pid:9922  ppid:1      flags:0x00004002
Dec  8 13:40:21 truenas kernel: Call Trace:
Dec  8 13:40:21 truenas kernel:  <TASK>
Dec  8 13:40:21 truenas kernel:  __schedule+0x349/0x950
Dec  8 13:40:21 truenas kernel:  schedule+0x5b/0xa0
Dec  8 13:40:21 truenas kernel:  io_schedule+0x46/0x70
Dec  8 13:40:21 truenas kernel:  cv_wait_common+0xaa/0x130 [spl]
Dec  8 13:40:21 truenas kernel:  ? __pfx_autoremove_wake_function+0x10/0x10
Dec  8 13:40:21 truenas kernel:  txg_wait_synced_impl+0xc0/0x110 [zfs]
Dec  8 13:40:21 truenas kernel:  txg_wait_synced+0x10/0x40 [zfs]
Dec  8 13:40:21 truenas kernel:  spa_vdev_state_exit+0x95/0x150 [zfs]
Dec  8 13:40:21 truenas kernel:  zfs_ioc_vdev_set_state+0xea/0x1c0 [zfs]
Dec  8 13:40:21 truenas kernel:  zfsdev_ioctl_common+0x680/0x790 [zfs]
Dec  8 13:40:21 truenas kernel:  ? __kmalloc_node+0xc6/0x150
Dec  8 13:40:21 truenas kernel:  zfsdev_ioctl+0x53/0xe0 [zfs]
Dec  8 13:40:21 truenas kernel:  __x64_sys_ioctl+0x97/0xd0
Dec  8 13:40:21 truenas kernel:  do_syscall_64+0x59/0xb0
Dec  8 13:40:21 truenas kernel:  ? srso_alias_return_thunk+0x5/0xfbef5
Dec  8 13:40:21 truenas kernel:  ? handle_mm_fault+0xa2/0x370
Dec  8 13:40:21 truenas kernel:  ? srso_alias_return_thunk+0x5/0xfbef5
Dec  8 13:40:21 truenas kernel:  ? do_user_addr_fault+0x323/0x630
Dec  8 13:40:21 truenas kernel:  ? srso_alias_return_thunk+0x5/0xfbef5
Dec  8 13:40:21 truenas kernel:  ? exc_page_fault+0x77/0x170
Dec  8 13:40:21 truenas kernel:  entry_SYSCALL_64_after_hwframe+0x78/0xe2
Dec  8 13:40:21 truenas kernel: RIP: 0033:0x7f375ecfdc5b
Dec  8 13:40:21 truenas kernel: RSP: 002b:00007f375dc4ea00 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
Dec  8 13:40:21 truenas kernel: RAX: ffffffffffffffda RBX: 00007f3750035450 RCX: 00007f375ecfdc5b
Dec  8 13:40:21 truenas kernel: RDX: 00007f375dc4ea70 RSI: 0000000000005a0d RDI: 000000000000000a
Dec  8 13:40:21 truenas kernel: RBP: 00007f375dc52460 R08: 0000000000000001 R09: 0000000000000000
Dec  8 13:40:21 truenas kernel: R10: 1d51124cead9259d R11: 0000000000000246 R12: 00007f375dc52020
Dec  8 13:40:21 truenas kernel: R13: 000055f7e35ee7b0 R14: 00007f3750034c20 R15: 00007f375dc4ea70
Dec  8 13:40:21 truenas kernel:  </TASK>
Dec  8 13:42:22 truenas kernel: task:agents          state:D stack:0     pid:9922  ppid:1      flags:0x00004002
Dec  8 13:42:22 truenas kernel: Call Trace:
Dec  8 13:42:22 truenas kernel:  <TASK>
Dec  8 13:42:22 truenas kernel:  __schedule+0x349/0x950
Dec  8 13:42:22 truenas kernel:  schedule+0x5b/0xa0
Dec  8 13:42:22 truenas kernel:  io_schedule+0x46/0x70
Dec  8 13:42:22 truenas kernel:  cv_wait_common+0xaa/0x130 [spl]
Dec  8 13:42:22 truenas kernel:  ? __pfx_autoremove_wake_function+0x10/0x10
Dec  8 13:42:22 truenas kernel:  txg_wait_synced_impl+0xc0/0x110 [zfs]
Dec  8 13:42:22 truenas kernel:  txg_wait_synced+0x10/0x40 [zfs]
Dec  8 13:42:22 truenas kernel:  spa_vdev_state_exit+0x95/0x150 [zfs]
Dec  8 13:42:22 truenas kernel:  zfs_ioc_vdev_set_state+0xea/0x1c0 [zfs]
Dec  8 13:42:22 truenas kernel:  zfsdev_ioctl_common+0x680/0x790 [zfs]
Dec  8 13:42:22 truenas kernel:  ? __kmalloc_node+0xc6/0x150
Dec  8 13:42:22 truenas kernel:  zfsdev_ioctl+0x53/0xe0 [zfs]
Dec  8 13:42:22 truenas kernel:  __x64_sys_ioctl+0x97/0xd0
Dec  8 13:42:22 truenas kernel:  do_syscall_64+0x59/0xb0
Dec  8 13:42:22 truenas kernel:  ? srso_alias_return_thunk+0x5/0xfbef5
Dec  8 13:42:22 truenas kernel:  ? handle_mm_fault+0xa2/0x370
Dec  8 13:42:22 truenas kernel:  ? srso_alias_return_thunk+0x5/0xfbef5
Dec  8 13:42:22 truenas kernel:  ? do_user_addr_fault+0x323/0x630
Dec  8 13:42:22 truenas kernel:  ? srso_alias_return_thunk+0x5/0xfbef5
Dec  8 13:42:22 truenas kernel:  ? exc_page_fault+0x77/0x170
Dec  8 13:42:22 truenas kernel:  entry_SYSCALL_64_after_hwframe+0x78/0xe2
Dec  8 13:42:22 truenas kernel: RIP: 0033:0x7f375ecfdc5b
Dec  8 13:42:22 truenas kernel: RSP: 002b:00007f375dc4ea00 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
Dec  8 13:42:22 truenas kernel: RAX: ffffffffffffffda RBX: 00007f3750035450 RCX: 00007f375ecfdc5b
Dec  8 13:42:22 truenas kernel: RDX: 00007f375dc4ea70 RSI: 0000000000005a0d RDI: 000000000000000a
Dec  8 13:42:22 truenas kernel: RBP: 00007f375dc52460 R08: 0000000000000001 R09: 0000000000000000
Dec  8 13:42:22 truenas kernel: R10: 1d51124cead9259d R11: 0000000000000246 R12: 00007f375dc52020
Dec  8 13:42:22 truenas kernel: R13: 000055f7e35ee7b0 R14: 00007f3750034c20 R15: 00007f375dc4ea70
Dec  8 13:42:22 truenas kernel:  </TASK>
[snip: the identical hung-task trace for task:agents pid:9922 repeats at 13:44:23 and 13:46:24, trimmed for brevity]
Dec  8 13:46:24 truenas kernel: Future hung task reports are suppressed, see sysctl kernel.hung_task_warnings
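Side note on that last line: once the kernel's warning budget is exhausted it stops printing further hung-task traces, so later incidents leave no trace in the log. While debugging, it may be worth resetting that counter. A minimal sketch using the standard Linux sysctls (run as root; the exact defaults can vary per distro):

```shell
# Check the remaining hung-task warning budget (decrements to 0, then reports stop)
sysctl kernel.hung_task_warnings

# -1 means "unlimited": keep printing every hung-task trace while debugging
sysctl -w kernel.hung_task_warnings=-1

# Optionally lower the detection threshold from the default 120s to catch stalls sooner
sysctl -w kernel.hung_task_timeout_secs=60
```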
Dec  8 13:54:30 truenas kernel: bnxt_en 0000:24:00.0 enp36s0f0np0: NIC Link is Up, 10000 Mbps full duplex, Flow control: none
Dec  8 13:54:30 truenas kernel: bnxt_en 0000:24:00.0 enp36s0f0np0: EEE is not active
Dec  8 13:54:30 truenas kernel: bnxt_en 0000:24:00.0 enp36s0f0np0: FEC autoneg off encoding: None
Dec  8 13:54:31 truenas kernel: bnxt_en 0000:24:00.0 enp36s0f0np0: NIC Link is Up, 10000 Mbps full duplex, Flow control: none
Dec  8 13:54:31 truenas kernel: bnxt_en 0000:24:00.0 enp36s0f0np0: EEE is not active
Dec  8 13:54:31 truenas kernel: bnxt_en 0000:24:00.0 enp36s0f0np0: FEC autoneg off encoding: None
Dec  8 13:54:31 truenas systemd-journald[961]: Data hash table of /var/log/journal/c104978bcd914b2d978ef84c9a131c8d/system.journal has a fill level at 75.0 (8533 of 11377 items, 6553600 file size, 768 bytes per hash table item), suggesting rotation.
Dec  8 13:54:31 truenas systemd-journald[961]: /var/log/journal/c104978bcd914b2d978ef84c9a131c8d/system.journal: Journal header limits reached or header out-of-date, rotating.
Dec  8 13:56:37 truenas kernel: bnxt_en 0000:24:00.0 enp36s0f0np0: NIC Link is Up, 10000 Mbps full duplex, Flow control: none
Dec  8 13:56:37 truenas kernel: bnxt_en 0000:24:00.0 enp36s0f0np0: EEE is not active
Dec  8 13:56:37 truenas kernel: bnxt_en 0000:24:00.0 enp36s0f0np0: FEC autoneg off encoding: None

Is your NIC also being reset?

No, that part is actually fine. I have 2 LAN ports, and one of them is plugged directly into a PC that I restart quite often… I only see those link-up messages because of that other PC restarting.
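Back to the resets themselves: it may help to see whether they cluster on one drive/cable run or spread across a whole HBA. A minimal sketch that counts "Power-on or device reset occurred" events per SCSI address (sample lines embedded here for illustration; in practice point grep at /var/log/messages):

```shell
# Illustrative sample standing in for /var/log/messages
cat > /tmp/sample_messages <<'EOF'
Dec  8 13:10:01 truenas kernel: sd 1:0:3:0: Power-on or device reset occurred
Dec  8 13:22:47 truenas kernel: sd 1:0:3:0: Power-on or device reset occurred
Dec  8 13:40:02 truenas kernel: sd 1:0:7:0: Power-on or device reset occurred
EOF

# Count resets per sd H:C:T:L address. Repeats on one address suggest that
# drive or its cable; resets spread across one host number suggest the HBA.
grep 'Power-on or device reset occurred' /tmp/sample_messages \
  | awk '{print $7}' | sort | uniq -c | sort -rn
```

With the sample above this prints two resets for 1:0:3:0 and one for 1:0:7:0, which is the kind of per-device pattern worth comparing against which card and cable each address sits behind.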