TrueNAS-13.0-U6.2 - server reboots every 65 min

@bkindel I had meant for you to send it as a PM, so as not to put all of your information in the public eye. I flagged your post for moderation to edit out the link, and I can confirm I have the debug file.

This looks like it's been happening since 8/27. The zpool history below shows the pool being re-imported roughly every 65 minutes (a quick way to count the boots yourself is sketched after the log):

2024-08-27.03:50:48  zpool import 11721839244249075587  POOL-4x5TB
2024-08-27.03:50:48  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-08-27.04:55:49  zpool import 11721839244249075587  POOL-4x5TB
2024-08-27.04:55:50  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-08-27.06:00:49  zpool import 11721839244249075587  POOL-4x5TB
2024-08-27.06:00:50  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-08-27.07:05:51  zpool import 11721839244249075587  POOL-4x5TB
2024-08-27.07:05:51  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-08-27.08:10:49  zpool import 11721839244249075587  POOL-4x5TB
2024-08-27.08:10:50  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-08-27.09:15:47  zpool import 11721839244249075587  POOL-4x5TB
2024-08-27.09:15:48  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-08-27.10:20:48  zpool import 11721839244249075587  POOL-4x5TB
2024-08-27.10:20:48  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-08-27.11:25:47  zpool import 11721839244249075587  POOL-4x5TB
2024-08-27.11:25:48  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-08-27.12:30:47  zpool import 11721839244249075587  POOL-4x5TB
2024-08-27.12:30:48  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-08-27.13:35:46  zpool import 11721839244249075587  POOL-4x5TB
2024-08-27.13:35:47  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-08-27.14:40:49  zpool import 11721839244249075587  POOL-4x5TB
2024-08-27.14:40:49  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-08-27.15:45:48  zpool import 11721839244249075587  POOL-4x5TB
2024-08-27.15:45:49  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-08-27.16:50:48  zpool import 11721839244249075587  POOL-4x5TB
2024-08-27.16:50:48  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-08-27.17:55:47  zpool import 11721839244249075587  POOL-4x5TB
2024-08-27.17:55:48  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-08-27.19:00:48  zpool import 11721839244249075587  POOL-4x5TB
2024-08-27.19:00:49  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-08-27.20:05:48  zpool import 11721839244249075587  POOL-4x5TB
2024-08-27.20:05:48  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-08-27.21:10:48  zpool import 11721839244249075587  POOL-4x5TB
2024-08-27.21:10:48  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-08-27.22:15:49  zpool import 11721839244249075587  POOL-4x5TB
2024-08-27.22:15:50  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-08-27.23:20:49  zpool import 11721839244249075587  POOL-4x5TB
2024-08-27.23:20:50  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-08-28.00:25:48  zpool import 11721839244249075587  POOL-4x5TB
2024-08-28.00:25:48  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-08-28.01:30:48  zpool import 11721839244249075587  POOL-4x5TB
2024-08-28.01:30:48  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-08-28.02:35:49  zpool import 11721839244249075587  POOL-4x5TB
2024-08-28.02:35:50  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-08-28.03:40:50  zpool import 11721839244249075587  POOL-4x5TB
2024-08-28.03:40:51  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-08-28.04:45:49  zpool import 11721839244249075587  POOL-4x5TB
2024-08-28.04:45:49  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-08-28.05:50:49  zpool import 11721839244249075587  POOL-4x5TB
2024-08-28.05:50:49  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-08-28.06:55:50  zpool import 11721839244249075587  POOL-4x5TB
2024-08-28.06:55:50  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-08-28.08:00:50  zpool import 11721839244249075587  POOL-4x5TB
2024-08-28.08:00:50  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-08-28.09:05:51  zpool import 11721839244249075587  POOL-4x5TB
2024-08-28.09:05:52  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-08-28.10:10:52  zpool import 11721839244249075587  POOL-4x5TB
2024-08-28.10:10:52  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-08-28.11:15:51  zpool import 11721839244249075587  POOL-4x5TB
2024-08-28.11:15:52  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-08-28.12:20:51  zpool import 11721839244249075587  POOL-4x5TB
2024-08-28.12:20:51  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-08-28.12:48:20 zfs destroy POOL-4x5TB/.system/samba4@update--2023-02-16-01-48--12.0-U5.1
2024-08-28.12:48:20 zfs snapshot POOL-4x5TB/.system/samba4@update--2024-08-28-17-48--13.0-U6.1
2024-08-28.12:50:14  zpool import 11721839244249075587  POOL-4x5TB
2024-08-28.12:50:14  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-08-28.13:25:49  zpool import 11721839244249075587  POOL-4x5TB
2024-08-28.13:25:49  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-08-28.14:30:53  zpool import 11721839244249075587  POOL-4x5TB
2024-08-28.14:30:53  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-08-28.15:35:49  zpool import 11721839244249075587  POOL-4x5TB
2024-08-28.15:35:49  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-08-28.16:40:51  zpool import 11721839244249075587  POOL-4x5TB
2024-08-28.16:40:51  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-08-28.17:45:51  zpool import 11721839244249075587  POOL-4x5TB
2024-08-28.17:45:51  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-08-28.18:50:50  zpool import 11721839244249075587  POOL-4x5TB
2024-08-28.18:50:50  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-08-28.19:55:51  zpool import 11721839244249075587  POOL-4x5TB
2024-08-28.19:55:51  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-08-28.20:30:09  zpool import 11721839244249075587  POOL-4x5TB
2024-08-28.20:30:09  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-08-28.21:00:49  zpool import 11721839244249075587  POOL-4x5TB
2024-08-28.21:00:49  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-08-28.22:05:49  zpool import 11721839244249075587  POOL-4x5TB
2024-08-28.22:05:49  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-08-28.23:10:51  zpool import 11721839244249075587  POOL-4x5TB
2024-08-28.23:10:51  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-08-29.00:15:50  zpool import 11721839244249075587  POOL-4x5TB
2024-08-29.00:15:50  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-08-29.01:20:50  zpool import 11721839244249075587  POOL-4x5TB
2024-08-29.01:20:50  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-08-29.02:26:00  zpool import 11721839244249075587  POOL-4x5TB
2024-08-29.02:26:00  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-08-29.03:31:02  zpool import 11721839244249075587  POOL-4x5TB
2024-08-29.03:31:02  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-08-29.04:36:01  zpool import 11721839244249075587  POOL-4x5TB
2024-08-29.04:36:01  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-08-29.05:41:00  zpool import 11721839244249075587  POOL-4x5TB
2024-08-29.05:41:00  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-08-29.06:46:00  zpool import 11721839244249075587  POOL-4x5TB
2024-08-29.06:46:00  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-08-29.07:51:01  zpool import 11721839244249075587  POOL-4x5TB
2024-08-29.07:51:01  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-08-29.08:56:00  zpool import 11721839244249075587  POOL-4x5TB
2024-08-29.08:56:00  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-08-29.10:01:04  zpool import 11721839244249075587  POOL-4x5TB
2024-08-29.10:01:04  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-08-29.11:06:02  zpool import 11721839244249075587  POOL-4x5TB
2024-08-29.11:06:02  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-08-29.12:11:02  zpool import 11721839244249075587  POOL-4x5TB
2024-08-29.12:11:02  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-08-29.13:16:00  zpool import 11721839244249075587  POOL-4x5TB
2024-08-29.13:16:00  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-08-29.14:21:01  zpool import 11721839244249075587  POOL-4x5TB
2024-08-29.14:21:01  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-08-29.15:26:01  zpool import 11721839244249075587  POOL-4x5TB
2024-08-29.15:26:01  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-08-29.16:31:01  zpool import 11721839244249075587  POOL-4x5TB
2024-08-29.16:31:01  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-08-29.17:36:01  zpool import 11721839244249075587  POOL-4x5TB
2024-08-29.17:36:01  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-08-29.18:41:00  zpool import 11721839244249075587  POOL-4x5TB
2024-08-29.18:41:00  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-08-29.19:46:00  zpool import 11721839244249075587  POOL-4x5TB
2024-08-29.19:46:00  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-08-29.20:51:05  zpool import 11721839244249075587  POOL-4x5TB
2024-08-29.20:51:05  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-08-29.21:56:03  zpool import 11721839244249075587  POOL-4x5TB
2024-08-29.21:56:03  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-08-29.23:01:02  zpool import 11721839244249075587  POOL-4x5TB
2024-08-29.23:01:02  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-08-30.00:06:02  zpool import 11721839244249075587  POOL-4x5TB
2024-08-30.00:06:02  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-08-30.01:11:02  zpool import 11721839244249075587  POOL-4x5TB
2024-08-30.01:11:02  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-08-30.02:16:02  zpool import 11721839244249075587  POOL-4x5TB
2024-08-30.02:16:03  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-08-30.03:21:01  zpool import 11721839244249075587  POOL-4x5TB
2024-08-30.03:21:01  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-08-30.04:26:02  zpool import 11721839244249075587  POOL-4x5TB
2024-08-30.04:26:02  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-08-30.05:31:03  zpool import 11721839244249075587  POOL-4x5TB
2024-08-30.05:31:03  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-08-30.06:36:02  zpool import 11721839244249075587  POOL-4x5TB
2024-08-30.06:36:02  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-08-30.07:41:01  zpool import 11721839244249075587  POOL-4x5TB
2024-08-30.07:41:01  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-08-30.08:11:03  zpool import 11721839244249075587  POOL-4x5TB
2024-08-30.08:11:03  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-08-30.08:46:02  zpool import 11721839244249075587  POOL-4x5TB
2024-08-30.08:46:02  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-08-30.09:51:02  zpool import 11721839244249075587  POOL-4x5TB
2024-08-30.09:51:02  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-08-30.10:56:02  zpool import 11721839244249075587  POOL-4x5TB
2024-08-30.10:56:02  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-08-30.12:01:03  zpool import 11721839244249075587  POOL-4x5TB
2024-08-30.12:01:03  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-08-30.13:06:03  zpool import 11721839244249075587  POOL-4x5TB
2024-08-30.13:06:03  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-08-30.13:40:59  zpool import 11721839244249075587  POOL-4x5TB
2024-08-30.13:40:59  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-08-30.14:11:02  zpool import 11721839244249075587  POOL-4x5TB
2024-08-30.14:11:02  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-08-30.15:16:03  zpool import 11721839244249075587  POOL-4x5TB
2024-08-30.15:16:03  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-08-30.16:21:04  zpool import 11721839244249075587  POOL-4x5TB
2024-08-30.16:21:04  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-08-30.17:26:02  zpool import 11721839244249075587  POOL-4x5TB
2024-08-30.17:26:02  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-08-30.18:31:03  zpool import 11721839244249075587  POOL-4x5TB
2024-08-30.18:31:03  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-08-30.19:36:03  zpool import 11721839244249075587  POOL-4x5TB
2024-08-30.19:36:04  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-08-30.20:41:04  zpool import 11721839244249075587  POOL-4x5TB
2024-08-30.20:41:04  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-08-30.21:46:03  zpool import 11721839244249075587  POOL-4x5TB
2024-08-30.21:46:03  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-08-30.22:51:04  zpool import 11721839244249075587  POOL-4x5TB
2024-08-30.22:51:04  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-08-30.23:56:05  zpool import 11721839244249075587  POOL-4x5TB
2024-08-30.23:56:05  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-08-31.01:01:04  zpool import 11721839244249075587  POOL-4x5TB
2024-08-31.01:01:04  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-08-31.02:06:05  zpool import 11721839244249075587  POOL-4x5TB
2024-08-31.02:06:05  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-08-31.03:11:07  zpool import 11721839244249075587  POOL-4x5TB
2024-08-31.03:11:07  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-08-31.04:16:06  zpool import 11721839244249075587  POOL-4x5TB
2024-08-31.04:16:07  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-08-31.05:21:07  zpool import 11721839244249075587  POOL-4x5TB
2024-08-31.05:21:07  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-08-31.06:26:05  zpool import 11721839244249075587  POOL-4x5TB
2024-08-31.06:26:05  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-08-31.07:31:06  zpool import 11721839244249075587  POOL-4x5TB
2024-08-31.07:31:06  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-08-31.08:14:36  zpool import 11721839244249075587  POOL-4x5TB
2024-08-31.08:14:36  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-08-31.08:36:05  zpool import 11721839244249075587  POOL-4x5TB
2024-08-31.08:36:05  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-08-31.09:41:04  zpool import 11721839244249075587  POOL-4x5TB
2024-08-31.09:41:04  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-08-31.10:46:06  zpool import 11721839244249075587  POOL-4x5TB
2024-08-31.10:46:06  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-08-31.11:51:05  zpool import 11721839244249075587  POOL-4x5TB
2024-08-31.11:51:05  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-08-31.12:56:04  zpool import 11721839244249075587  POOL-4x5TB
2024-08-31.12:56:05  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-08-31.14:01:05  zpool import 11721839244249075587  POOL-4x5TB
2024-08-31.14:01:06  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-08-31.15:06:05  zpool import 11721839244249075587  POOL-4x5TB
2024-08-31.15:06:06  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-08-31.16:11:05  zpool import 11721839244249075587  POOL-4x5TB
2024-08-31.16:11:06  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-08-31.17:16:05  zpool import 11721839244249075587  POOL-4x5TB
2024-08-31.17:16:05  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-08-31.18:21:07  zpool import 11721839244249075587  POOL-4x5TB
2024-08-31.18:21:07  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-08-31.19:26:07  zpool import 11721839244249075587  POOL-4x5TB
2024-08-31.19:26:07  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-08-31.20:31:07  zpool import 11721839244249075587  POOL-4x5TB
2024-08-31.20:31:07  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-08-31.21:36:07  zpool import 11721839244249075587  POOL-4x5TB
2024-08-31.21:36:07  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-08-31.22:41:06  zpool import 11721839244249075587  POOL-4x5TB
2024-08-31.22:41:06  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-08-31.23:46:07  zpool import 11721839244249075587  POOL-4x5TB
2024-08-31.23:46:07  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-01.00:00:03  zpool scrub POOL-4x5TB
2024-09-01.00:51:34  zpool import 11721839244249075587  POOL-4x5TB
2024-09-01.00:51:34  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-01.01:56:41  zpool import 11721839244249075587  POOL-4x5TB
2024-09-01.01:56:41  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-01.03:01:15  zpool import 11721839244249075587  POOL-4x5TB
2024-09-01.03:01:15  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-01.04:06:17  zpool import 11721839244249075587  POOL-4x5TB
2024-09-01.04:06:17  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-01.05:11:22  zpool import 11721839244249075587  POOL-4x5TB
2024-09-01.05:11:22  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-01.06:16:23  zpool import 11721839244249075587  POOL-4x5TB
2024-09-01.06:16:23  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-01.07:21:21  zpool import 11721839244249075587  POOL-4x5TB
2024-09-01.07:21:21  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-01.08:26:18  zpool import 11721839244249075587  POOL-4x5TB
2024-09-01.08:26:18  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-01.09:31:20  zpool import 11721839244249075587  POOL-4x5TB
2024-09-01.09:31:20  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-01.10:36:17  zpool import 11721839244249075587  POOL-4x5TB
2024-09-01.10:36:17  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-01.11:41:37  zpool import 11721839244249075587  POOL-4x5TB
2024-09-01.11:41:37  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-01.15:35:35  zpool import 11721839244249075587  POOL-4x5TB
2024-09-01.15:35:35  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-01.16:50:32  zpool import 11721839244249075587  POOL-4x5TB
2024-09-01.16:50:32  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-01.17:06:41  zpool import 11721839244249075587  POOL-4x5TB
2024-09-01.17:06:41  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-01.18:11:45  zpool import 11721839244249075587  POOL-4x5TB
2024-09-01.18:11:45  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-01.13:55:47  zpool import 11721839244249075587  POOL-4x5TB
2024-09-01.13:55:47  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-01.19:16:10  zpool import 11721839244249075587  POOL-4x5TB
2024-09-01.19:16:10  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-01.14:57:17  zpool import 11721839244249075587  POOL-4x5TB
2024-09-01.14:57:17  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-01.20:21:41  zpool import 11721839244249075587  POOL-4x5TB
2024-09-01.20:21:41  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-01.21:26:23  zpool import 11721839244249075587  POOL-4x5TB
2024-09-01.21:26:23  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-02.08:52:27  zpool import 11721839244249075587  POOL-4x5TB
2024-09-02.08:52:27  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-02.09:36:10  zpool import 11721839244249075587  POOL-4x5TB
2024-09-02.09:36:10  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-02.10:41:13  zpool import 11721839244249075587  POOL-4x5TB
2024-09-02.10:41:13  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-02.11:46:28  zpool import 11721839244249075587  POOL-4x5TB
2024-09-02.11:46:28  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-02.12:51:17  zpool import 11721839244249075587  POOL-4x5TB
2024-09-02.12:51:17  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-02.13:56:23  zpool import 11721839244249075587  POOL-4x5TB
2024-09-02.13:56:23  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-02.15:01:18  zpool import 11721839244249075587  POOL-4x5TB
2024-09-02.15:01:18  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-02.16:06:28  zpool import 11721839244249075587  POOL-4x5TB
2024-09-02.16:06:28  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-02.17:11:19  zpool import 11721839244249075587  POOL-4x5TB
2024-09-02.17:11:19  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-02.18:16:22  zpool import 11721839244249075587  POOL-4x5TB
2024-09-02.18:16:22  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-02.19:21:21  zpool import 11721839244249075587  POOL-4x5TB
2024-09-02.19:21:21  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-02.20:26:25  zpool import 11721839244249075587  POOL-4x5TB
2024-09-02.20:26:25  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-02.21:31:26  zpool import 11721839244249075587  POOL-4x5TB
2024-09-02.21:31:26  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-03.07:37:05  zpool import 11721839244249075587  POOL-4x5TB
2024-09-03.07:37:05  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-03.08:21:37  zpool import 11721839244249075587  POOL-4x5TB
2024-09-03.08:21:37  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-03.09:26:15  zpool import 11721839244249075587  POOL-4x5TB
2024-09-03.09:26:15  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-03.10:31:27  zpool import 11721839244249075587  POOL-4x5TB
2024-09-03.10:31:27  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-03.11:36:33  zpool import 11721839244249075587  POOL-4x5TB
2024-09-03.11:36:33  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-03.12:41:22  zpool import 11721839244249075587  POOL-4x5TB
2024-09-03.12:41:22  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-03.13:46:25  zpool import 11721839244249075587  POOL-4x5TB
2024-09-03.13:46:25  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-03.14:51:26  zpool import 11721839244249075587  POOL-4x5TB
2024-09-03.14:51:26  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-03.15:56:35  zpool import 11721839244249075587  POOL-4x5TB
2024-09-03.15:56:35  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-03.16:02:19  zpool import 11721839244249075587  POOL-4x5TB
2024-09-03.16:02:19  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-03.17:01:39  zpool import 11721839244249075587  POOL-4x5TB
2024-09-03.17:01:39  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-03.18:25:42  zpool import 11721839244249075587  POOL-4x5TB
2024-09-03.18:25:42  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-03.19:11:20  zpool import 11721839244249075587  POOL-4x5TB
2024-09-03.19:11:20  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-03.20:16:15  zpool import 11721839244249075587  POOL-4x5TB
2024-09-03.20:16:15  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-03.21:21:23  zpool import 11721839244249075587  POOL-4x5TB
2024-09-03.21:21:23  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-03.22:26:31  zpool import 11721839244249075587  POOL-4x5TB
2024-09-03.22:26:31  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-03.23:31:21  zpool import 11721839244249075587  POOL-4x5TB
2024-09-03.23:31:21  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-04.00:36:23  zpool import 11721839244249075587  POOL-4x5TB
2024-09-04.00:36:23  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-04.01:41:47  zpool import 11721839244249075587  POOL-4x5TB
2024-09-04.01:41:47  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-04.02:46:41  zpool import 11721839244249075587  POOL-4x5TB
2024-09-04.02:46:41  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-04.03:51:15  zpool import 11721839244249075587  POOL-4x5TB
2024-09-04.03:51:15  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-04.04:56:23  zpool import 11721839244249075587  POOL-4x5TB
2024-09-04.04:56:23  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-04.06:01:26  zpool import 11721839244249075587  POOL-4x5TB
2024-09-04.06:01:26  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-04.07:06:24  zpool import 11721839244249075587  POOL-4x5TB
2024-09-04.07:06:24  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-04.08:11:19  zpool import 11721839244249075587  POOL-4x5TB
2024-09-04.08:11:19  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-04.09:16:24  zpool import 11721839244249075587  POOL-4x5TB
2024-09-04.09:16:24  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-04.10:21:53  zpool import 11721839244249075587  POOL-4x5TB
2024-09-04.10:21:53  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-04.11:26:39  zpool import 11721839244249075587  POOL-4x5TB
2024-09-04.11:26:39  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-04.12:31:22  zpool import 11721839244249075587  POOL-4x5TB
2024-09-04.12:31:22  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-04.13:36:21  zpool import 11721839244249075587  POOL-4x5TB
2024-09-04.13:36:21  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-04.14:41:27  zpool import 11721839244249075587  POOL-4x5TB
2024-09-04.14:41:27  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-04.16:03:36  zpool import 11721839244249075587  POOL-4x5TB
2024-09-04.16:03:36  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-04.16:51:17  zpool import 11721839244249075587  POOL-4x5TB
2024-09-04.16:51:17  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-04.17:12:20  zpool import 11721839244249075587  POOL-4x5TB
2024-09-04.17:12:20  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-04.17:56:41  zpool import 11721839244249075587  POOL-4x5TB
2024-09-04.17:56:41  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-04.19:01:41  zpool import 11721839244249075587  POOL-4x5TB
2024-09-04.19:01:41  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-04.20:06:45  zpool import 11721839244249075587  POOL-4x5TB
2024-09-04.20:06:45  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-05.07:40:18  zpool import 11721839244249075587  POOL-4x5TB
2024-09-05.07:40:18  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-05.08:01:40  zpool import 11721839244249075587  POOL-4x5TB
2024-09-05.08:01:40  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-05.09:06:34  zpool import 11721839244249075587  POOL-4x5TB
2024-09-05.09:06:34  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-05.10:11:48  zpool import 11721839244249075587  POOL-4x5TB
2024-09-05.10:11:48  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-05.10:43:08  zpool import 11721839244249075587  POOL-4x5TB
2024-09-05.10:43:08  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-05.11:16:39  zpool import 11721839244249075587  POOL-4x5TB
2024-09-05.11:16:39  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-05.12:21:40  zpool import 11721839244249075587  POOL-4x5TB
2024-09-05.12:21:40  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-05.13:26:52  zpool import 11721839244249075587  POOL-4x5TB
2024-09-05.13:26:52  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-05.14:13:35  zpool import 11721839244249075587  POOL-4x5TB
2024-09-05.14:13:35  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-05.14:31:51  zpool import 11721839244249075587  POOL-4x5TB
2024-09-05.14:31:51  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-05.15:36:54  zpool import 11721839244249075587  POOL-4x5TB
2024-09-05.15:36:54  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-05.16:54:11  zpool import 11721839244249075587  POOL-4x5TB
2024-09-05.16:54:11  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-05.17:46:34  zpool import 11721839244249075587  POOL-4x5TB
2024-09-05.17:46:34  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-05.18:51:29  zpool import 11721839244249075587  POOL-4x5TB
2024-09-05.18:51:29  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-05.19:56:36  zpool import 11721839244249075587  POOL-4x5TB
2024-09-05.19:56:36  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-05.21:01:34  zpool import 11721839244249075587  POOL-4x5TB
2024-09-05.21:01:34  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-05.22:06:53  zpool import 11721839244249075587  POOL-4x5TB
2024-09-05.22:06:53  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-06.07:41:50  zpool import 11721839244249075587  POOL-4x5TB
2024-09-06.07:41:50  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-06.07:51:36  zpool import 11721839244249075587  POOL-4x5TB
2024-09-06.07:51:36  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-06.08:56:51  zpool import 11721839244249075587  POOL-4x5TB
2024-09-06.08:56:51  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-06.10:01:37  zpool import 11721839244249075587  POOL-4x5TB
2024-09-06.10:01:37  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-06.11:06:30  zpool import 11721839244249075587  POOL-4x5TB
2024-09-06.11:06:30  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-06.11:38:29  zpool import 11721839244249075587  POOL-4x5TB
2024-09-06.11:38:29  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-06.12:11:32  zpool import 11721839244249075587  POOL-4x5TB
2024-09-06.12:11:32  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-06.13:16:29  zpool import 11721839244249075587  POOL-4x5TB
2024-09-06.13:16:29  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-06.14:21:27  zpool import 11721839244249075587  POOL-4x5TB
2024-09-06.14:21:27  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-06.15:26:51  zpool import 11721839244249075587  POOL-4x5TB
2024-09-06.15:26:51  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-06.16:31:54  zpool import 11721839244249075587  POOL-4x5TB
2024-09-06.16:31:54  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-06.17:36:59  zpool import 11721839244249075587  POOL-4x5TB
2024-09-06.17:36:59  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-06.18:41:58  zpool import 11721839244249075587  POOL-4x5TB
2024-09-06.18:41:58  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-06.19:46:31  zpool import 11721839244249075587  POOL-4x5TB
2024-09-06.19:46:31  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-06.20:26:10  zpool import 11721839244249075587  POOL-4x5TB
2024-09-06.20:26:10  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-06.20:51:38  zpool import 11721839244249075587  POOL-4x5TB
2024-09-06.20:51:38  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-06.22:22:07  zpool import 11721839244249075587  POOL-4x5TB
2024-09-06.22:22:07  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-06.23:01:53  zpool import 11721839244249075587  POOL-4x5TB
2024-09-06.23:01:53  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-07.14:53:12  zpool import 11721839244249075587  POOL-4x5TB
2024-09-07.14:53:12  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-07.15:52:41  zpool import 11721839244249075587  POOL-4x5TB
2024-09-07.15:52:41  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-07.16:44:49 zpool scrub -s POOL-4x5TB
2024-09-07.16:57:48  zpool import 11721839244249075587  POOL-4x5TB
2024-09-07.16:57:48  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-07.18:02:27  zpool import 11721839244249075587  POOL-4x5TB
2024-09-07.18:02:27  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-07.19:07:28  zpool import 11721839244249075587  POOL-4x5TB
2024-09-07.19:07:28  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-07.20:12:28  zpool import 11721839244249075587  POOL-4x5TB
2024-09-07.20:12:28  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-07.21:17:27  zpool import 11721839244249075587  POOL-4x5TB
2024-09-07.21:17:27  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-07.22:22:28  zpool import 11721839244249075587  POOL-4x5TB
2024-09-07.22:22:28  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-07.23:27:28  zpool import 11721839244249075587  POOL-4x5TB
2024-09-07.23:27:28  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-08.00:32:28  zpool import 11721839244249075587  POOL-4x5TB
2024-09-08.00:32:28  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-08.01:37:28  zpool import 11721839244249075587  POOL-4x5TB
2024-09-08.01:37:28  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-08.02:42:30  zpool import 11721839244249075587  POOL-4x5TB
2024-09-08.02:42:30  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-08.03:47:28  zpool import 11721839244249075587  POOL-4x5TB
2024-09-08.03:47:28  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-08.04:52:29  zpool import 11721839244249075587  POOL-4x5TB
2024-09-08.04:52:29  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-08.05:57:29  zpool import 11721839244249075587  POOL-4x5TB
2024-09-08.05:57:29  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-08.07:02:30  zpool import 11721839244249075587  POOL-4x5TB
2024-09-08.07:02:30  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-08.08:07:32  zpool import 11721839244249075587  POOL-4x5TB
2024-09-08.08:07:32  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-08.09:12:29  zpool import 11721839244249075587  POOL-4x5TB
2024-09-08.09:12:29  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-08.10:17:30  zpool import 11721839244249075587  POOL-4x5TB
2024-09-08.10:17:30  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-08.11:22:29  zpool import 11721839244249075587  POOL-4x5TB
2024-09-08.11:22:29  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-08.12:27:29  zpool import 11721839244249075587  POOL-4x5TB
2024-09-08.12:27:29  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-08.16:13:03  zpool import 11721839244249075587  POOL-4x5TB
2024-09-08.16:13:03  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-08.16:32:28  zpool import 11721839244249075587  POOL-4x5TB
2024-09-08.16:32:28  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-08.17:40:49  zpool import 11721839244249075587  POOL-4x5TB
2024-09-08.17:40:49  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-08.18:42:30  zpool import 11721839244249075587  POOL-4x5TB
2024-09-08.18:42:30  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-08.19:47:30  zpool import 11721839244249075587  POOL-4x5TB
2024-09-08.19:47:30  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-08.20:19:05  zpool import 11721839244249075587  POOL-4x5TB
2024-09-08.20:19:05  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-08.20:52:32  zpool import 11721839244249075587  POOL-4x5TB
2024-09-08.20:52:32  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-08.20:54:26  zpool import 11721839244249075587  POOL-4x5TB
2024-09-08.20:54:26  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-08.20:56:50  zpool import 11721839244249075587  POOL-4x5TB
2024-09-08.20:56:50  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-08.21:57:30  zpool import 11721839244249075587  POOL-4x5TB
2024-09-08.21:57:30  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-08.21:59:38  zpool import 11721839244249075587  POOL-4x5TB
2024-09-08.21:59:38  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-08.22:02:25  zpool import 11721839244249075587  POOL-4x5TB
2024-09-08.22:02:25  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-08.22:04:43  zpool import 11721839244249075587  POOL-4x5TB
2024-09-08.22:04:43  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-08.23:02:29  zpool import 11721839244249075587  POOL-4x5TB
2024-09-08.23:02:29  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-08.23:05:19  zpool import 11721839244249075587  POOL-4x5TB
2024-09-08.23:05:19  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-08.23:06:54  zpool import 11721839244249075587  POOL-4x5TB
2024-09-08.23:06:54  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-09.07:45:51  zpool import 11721839244249075587  POOL-4x5TB
2024-09-09.07:45:51  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-09.07:51:09  zpool scrub POOL-4x5TB
2024-09-09.08:47:46  zpool import 11721839244249075587  POOL-4x5TB
2024-09-09.08:47:46  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-09.08:49:13  zpool scrub -s POOL-4x5TB
2024-09-09.08:49:23  zpool scrub POOL-4x5TB
2024-09-09.09:53:08  zpool import 11721839244249075587  POOL-4x5TB
2024-09-09.09:53:08  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-09.10:52:44 zpool scrub -p POOL-4x5TB
2024-09-09.10:57:30  zpool import 11721839244249075587  POOL-4x5TB
2024-09-09.10:57:30  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-09.11:55:33 zpool scrub -p POOL-4x5TB
2024-09-09.11:57:10 zpool scrub -p POOL-4x5TB
2024-09-09.12:02:30  zpool import 11721839244249075587  POOL-4x5TB
2024-09-09.12:02:30  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-09.13:08:03  zpool import 11721839244249075587  POOL-4x5TB
2024-09-09.13:08:03  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-09.14:13:06  zpool import 11721839244249075587  POOL-4x5TB
2024-09-09.14:13:06  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-09.15:18:03  zpool import 11721839244249075587  POOL-4x5TB
2024-09-09.15:18:03  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-09.16:22:47  zpool import 11721839244249075587  POOL-4x5TB
2024-09-09.16:22:47  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-09.17:27:43  zpool import 11721839244249075587  POOL-4x5TB
2024-09-09.17:27:43  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-09.18:32:40  zpool import 11721839244249075587  POOL-4x5TB
2024-09-09.18:32:40  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-09.19:38:27  zpool import 11721839244249075587  POOL-4x5TB
2024-09-09.19:38:27  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-09.20:42:57  zpool import 11721839244249075587  POOL-4x5TB
2024-09-09.20:42:57  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-10.07:42:53  zpool import 11721839244249075587  POOL-4x5TB
2024-09-10.07:42:53  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-10.08:37:31  zpool import 11721839244249075587  POOL-4x5TB
2024-09-10.08:37:31  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-10.09:42:41  zpool import 11721839244249075587  POOL-4x5TB
2024-09-10.09:42:41  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-10.10:19:31 zpool scrub -s POOL-4x5TB
2024-09-10.10:47:52  zpool import 11721839244249075587  POOL-4x5TB
2024-09-10.10:47:52  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-10.11:52:31  zpool import 11721839244249075587  POOL-4x5TB
2024-09-10.11:52:31  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-10.12:57:31  zpool import 11721839244249075587  POOL-4x5TB
2024-09-10.12:57:31  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-10.14:02:30  zpool import 11721839244249075587  POOL-4x5TB
2024-09-10.14:02:30  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-10.15:07:30  zpool import 11721839244249075587  POOL-4x5TB
2024-09-10.15:07:30  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-10.16:12:30  zpool import 11721839244249075587  POOL-4x5TB
2024-09-10.16:12:30  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-10.17:17:32  zpool import 11721839244249075587  POOL-4x5TB
2024-09-10.17:17:32  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-10.18:22:31  zpool import 11721839244249075587  POOL-4x5TB
2024-09-10.18:22:31  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-10.19:27:29  zpool import 11721839244249075587  POOL-4x5TB
2024-09-10.19:27:29  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-10.20:32:29  zpool import 11721839244249075587  POOL-4x5TB
2024-09-10.20:32:29  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-10.21:37:31  zpool import 11721839244249075587  POOL-4x5TB
2024-09-10.21:37:31  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-11.08:02:46  zpool import 11721839244249075587  POOL-4x5TB
2024-09-11.08:02:46  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-11.08:27:29  zpool import 11721839244249075587  POOL-4x5TB
2024-09-11.08:27:29  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-11.09:32:32  zpool import 11721839244249075587  POOL-4x5TB
2024-09-11.09:32:32  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-11.10:37:30  zpool import 11721839244249075587  POOL-4x5TB
2024-09-11.10:37:30  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-11.11:42:31  zpool import 11721839244249075587  POOL-4x5TB
2024-09-11.11:42:31  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-11.12:47:30  zpool import 11721839244249075587  POOL-4x5TB
2024-09-11.12:47:30  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-11.13:52:30  zpool import 11721839244249075587  POOL-4x5TB
2024-09-11.13:52:30  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-11.14:57:30  zpool import 11721839244249075587  POOL-4x5TB
2024-09-11.14:57:30  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-11.16:02:31  zpool import 11721839244249075587  POOL-4x5TB
2024-09-11.16:02:31  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-11.17:07:31  zpool import 11721839244249075587  POOL-4x5TB
2024-09-11.17:07:31  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-11.20:27:49 zpool scrub POOL-4x5TB
2024-09-15.17:20:34  zpool import 11721839244249075587  POOL-4x5TB
2024-09-15.17:20:35  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-15.18:25:33  zpool import 11721839244249075587  POOL-4x5TB
2024-09-15.18:25:33  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-15.19:30:33  zpool import 11721839244249075587  POOL-4x5TB
2024-09-15.19:30:33  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-15.20:35:32  zpool import 11721839244249075587  POOL-4x5TB
2024-09-15.20:35:32  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-15.21:40:33  zpool import 11721839244249075587  POOL-4x5TB
2024-09-15.21:40:33  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-15.22:45:33  zpool import 11721839244249075587  POOL-4x5TB
2024-09-15.22:45:33  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-15.23:50:35  zpool import 11721839244249075587  POOL-4x5TB
2024-09-15.23:50:35  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-16.00:55:36  zpool import 11721839244249075587  POOL-4x5TB
2024-09-16.00:55:36  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-16.02:00:32  zpool import 11721839244249075587  POOL-4x5TB
2024-09-16.02:00:32  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-16.03:05:34  zpool import 11721839244249075587  POOL-4x5TB
2024-09-16.03:05:34  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-16.04:10:34  zpool import 11721839244249075587  POOL-4x5TB
2024-09-16.04:10:34  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-16.05:15:34  zpool import 11721839244249075587  POOL-4x5TB
2024-09-16.05:15:34  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-16.06:20:32  zpool import 11721839244249075587  POOL-4x5TB
2024-09-16.06:20:32  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-16.07:25:34  zpool import 11721839244249075587  POOL-4x5TB
2024-09-16.07:25:34  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-16.08:30:33  zpool import 11721839244249075587  POOL-4x5TB
2024-09-16.08:30:33  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-16.09:35:45  zpool import 11721839244249075587  POOL-4x5TB
2024-09-16.09:35:45  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-16.10:40:34  zpool import 11721839244249075587  POOL-4x5TB
2024-09-16.10:40:34  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB
2024-09-16.11:45:35  zpool import 11721839244249075587  POOL-4x5TB
2024-09-16.11:45:35  zpool set cachefile=/data/zfs/zpool.cache POOL-4x5TB

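Sidebar for anyone following along: that block is the pool's zpool history from the debug. On CORE the middleware re-imports the data pool and rewrites the cachefile on every boot, so each import/cachefile pair is effectively one boot record, and the roughly 65-minute spacing between imports matches the reboot interval in the title. A rough way to count the boots yourself, using the pool name from the history above:

zpool history POOL-4x5TB | grep "zpool import" | wc -l
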
There is evidence that at least some of the reboots are expected. In the past month or so there are 21 shutdowns and 4 reboots issued by root in the auth.log file (a quick way to count these is sketched after the excerpt). Here are some examples:

Sep  6 11:33:16 bksvr 1 2024-09-06T11:33:16.090048-05:00 bksvr.local shutdown 3530 - - reboot by root: 
Sep  6 11:33:16 bksvr 1 2024-09-06T11:33:16.446385-05:00 bksvr.local sshd 1186 - - Received signal 15; terminating.
Sep  6 11:38:39 bksvr 1 2024-09-06T11:38:39.711414-05:00 bksvr.local sshd 1198 - - Server listening on :: port 22.
Sep  6 11:38:39 bksvr 1 2024-09-06T11:38:39.711587-05:00 bksvr.local sshd 1198 - - Server listening on 0.0.0.0 port 22.
Sep  6 12:11:53 bksvr 1 2024-09-06T12:11:53.603951-05:00 bksvr.local sshd 1186 - - Server listening on :: port 22.
Sep  6 12:11:53 bksvr 1 2024-09-06T12:11:53.604124-05:00 bksvr.local sshd 1186 - - Server listening on 0.0.0.0 port 22.
Sep  6 13:16:43 bksvr 1 2024-09-06T13:16:43.239144-05:00 bksvr.local sshd 1186 - - Server listening on :: port 22.
Sep  6 13:16:43 bksvr 1 2024-09-06T13:16:43.239317-05:00 bksvr.local sshd 1186 - - Server listening on 0.0.0.0 port 22.
Sep  6 14:21:42 bksvr 1 2024-09-06T14:21:42.037061-05:00 bksvr.local sshd 1186 - - Server listening on :: port 22.
Sep  6 14:21:42 bksvr 1 2024-09-06T14:21:42.037288-05:00 bksvr.local sshd 1186 - - Server listening on 0.0.0.0 port 22.
Sep  6 15:27:01 bksvr 1 2024-09-06T15:27:01.344073-05:00 bksvr.local sshd 1198 - - Server listening on :: port 22.
Sep  6 15:27:01 bksvr 1 2024-09-06T15:27:01.344229-05:00 bksvr.local sshd 1198 - - Server listening on 0.0.0.0 port 22.
Sep  6 16:32:09 bksvr 1 2024-09-06T16:32:09.304591-05:00 bksvr.local sshd 1164 - - Server listening on :: port 22.
Sep  6 16:32:09 bksvr 1 2024-09-06T16:32:09.304748-05:00 bksvr.local sshd 1164 - - Server listening on 0.0.0.0 port 22.
Sep  6 17:37:11 bksvr 1 2024-09-06T17:37:11.889149-05:00 bksvr.local sshd 1191 - - Server listening on :: port 22.
Sep  6 17:37:11 bksvr 1 2024-09-06T17:37:11.889313-05:00 bksvr.local sshd 1191 - - Server listening on 0.0.0.0 port 22.
Sep  6 18:42:08 bksvr 1 2024-09-06T18:42:08.731257-05:00 bksvr.local sshd 1198 - - Server listening on :: port 22.
Sep  6 18:42:08 bksvr 1 2024-09-06T18:42:08.731411-05:00 bksvr.local sshd 1198 - - Server listening on 0.0.0.0 port 22.
Sep  6 19:46:47 bksvr 1 2024-09-06T19:46:47.554856-05:00 bksvr.local sshd 1186 - - Server listening on :: port 22.
Sep  6 19:46:47 bksvr 1 2024-09-06T19:46:47.555030-05:00 bksvr.local sshd 1186 - - Server listening on 0.0.0.0 port 22.
Sep  6 20:10:27 bksvr 1 2024-09-06T20:10:27.464739-05:00 bksvr.local shutdown 3493 - - power-down by root: 
Sep  6 20:10:27 bksvr 1 2024-09-06T20:10:27.823650-05:00 bksvr.local sshd 1186 - - Received signal 15; terminating.
Sep  6 20:26:23 bksvr 1 2024-09-06T20:26:23.042884-05:00 bksvr.local sshd 1196 - - Server listening on :: port 22.
Sep  6 20:26:23 bksvr 1 2024-09-06T20:26:23.043066-05:00 bksvr.local sshd 1196 - - Server listening on 0.0.0.0 port 22.
Sep  6 20:51:55 bksvr 1 2024-09-06T20:51:55.509411-05:00 bksvr.local sshd 1183 - - Server listening on :: port 22.
Sep  6 20:51:55 bksvr 1 2024-09-06T20:51:55.509596-05:00 bksvr.local sshd 1183 - - Server listening on 0.0.0.0 port 22.
Sep  6 21:25:06 bksvr 1 2024-09-06T21:25:06.395384-05:00 bksvr.local login 4317 - - login on pts/0 as root
Sep  6 22:22:25 bksvr 1 2024-09-06T22:22:25.134362-05:00 bksvr.local sshd 1183 - - Server listening on :: port 22.
Sep  6 22:22:25 bksvr 1 2024-09-06T22:22:25.134524-05:00 bksvr.local sshd 1183 - - Server listening on 0.0.0.0 port 22.
Sep  6 23:02:03 bksvr 1 2024-09-06T23:02:03.664568-05:00 bksvr.local sshd 1198 - - Server listening on :: port 22.
Sep  6 23:02:03 bksvr 1 2024-09-06T23:02:03.664730-05:00 bksvr.local sshd 1198 - - Server listening on 0.0.0.0 port 22.
Sep  6 23:16:49 bksvr 1 2024-09-06T23:16:49.433573-05:00 bksvr.local shutdown 3377 - - power-down by root: 

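For reference, those counts come straight from auth.log; something like the following (assuming the stock FreeBSD log location and ignoring rotated logs) should produce the same sort of tally for the current file:

grep -c "reboot by root" /var/log/auth.log
grep -c "power-down by root" /var/log/auth.log
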
If it’s not that (like something telling the NAS to shut down over SSH), then I would suggest looking at replacing your PSU. I don’t see any kernel panics or anything else interesting in the messages log before or after a reboot, so I would suspect it’s just losing power and coming back.

In all honesty, I’ve had a problem in the past where an A/C compressor kicking on would cause the voltage on a circuit to sag and reboot a desktop computer. Weird things happen.

The consistency and frequency of this issue are interesting. I suggested this because the trend absolutely seems to begin on 8/27. Can you think of any major appliances or other things that draw a lot of power that may have been introduced on the same circuit the NAS is on?

Thanks again @NickF1227!

Yes - since 8/27 sounds right. The shutdowns/reboots you referenced were by me and are expected. (I’ve been working on trying to resolve this since it started happening.)

I have already replaced the PSU (along with several other hardware items noted above).

As mentioned, I’m pretty positive the root cause is something with the pool (config, etc.) or one of the drives in the pool. If I unplug the 4 drives in the pool and boot the system, it is stable without reboots.

There are no changes that I’m aware of that coincide with the 8/27 date when this started occurring.

If that were the case, I would expect to see some evidence of it in the kernel logs, and I don’t. The panic screenshot you posted is unfortunately not written out to the logs, likely because of the failure mode, which makes this harder.
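
One thing that would help capture the next panic: point the kernel at a dump device so a vmcore or textdump survives the reboot. On stock FreeBSD that is just a couple of rc.conf knobs, e.g.

sysrc dumpdev="AUTO"
sysrc dumpdir="/var/crash"

and after the next crash savecore drops the dump in /var/crash (crashinfo will summarize it). On CORE you would set dumpdev through an rc.conf Tunable in the UI instead so it persists, and it assumes there is a plain swap/dump partition the kernel can actually write to, which a default install may not have. Just a suggestion; the debug does not show whether any of this is already configured.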

The pool did scrub recently and shows as healthy:

2024-09-11.20:27:49 zpool scrub POOL-4x5TB

+--------------------------------------------------------------------------------+
+                          zpool status -v @1726506632                           +
+--------------------------------------------------------------------------------+
  pool: POOL-4x5TB
 state: ONLINE
  scan: scrub repaired 0B in 03:06:22 with 0 errors on Wed Sep 11 23:34:00 2024
config:

	NAME                                            STATE     READ WRITE CKSUM
	POOL-4x5TB                                      ONLINE       0     0     0
	  raidz1-0                                      ONLINE       0     0     0
	    gptid/84067a89-eb53-11e6-ad13-1866da48e055  ONLINE       0     0     0
	    gptid/84c356a5-eb53-11e6-ad13-1866da48e055  ONLINE       0     0     0
	    gptid/8574e059-eb53-11e6-ad13-1866da48e055  ONLINE       0     0     0
	    gptid/862355a8-eb53-11e6-ad13-1866da48e055  ONLINE       0     0     0

errors: No known data errors

  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0B in 00:00:49 with 0 errors on Thu Sep 12 03:45:49 2024
config:

	NAME          STATE     READ WRITE CKSUM
	freenas-boot  ONLINE       0     0     0
	  ada4p2      ONLINE       0     0     0

errors: No known data errors
debug finished in 0 seconds for zpool status -v
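
Side note: zpool status lists the vdev members by gptid while the SMART data below is per ada device; if you want to know which physical disk a given gptid maps to, glabel status shows the pairing:

glabel status | grep gptid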

I do note that one of your drives appears to be bad and is likely a cause for concern (suggested follow-up commands are sketched after the SMART output below).

+--------------------------------------------------------------------------------+
+                         SMARTD Boot Status @1726506625                         +
+--------------------------------------------------------------------------------+
SMARTD will not start on boot.
debug finished in 0 seconds for SMARTD Boot Status


+--------------------------------------------------------------------------------+
+                         SMARTD Run Status @1726506625                          +
+--------------------------------------------------------------------------------+
smartd_daemon is not running.
debug finished in 0 seconds for SMARTD Run Status


+--------------------------------------------------------------------------------+
+                        Scheduled SMART Jobs @1726506625                        +
+--------------------------------------------------------------------------------+
debug finished in 0 seconds for Scheduled SMART Jobs


+--------------------------------------------------------------------------------+
+                    Disks being checked by SMART @1726506625                    +
+--------------------------------------------------------------------------------+
debug finished in 0 seconds for Disks being checked by SMART


+--------------------------------------------------------------------------------+
+                          smartctl output @1726506625                           +
+--------------------------------------------------------------------------------+
/dev/ada0 
smartctl 7.2 2021-09-14 r5236 [FreeBSD 13.1-RELEASE-p9 amd64] (local build)
Copyright (C) 2002-20, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Family:     Toshiba X300
Device Model:     TOSHIBA HDWE150
Serial Number:    66GEKDMTF57D
LU WWN Device Id: 5 000039 71bd81e97
Firmware Version: FP2A
User Capacity:    5,000,981,078,016 bytes [5.00 TB]
Sector Sizes:     512 bytes logical, 4096 bytes physical
Rotation Rate:    7200 rpm
Form Factor:      3.5 inches
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   ATA8-ACS (minor revision not indicated)
SATA Version is:  SATA 3.0, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is:    Mon Sep 16 12:10:25 2024 CDT
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART Status command failed
Please get assistance from https://www.smartmontools.org/
Register values returned from SMART Status command are:
 ERR=0x00, SC=0x00, LL=0x00, LM=0x00, LH=0x00, DEV=...., STS=....
SMART Status not supported: Invalid ATA output register values
SMART overall-health self-assessment test result: PASSED
Warning: This result is based on an Attribute check.

General SMART Values:
Offline data collection status:  (0x85)	Offline data collection activity
					was aborted by an interrupting command from host.
					Auto Offline Data Collection: Enabled.
Self-test execution status:      (   0)	The previous self-test routine completed
					without error or no self-test has ever 
					been run.
Total time to complete Offline 
data collection: 		(  120) seconds.
Offline data collection
capabilities: 			 (0x5b) SMART execute Offline immediate.
					Auto Offline data collection on/off support.
					Suspend Offline collection upon new
					command.
					Offline surface scan supported.
					Self-test supported.
					No Conveyance Self-test supported.
					Selective Self-test supported.
SMART capabilities:            (0x0003)	Saves SMART data before entering
					power-saving mode.
					Supports SMART auto save timer.
Error logging capability:        (0x01)	Error logging supported.
					General Purpose Logging supported.
Short self-test routine 
recommended polling time: 	 (   2) minutes.
Extended self-test routine
recommended polling time: 	 ( 539) minutes.
SCT capabilities: 	       (0x003d)	SCT Status supported.
					SCT Error Recovery Control supported.
					SCT Feature Control supported.
					SCT Data Table supported.

SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x000b   100   100   050    Pre-fail  Always       -       0
  2 Throughput_Performance  0x0005   100   100   050    Pre-fail  Offline      -       0
  3 Spin_Up_Time            0x0027   100   100   001    Pre-fail  Always       -       8891
  4 Start_Stop_Count        0x0032   100   100   000    Old_age   Always       -       161
  5 Reallocated_Sector_Ct   0x0033   100   100   050    Pre-fail  Always       -       20072
  7 Seek_Error_Rate         0x000b   100   100   050    Pre-fail  Always       -       0
  8 Seek_Time_Performance   0x0005   100   100   050    Pre-fail  Offline      -       0
  9 Power_On_Hours          0x0032   001   001   000    Old_age   Always       -       66837
 10 Spin_Retry_Count        0x0033   103   100   030    Pre-fail  Always       -       0
 12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always       -       161
191 G-Sense_Error_Rate      0x0032   100   100   000    Old_age   Always       -       1209
192 Power-Off_Retract_Count 0x0032   100   100   000    Old_age   Always       -       111
193 Load_Cycle_Count        0x0032   100   100   000    Old_age   Always       -       1144
194 Temperature_Celsius     0x0022   100   100   000    Old_age   Always       -       44 (Min/Max 20/57)
196 Reallocated_Event_Count 0x0032   100   100   000    Old_age   Always       -       2493
197 Current_Pending_Sector  0x0032   100   100   000    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0030   100   100   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x0032   200   253   000    Old_age   Always       -       0
220 Disk_Shift              0x0002   100   100   000    Old_age   Always       -       0
222 Loaded_Hours            0x0032   001   001   000    Old_age   Always       -       66595
223 Load_Retry_Count        0x0032   100   100   000    Old_age   Always       -       0
224 Load_Friction           0x0022   100   100   000    Old_age   Always       -       0
226 Load-in_Time            0x0026   100   100   000    Old_age   Always       -       194
240 Head_Flying_Hours       0x0001   100   100   001    Pre-fail  Offline      -       0

SMART Error Log Version: 1
ATA Error Count: 2
	CR = Command Register [HEX]
	FR = Features Register [HEX]
	SC = Sector Count Register [HEX]
	SN = Sector Number Register [HEX]
	CL = Cylinder Low Register [HEX]
	CH = Cylinder High Register [HEX]
	DH = Device/Head Register [HEX]
	DC = Device Command Register [HEX]
	ER = Error register [HEX]
	ST = Status register [HEX]
Powered_Up_Time is measured from power on, and printed as
DDd+hh:mm:SS.sss where DD=days, hh=hours, mm=minutes,
SS=sec, and sss=millisec. It "wraps" after 49.710 days.

Error 2 occurred at disk power-on lifetime: 58379 hours (2432 days + 11 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER ST SC SN CL CH DH
  -- -- -- -- -- -- --
  04 31 00 80 9c 6c 48

  Commands leading to the command that caused the error were:
  CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
  -- -- -- -- -- -- -- --  ----------------  --------------------
  ec 00 00 00 00 00 40 00  12d+01:57:50.785  IDENTIFY DEVICE
  aa aa aa aa aa aa aa ff  12d+01:57:50.578  [RESERVED]
  ea 00 00 00 00 00 40 00  12d+01:57:15.556  FLUSH CACHE EXT
  61 20 f0 38 f7 56 40 00  12d+01:57:15.555  WRITE FPDMA QUEUED
  61 08 e8 d8 86 bd 40 00  12d+01:57:15.555  WRITE FPDMA QUEUED

Error 1 occurred at disk power-on lifetime: 42314 hours (1763 days + 2 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER ST SC SN CL CH DH
  -- -- -- -- -- -- --
  04 31 00 a8 24 41 40

  Commands leading to the command that caused the error were:
  CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
  -- -- -- -- -- -- -- --  ----------------  --------------------
  ea 00 00 00 00 00 40 00      00:36:11.875  FLUSH CACHE EXT
  61 08 98 e8 3d 44 40 00      00:36:11.875  WRITE FPDMA QUEUED
  61 10 90 c8 03 41 40 00      00:36:11.875  WRITE FPDMA QUEUED
  61 10 88 b0 03 41 40 00      00:36:11.874  WRITE FPDMA QUEUED
  61 08 80 a0 03 41 40 00      00:36:11.874  WRITE FPDMA QUEUED

SMART Self-test log structure revision number 1
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Extended offline    Completed without error       00%     18120         -
# 2  Short offline       Completed without error       00%     18035         -
# 3  Short offline       Completed without error       00%       671         -

SMART Selective self-test log data structure revision number 1
 SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
    1        0        0  Not_testing
    2        0        0  Not_testing
    3        0        0  Not_testing
    4        0        0  Not_testing
    5        0        0  Not_testing
Selective self-test flags (0x0):
  After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.

/dev/ada3 
smartctl 7.2 2021-09-14 r5236 [FreeBSD 13.1-RELEASE-p9 amd64] (local build)
Copyright (C) 2002-20, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Family:     Toshiba X300
Device Model:     TOSHIBA HDWE150
Serial Number:    66FAK8T7F57D
LU WWN Device Id: 5 000039 71bb81f64
Firmware Version: FP2A
User Capacity:    5,000,981,078,016 bytes [5.00 TB]
Sector Sizes:     512 bytes logical, 4096 bytes physical
Rotation Rate:    7200 rpm
Form Factor:      3.5 inches
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   ATA8-ACS (minor revision not indicated)
SATA Version is:  SATA 3.0, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is:    Mon Sep 16 12:10:25 2024 CDT
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

General SMART Values:
Offline data collection status:  (0x85)	Offline data collection activity
					was aborted by an interrupting command from host.
					Auto Offline Data Collection: Enabled.
Self-test execution status:      (   0)	The previous self-test routine completed
					without error or no self-test has ever 
					been run.
Total time to complete Offline 
data collection: 		(  120) seconds.
Offline data collection
capabilities: 			 (0x5b) SMART execute Offline immediate.
					Auto Offline data collection on/off support.
					Suspend Offline collection upon new
					command.
					Offline surface scan supported.
					Self-test supported.
					No Conveyance Self-test supported.
					Selective Self-test supported.
SMART capabilities:            (0x0003)	Saves SMART data before entering
					power-saving mode.
					Supports SMART auto save timer.
Error logging capability:        (0x01)	Error logging supported.
					General Purpose Logging supported.
Short self-test routine 
recommended polling time: 	 (   2) minutes.
Extended self-test routine
recommended polling time: 	 ( 547) minutes.
SCT capabilities: 	       (0x003d)	SCT Status supported.
					SCT Error Recovery Control supported.
					SCT Feature Control supported.
					SCT Data Table supported.

SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x000b   100   100   050    Pre-fail  Always       -       0
  2 Throughput_Performance  0x0005   100   100   050    Pre-fail  Offline      -       0
  3 Spin_Up_Time            0x0027   100   100   001    Pre-fail  Always       -       8578
  4 Start_Stop_Count        0x0032   100   100   000    Old_age   Always       -       142
  5 Reallocated_Sector_Ct   0x0033   100   100   050    Pre-fail  Always       -       0
  7 Seek_Error_Rate         0x000b   100   100   050    Pre-fail  Always       -       0
  8 Seek_Time_Performance   0x0005   100   100   050    Pre-fail  Offline      -       0
  9 Power_On_Hours          0x0032   001   001   000    Old_age   Always       -       66381
 10 Spin_Retry_Count        0x0033   102   100   030    Pre-fail  Always       -       0
 12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always       -       142
191 G-Sense_Error_Rate      0x0032   100   100   000    Old_age   Always       -       236
192 Power-Off_Retract_Count 0x0032   100   100   000    Old_age   Always       -       97
193 Load_Cycle_Count        0x0032   100   100   000    Old_age   Always       -       155
194 Temperature_Celsius     0x0022   100   100   000    Old_age   Always       -       40 (Min/Max 19/62)
196 Reallocated_Event_Count 0x0032   100   100   000    Old_age   Always       -       0
197 Current_Pending_Sector  0x0032   100   100   000    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0030   100   100   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x0032   200   200   000    Old_age   Always       -       0
220 Disk_Shift              0x0002   100   100   000    Old_age   Always       -       0
222 Loaded_Hours            0x0032   001   001   000    Old_age   Always       -       66372
223 Load_Retry_Count        0x0032   100   100   000    Old_age   Always       -       0
224 Load_Friction           0x0022   100   100   000    Old_age   Always       -       0
226 Load-in_Time            0x0026   100   100   000    Old_age   Always       -       225
240 Head_Flying_Hours       0x0001   100   100   001    Pre-fail  Offline      -       0

SMART Error Log Version: 1
No Errors Logged

SMART Self-test log structure revision number 1
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Short offline       Completed without error       00%     65535         -

SMART Selective self-test log data structure revision number 1
 SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
    1        0        0  Not_testing
    2        0        0  Not_testing
    3        0        0  Not_testing
    4        0        0  Not_testing
    5        0        0  Not_testing
Selective self-test flags (0x0):
  After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.

/dev/ada2 
smartctl 7.2 2021-09-14 r5236 [FreeBSD 13.1-RELEASE-p9 amd64] (local build)
Copyright (C) 2002-20, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Family:     Toshiba X300
Device Model:     TOSHIBA HDWE150
Serial Number:    86G2KJ16F57D
LU WWN Device Id: 5 000039 73b781ce2
Firmware Version: FP2A
User Capacity:    5,000,981,078,016 bytes [5.00 TB]
Sector Sizes:     512 bytes logical, 4096 bytes physical
Rotation Rate:    7200 rpm
Form Factor:      3.5 inches
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   ATA8-ACS (minor revision not indicated)
SATA Version is:  SATA 3.0, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is:    Mon Sep 16 12:10:26 2024 CDT
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

General SMART Values:
Offline data collection status:  (0x85)	Offline data collection activity
					was aborted by an interrupting command from host.
					Auto Offline Data Collection: Enabled.
Self-test execution status:      (  40)	The self-test routine was interrupted
					by the host with a hard or soft reset.
Total time to complete Offline 
data collection: 		(  120) seconds.
Offline data collection
capabilities: 			 (0x5b) SMART execute Offline immediate.
					Auto Offline data collection on/off support.
					Suspend Offline collection upon new
					command.
					Offline surface scan supported.
					Self-test supported.
					No Conveyance Self-test supported.
					Selective Self-test supported.
SMART capabilities:            (0x0003)	Saves SMART data before entering
					power-saving mode.
					Supports SMART auto save timer.
Error logging capability:        (0x01)	Error logging supported.
					General Purpose Logging supported.
Short self-test routine 
recommended polling time: 	 (   2) minutes.
Extended self-test routine
recommended polling time: 	 ( 535) minutes.
SCT capabilities: 	       (0x003d)	SCT Status supported.
					SCT Error Recovery Control supported.
					SCT Feature Control supported.
					SCT Data Table supported.

SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x000b   100   100   050    Pre-fail  Always       -       0
  2 Throughput_Performance  0x0005   100   100   050    Pre-fail  Offline      -       0
  3 Spin_Up_Time            0x0027   100   100   001    Pre-fail  Always       -       8608
  4 Start_Stop_Count        0x0032   100   100   000    Old_age   Always       -       154
  5 Reallocated_Sector_Ct   0x0033   100   100   050    Pre-fail  Always       -       488
  7 Seek_Error_Rate         0x000b   100   100   050    Pre-fail  Always       -       0
  8 Seek_Time_Performance   0x0005   100   100   050    Pre-fail  Offline      -       0
  9 Power_On_Hours          0x0032   001   001   000    Old_age   Always       -       66298
 10 Spin_Retry_Count        0x0033   103   100   030    Pre-fail  Always       -       0
 12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always       -       154
191 G-Sense_Error_Rate      0x0032   100   100   000    Old_age   Always       -       297
192 Power-Off_Retract_Count 0x0032   100   100   000    Old_age   Always       -       107
193 Load_Cycle_Count        0x0032   100   100   000    Old_age   Always       -       159
194 Temperature_Celsius     0x0022   100   100   000    Old_age   Always       -       42 (Min/Max 20/60)
196 Reallocated_Event_Count 0x0032   100   100   000    Old_age   Always       -       59
197 Current_Pending_Sector  0x0032   100   100   000    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0030   100   100   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x0032   200   200   000    Old_age   Always       -       0
220 Disk_Shift              0x0002   100   100   000    Old_age   Always       -       0
222 Loaded_Hours            0x0032   001   001   000    Old_age   Always       -       66298
223 Load_Retry_Count        0x0032   100   100   000    Old_age   Always       -       0
224 Load_Friction           0x0022   100   100   000    Old_age   Always       -       0
226 Load-in_Time            0x0026   100   100   000    Old_age   Always       -       556
240 Head_Flying_Hours       0x0001   100   100   001    Pre-fail  Offline      -       0

SMART Error Log Version: 1
No Errors Logged

SMART Self-test log structure revision number 1
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Extended offline    Interrupted (host reset)      80%     65535         -

SMART Selective self-test log data structure revision number 1
 SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
    1        0        0  Not_testing
    2        0        0  Not_testing
    3        0        0  Not_testing
    4        0        0  Not_testing
    5        0        0  Not_testing
Selective self-test flags (0x0):
  After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.

/dev/ada1 
smartctl 7.2 2021-09-14 r5236 [FreeBSD 13.1-RELEASE-p9 amd64] (local build)
Copyright (C) 2002-20, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Family:     Toshiba X300
Device Model:     TOSHIBA HDWE150
Serial Number:    66G8K0EPF57D
LU WWN Device Id: 5 000039 71ba81aee
Firmware Version: FP2A
User Capacity:    5,000,981,078,016 bytes [5.00 TB]
Sector Sizes:     512 bytes logical, 4096 bytes physical
Rotation Rate:    7200 rpm
Form Factor:      3.5 inches
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   ATA8-ACS (minor revision not indicated)
SATA Version is:  SATA 3.0, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is:    Mon Sep 16 12:10:26 2024 CDT
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

General SMART Values:
Offline data collection status:  (0x85)	Offline data collection activity
					was aborted by an interrupting command from host.
					Auto Offline Data Collection: Enabled.
Self-test execution status:      (  41)	The self-test routine was interrupted
					by the host with a hard or soft reset.
Total time to complete Offline 
data collection: 		(  120) seconds.
Offline data collection
capabilities: 			 (0x5b) SMART execute Offline immediate.
					Auto Offline data collection on/off support.
					Suspend Offline collection upon new
					command.
					Offline surface scan supported.
					Self-test supported.
					No Conveyance Self-test supported.
					Selective Self-test supported.
SMART capabilities:            (0x0003)	Saves SMART data before entering
					power-saving mode.
					Supports SMART auto save timer.
Error logging capability:        (0x01)	Error logging supported.
					General Purpose Logging supported.
Short self-test routine 
recommended polling time: 	 (   2) minutes.
Extended self-test routine
recommended polling time: 	 ( 525) minutes.
SCT capabilities: 	       (0x003d)	SCT Status supported.
					SCT Error Recovery Control supported.
					SCT Feature Control supported.
					SCT Data Table supported.

SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x000b   100   100   050    Pre-fail  Always       -       0
  2 Throughput_Performance  0x0005   100   100   050    Pre-fail  Offline      -       0
  3 Spin_Up_Time            0x0027   100   100   001    Pre-fail  Always       -       8718
  4 Start_Stop_Count        0x0032   100   100   000    Old_age   Always       -       143
  5 Reallocated_Sector_Ct   0x0033   100   100   050    Pre-fail  Always       -       0
  7 Seek_Error_Rate         0x000b   100   100   050    Pre-fail  Always       -       0
  8 Seek_Time_Performance   0x0005   100   100   050    Pre-fail  Offline      -       0
  9 Power_On_Hours          0x0032   001   001   000    Old_age   Always       -       66382
 10 Spin_Retry_Count        0x0033   102   100   030    Pre-fail  Always       -       0
 12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always       -       143
191 G-Sense_Error_Rate      0x0032   100   100   000    Old_age   Always       -       738
192 Power-Off_Retract_Count 0x0032   100   100   000    Old_age   Always       -       94
193 Load_Cycle_Count        0x0032   100   100   000    Old_age   Always       -       273
194 Temperature_Celsius     0x0022   100   100   000    Old_age   Always       -       38 (Min/Max 20/58)
196 Reallocated_Event_Count 0x0032   100   100   000    Old_age   Always       -       0
197 Current_Pending_Sector  0x0032   100   100   000    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0030   100   100   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x0032   200   200   000    Old_age   Always       -       0
220 Disk_Shift              0x0002   100   100   000    Old_age   Always       -       0
222 Loaded_Hours            0x0032   001   001   000    Old_age   Always       -       66364
223 Load_Retry_Count        0x0032   100   100   000    Old_age   Always       -       0
224 Load_Friction           0x0022   100   100   000    Old_age   Always       -       0
226 Load-in_Time            0x0026   100   100   000    Old_age   Always       -       580
240 Head_Flying_Hours       0x0001   100   100   001    Pre-fail  Offline      -       0

SMART Error Log Version: 1
No Errors Logged

SMART Self-test log structure revision number 1
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Extended offline    Interrupted (host reset)      90%     65535         -
# 2  Extended offline    Interrupted (host reset)      80%     65535         -
# 3  Short offline       Completed without error       00%     65535         -

SMART Selective self-test log data structure revision number 1
 SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
    1        0        0  Not_testing
    2        0        0  Not_testing
    3        0        0  Not_testing
    4        0        0  Not_testing
    5        0        0  Not_testing
Selective self-test flags (0x0):
  After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.

/dev/ada4 
smartctl 7.2 2021-09-14 r5236 [FreeBSD 13.1-RELEASE-p9 amd64] (local build)
Copyright (C) 2002-20, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Device Model:     Micron_M600_MTFDDAY256MBF
Serial Number:    16121385A4FE
LU WWN Device Id: 5 00a075 11385a4fe
Firmware Version: MU05
User Capacity:    256,060,514,304 bytes [256 GB]
Sector Sizes:     512 bytes logical, 4096 bytes physical
Rotation Rate:    Solid State Device
Form Factor:      < 1.8 inches
TRIM Command:     Available, deterministic, zeroed
Device is:        Not in smartctl database [for details use: -P showall]
ATA Version is:   ACS-3 T13/2161-D revision 4
SATA Version is:  SATA 3.2, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is:    Mon Sep 16 12:10:26 2024 CDT
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

General SMART Values:
Offline data collection status:  (0x84)	Offline data collection activity
					was suspended by an interrupting command from host.
					Auto Offline Data Collection: Enabled.
Self-test execution status:      (   0)	The previous self-test routine completed
					without error or no self-test has ever 
					been run.
Total time to complete Offline 
data collection: 		(   23) seconds.
Offline data collection
capabilities: 			 (0x7b) SMART execute Offline immediate.
					Auto Offline data collection on/off support.
					Suspend Offline collection upon new
					command.
					Offline surface scan supported.
					Self-test supported.
					Conveyance Self-test supported.
					Selective Self-test supported.
SMART capabilities:            (0x0003)	Saves SMART data before entering
					power-saving mode.
					Supports SMART auto save timer.
Error logging capability:        (0x01)	Error logging supported.
					General Purpose Logging supported.
Short self-test routine 
recommended polling time: 	 (   2) minutes.
Extended self-test routine
recommended polling time: 	 (   3) minutes.
Conveyance self-test routine
recommended polling time: 	 (   3) minutes.
SCT capabilities: 	       (0x0035)	SCT Status supported.
					SCT Feature Control supported.
					SCT Data Table supported.

SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x002f   100   100   000    Pre-fail  Always       -       0
  5 Reallocated_Sector_Ct   0x0032   100   100   010    Old_age   Always       -       0
  9 Power_On_Hours          0x0032   100   100   000    Old_age   Always       -       66256
 12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always       -       169
171 Unknown_Attribute       0x0032   100   100   000    Old_age   Always       -       0
172 Unknown_Attribute       0x0032   100   100   000    Old_age   Always       -       0
173 Unknown_Attribute       0x0032   100   100   000    Old_age   Always       -       8
174 Unknown_Attribute       0x0032   100   100   000    Old_age   Always       -       121
180 Unused_Rsvd_Blk_Cnt_Tot 0x0033   000   000   000    Pre-fail  Always       -       1941
183 Runtime_Bad_Block       0x0032   100   100   000    Old_age   Always       -       0
184 End-to-End_Error        0x0032   100   100   000    Old_age   Always       -       0
187 Reported_Uncorrect      0x0032   100   100   000    Old_age   Always       -       0
194 Temperature_Celsius     0x0022   060   039   000    Old_age   Always       -       40 (Min/Max 25/61)
196 Reallocated_Event_Count 0x0032   100   100   000    Old_age   Always       -       0
197 Current_Pending_Sector  0x0032   100   100   000    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0030   100   100   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x0032   100   100   000    Old_age   Always       -       0
202 Unknown_SSD_Attribute   0x0030   100   100   001    Old_age   Offline      -       0
206 Unknown_SSD_Attribute   0x000e   100   100   000    Old_age   Always       -       0
210 Unknown_Attribute       0x0032   100   100   000    Old_age   Always       -       0
246 Unknown_Attribute       0x0032   100   100   000    Old_age   Always       -       1089962466
247 Unknown_Attribute       0x0032   100   100   000    Old_age   Always       -       34089551
248 Unknown_Attribute       0x0032   100   100   000    Old_age   Always       -       38737566

SMART Error Log Version: 1
No Errors Logged

SMART Self-test log structure revision number 1
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Vendor (0xff)       Completed without error       00%       704         -
# 2  Vendor (0xff)       Completed without error       00%       679         -
# 3  Vendor (0xff)       Completed without error       00%       637         -
# 4  Vendor (0xff)       Completed without error       00%       595         -
# 5  Vendor (0xff)       Completed without error       00%       580         -
# 6  Vendor (0xff)       Completed without error       00%       557         -
# 7  Vendor (0xff)       Completed without error       00%       530         -
# 8  Vendor (0xff)       Completed without error       00%     64733         -
# 9  Short offline       Completed without error       00%       498         -
#10  Vendor (0xff)       Completed without error       00%       498         -
#11  Short offline       Completed without error       00%       477         -
#12  Vendor (0xff)       Completed without error       00%       467         -
#13  Vendor (0xff)       Completed without error       00%       453         -
#14  Vendor (0xff)       Completed without error       00%     64482         -
#15  Vendor (0xff)       Completed without error       00%       423         -
#16  Vendor (0xff)       Completed without error       00%       409         -
#17  Vendor (0xff)       Completed without error       00%       376         -
#18  Vendor (0xff)       Completed without error       00%       321         -
#19  Vendor (0xff)       Completed without error       00%       308         -
#20  Vendor (0xff)       Completed without error       00%       266         -
#21  Vendor (0xff)       Completed without error       00%       224         -

SMART Selective self-test log data structure revision number 1
 SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
    1        0        0  Completed [00% left] (261803008-261868543)
    2        0        0  Not_testing
    3        0        0  Not_testing
    4        0        0  Not_testing
    5        0        0  Not_testing
Selective self-test flags (0x0):
  After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.

debug finished in 1 seconds for smartctl output



ada0 has an ATA Error Count of 2 and Power_On_Hours of 66837, which is almost 8 years old!

I would definitely suggest copying the data off these drives ASAP, because I don’t see anything else wrong other than some really high power on hours and at least one obviously bad drive. I suspect at least one of the other 3 drives is also having a problem that is silent in SMART.

1 Like

Based on timing, could it be a snapshot task exiting really badly? Once an hour, takes about 5 minutes, then, bang!

I thought of that, but if you look at the snippet from the zpool history logs in my earlier post, you’ll see that we haven’t taken or destroyed any snapshots in the past month.

In fact, these are all of the snapshots on the system:

+--------------------------------------------------------------------------------+
+                        zfs list -t snapshot @1726506632                        +
+--------------------------------------------------------------------------------+
NAME                                                                                      USED  AVAIL     REFER  MOUNTPOINT  FREENAS:STATE
POOL-4x5TB/.system/samba4@wbc-1676512235                                                  413K      -     1.08M  -           -
POOL-4x5TB/.system/samba4@wbc-1685806876                                                  389K      -     1.10M  -           -
POOL-4x5TB/.system/samba4@update--2023-07-21-19-17--12.0-U8.1                             192K      -     1.11M  -           -
POOL-4x5TB/.system/samba4@wbc-1689967382                                                  151K      -     1.08M  -           -
POOL-4x5TB/.system/samba4@update--2023-07-30-00-33--13.0-U5.2                             215K      -     1.11M  -           -
POOL-4x5TB/.system/samba4@update--2023-12-04-13-45--13.0-U5.3                             366K      -     1.20M  -           -
POOL-4x5TB/.system/samba4@update--2023-12-23-00-30--13.0-U6                               355K      -     1.19M  -           -
POOL-4x5TB/.system/samba4@update--2024-08-28-17-48--13.0-U6.1                             320K      -     1.35M  -           -
POOL-4x5TB/iocage/jails/nextcloud21@ioc_update_12.2-RELEASE-p9_2021-10-01_20-27-45        163K      -      523K  -           -
POOL-4x5TB/iocage/jails/nextcloud21/root@ioc_update_12.2-RELEASE-p9_2021-10-01_20-27-45   662M      -     3.04G  -           -
POOL-4x5TB/iocage/releases/11.2-RELEASE/root@plex-plexpass                               2.44M      -      423M  -           -
POOL-4x5TB/iocage/releases/11.2-RELEASE/root@nextcloud                                   2.44M      -      423M  -           -
POOL-4x5TB/jails/.warden-template-pluginjail-10.3-x64@clean                              9.51M      -      518M  -           -
freenas-boot/ROOT/13.0-U6.2@2017-02-18-18:30:11                                          5.09M      -      848M  -           -
freenas-boot/ROOT/13.0-U6.2@2017-03-09-18:37:02                                          9.32M      -      855M  -           -
freenas-boot/ROOT/13.0-U6.2@2017-04-21-09:51:54                                           151M      -      859M  -           -
freenas-boot/ROOT/13.0-U6.2@2017-05-27-17:51:06                                           858M      -      859M  -           -
freenas-boot/ROOT/13.0-U6.2@2017-07-08-10:40:01                                           861M      -      862M  -           -
freenas-boot/ROOT/13.0-U6.2@2017-12-22-09:05:52                                           863M      -      864M  -           -
freenas-boot/ROOT/13.0-U6.2@2021-01-03-14:39:00                                           863M      -     1001M  -           -
freenas-boot/ROOT/13.0-U6.2@2021-01-03-18:16:48                                          1.01G      -     1.15G  -           -
freenas-boot/ROOT/13.0-U6.2@2021-08-09-08:09:49                                          1.03G      -     1.18G  -           -
freenas-boot/ROOT/13.0-U6.2@2021-08-09-08:19:29                                          1.33G      -     1.49G  -           -
freenas-boot/ROOT/13.0-U6.2@2021-08-18-20:12:58                                          1.46G      -     1.63G  -           -
freenas-boot/ROOT/13.0-U6.2@2023-02-15-19:47:08                                          1.46G      -     1.62G  -           -
freenas-boot/ROOT/13.0-U6.2@2023-07-21-14:16:12                                          1.48G      -     1.64G  -           -
freenas-boot/ROOT/13.0-U6.2@2023-07-29-19:32:23                                          1.58G      -     1.74G  -           -
freenas-boot/ROOT/13.0-U6.2@2023-12-04-07:44:13                                          1.58G      -     1.74G  -           -
freenas-boot/ROOT/13.0-U6.2@2023-12-22-18:29:06                                          1.58G      -     1.75G  -           -
freenas-boot/ROOT/13.0-U6.2@2024-08-28-12:47:13                                          1.58G      -     1.75G  -           -
debug finished in 0 seconds for zfs list -t snapshot

65 minutes may be a client ramping up IO

@NickF1227 / @Constantin - thank you both again for all of the assistance/input!

I’m definitely planning to head down the path of putting together new hardware. I just need to make sure that I back up/save my data.

Current pool shows 4.56 TiB used.

Any recommendations on a robust copy/backup method that will resume gracefully after a reboot? I would definitely prefer not to have to try to time it and copy in 1-hour chunks.

I’m also looking at drives. Any recommendation/feedback on something like this?

MDD 14TB 7200RPM SAS 12Gb/s 256MB Cache 3.5inch Internal Enterprise Hard Drive (MD14TSAS25672E)

Do I need to just completely avoid refurb for a home server application?

Any other drive suggestions?

1 Like

Honestly, that would probably make it worse. Generally speaking, the more “Power Up/Spin Up and Power Down/Park” events on those drives, the higher the likelihood of failure. If you need access to the data, you may as well just leave it on. If you can live without access to the data, it may be better to power it off.

However, you can use ZFS send/receive, also known as Replication in the UI. That’s probably the best way to ensure your data gets to the other end exactly as it was originally written.
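
For reference, a rough manual sketch of what that boils down to under the hood (the snapshot name and the backup pool here are made-up placeholders; the UI replication task handles this for you):

# take a recursive snapshot of the source pool
zfs snapshot -r POOL-4x5TB@migrate-2024-09-16
# send the full replication stream; -s on the receive side makes an
# interrupted receive resumable instead of starting from scratch
zfs send -R POOL-4x5TB@migrate-2024-09-16 | zfs recv -s -F backup-pool/POOL-4x5TB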

I have a lot of 10 TB drives. A small few are shucked from WD USB enclosures, but most are from a seller on eBay who bought a bunch of Cisco UCS storage servers (from AT&T or some other telecom that offloaded them) and had been reselling the drives for $99 with between 20k-30k hours on them, HGST SAS 10 TB. Of the two dozen or so I bought from him, none have failed yet, and I’ve put on another 20k hours or so since I bought them.

I also bought a TrueNAS M50 from a company in NYC selling it on eBay. I have 8 TB drives in that system, all with over 30k hours, without any issues.

The problem really starts when you push past 40k hours; Backblaze’s data supports the failure rates looking like a bathtub curve, or a “U”, and 40k is about where the uptick begins.

I say all of this to say: used drives are okay if you know what you are getting. A lot of sellers will mess with SMART data and “refurbish” drives by simply zeroing out the hour counter and slapping a new sticker on them. Others are better, WYSIWYG, but second hand is hit or miss because some people are dishonest. Up to you on risk. I treat hard drives as fungible and account for that by having lots of them.

You may want something you can set and forget for as long as possible, and it looks like this guy lived a good life! If it was second hand originally, you can certainly go second hand again. If it was new originally, do that. An 8-year service life is impressive.

2 Likes

ZFS replace should eventually finish, even if it takes multiple crashes and 65 minutes of resilvering per crash.

If the issue is a dodgy disk (or two), then replacing each dodgy disk with another disk (even while the dodgy disk is still online) could help.
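
Roughly, that looks like this (the member and new-disk names are placeholders; take the real member name from zpool status):

# find the pool member that maps to the suspect drive
zpool status -v POOL-4x5TB
# replace it with the new disk while the old one is still attached
zpool replace POOL-4x5TB <old-member-from-status> /dev/ada5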

But at this stage, you’ve replaced everything… unless you mysteriously got a PSU with the same issue.

It really is very strange that this happens precisely 65 minutes after power on. I assume it happens 60 minutes after boot finishes… which takes 5 minutes :wink:

Most likely something is kicking off after 60 minutes… I think TrueNAS re-checks NTP 60 minutes after bootup, etc…

It’s unusual for an HD to cause a reboot, even if it were faulty… but if it did… why would it be 65 minutes after power on?

And it is after power on, right? So it has to be caused by the device… rather than an external force, as an external force wouldn’t know that the device was turned on…

In a world with unknown variables, I wouldn’t say that was strictly true. But I hear you.

In this case, that’s possible; anything running locally on the TrueNAS COULD trigger this. The .system dataset lives in this pool.

+--------------------------------------------------------------------------------+
+                         POOL-4x5TB/.system @1726506632                         +
+--------------------------------------------------------------------------------+
NAME                PROPERTY                VALUE                   SOURCE
POOL-4x5TB/.system  type                    filesystem              -
POOL-4x5TB/.system  creation                Sat Feb  4 21:31 2017   -

But it could very much also be a client scripted to do something.

I’m pretty sure this is a “stress-related injury” that is being triggered when the bad hard drive(s) end up causing the kernel to fart. But I have no idea why exactly, and I’m not entirely sure I’d risk running this if I didn’t have the data backed up somewhere. The environmental questions were valid, I swear. I’ve seen it :frowning:

@NickF1227 - agreed RE 8 years. I purchased these 4 Toshiba 5TB drives new. I can’t complain. I probably should have been a little more proactive thinking about a system refresh, but here I am.

Unfortunately, I don’t have the data backed up anywhere. Just to clarify, this was my question about a robust method of getting the data backed up (i.e., the server is currently in a state where it reboots once an hour, and I can’t do anything to prevent it).

@Stux - my preference would be not to replace drives in the existing pool (although this is an option)

My thoughts are:

  • Get one new drive (6TB or larger)
  • Back the existing data up to the new 6TB drive
  • Purchase 4x (or more) larger drives
  • My current pool is RAIDZ1; I have been thinking maybe RAIDZ2 for better integrity
  • Copy data from the 6TB backup drive to the new pool

Getting the existing data backed up is most critical, and I’m just trying to figure out the best way to do so.

ZFS Replication can do it better than I’d trust rsync to. You’d probably have to script the replication to trigger on boot. Not pretty, but we play the cards we’re dealt in life.

Replication has this nifty thing called a receive_resume_token which should help us here over the alternatives.
zfs-recv.8 — OpenZFS documentation

For example, rsync would have to recalculate the checksums every time the server reboots.
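
As a rough sketch of what resuming looks like after an interruption (dataset names are placeholders, matching the earlier sketch; the UI task does this automatically when -s was used on the receive):

# after a crash, read the resume token off the partially received dataset
zfs get -H -o value receive_resume_token backup-pool/POOL-4x5TB
# restart the send from where it left off using that token
zfs send -t <token-from-above> | zfs recv -s backup-pool/POOL-4x5TB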

Depending on how valuable this data is, I might consider removing the known-bad drive from the system. This may give you more stability to get the data off, but you would be increasing the risk of pool failure.
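
If you do pull it, offlining it cleanly first is the safer order (the member name is whatever zpool status reports for that drive; these are placeholders):

zpool status -v POOL-4x5TB                       # find the member name for the bad drive
zpool offline POOL-4x5TB <member-name-from-status>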

I’ve seen a single drive cause issues like this before, but it’s far more likely that one or more of the other drives is also having problems. Causing the FreeBSD kernel to crash like this isn’t normal or expected behavior for a disk failure, but I’ve seen drives cause unexpected behavior like this.

Also, if you plan to use ZFS replication to get the data off, you will have to script it to run on boot. This is entering murky waters for sure, so I can’t make any promises, but it should theoretically work.

If you run midclt call replication.query | jq you can get the ID number of the replication task. If it’s your first one it’s probably ID 1, but you’ve had this system for a while, so it’s probably not 1.
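
If the output is long, a filter along these lines trims it down to just IDs and names (field names taken from the excerpt below):

midclt call replication.query | jq '.[] | {id, name, enabled}'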

Example excerpt; this is my 7th replication ID.

{
    "id": 7,
    "target_dataset": "optane_vm/test",
    "recursive": true,
    "compression": null,
    "speed_limit": null,
    "enabled": true,
    "direction": "PUSH",
    "transport": "LOCAL",
    "sudo": false,
    "netcat_active_side": null,
    "netcat_active_side_port_min": null,
    "netcat_active_side_port_max": null,
    "source_datasets": [
      "optane_vm/fio"
    ],
    "exclude": [],
    "naming_schema": [],
    "name_regex": null,
    "auto": false,
    "only_matching_schedule": false,
    "readonly": "IGNORE",
    "allow_from_scratch": false,
    "hold_pending_snapshots": false,
    "retention_policy": "SOURCE",
    "lifetime_unit": null,
    "lifetime_value": null,
    "lifetimes": [],
    "large_block": true,
    "embed": false,
    "compressed": true,
    "retries": 5,
    "netcat_active_side_listen_address": null,
    "netcat_passive_side_connect_address": null,
    "logging_level": null,
    "name": "optane_vm/fio - optane_vm/test",
    "state": {
      "state": "RUNNING",
      "datetime": {
        "$date": 1726546135000
      },
      "progress": {
        "dataset": "optane_vm/fio",
        "snapshot": "auto-2024-09-06_00-00",
        "snapshots_sent": 0,
        "snapshots_total": 12,
        "bytes_sent": 30800000000,
        "bytes_total": 202000000000,
        "current": 30800000000,
        "total": 202000000000
      },
      "last_snapshot": null
    },
    "properties": true,
    "properties_exclude": [],
    "properties_override": {},
    "replicate": false,
    "encryption": false,
    "encryption_inherit": null,
    "encryption_key": null,
    "encryption_key_format": null,
    "encryption_key_location": null,
    "ssh_credentials": null,
    "periodic_snapshot_tasks": [],
    "also_include_naming_schema": [
      "auto-%Y-%m-%d_%H-%M"
    ],
    "schedule": null,
    "restrict_schedule": null,
    "job": {
      "id": 35407,
      "method": "replication.run",
      "arguments": [
        7
      ],
      "transient": false,
      "description": null,
      "abortable": false,
      "logs_path": "/var/log/jobs/35407.log",
      "logs_excerpt": null,
      "progress": {
        "percent": 1,
        "description": "Sending 1 of 12: optane_vm/fio@auto-2024-09-06_00-00 (28.68 GiB / 188.13 GiB)",
        "extra": null
      },
      "result": null,
      "error": null,
      "exception": null,
      "exc_info": null,
      "state": "RUNNING",
      "time_started": {
        "$date": 1726546115000
      },
      "time_finished": null,
      "credentials": {
        "type": "LOGIN_PASSWORD",
        "data": {
          "username": "root"
        }

Then, once you know the ID of the task you want to run, you can add the command midclt call replication.run 1 -job as a post-boot task.

If you run it manually to test, it should return with a status; it may error out if something’s misconfigured with the task:

root@prod[~]# midclt call replication.run 3 -job
Status: (none)
Total Progress: [________________________________________] 0.00%Total Progress: [________________________________________] 0.00%
[EFAULT] Task is not enabled
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/middlewared/job.py", line 469, in run
    await self.future
  File "/usr/lib/python3/dist-packages/middlewared/job.py", line 511, in __run_body
    rv = await self.method(*args)
         ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/schema/processor.py", line 179, in nf
    return await func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/plugins/replication.py", line 483, in run
    raise CallError("Task is not enabled")
middlewared.service_exception.CallError: [EFAULT] Task is not enabled

root@prod[~]# midclt call replication.run 4 -job
Status: (none)
Total Progress: [________________________________________] 0.00%
[EFAULT] cannot open 'optane_vm/vms/dreadnought': dataset does not exist.
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/middlewared/job.py", line 469, in run
    await self.future
  File "/usr/lib/python3/dist-packages/middlewared/job.py", line 511, in __run_body
    rv = await self.method(*args)
         ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/schema/processor.py", line 179, in nf
    return await func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/plugins/replication.py", line 491, in run
    await self.middleware.call("zettarepl.run_replication_task", id_, really_run, job)
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1603, in call
    return await self._call(
           ^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1458, in _call
    return await self.run_in_executor(prepared_call.executor, methodobj, *prepared_call.args)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1353, in run_in_executor
    return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.11/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

But if it’s configured correctly, it’ll look like this:

root@prod[~]# midclt call replication.run 7 -job
Status: (none)
Total Progress: [########################################] 100.00%
null
root@prod[~]# 

And you can check in the UI what it did.

But yeah, once you’ve gotten it started the first time, you’d want to add the command midclt call replication.run 7 -job as a post-boot script.

Creating Init/Shutdown Scripts | TrueNAS Documentation Hub
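
If you go the init-script route, a small wrapper like this could be set as the post-init command (the sleep and the log path are my own guesses to give the pool time to import; swap in your own task ID):

#!/bin/sh
# give the pool and middleware a moment to settle before kicking off replication
sleep 120
midclt call replication.run 7 -job >> /var/log/replication-onboot.log 2>&1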

Good luck! This is crazy but should work. @HoneyBadger You have any better ideas?

1 Like

Repeated rsyncs should do the trick easily; assuming you can finish a scan in an hour, rsync should eventually catch up.
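
Something along these lines, for example (paths are assumptions; --partial keeps partially transferred files across reboots so each pass makes progress):

rsync -aH --partial --progress /mnt/POOL-4x5TB/ /mnt/backup-pool/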

But I’d figure out the replication token magic if it were me.

It should “just work”. -s is included when you set up a task in the GUI.

I wanted to share a quick update.

I had a breakthrough in determining the root-cause of my TrueNAS server reboots.

First of all, I want to thank @NickF1227 again - if it hadn’t been for the client suggestion, I might never have figured it out.

I have a Windows machine running a Plex server. I decided to check this morning, and sure enough, Plex is configured for a library scan interval of 1 hour (the options are 15 min, 30 min, 1 hr, 2 hr, 6 hr, 12 hr, daily).

I shut down the Windows machine running the Plex server, and my system hasn’t rebooted since (currently ~ 9.5 hrs uptime).

Plex is scanning a couple of SMB shares on my TrueNAS server for new content.

I still have no idea why this just started 3 weeks or so ago, and I still can’t correlate it to any changes (new Plex version, etc.); however, I’m pretty sure this is the issue. I will do more testing over the next couple of days to confirm.

Any thoughts on why Plex scanning SMB shares would cause the TrueNAS server to reboot?

5 Likes

That’s a really weird one. I wonder if you can recreate the problem using rsync, since it will also traverse directories and look for file system changes to copy to the backup drive.

To me, it sounds like the added workload is causing a fault in the motherboard / CPU. Not sure where, but you’ve likely eliminated the PSU from the equation. Your CPU reports OK on heat, so is it perhaps an HBA that is overheating (if you have one)?

I’d consider buying a replacement motherboard that is compatible with your CPU and RAM, then see what happens. They are usually not that expensive, and this could be an opportunity to upgrade to a server-grade motherboard.