Continuing the discussion from Multiple 'hung' offline smart tests, can't cancel:
Info on the disk in question is as follows:
Device Model: TEAM T2532TB
Serial Number: TPBF*********************
LU WWN Device Id: 0 000000 000000000
Firmware Version: HP3618C8
User Capacity: 2,048,408,248,320 bytes [2.04 TB]
Sector Size: 512 bytes logical/physical
Rotation Rate: Solid State Device
Form Factor: 2.5 inches
TRIM Command: Available
Device is: Not in smartctl database 7.3/5528
ATA Version is: ACS-4 T13/BSR INCITS 529 revision 5
SATA Version is: SATA 3.2, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is: Tue Oct 8 16:54:06 2024 CDT
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
System:
Intel i5-12400
ASUS H610-PLUS D4
64GB RAM
24x 2TB SSDs
LSI 9305-24i with the latest firmware/BIOS I could find
lsblk -bo NAME,PTTYPE,TYPE,START,SIZE,PARTTYPENAME
NAME PTTYPE TYPE START SIZE PARTTYPENAME
sda gpt disk 2048408248320
└─sda1 gpt part 4096 2048405799424 Solaris /usr & Apple ZFS
sdb gpt disk 2048408248320
└─sdb1 gpt part 4096 2048405799424 Solaris /usr & Apple ZFS
sdc gpt disk 2048408248320
└─sdc1 gpt part 4096 2048405799424 Solaris /usr & Apple ZFS
sdd gpt disk 2048408248320
└─sdd1 gpt part 4096 2048405799424 Solaris /usr & Apple ZFS
sde gpt disk 2048408248320
└─sde1 gpt part 4096 2048405799424 Solaris /usr & Apple ZFS
sdf gpt disk 2048408248320
└─sdf1 gpt part 4096 2048405799424 Solaris /usr & Apple ZFS
sdg gpt disk 2048408248320
└─sdg1 gpt part 4096 2048405799424 Solaris /usr & Apple ZFS
sdh gpt disk 2048408248320
└─sdh1 gpt part 4096 2048405799424 Solaris /usr & Apple ZFS
sdi gpt disk 2048408248320
└─sdi1 gpt part 4096 2048405799424 Solaris /usr & Apple ZFS
sdj gpt disk 2048408248320
└─sdj1 gpt part 4096 2048405799424 Solaris /usr & Apple ZFS
sdk gpt disk 2048408248320
└─sdk1 gpt part 4096 2048405799424 Solaris /usr & Apple ZFS
sdl gpt disk 2048408248320
└─sdl1 gpt part 4096 2048405799424 Solaris /usr & Apple ZFS
sdm gpt disk 2048408248320
└─sdm1 gpt part 4096 2048405799424 Solaris /usr & Apple ZFS
sdn gpt disk 2048408248320
└─sdn1 gpt part 4096 2048405799424 Solaris /usr & Apple ZFS
sdo gpt disk 2048408248320
└─sdo1 gpt part 4096 2048405799424 Solaris /usr & Apple ZFS
sdp gpt disk 2048408248320
└─sdp1 gpt part 4096 2048405799424 Solaris /usr & Apple ZFS
sdq gpt disk 2048408248320
└─sdq1 gpt part 4096 2048405799424 Solaris /usr & Apple ZFS
sdr gpt disk 2048408248320
└─sdr1 gpt part 4096 2048405799424 Solaris /usr & Apple ZFS
sds gpt disk 2048408248320
└─sds1 gpt part 4096 2048405799424 Solaris /usr & Apple ZFS
sdt gpt disk 2048408248320
└─sdt1 gpt part 4096 2048405799424 Solaris /usr & Apple ZFS
sdu gpt disk 2048408248320
└─sdu1 gpt part 4096 2048405799424 Solaris /usr & Apple ZFS
sdv gpt disk 2048408248320
└─sdv1 gpt part 4096 2048405799424 Solaris /usr & Apple ZFS
sdw gpt disk 2048408248320
└─sdw1 gpt part 4096 2048405799424 Solaris /usr & Apple ZFS
sdx gpt disk 2048408248320
└─sdx1 gpt part 4096 2048405799424 Solaris /usr & Apple ZFS
nvme0n1 gpt disk 128035676160
├─nvme0n1p1 gpt part 4096 1048576 BIOS boot
├─nvme0n1p2 gpt part 6144 536870912 EFI System
├─nvme0n1p3 gpt part 34609152 110315773440 Solaris /usr & Apple ZFS
└─nvme0n1p4 gpt part 1054720 17179869184 Linux swap
sudo zpool status -v
pool: Datastore
state: ONLINE
scan: scrub repaired 0B in 00:17:19 with 0 errors on Sun Sep 8 02:17:20 2024
config:
NAME STATE READ WRITE CKSUM
Datastore ONLINE 0 0 0
raidz2-0 ONLINE 0 0 0
a87924ba-ab59-4faa-b4ab-08c4f5176bc7 ONLINE 0 0 0
177cec98-134e-4811-8f76-bcb4c1aff10b ONLINE 0 0 0
b4bfa063-ab5c-4d6f-ab4c-2091eea0ec1a ONLINE 0 0 0
9ea85a97-c1c9-4c6d-8fd8-330495f50c00 ONLINE 0 0 0
198be667-6ff1-4d9d-89d1-421a45ac4692 ONLINE 0 0 0
0dd7a353-bdcc-4a9a-b8e9-8853839661db ONLINE 0 0 0
7c7e5b63-816e-4fa5-a58a-e9b617398e6b ONLINE 0 0 0
89fc77fd-63ef-4cff-a1b6-8a28a182d7b9 ONLINE 0 0 0
9debc364-7057-4d9c-a3af-b72d6a2aa6ea ONLINE 0 0 0
99954aaa-56ec-4dbe-90e6-35215c54ac65 ONLINE 0 0 0
05af729b-40c3-46ed-b18c-3a432f1d1cd7 ONLINE 0 0 0
f2c0f8cc-379f-4e2c-8b19-e5e526fdf160 ONLINE 0 0 0
02ea63b2-b9bf-47b1-85e1-e269d2676490 ONLINE 0 0 0
0af9ef11-a00b-4096-a69c-b1a8648f9296 ONLINE 0 0 0
49db4976-4d64-413d-92ee-3504ca501c9f ONLINE 0 0 0
d5e1162a-693a-491f-8853-d20412717eb4 ONLINE 0 0 0
a6cde78c-b18b-43af-9aea-0b06544212d5 ONLINE 0 0 0
1dbc9345-f2a3-4b01-9c5b-e29adce5073e ONLINE 0 0 0
bc50f56c-8f84-40f3-9477-6947e4b37f99 ONLINE 0 0 0
867e2d02-3ce0-4259-b1af-fcaa839c1111 ONLINE 0 0 0
1912395a-4896-4180-bbd9-502542cc1321 ONLINE 0 0 0
fb3428e8-a431-454d-8e6f-8f4198dc8056 ONLINE 0 0 0
7dc42c54-fd8d-470e-be27-02db2c048b9d ONLINE 0 0 0
9edb5cee-e3ae-4a66-bfd5-1439d6017a52 ONLINE 0 0 0
errors: No known data errors
pool: boot-pool
state: ONLINE
scan: scrub repaired 0B in 00:00:16 with 0 errors on Mon Oct 7 03:45:17 2024
config:
NAME STATE READ WRITE CKSUM
boot-pool ONLINE 0 0 0
nvme0n1p3 ONLINE 0 0 0
errors: No known data errors
sudo zpool import
no pools available to import
In reply to "Since this pool is offline, please describe how it is configured":
The pool is not offline; it is the queued tests that are of the "offline" type, and I have no idea how they got there.
Of note, all of the drives of the same model have the tests queued (a quick way to confirm that across the whole set is sketched below). It might be something to take up with the manufacturer.
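A rough loop along these lines will print the model and any in-progress self-tests for each drive; the /dev/sd{a..x} range just matches my layout and is only an example:

for d in /dev/sd{a..x}; do
  echo "== $d =="
  sudo smartctl -i "$d" | grep 'Device Model'          # confirm it is the same model
  sudo smartctl -l selftest "$d" | grep 'in progress'  # list any self-tests still reported as running
done

For completeness, smartctl -X <device> is the usual way to abort a running self-test from the shell, in case anyone else wants to try it on drives stuck like this.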
Edit:
Odd, I just ran smartctl -x on it again and it now shows:
SMART Extended Self-test Log Version: 1 (1 sectors)
Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error
# 1 Offline Completed without error 00% 1578 -
# 2 Offline Self-test routine in progress 10% 1578 -
# 3 Offline Self-test routine in progress 10% 1578 -
# 4 Offline Self-test routine in progress 10% 1578 -
# 5 Offline Self-test routine in progress 10% 1578 -
# 6 Offline Self-test routine in progress 10% 1578 -
# 7 Offline Self-test routine in progress 10% 1578 -
# 8 Offline Self-test routine in progress 10% 1578 -
# 9 Offline Self-test routine in progress 10% 1578 -
#10 Offline Self-test routine in progress 10% 1578 -
#11 Offline Self-test routine in progress 10% 1578 -
#12 Offline Self-test routine in progress 10% 1578 -
#13 Offline Self-test routine in progress 10% 1578 -
#14 Offline Self-test routine in progress 10% 1578 -
#15 Offline Self-test routine in progress 10% 1578 -
#16 Offline Self-test routine in progress 10% 1578 -
#17 Offline Self-test routine in progress 10% 1578 -
#18 Offline Self-test routine in progress 10% 1578 -
#19 Offline Self-test routine in progress 10% 1578 -
I'll monitor it and see if it gets through them now that it's progressing; it had been hung with 21 in the queue for at least a week.
Disregard that: smartctl -a still shows the output from the initial post, while -x shows the above. I'm not sure what the difference is.
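If I'm reading the smartmontools docs right, -a prints the classic SMART self-test log (which only holds 21 entries, which would explain the "21 in the queue"), while -x also prints the Extended SMART self-test log (GP Log 0x07); the two are stored separately on the drive, so they can disagree like this. They can be queried directly (/dev/sda here is just an example device node):

sudo smartctl -l selftest /dev/sda    # classic self-test log (what -a shows)
sudo smartctl -l xselftest /dev/sda   # extended self-test log (what -x shows)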