I'm Confused: Getting Alerts That a Disk Has Errors, but No Disks in the vdev Are Listed as Bad

Apologies if this has been answered before, but I couldn't find another topic quite the same.

First off, I know very little regarding Linux or TrueNAS, but I'm trying to learn.
I am currently running TrueNAS Community Edition **Version:** 25.04.2.6

My vdev consists of 8 x 4TB drives in a RAIDZ2.

I am getting alerts regarding a failing drive: unreadable / pending sectors.

I have gone to the vdev and all 8 disks are listed as online with zero errors. I have a couple of good spare drives, but which should I believe: the alerts, or the information on the vdev's Manage Disks page?

The dataset is working fine: I can transfer files to it and copy from it with no file corruption.

I have two other copies of all the data in the dataset for that vdev, so it's not a disaster if the dataset gets corrupted, but I'd rather not have to spend the time rebuilding and copying everything again.

I have identified the drive, so I know which one to replace if needed.


Those warnings are coming from the S.M.A.R.T. data that the drives report. Interpreting those warnings is difficult: they are manufacturer-specific and originate inside the low-level firmware of the hard disks. From personal experience, warnings related to unreadable / pending sectors do come with hard disk age. But that's only personal, brand-specific experience.

The rest of the TrueNAS UI is mainly concerned with errors that the file system (ZFS) has found. If you run regular scrubs and there are no errors, then the data is OK from a file system perspective. S.M.A.R.T. reports can be used as a pre-failure indicator to replace drives before they fail from a ZFS perspective, but their interpretation can be difficult.
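
To check that from the shell, a minimal sketch (assuming the pool is named NAS, as shown later in this thread):

sudo zpool status -v NAS   # pool health, last scrub result, and per-disk read/write/checksum error counters
sudo zpool scrub NAS       # start a manual scrub: reads every block and verifies it against its checksum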

If you update to 25.10, those warnings will be gone, as S.M.A.R.T. monitoring was mostly removed in that version.
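
You can still run the tests by hand from the shell, though; a minimal sketch (sdX is a placeholder for the real device letter):

sudo smartctl -t long /dev/sdX       # start an extended self-test; the drive remains usable while it runs
sudo smartctl -l selftest /dev/sdX   # afterwards: print the self-test log with pass/fail results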

I'm leaning towards this not being a drive issue. I followed the flowchart, and as I haven't noticed any issues using the pool, I took the corresponding path. After completing it I ran zpool status, and this is the result:
truenas_admin@truenas[~]$ sudo zpool status
  pool: NAS
 state: ONLINE
  scan: scrub repaired 0B in 03:38:45 with 0 errors on Wed Nov 12 02:20:02 2025
config:

        NAME                                      STATE     READ WRITE CKSUM
        NAS                                       ONLINE       0     0     0
          raidz2-0                                ONLINE       0     0     0
            4f2eaf89-9679-46d2-a063-a31219fc13fe  ONLINE       0     0     0
            9debb533-9c06-453a-a494-edb83af1d08e  ONLINE       0     0     0
            35d02064-4d1e-4cad-84d4-2b6c6c135725  ONLINE       0     0     0
            7fe26726-37c9-4a53-a5c1-6f2f9f87aa2b  ONLINE       0     0     0
            fd175f32-0d32-4596-8d3d-22f8744bef0b  ONLINE       0     0     0
            a9c20199-2cf7-44f3-9e4d-7e6892d0016f  ONLINE       0     0     0
            f3143dde-3e89-4127-bb12-8f989001de8a  ONLINE       0     0     0
            206c87a5-c4d8-4575-b962-a3db19152702  ONLINE       0     0     0

errors: No known data errors

Just noticed the last scrub was a month ago, so I'm running a scrub now. Will see what that comes back with. Thanks for the link to the flowchart.

It looks like you are right. After finally figuring out where to find the SMART test logs, they do indeed show that this disk is failing its SMART tests. I'll replace the disk once the scrub has completed, and resilver (if that's the correct term).

Once the drive is removed I'll test it with Victoria and see how many bad LBAs are present.

`sudo smartctl -x /dev/sdX`, where 'X' is whatever letter is appropriate, will indicate how many bad sectors there are. You can post the output here as formatted text (</> button) if you wish.
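
Since zpool status lists partition UUIDs rather than /dev/sdX letters, here is one quick way (a sketch, not the only way) to match a UUID from the pool listing to a device letter and serial number:

lsblk -o NAME,SIZE,SERIAL,PARTUUID   # match the PARTUUID from 'zpool status' to a device letter and drive serial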

If you have a spare port and a cold spare drive, do the replacement with the old drive still attached to the system, and then send the failing drive for RMA.
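
As a rough sketch of what that replacement looks like at the ZFS level (the pool name is from this thread; the old-device UUID and new device letter are placeholders, and on TrueNAS the GUI's Replace action is the supported route since it also partitions the new disk):

# Replacing while the failing drive is still attached keeps full redundancy
# during the resilver; no offline step is needed.
sudo zpool replace NAS <old-partuuid> /dev/sdY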

As requested:

smartctl 7.4 2023-08-01 r5530 [x86_64-linux-6.12.15-production+truenas] (local build)
Copyright (C) 2002-23, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Device Model:     WDC WD40EDAZ-11SLVB0
Serial Number:    WD-WX72D507J7JS
LU WWN Device Id: 5 0014ee 212b5648f
Firmware Version: 80.00A80
User Capacity:    4,000,787,030,016 bytes [4.00 TB]
Sector Sizes:     512 bytes logical, 4096 bytes physical
Rotation Rate:    5400 rpm
Form Factor:      3.5 inches
TRIM Command:     Available, deterministic, zeroed
Device is:        Not in smartctl database 7.3/5528
ATA Version is:   ACS-3 T13/2161-D revision 5
SATA Version is:  SATA 3.1, 6.0 Gb/s (current: 3.0 Gb/s)
Local Time is:    Mon Dec 15 15:50:31 2025 GMT
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
AAM feature is:   Unavailable
APM feature is:   Unavailable
Rd look-ahead is: Enabled
Write cache is:   Enabled
DSN feature is:   Unavailable
ATA Security is:  Disabled, NOT FROZEN [SEC1]
Wt Cache Reorder: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

General SMART Values:
Offline data collection status:  (0x82) Offline data collection activity
                                        was completed without error.
                                        Auto Offline Data Collection: Enabled.
Self-test execution status:      ( 121) The previous self-test completed having
                                        the read element of the test failed.
Total time to complete Offline
data collection:                (27600) seconds.
Offline data collection
capabilities:                    (0x7b) SMART execute Offline immediate.
                                        Auto Offline data collection on/off support.
                                        Suspend Offline collection upon new
                                        command.
                                        Offline surface scan supported.
                                        Self-test supported.
                                        Conveyance Self-test supported.
                                        Selective Self-test supported.
SMART capabilities:            (0x0003) Saves SMART data before entering
                                        power-saving mode.
                                        Supports SMART auto save timer.
Error logging capability:        (0x01) Error logging supported.
                                        General Purpose Logging supported.
Short self-test routine
recommended polling time:        (   2) minutes.
Extended self-test routine
recommended polling time:        ( 335) minutes.
Conveyance self-test routine
recommended polling time:        (   2) minutes.
SCT capabilities:              (0x3031) SCT Status supported.
                                        SCT Feature Control supported.
                                        SCT Data Table supported.

SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAGS    VALUE WORST THRESH FAIL RAW_VALUE
  1 Raw_Read_Error_Rate     POSR-K   200   200   051    -    0
  3 Spin_Up_Time            POS--K   202   193   021    -    2875
  4 Start_Stop_Count        -O--CK   083   083   000    -    17415
  5 Reallocated_Sector_Ct   PO--CK   200   200   140    -    0
  7 Seek_Error_Rate         -OSR-K   100   253   000    -    0
  9 Power_On_Hours          -O--CK   044   044   000    -    41035
 10 Spin_Retry_Count        -O--CK   100   100   000    -    0
 11 Calibration_Retry_Count -O--CK   100   100   000    -    0
 12 Power_Cycle_Count       -O--CK   099   099   000    -    1419
192 Power-Off_Retract_Count -O--CK   199   199   000    -    1167
193 Load_Cycle_Count        -O--CK   178   178   000    -    67393
194 Temperature_Celsius     -O---K   111   092   000    -    36
196 Reallocated_Event_Count -O--CK   200   200   000    -    0
197 Current_Pending_Sector  -O--CK   200   200   000    -    3
198 Offline_Uncorrectable   ----CK   200   200   000    -    3
199 UDMA_CRC_Error_Count    -O--CK   200   001   000    -    127147
200 Multi_Zone_Error_Rate   ---R--   200   200   000    -    2
                            ||||||_ K auto-keep
                            |||||__ C event count
                            ||||___ R error rate
                            |||____ S speed/performance
                            ||_____ O updated online
                            |______ P prefailure warning

General Purpose Log Directory Version 1
SMART           Log Directory Version 1 [multi-sector log support]
Address    Access  R/W   Size  Description
0x00       GPL,SL  R/O      1  Log Directory
0x01           SL  R/O      1  Summary SMART error log
0x02           SL  R/O      5  Comprehensive SMART error log
0x03       GPL     R/O      6  Ext. Comprehensive SMART error log
0x04       GPL     R/O    256  Device Statistics log
0x04       SL      R/O      8  Device Statistics log
0x06           SL  R/O      1  SMART self-test log
0x07       GPL     R/O      1  Extended self-test log
0x09           SL  R/W      1  Selective self-test log
0x0c       GPL     R/O   2048  Pending Defects log
0x10       GPL     R/O      1  NCQ Command Error log
0x11       GPL     R/O      1  SATA Phy Event Counters log
0x24       GPL     R/O    294  Current Device Internal Status Data log
0x30       GPL,SL  R/O      9  IDENTIFY DEVICE data log
0x80-0x9f  GPL,SL  R/W     16  Host vendor specific log
0xa0-0xa7  GPL,SL  VS      16  Device vendor specific log
0xa8-0xb6  GPL,SL  VS       1  Device vendor specific log
0xb7       GPL,SL  VS      78  Device vendor specific log
0xb9       GPL,SL  VS       4  Device vendor specific log
0xbd       GPL,SL  VS       1  Device vendor specific log
0xc0       GPL,SL  VS       1  Device vendor specific log
0xc1       GPL     VS      93  Device vendor specific log
0xe0       GPL,SL  R/W      1  SCT Command/Status
0xe1       GPL,SL  R/W      1  SCT Data Transfer

SMART Extended Comprehensive Error Log Version: 1 (6 sectors)
Device Error Count: 565 (device log contains only the most recent 24 errors)
    CR     = Command Register
    FEATR  = Features Register
    COUNT  = Count (was: Sector Count) Register
    LBA_48 = Upper bytes of LBA High/Mid/Low Registers ]  ATA-8
    LH     = LBA High (was: Cylinder High) Register    ]   LBA
    LM     = LBA Mid (was: Cylinder Low) Register      ] Register
    LL     = LBA Low (was: Sector Number) Register     ]
    DV     = Device (was: Device/Head) Register
    DC     = Device Control Register
    ER     = Error register
    ST     = Status register
Powered_Up_Time is measured from power on, and printed as
DDd+hh:mm:SS.sss where DD=days, hh=hours, mm=minutes,
SS=sec, and sss=millisec. It "wraps" after 49.710 days.

Error 565 [12] occurred at disk power-on lifetime: 6601 hours (275 days + 1 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER -- ST COUNT  LBA_48  LH LM LL DV DC
  -- -- -- == -- == == == -- -- -- -- --
  04 -- 61 00 00 00 00 00 00 00 00 40 00  Device Fault; Error: ABRT at LBA = 0x00000000 = 0

  Commands leading to the command that caused the error were:
  CR FEATR COUNT  LBA_48  LH LM LL DV DC  Powered_Up_Time  Command/Feature_Name
  -- == -- == -- == == == -- -- -- -- --  ---------------  --------------------
  60 00 01 00 b0 00 00 00 00 00 00 40 00     18:27:58.503  READ FPDMA QUEUED
  ef 00 02 00 00 00 00 00 00 00 00 a0 00     18:27:58.175  SET FEATURES [Enable write cache]
  60 00 01 00 a8 00 00 00 00 00 00 40 00     18:27:28.534  READ FPDMA QUEUED
  ef 00 02 00 00 00 00 00 00 00 00 a0 00     18:27:28.206  SET FEATURES [Enable write cache]
  60 00 01 00 a0 00 00 00 00 00 00 40 00     18:26:58.565  READ FPDMA QUEUED

Error 564 [11] occurred at disk power-on lifetime: 6601 hours (275 days + 1 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER -- ST COUNT  LBA_48  LH LM LL DV DC
  -- -- -- == -- == == == -- -- -- -- --
  04 -- 61 00 00 00 00 00 00 00 00 40 00  Device Fault; Error: ABRT at LBA = 0x00000000 = 0

  Commands leading to the command that caused the error were:
  CR FEATR COUNT  LBA_48  LH LM LL DV DC  Powered_Up_Time  Command/Feature_Name
  -- == -- == -- == == == -- -- -- -- --  ---------------  --------------------
  60 00 01 00 a8 00 00 00 00 00 00 40 00     18:27:28.534  READ FPDMA QUEUED
  ef 00 02 00 00 00 00 00 00 00 00 a0 00     18:27:28.206  SET FEATURES [Enable write cache]
  60 00 01 00 a0 00 00 00 00 00 00 40 00     18:26:58.565  READ FPDMA QUEUED
  ef 00 02 00 00 00 00 00 00 00 00 a0 00     18:26:58.237  SET FEATURES [Enable write cache]
  60 00 01 00 98 00 00 00 00 00 00 40 00     18:26:28.580  READ FPDMA QUEUED

Error 563 [10] occurred at disk power-on lifetime: 6601 hours (275 days + 1 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER -- ST COUNT  LBA_48  LH LM LL DV DC
  -- -- -- == -- == == == -- -- -- -- --
  04 -- 61 00 00 00 00 00 00 00 00 40 00  Device Fault; Error: ABRT at LBA = 0x00000000 = 0

  Commands leading to the command that caused the error were:
  CR FEATR COUNT  LBA_48  LH LM LL DV DC  Powered_Up_Time  Command/Feature_Name
  -- == -- == -- == == == -- -- -- -- --  ---------------  --------------------
  60 00 01 00 a0 00 00 00 00 00 00 40 00     18:26:58.565  READ FPDMA QUEUED
  ef 00 02 00 00 00 00 00 00 00 00 a0 00     18:26:58.237  SET FEATURES [Enable write cache]
  60 00 01 00 98 00 00 00 00 00 00 40 00     18:26:28.580  READ FPDMA QUEUED
  ef 00 02 00 00 00 00 00 00 00 00 a0 00     18:26:28.252  SET FEATURES [Enable write cache]
  60 00 01 00 90 00 00 00 00 00 00 40 00     18:25:58.626  READ FPDMA QUEUED

Error 562 [9] occurred at disk power-on lifetime: 6601 hours (275 days + 1 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER -- ST COUNT  LBA_48  LH LM LL DV DC
  -- -- -- == -- == == == -- -- -- -- --
  04 -- 61 00 00 00 00 00 00 00 00 40 00  Device Fault; Error: ABRT at LBA = 0x00000000 = 0

  Commands leading to the command that caused the error were:
  CR FEATR COUNT  LBA_48  LH LM LL DV DC  Powered_Up_Time  Command/Feature_Name
  -- == -- == -- == == == -- -- -- -- --  ---------------  --------------------
  60 00 01 00 98 00 00 00 00 00 00 40 00     18:26:28.580  READ FPDMA QUEUED
  ef 00 02 00 00 00 00 00 00 00 00 a0 00     18:26:28.252  SET FEATURES [Enable write cache]
  60 00 01 00 90 00 00 00 00 00 00 40 00     18:25:58.626  READ FPDMA QUEUED
  ef 00 02 00 00 00 00 00 00 00 00 a0 00     18:25:58.329  SET FEATURES [Enable write cache]
  60 00 01 00 88 00 00 00 00 00 00 40 00     18:25:28.657  READ FPDMA QUEUED

Error 561 [8] occurred at disk power-on lifetime: 6601 hours (275 days + 1 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER -- ST COUNT  LBA_48  LH LM LL DV DC
  -- -- -- == -- == == == -- -- -- -- --
  04 -- 61 00 00 00 00 00 00 00 00 40 00  Device Fault; Error: ABRT at LBA = 0x00000000 = 0

  Commands leading to the command that caused the error were:
  CR FEATR COUNT  LBA_48  LH LM LL DV DC  Powered_Up_Time  Command/Feature_Name
  -- == -- == -- == == == -- -- -- -- --  ---------------  --------------------
  60 00 01 00 90 00 00 00 00 00 00 40 00     18:25:58.626  READ FPDMA QUEUED
  ef 00 02 00 00 00 00 00 00 00 00 a0 00     18:25:58.329  SET FEATURES [Enable write cache]
  60 00 01 00 88 00 00 00 00 00 00 40 00     18:25:28.657  READ FPDMA QUEUED
  ef 00 02 00 00 00 00 00 00 00 00 a0 00     18:25:28.329  SET FEATURES [Enable write cache]
  60 00 01 00 80 00 00 00 00 00 00 40 00     18:24:58.687  READ FPDMA QUEUED

Error 560 [7] occurred at disk power-on lifetime: 6601 hours (275 days + 1 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER -- ST COUNT  LBA_48  LH LM LL DV DC
  -- -- -- == -- == == == -- -- -- -- --
  04 -- 61 00 00 00 00 00 00 00 00 40 00  Device Fault; Error: ABRT at LBA = 0x00000000 = 0

  Commands leading to the command that caused the error were:
  CR FEATR COUNT  LBA_48  LH LM LL DV DC  Powered_Up_Time  Command/Feature_Name
  -- == -- == -- == == == -- -- -- -- --  ---------------  --------------------
  60 00 01 00 88 00 00 00 00 00 00 40 00     18:25:28.657  READ FPDMA QUEUED
  ef 00 02 00 00 00 00 00 00 00 00 a0 00     18:25:28.329  SET FEATURES [Enable write cache]
  60 00 01 00 80 00 00 00 00 00 00 40 00     18:24:58.687  READ FPDMA QUEUED
  ef 00 02 00 00 00 00 00 00 00 00 a0 00     18:24:58.391  SET FEATURES [Enable write cache]
  60 00 01 00 78 00 00 00 00 00 00 40 00     18:24:28.718  READ FPDMA QUEUED

Error 559 [6] occurred at disk power-on lifetime: 6601 hours (275 days + 1 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER -- ST COUNT  LBA_48  LH LM LL DV DC
  -- -- -- == -- == == == -- -- -- -- --
  04 -- 61 00 00 00 00 00 00 00 00 40 00  Device Fault; Error: ABRT at LBA = 0x00000000 = 0

  Commands leading to the command that caused the error were:
  CR FEATR COUNT  LBA_48  LH LM LL DV DC  Powered_Up_Time  Command/Feature_Name
  -- == -- == -- == == == -- -- -- -- --  ---------------  --------------------
  60 00 01 00 80 00 00 00 00 00 00 40 00     18:24:58.687  READ FPDMA QUEUED
  ef 00 02 00 00 00 00 00 00 00 00 a0 00     18:24:58.391  SET FEATURES [Enable write cache]
  60 00 01 00 78 00 00 00 00 00 00 40 00     18:24:28.718  READ FPDMA QUEUED
  ef 00 02 00 00 00 00 00 00 00 00 a0 00     18:24:28.390  SET FEATURES [Enable write cache]
  60 00 01 00 70 00 00 00 00 00 00 40 00     18:23:58.733  READ FPDMA QUEUED

Error 558 [5] occurred at disk power-on lifetime: 6601 hours (275 days + 1 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER -- ST COUNT  LBA_48  LH LM LL DV DC
  -- -- -- == -- == == == -- -- -- -- --
  04 -- 61 00 00 00 00 00 00 00 00 40 00  Device Fault; Error: ABRT at LBA = 0x00000000 = 0

  Commands leading to the command that caused the error were:
  CR FEATR COUNT  LBA_48  LH LM LL DV DC  Powered_Up_Time  Command/Feature_Name
  -- == -- == -- == == == -- -- -- -- --  ---------------  --------------------
  60 00 01 00 78 00 00 00 00 00 00 40 00     18:24:28.718  READ FPDMA QUEUED
  ef 00 02 00 00 00 00 00 00 00 00 a0 00     18:24:28.390  SET FEATURES [Enable write cache]
  60 00 01 00 70 00 00 00 00 00 00 40 00     18:23:58.733  READ FPDMA QUEUED
  ef 00 02 00 00 00 00 00 00 00 00 a0 00     18:23:58.437  SET FEATURES [Enable write cache]
  60 00 01 00 60 00 00 00 00 00 00 40 00     18:23:28.263  READ FPDMA QUEUED

SMART Extended Self-test Log Version: 1 (1 sectors)
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Short offline       Completed: read failure       90%     41031         5160848096
# 2  Short offline       Completed: read failure       90%     41010         5160848096
# 3  Short offline       Completed: read failure       90%     40985         5160848104
# 4  Short offline       Completed: read failure       90%     40817         5160848096
# 5  Short offline       Completed: read failure       90%     40649         5160848096
# 6  Short offline       Completed: read failure       90%     40481         5160848096
# 7  Extended offline    Completed: read failure       90%     40336         5160848096
# 8  Short offline       Completed: read failure       90%     40314         5160848096
# 9  Short offline       Completed: read failure       90%     40164         5160848096
#10  Short offline       Completed: read failure       90%     40145         5160848096
#11  Short offline       Completed without error       00%     39977         -
#12  Short offline       Completed without error       00%     39836         -
#13  Short offline       Completed without error       00%     39668         -
#14  Extended offline    Completed: read failure       90%     39617         5465596840
#15  Short offline       Completed without error       00%     39500         -
#16  Conveyance offline  Completed without error       00%     39483         -
#17  Short offline       Completed without error       00%     39332         -
#18  Conveyance offline  Completed without error       00%     39315         -

SMART Selective self-test log data structure revision number 1
SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
1        0        0  Not_testing
2        0        0  Not_testing
3        0        0  Not_testing
4        0        0  Not_testing
5        0        0  Not_testing
Selective self-test flags (0x0):
After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.

SCT Status Version:                  3
SCT Version (vendor specific):       258 (0x0102)
Device State:                        Active (0)
Current Temperature:                    36 Celsius
Power Cycle Min/Max Temperature:     26/36 Celsius
Lifetime    Min/Max Temperature:     15/55 Celsius
Under/Over Temperature Limit Count:   0/0
Vendor specific:
01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00

SCT Temperature History Version:     2
Temperature Sampling Period:         1 minute
Temperature Logging Interval:        1 minute
Min/Max recommended Temperature:      0/65 Celsius
Min/Max Temperature Limit:           -41/85 Celsius
Temperature History Size (Index):    478 (433)

Index    Estimated Time   Temperature Celsius
[478 one-minute samples from 2025-12-15 07:53 to 15:50, trimmed for readability: temperatures ranged from 26 to 36 Celsius, with one unreadable sample]

SCT Error Recovery Control command not supported

Device Statistics (GP Log 0x04)
Page  Offset Size        Value Flags Description
0x01  =====  =               =  ===  == General Statistics (rev 1) ==
0x01  0x008  4            1419  ---  Lifetime Power-On Resets
0x01  0x010  4           41035  ---  Power-on Hours
0x01  0x018  6    157063662256  ---  Logical Sectors Written
0x01  0x020  6       638051498  ---  Number of Write Commands
0x01  0x028  6    562379631960  ---  Logical Sectors Read
0x01  0x030  6      1349406468  ---  Number of Read Commands
0x01  0x038  6      1697111936  ---  Date and Time TimeStamp
0x03  =====  =               =  ===  == Rotating Media Statistics (rev 1) ==
0x03  0x008  4           32576  ---  Spindle Motor Power-on Hours
0x03  0x010  4           18968  ---  Head Flying Hours
0x03  0x018  4           68561  ---  Head Load Events
0x03  0x020  4               0  ---  Number of Reallocated Logical Sectors
0x03  0x028  4             497  ---  Read Recovery Attempts
0x03  0x030  4               0  ---  Number of Mechanical Start Failures
0x03  0x038  4              72  ---  Number of Realloc. Candidate Logical Sectors
0x03  0x040  4            1167  ---  Number of High Priority Unload Events
0x04  =====  =               =  ===  == General Errors Statistics (rev 1) ==
0x04  0x008  4               0  ---  Number of Reported Uncorrectable Errors
0x04  0x010  4           11395  ---  Resets Between Cmd Acceptance and Completion
0x05  =====  =               =  ===  == Temperature Statistics (rev 1) ==
0x05  0x008  1              36  ---  Current Temperature
0x05  0x010  1              31  ---  Average Short Term Temperature
0x05  0x018  1              32  ---  Average Long Term Temperature
0x05  0x020  1              55  ---  Highest Temperature
0x05  0x028  1              15  ---  Lowest Temperature
0x05  0x030  1              51  ---  Highest Average Short Term Temperature
0x05  0x038  1              19  ---  Lowest Average Short Term Temperature
0x05  0x040  1              43  ---  Highest Average Long Term Temperature
0x05  0x048  1              23  ---  Lowest Average Long Term Temperature
0x05  0x050  4               0  ---  Time in Over-Temperature
0x05  0x058  1              65  ---  Specified Maximum Operating Temperature
0x05  0x060  4               0  ---  Time in Under-Temperature
0x05  0x068  1               0  ---  Specified Minimum Operating Temperature
0x06  =====  =               =  ===  == Transport Statistics (rev 1) ==
0x06  0x008  4          143234  ---  Number of Hardware Resets
0x06  0x010  4          145054  ---  Number of ASR Events
0x06  0x018  4          127147  ---  Number of Interface CRC Errors
0xff  =====  =               =  ===  == Vendor Specific Statistics (rev 1) ==
0xff  0x008  7               0  ---  Vendor Specific
0xff  0x010  7               0  ---  Vendor Specific
0xff  0x018  7               0  ---  Vendor Specific
                                |||_ C monitored condition met
                                ||__ D supports DSN
                                |___ N normalized value

Pending Defects log (GP Log 0x0c)
Index                LBA    Hours
    0         3116242032        -
    1         3116242033        -
    2         3116242034        -
    3         3116242035        -
    4         3116242036        -
    5         3116242037        -
    6         3116242038        -
    7         3116242039        -
    8         3694750368        -
    9         3694750369        -
   10         3694750370        -
   11         3694750371        -
   12         3694750372        -
   13         3694750373        -
   14         3694750374        -
   15         3694750375        -
   16         3702457936        -
   17         3702457937        -
   18         3702457938        -
   19         3702457939        -
   20         3702457940        -
   21         3702457941        -
   22         3702457942        -
   23         3702457943        -
   24         5160848096        -
   25         5160848097        -
   26         5160848098        -
   27         5160848099        -
   28         5160848100        -
   29         5160848101        -
   30         5160848102        -
… (25 entries not shown)

SATA Phy Event Counters (GP Log 0x11)
ID      Size     Value  Description
0x0001  2            0  Command failed due to ICRC error
0x0002  2            5  R_ERR response for data FIS
0x0003  2            5  R_ERR response for device-to-host data FIS
0x0004  2            0  R_ERR response for host-to-device data FIS
0x0005  2            0  R_ERR response for non-data FIS
0x0006  2            0  R_ERR response for device-to-host non-data FIS
0x0007  2            0  R_ERR response for host-to-device non-data FIS
0x0008  2            0  Device-to-host non-data FIS retries
0x0009  2           15  Transition from drive PhyRdy to drive PhyNRdy
0x000a  2           17  Device-to-host register FISes sent due to a COMRESET
0x000b  2            0  CRC errors within host-to-device FIS
0x000d  2            0  Non-CRC errors within host-to-device FIS
0x000f  2            0  R_ERR response for host-to-device data FIS, CRC
0x0012  2            0  R_ERR response for host-to-device non-data FIS, CRC
0x8000  4        21743  Vendor specific

Looks like too many pending sectors to reallocate them all.


@storeman A few things first:

  1. List the exact error, all of it. If you only provide a tiny bit of data, then you are asking for advice based on assumptions. Don't do that; it can lead you down the wrong path.
  2. A drive failure message does not mean a ZFS failure, and a ZFS failure does not mean a drive failure. They may be related, but linking them together without verifying all the indications is problematic.

Not really difficult at all.

This is not the solution, not even close. Why?

You have drive errors, and 'smartd' was telling you that you have them. If you ignore them, you will likely end up in trouble. Also, while the GUI support for SMART is gone in 25.10, it still does some SMART checks (an in-house version). The problem does not just go away.

A drive error now does not mean your drive has failed, but it also doesn't mean it isn't on the edge of failing.

You were given good advice by @dan and I recommend you follow it. The flowcharts are pretty easy to follow; I have not heard a single complaint, nor a recommendation on how to improve them. But if you find something to improve, I'm happy to make changes for the better. Use the flowcharts: they are likely to expose a problem.

EDIT: The drive data you just posted states that you have a drive failure. If it cannot complete a self-test, then it has failed.

Short offline Completed: read failure 90% 41031 5160848096

…minus the “formatted text” part for readability.

Still, the drive has lots of CRC errors and consistently fails the short :scream: SMART test, so it has far more trouble than just three sectors going bad. Act swiftly!


I know there wasn't much info, and as I stated, I know very little about TrueNAS & Linux. I mistakenly took the ZFS health as meaning the drive was healthy; I have now learned that it just means the filesystem is healthy. I followed the flowchart along the route that applied to my system. I wasn't noticing any problems with reading or writing data, and tried the steps indicated in the flowchart.
After reading the reply from bacon I went back onto the system and eventually found the SMART logs. At that time I was running a scrub (as indicated in the flowchart). Once that had completed successfully, I offlined the faulty drive, removed it, and added my replacement disk. Next I used Replace Disk, selected the drive, and it is now resilvering.

The issue I was having was the alerts stating 'Currently unreadable (pending) sectors.'
I didn't know the CLI commands to get the info and had to search for how to run the commands in the flowchart. Once I figured that out I was good to go.

After reading the response from Etorix I learned a new command and, as requested, posted the results. While I don't fully understand every detail of them, even a noob like me could see that SMART wasn't particularly happy with the drive's health.

I couldn't furnish much info in my original post because I didn't really know a great deal then. Also, it wouldn't have been a huge inconvenience if I had lost the whole pool of data, as I have multiple backups which are regularly checked. I just didn't want to let it get that far, and the good folks on the forum were able to help me learn and understand a tiny bit more.
Remember, everyone was as clueless as I am at some point in their past. I'll never be as knowledgeable as many people here, but I'm slightly more knowledgeable now.

Thanks to all who pitched in to help; it's very much appreciated.


Learning is good, and backups are excellent.
As for drive health monitoring, we unfortunately agree that TrueNAS is not particularly helpful…

I used that icon and it totally messed it up, so I pasted it in plain text. Formatted, there were lots of little snippets on a single line, requiring a lot of horizontal scrolling.

Ah… This must be an effect of posting across systems which use different line breaks.

I didn't claim it to be a solution; just a statement of fact.

Attributes 196 (Reallocated_Event_Count) and 197 (Current_Pending_Sector) are not monitored in 25.10; only 187 is monitored. If you have evidence otherwise, I'd be happy to be proven wrong.
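
If you want to watch those attributes yourself from the shell, a small sketch (the device letter is a placeholder):

sudo smartctl -A /dev/sdX | grep -E 'Reallocated|Pending|Uncorrectable'   # shows attributes 5, 196, 197 and 198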

I think that might be an SMR drive. There are known issues when using ZFS with SMR drives; as far as I know, they mostly manifest during a resilver, where SMR drives perform horribly.
For HDDs you want CMR drives. There was a huge fiasco when Western Digital sneaked SMR drives into the WD Red lineup.

Note that this SMR issue is likely unrelated to your original issue.

Link for further info: WD Red SMR vs CMR Tested Avoid Red SMR - ServeTheHome
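
One rough heuristic (not a guarantee): a spinning disk that advertises TRIM, as the WD40EDAZ does in the output above, is generally drive-managed SMR, because CMR hard disks have no use for TRIM:

sudo smartctl -i /dev/sdX | grep -iE 'rotation|trim'   # an HDD reporting TRIM support is a strong SMR hint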


It was a spare drive I bought without even considering whether it was CMR or not; I can't remember what it was bought for. Most of my drives are WD Red Pro, a couple of WD Gold, and Toshiba NAS drives. When I built my TrueNAS box I used that drive with 3 others as a manual mirror of the data pool. I later changed the pool to RAIDZ2 for a little extra storage space; I had the redundancy and other backups of the same data. The reason for building the TrueNAS box was as a Plex server. The QNAP NAS couldn't quite cut it for me, so I took the WD Reds out of it and sold it on to cover most of the cost of getting my current setup working.
After discovering the difference between CMR & SMR, I now avoid SMR drives regardless of the use they are going to get. I'm looking to get better replacements for a couple of mediocre disks as funds allow.

Easy fix; now it looks good again.

And yes, learning is good. We all started somewhere.

I'm not arguing that point. My point was that there is a problem, and moving to a different version which hides the alarms is not a good thing. I do not agree with the iX adage of "replace it when it has totally failed".

Home users need to be smarter than that; my pockets are not that deep and filled with money. If you were a business with a case of drives ready for use in a large server system, and that is what you can afford, then you can do that. I understand it from a corporate perspective: paying a person to troubleshoot whether a drive is failing costs money. But that is a corporate mindset, not a home-user mindset.

It is an SMR drive; good catch pointing it out.


I can only speak from personal experience, but I have had multiple Western Digital drives with 197 Current_Pending_Sector "errors". I never had a ZFS error, and the drives never failed even after years of further use. Even overwriting the entire drive with random data doesn't get rid of the pending sectors; nobody seems to know how to clear them on those drives, so who knows. Again, just anecdotal evidence; your mileage may vary.
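
For reference, the commonly suggested (and destructive) approach is to overwrite the affected LBA directly so the firmware can remap it. A sketch using an LBA from the self-test log above; only sensible on a drive already pulled from the pool, and, as noted, it apparently doesn't help on every model:

sudo hdparm --read-sector 5160848096 /dev/sdX                                 # confirm the sector really errors out
sudo hdparm --write-sector 5160848096 --yes-i-know-what-i-am-doing /dev/sdX   # overwrite it; destroys that sector's contents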

I do think I should have suggested looking at the full smartctl -x output before jumping to any conclusions; I apologize for that.

Now, the failed self-tests, those would concern me. It's also concerning that the failing self-tests apparently didn't trigger an alert.
