Help Determining If Drive Is Bad

I have TrueNAS SCALE 23.10.2 running on a small SBC with a Celeron, and it is reporting that a drive keeps failing its SMART tests. It is a brand-new Seagate IronWolf drive. I have a hard time telling what the SMART tests actually mean: in one spot I see PASSED, but in another I see unreadable sectors. Below is the output from the CLI. I would love to know what to look for to determine what is bad, or whether unreadable sectors are in general a clear enough indicator that the drive is bad and I should send it in for RMA.

smartctl 7.4 2023-08-01 r5530 [x86_64-linux-6.1.74-production+truenas] (local build)
Copyright (C) 2002-23, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Family:     Seagate IronWolf
Device Model:     ST12000VN0008-2YS101
Serial Number:    ZRT11RLF
LU WWN Device Id: 5 000c50 0e7edd00d
Firmware Version: SC60
User Capacity:    12,000,138,625,024 bytes [12.0 TB]
Sector Sizes:     512 bytes logical, 4096 bytes physical
Rotation Rate:    7200 rpm
Form Factor:      3.5 inches
Device is:        In smartctl database 7.3/5528
ATA Version is:   ACS-4 (minor revision not indicated)
SATA Version is:  SATA 3.3, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is:    Wed Apr 17 19:55:07 2024 CDT
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

General SMART Values:
Offline data collection status:  (0x82) Offline data collection activity
                                        was completed without error.
                                        Auto Offline Data Collection: Enabled.
Self-test execution status:      ( 121) The previous self-test completed having
                                        the read element of the test failed.
Total time to complete Offline 
data collection:                (  575) seconds.
Offline data collection
capabilities:                    (0x7b) SMART execute Offline immediate.
                                        Auto Offline data collection on/off support.
                                        Suspend Offline collection upon new
                                        command.
                                        Offline surface scan supported.
                                        Self-test supported.
                                        Conveyance Self-test supported.
                                        Selective Self-test supported.
SMART capabilities:            (0x0003) Saves SMART data before entering
                                        power-saving mode.
                                        Supports SMART auto save timer.
Error logging capability:        (0x01) Error logging supported.
                                        General Purpose Logging supported.
Short self-test routine 
recommended polling time:        (   1) minutes.
Extended self-test routine
recommended polling time:        (1065) minutes.
Conveyance self-test routine
recommended polling time:        (   2) minutes.
SCT capabilities:              (0x50bd) SCT Status supported.
                                        SCT Error Recovery Control supported.
                                        SCT Feature Control supported.
                                        SCT Data Table supported.

SMART Attributes Data Structure revision number: 10
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x000f   084   064   044    Pre-fail  Always       -       241363960
  3 Spin_Up_Time            0x0003   090   090   000    Pre-fail  Always       -       0
  4 Start_Stop_Count        0x0032   100   100   020    Old_age   Always       -       11
  5 Reallocated_Sector_Ct   0x0033   100   100   010    Pre-fail  Always       -       0
  7 Seek_Error_Rate         0x000f   085   060   045    Pre-fail  Always       -       289575574
  9 Power_On_Hours          0x0032   099   099   000    Old_age   Always       -       1020
 10 Spin_Retry_Count        0x0013   100   100   097    Pre-fail  Always       -       0
 12 Power_Cycle_Count       0x0032   100   100   020    Old_age   Always       -       11
 18 Head_Health             0x000b   100   100   050    Pre-fail  Always       -       0
187 Reported_Uncorrect      0x0032   100   100   000    Old_age   Always       -       0
188 Command_Timeout         0x0032   100   100   000    Old_age   Always       -       0
190 Airflow_Temperature_Cel 0x0022   072   050   000    Old_age   Always       -       28 (Min/Max 24/34)
192 Power-Off_Retract_Count 0x0032   100   100   000    Old_age   Always       -       2
193 Load_Cycle_Count        0x0032   100   100   000    Old_age   Always       -       62
194 Temperature_Celsius     0x0022   028   050   000    Old_age   Always       -       28 (0 17 0 0 0)
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       1
198 Offline_Uncorrectable   0x0010   100   100   000    Old_age   Offline      -       1
199 UDMA_CRC_Error_Count    0x003e   200   200   000    Old_age   Always       -       0
200 Pressure_Limit          0x0023   100   100   001    Pre-fail  Always       -       0
240 Head_Flying_Hours       0x0000   100   100   000    Old_age   Offline      -       1019h+35m+57.420s
241 Total_LBAs_Written      0x0000   100   253   000    Old_age   Offline      -       13611496355
242 Total_LBAs_Read         0x0000   100   253   000    Old_age   Offline      -       53477712797

SMART Error Log Version: 1
ATA Error Count: 1
        CR = Command Register [HEX]
        FR = Features Register [HEX]
        SC = Sector Count Register [HEX]
        SN = Sector Number Register [HEX]
        CL = Cylinder Low Register [HEX]
        CH = Cylinder High Register [HEX]
        DH = Device/Head Register [HEX]
        DC = Device Command Register [HEX]
        ER = Error register [HEX]
        ST = Status register [HEX]
Powered_Up_Time is measured from power on, and printed as
DDd+hh:mm:SS.sss where DD=days, hh=hours, mm=minutes,
SS=sec, and sss=millisec. It "wraps" after 49.710 days.

Error 1 occurred at disk power-on lifetime: 1020 hours (42 days + 12 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER ST SC SN CL CH DH
  -- -- -- -- -- -- --
  04 53 00 00 00 00 00  Error: ABRT

  Commands leading to the command that caused the error were:
  CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
  -- -- -- -- -- -- -- --  ----------------  --------------------
  b0 d4 00 81 4f c2 00 00  11d+02:51:26.840  SMART EXECUTE OFF-LINE IMMEDIATE
  b0 d0 01 00 4f c2 00 00  11d+02:51:26.836  SMART READ DATA
  ec 00 01 00 00 00 00 00  11d+02:51:26.818  IDENTIFY DEVICE
  ec 00 01 00 00 00 00 00  11d+02:51:26.817  IDENTIFY DEVICE
  ea 00 00 00 00 00 a0 00  11d+02:51:24.292  FLUSH CACHE EXT

SMART Self-test log structure revision number 1
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Short captive       Completed: read failure       90%      1020         -
# 2  Short offline       Completed: read failure       90%      1020         -
# 3  Short offline       Completed: read failure       90%      1005         -
# 4  Short offline       Completed: read failure       90%       981         -
# 5  Extended offline    Completed: read failure       10%       962         -
# 6  Extended offline    Interrupted (host reset)      30%       941         -
# 7  Extended offline    Completed: read failure       10%       921         -
# 8  Extended offline    Completed: read failure       10%       897         -
# 9  Extended offline    Completed: read failure       10%       873         -
#10  Extended offline    Completed: read failure       10%       849         -
#11  Extended offline    Completed: read failure       10%       832         -
#12  Extended offline    Completed: read failure       10%       801         -
#13  Extended offline    Completed: read failure       10%       777         -
#14  Extended offline    Interrupted (host reset)      00%       753         -
#15  Extended offline    Completed: read failure       10%       733         -
#16  Extended offline    Completed: read failure       10%       709         -
#17  Extended offline    Completed: read failure       10%       685         -
#18  Extended offline    Completed: read failure       10%       668         -
#19  Extended offline    Completed: read failure       10%       637         -
#20  Extended offline    Completed: read failure       10%       613         -
#21  Extended offline    Completed: read failure       10%       589         -

SMART Selective self-test log data structure revision number 1
 SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
    1        0        0  Not_testing
    2        0        0  Not_testing
    3        0        0  Not_testing
    4        0        0  Not_testing
    5        0        0  Not_testing
Selective self-test flags (0x0):
  After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.

The above only provides legacy SMART information - try 'smartctl -x' for more

1000 hours of run time.

Failing SMART tests.

RMA.

They say “infant mortality” is not a thing in HDs anymore. I don’t buy it. I just RMA’d a 1,000-hour HD last week that had passed burn-in. Seems like the bathtub curve is alive and well.

4 Likes

If the drive is failing the SMART test, why does TrueNAS report the pool as being fine?

Because the pool is fine?

When was the last scrub? Try running one.

The pool would only be affected if a sector that is in use is bad. If your HDs are not full then the bad sector might not be in use.

Failing SMART tests are reason to RMA a drive before the drive fails completely.
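
To kick off a scrub from the shell it’s a one-liner, something like this (where tank is just a placeholder for your actual pool name):

zpool scrub tank     # starts a scrub of the whole pool in the background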

1 Like

I still had about 5 TB free, which may be why it was showing as fine. I just replaced the drive, so I am not sure I can still see when the last scrub was run.

Scrub is per pool.

Not sure how to see the status in the GUI in SCALE, but from the shell, zpool status will show the status of the last scrub or resilver operation.
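
Roughly like this (again, tank is a placeholder pool name); the scan: line is the part that reports the last scrub or resilver, something along the lines of:

zpool status -v tank
# scan: scrub repaired 0B in 10:23:45 with 0 errors on Sun Apr 14 03:23:45 2024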

1 Like

Is there a way to show past scrubs? Since it is currently resilvering, it just shows that instead.

No, but if your notifications are set up properly, you’d have gotten warning messages if something had come up.

SMART tests are there to warn you, based on common statistics, that a drive might be about to go bad. The severity of each error varies, so it’s worth reading a good guide like the one from @joeschmuck over here in the old forum.

(There may be a more recent one over here in the new forum as well.)

Depending on the error, I offline, pull and replace the drive ASAP, then resilver. The drive is usually still 100% responsive, but there is no reason to tempt fate. That’s why it is also important to have qualified spare drives: give yourself the luxury of a fully protected NAS while the RMA process takes its time.

1 Like

I don’t remember seeing any scrub errors, just that the drive had unreadable sectors.

I’ll take a look at the guide. What do you mean by a qualified replacement? In my case I had a spare of the same drive (although untested).

Qualified means I have put the drive through multiple tests to weed out infant deaths before the drive becomes part of the pool. Usually that is a short SMART test and a long SMART test, followed by badblocks and another long SMART test. See here.
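
In command form the sequence looks roughly like this; /dev/sdX is a placeholder for the device, and note that badblocks -w is destructive, so only run it on a drive with no data on it:

smartctl -t short /dev/sdX        # quick short self-test first
smartctl -t long /dev/sdX         # full-surface long self-test
badblocks -b 4096 -ws /dev/sdX    # destructive write+verify of every block; takes many hours
smartctl -t long /dev/sdX         # long self-test again after the stress
smartctl -a /dev/sdX              # then review the attributes and self-test log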

@spearfoot wrote a nice script to automate it all. I have yet to use it but I reckon it works.

Running these tests is no guarantee that the spare will function as intended under actual use, but every sector / byte / block has been tested at least once. If the drive doesn’t freak out after many hours of hot, grinding badblocks work, then chances are it’ll take to a nicely cooled NAS with largely dormant data like mine and call it a vacation.

1 Like

Why do you assume there’s a necessary relationship between the two things?

Your drive hasn’t passed a single SMART self-test in the roughly 430 hours of runtime the log covers; not one test in it has passed. And because the log only keeps the last 21 tests, we can’t see whether the drive has ever passed one. It’s long since time to RMA the drive.
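
If you want to keep an eye on just those two things on the replacement (device name is a placeholder), these show the self-test log and the attribute table on their own:

smartctl -l selftest /dev/sdX     # the self-test log shown above
smartctl -A /dev/sdX              # attributes; watch 5, 187, 197 and 198 in particular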

And thoroughly test a drive before putting it into service. The method I generally follow is here:
https://www.familybrown.org/dokuwiki/doku.php?id=fester112:hvalid_hdd

2 Likes

Definitely going to test out the replacement drive. I am planning to build another NAS in the near future and will test those drives as well. Do the burn-in tests use a lot of CPU resources? I have about 6 or 7 drives I will need to test and would love to do them all at once.

1 Like

I am still fairly new to the NAS world, let alone TrueNAS, so I figured if a drive was bad, the pool would throw some kind of warning as well.

Definitely going to stress test drives as I receive them from now on to make sure they are good drives, or at least have a better chance of being good.

“Pool health” is logical: Data is valid, and has the required level of redundancy.
“Drive health” is physical: Drive is working without defect.

ZFS will strive to keep data valid even if drives are partially failing.

Burn-in is not CPU intensive. You can test as many drives in parallel as you want, using a dedicated tmux session for each drive if using badblocks, or test the entire array with solnet-array (read-only, non-destructive).

The first burn-in guide I listed shows you how to use tmux to run the badblocks test in parallel. It is disk intensive, i.e., about 190 MB/s per disk. My CPUs seemed to be OK (even the C2750 in my miniXL).
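
A rough sketch of the tmux part, with device names and the session name as placeholders (and again, badblocks -w wipes the drive):

tmux new -s burnin                           # keeps the runs alive if SSH drops
badblocks -b 4096 -ws -o sdb.log /dev/sdb    # first drive in the first window
# Ctrl-b c opens a new window; start one badblocks run per drive, e.g.
badblocks -b 4096 -ws -o sdc.log /dev/sdc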

Practically nothing.

They can saturate your I/O resources, though. I’ve run burn-ins simultaneously on 8 disks before.

And just use spearfoot’s script. It does the badblocks and long testing, etc. It skips the initial long test these days.

I noticed I have the same drive and the same issues!
I opened a topic here: Critical sector errors for one drive - #11 by Nikotine
Looks like this type of drive is sh*t…

Usually I hear a lot of good things about Seagate drives. Maybe a bad batch? When was your drive manufactured?