I’m running TrueNAS SCALE and have a pool that TrueNAS reports as degraded. However, the two disks in the pool pass both short and extended SMART tests. How can I tell what the issue is?
zpool status is a good start.
Oh, and post your system specs - cos otherwise we are guessing.
TrueNAS 24.10.0
ASRock Rack X470D4U w/ AMD 3600X
32GB ECC RAM
Pool in question is two 6TB spinners.
```
  pool: butternut
 state: DEGRADED
status: One or more devices are faulted in response to persistent errors.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Replace the faulted device, or use 'zpool clear' to mark the device
        repaired.
  scan: scrub repaired 0B in 09:23:07 with 0 errors on Sun Jun  1 09:23:11 2025
config:

        NAME                                      STATE     READ WRITE CKSUM
        butternut                                 DEGRADED     0     0     0
          mirror-0                                DEGRADED     0     0     0
            d0a3fdb5-fa66-11eb-a822-7085c2fdcbaa  ONLINE       0     0     0
            04ace58e-2d8b-4caf-a1d1-3f0fb5a6c499  FAULTED     20     0     7  too many errors

errors: No known data errors
```
Mind providing the output of those tests? I can pass an exam with a 51/100; wouldn’t necessarily mean I’m a great rocket surgeon.
If you want to risk it & see if it was just a random flap:
`zpool clear <poolname>`
But considering it is a mirror with only 1 disk of redundancy, it might be worth investigating further.
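If you do go that route, here's a rough sketch with your pool name filled in from the zpool status above (a sketch, not gospel):

```
# clear the error counters; the FAULTED disk should rejoin the mirror
zpool clear butternut

# confirm it shows ONLINE again and watch for new READ/WRITE/CKSUM counts
zpool status -v butternut
```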
As for posting images, check your DMs. Usually a bot sends you a tutorial & once you get that done, you unlock some features & trust on the forum.
Edit: also considering you have read & checksum errors in your zpool status, this is indeed a degraded pool with errors.
How can I print the SMART output? I’m just looking at the results in the GUI, but I’m guessing there is more info than that somewhere. In the GUI on the pool, it says “Failed S.M.A.R.T. Tests: 0”
Okay, sounds like a new disk or two might be in my future. Anything I should check before I start to replace disks?
Here is output of smartctl -l selftest /dev/ada0
edit - I have no insert key on my keyboard so I can’t copy it, nor am I allowed to upload images yet. Here’s an imgur link.
edit - I can’t share links either.
Switched back to my Mac. Here is the output, with the link in the output removed so I can post it.
```
root@truenas[~]# smartctl -l selftest /dev/sde

=== START OF READ SMART DATA SECTION ===
SMART Self-test log structure revision number 1
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Extended offline    Completed without error       00%      37379        -
# 2  Short offline       Completed without error       00%      21887        -
```
Check your private messages; a bot should have sent you a tutorial, which once complete you can post images/links
Edit:
Output of:
smartctl -a /dev/sde
Plx & ty
[quote=“Fleshmauler, post:8, topic:45371, full:true”]
Check your private messages; a bot should have sent you a tutorial, which once complete you can post images/links[/quote]
I did get the bot message, but no mention of a tutorial in it. It links to the trust levels, and that documentation page says even at trust level 0 where all users start, you should be able to post up to 1 image and 2 links. So, I’m not sure what’s going on. At least I can just copy/paste from my Mac.
```
root@truenas[~]# smartctl -a /dev/sde
=== START OF INFORMATION SECTION ===
Model Family: Western Digital Ultrastar (He10/12)
Device Model: WDC WD80EMAZ-00WJTA0
Serial Number: 7HKJ8SMN
LU WWN Device Id: 5 000cca 257f1ad57
Firmware Version: 83.H0A83
User Capacity: 8,001,563,222,016 bytes [8.00 TB]
Sector Sizes: 512 bytes logical, 4096 bytes physical
Rotation Rate: 5400 rpm
Form Factor: 3.5 inches
Device is: In smartctl database 7.3/5706
ATA Version is: ACS-2, ATA8-ACS T13/1699-D revision 4
SATA Version is: SATA 3.2, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is: Sun Jun 8 17:04:24 2025 PDT
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED
General SMART Values:
Offline data collection status: (0x82) Offline data collection activity
was completed without error.
Auto Offline Data Collection: Enabled.
Self-test execution status: ( 0) The previous self-test routine completed
without error or no self-test has ever
been run.
Total time to complete Offline
data collection: ( 93) seconds.
Offline data collection
capabilities: (0x5b) SMART execute Offline immediate.
Auto Offline data collection on/off support.
Suspend Offline collection upon new
command.
Offline surface scan supported.
Self-test supported.
No Conveyance Self-test supported.
Selective Self-test supported.
SMART capabilities: (0x0003) Saves SMART data before entering
power-saving mode.
Supports SMART auto save timer.
Error logging capability: (0x01) Error logging supported.
General Purpose Logging supported.
Short self-test routine
recommended polling time: ( 2) minutes.
Extended self-test routine
recommended polling time: (1114) minutes.
SCT capabilities: (0x003d) SCT Status supported.
SCT Error Recovery Control supported.
SCT Feature Control supported.
SCT Data Table supported.
SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x000b   100   100   016    Pre-fail  Always       -       0
  2 Throughput_Performance  0x0004   129   129   054    Old_age   Offline      -       112
  3 Spin_Up_Time            0x0007   148   148   024    Pre-fail  Always       -       447 (Average 444)
  4 Start_Stop_Count        0x0012   100   100   000    Old_age   Always       -       231
  5 Reallocated_Sector_Ct   0x0033   100   100   005    Pre-fail  Always       -       0
  7 Seek_Error_Rate         0x000a   100   100   067    Old_age   Always       -       0
  8 Seek_Time_Performance   0x0004   128   128   020    Old_age   Offline      -       18
  9 Power_On_Hours          0x0012   093   093   000    Old_age   Always       -       50454
 10 Spin_Retry_Count        0x0012   100   100   060    Old_age   Always       -       0
 12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always       -       231
 22 Helium_Level            0x0023   100   100   025    Pre-fail  Always       -       100
192 Power-Off_Retract_Count 0x0032   099   099   000    Old_age   Always       -       2317
193 Load_Cycle_Count        0x0012   099   099   000    Old_age   Always       -       2317
194 Temperature_Celsius     0x0002   166   166   000    Old_age   Always       -       39 (Min/Max 19/50)
196 Reallocated_Event_Count 0x0032   100   100   000    Old_age   Always       -       0
197 Current_Pending_Sector  0x0022   100   100   000    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0008   100   100   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x000a   200   200   000    Old_age   Always       -       1
SMART Error Log Version: 1
ATA Error Count: 77 (device log contains only the most recent five errors)
CR = Command Register [HEX]
FR = Features Register [HEX]
SC = Sector Count Register [HEX]
SN = Sector Number Register [HEX]
CL = Cylinder Low Register [HEX]
CH = Cylinder High Register [HEX]
DH = Device/Head Register [HEX]
DC = Device Command Register [HEX]
ER = Error register [HEX]
ST = Status register [HEX]
Powered_Up_Time is measured from power on, and printed as
DDd+hh:mm:SS.sss where DD=days, hh=hours, mm=minutes,
SS=sec, and sss=millisec. It "wraps" after 49.710 days.
Error 77 occurred at disk power-on lifetime: 45231 hours (1884 days + 15 hours)
When the command that caused the error occurred, the device was active or idle.
After command completion occurred, registers were:
ER ST SC SN CL CH DH
-- -- -- -- -- -- --
40 41 00 00 00 00 00 Error: UNC at LBA = 0x00000000 = 0
Commands leading to the command that caused the error were:
CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name
-- -- -- -- -- -- -- -- ---------------- --------------------
60 00 00 b0 4c 36 40 00 6d+12:23:28.544 READ FPDMA QUEUED
61 00 10 b0 34 36 40 00 6d+12:23:21.606 WRITE FPDMA QUEUED
61 00 08 b0 24 36 40 00 6d+12:23:21.606 WRITE FPDMA QUEUED
61 00 10 b0 3b 36 40 00 6d+12:23:21.604 WRITE FPDMA QUEUED
61 00 08 b0 3a 36 40 00 6d+12:23:21.604 WRITE FPDMA QUEUED
Error 76 occurred at disk power-on lifetime: 45231 hours (1884 days + 15 hours)
When the command that caused the error occurred, the device was active or idle.
After command completion occurred, registers were:
ER ST SC SN CL CH DH
-- -- -- -- -- -- --
40 41 00 00 00 00 00 Error: UNC at LBA = 0x00000000 = 0
Commands leading to the command that caused the error were:
CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name
-- -- -- -- -- -- -- -- ---------------- --------------------
60 00 08 b0 34 36 40 00 6d+12:23:21.225 READ FPDMA QUEUED
60 10 28 10 26 81 40 00 6d+12:23:14.284 READ FPDMA QUEUED
60 10 20 10 24 81 40 00 6d+12:23:14.284 READ FPDMA QUEUED
60 10 18 10 12 00 40 00 6d+12:23:14.284 READ FPDMA QUEUED
60 00 10 b0 3c 36 40 00 6d+12:23:14.284 READ FPDMA QUEUED
Error 75 occurred at disk power-on lifetime: 45231 hours (1884 days + 15 hours)
When the command that caused the error occurred, the device was active or idle.
After command completion occurred, registers were:
ER ST SC SN CL CH DH
-- -- -- -- -- -- --
40 41 00 00 00 00 00 Error: UNC at LBA = 0x00000000 = 0
Commands leading to the command that caused the error were:
CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name
-- -- -- -- -- -- -- -- ---------------- --------------------
60 00 08 b0 24 36 40 00 6d+12:23:14.281 READ FPDMA QUEUED
60 00 00 b0 1c 36 40 00 6d+12:23:00.429 READ FPDMA QUEUED
60 00 10 b0 14 36 40 00 6d+12:23:00.257 READ FPDMA QUEUED
60 00 08 b0 0c 36 40 00 6d+12:23:00.257 READ FPDMA QUEUED
60 00 00 b0 04 36 40 00 6d+12:23:00.257 READ FPDMA QUEUED
Error 74 occurred at disk power-on lifetime: 41056 hours (1710 days + 16 hours)
When the command that caused the error occurred, the device was active or idle.
After command completion occurred, registers were:
ER ST SC SN CL CH DH
-- -- -- -- -- -- --
40 41 00 00 00 00 00 Error: UNC at LBA = 0x00000000 = 0
Commands leading to the command that caused the error were:
CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name
-- -- -- -- -- -- -- -- ---------------- --------------------
60 00 08 78 f1 08 40 00 23d+11:50:54.576 READ FPDMA QUEUED
60 00 08 78 e9 08 40 00 23d+11:50:46.927 READ FPDMA QUEUED
60 10 00 10 12 00 40 00 23d+11:50:46.927 READ FPDMA QUEUED
2f 00 01 10 00 00 00 00 23d+11:50:46.924 READ LOG EXT
2f 00 01 10 00 00 00 00 23d+11:50:46.924 READ LOG EXT
Error 73 occurred at disk power-on lifetime: 41056 hours (1710 days + 16 hours)
When the command that caused the error occurred, the device was active or idle.
After command completion occurred, registers were:
ER ST SC SN CL CH DH
-- -- -- -- -- -- --
40 41 00 00 00 00 00 Error: UNC at LBA = 0x00000000 = 0
Commands leading to the command that caused the error were:
CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name
-- -- -- -- -- -- -- -- ---------------- --------------------
60 00 00 78 e1 08 40 00 23d+11:50:46.924 READ FPDMA QUEUED
60 10 08 10 12 00 40 00 23d+11:50:39.774 READ FPDMA QUEUED
2f 00 01 10 00 00 00 00 23d+11:50:39.750 READ LOG EXT
2f 00 01 10 00 00 00 00 23d+11:50:39.750 READ LOG EXT
61 10 00 10 26 81 40 00 23d+11:50:32.827 WRITE FPDMA QUEUED
SMART Self-test log structure revision number 1
Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error
# 1 Extended offline Completed without error 00% 37379 -
# 2 Short offline Completed without error 00% 21887 -
SMART Selective self-test log data structure revision number 1
SPAN MIN_LBA MAX_LBA CURRENT_TEST_STATUS
1 0 0 Not_testing
2 0 0 Not_testing
3 0 0 Not_testing
4 0 0 Not_testing
5 0 0 Not_testing
Selective self-test flags (0x0):
After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.
The above only provides legacy SMART information - try 'smartctl -x' for more
```
Alright, so it seems you had 1 CRC error; most of the time this is due to wiring/HBA/port.
Would recommend just reseating the SATA data cable, clearing the error, and running a scrub. If it happens again, replace the SATA data cable and run a scrub.
If you’re using an HBA, make sure it has good airflow (slap a fan on it). If you’re connected directly to the motherboard, try a different SATA port if available.
Might not hurt to reseat the SATA power too.
This is the least bad kind of error & there is a good chance your disk is ok.
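If it helps, a rough sequence for after the reseat, assuming the disk is still /dev/sde (device letters can shuffle between boots):

```
# kick off a scrub once the errors are cleared
zpool scrub butternut

# check progress and whether any new READ/WRITE/CKSUM errors show up
zpool status -v butternut

# UDMA_CRC_Error_Count is cumulative and never resets, so note the current
# raw value (1) and just watch whether it climbs
smartctl -A /dev/sde | grep -i UDMA_CRC
```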
Roger, thank you!
I do have an LSI 9207. Will need to check if this pool is plugged into that or directly into the motherboard. I think I moved all my drives over to the HBA but I can’t say for sure. Things are probably due for a vacuum in there anyway.
The HBA has good airflow, with a couple of 120mm fans blowing over the motherboard.
Looking back over my notes I’d been meaning to 3D print the fan bracket for the 9207 but didn’t get around to it. Good reminder to do that.
Just don’t get too happy - least bad doesn’t mean good. Still investigate & try to mitigate. If errors keep coming back, regardless of wires or ports, get a replacement.
and make the extended test WAY out of date.
You need to run a SMART Extended test: `smartctl -t long /dev/sde`
Then wait ~19 hours and check the results. Your drive has over 50,000 hours on it; it very well could be failing.
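A couple of follow-up commands for checking on it later, assuming the disk is still /dev/sde:

```
# see whether the extended test is still running (shows % remaining)
smartctl -c /dev/sde

# once it finishes (~19 hours), the result lands in the self-test log
smartctl -l selftest /dev/sde
```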
Good point.
Roger that. Just started it.
For the future:
- Schedule weekly short tests
- Schedule monthly long tests
- Implement @joeschmuck’s Multi-Report script so you get an email of disk status once a week.
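For what it's worth, those two schedules boil down to something like the smartd directives below. This is purely an illustration of what the settings mean - on TrueNAS you set the schedules in the GUI (Data Protection → Periodic S.M.A.R.T. Tests) and the system generates its own smartd config from them, so don't hand-edit smartd.conf.

```
# /etc/smartd.conf (illustrative only - TrueNAS manages this file itself)
# -a : monitor all SMART attributes and log changes
# -s : test schedule, format T/MM/DD/d/HH
#      S/../../7/02  -> short test every Sunday at 02:00
#      L/../01/./03  -> long (extended) test on the 1st of each month at 03:00
/dev/sde -a -s (S/../../7/02|L/../01/./03)
```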
@Protopia You had me losing my mind when you posted the line above. I thought I had missed something. Thankfully I didn’t, mind is restored. No more heart attack in session
Just to explain the line:
The first 100 = the Current Value.
The second 100 = the Worst Value it has ever been. 100 means no problems.
The 005 value = the Threshold that the previous values should never reach. If either one drops to 005 or lower, the drive has failed badly.
Pre-fail = these values are used to identify failures in advance. Meaning, if the value starts to drop, you can expect the drive to be failing and total failure is coming.
The last “0” zero = the RAW value, which tells you, in this case, how many sectors have been removed from the LBA table, or simply put, bad sectors.
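If you want a quick way to keep an eye on the usual suspects from the shell (assuming the disk is still /dev/sde), something like this works:

```
# print just the attributes most people watch on a suspect disk:
#   5 Reallocated_Sector_Ct, 197 Current_Pending_Sector,
#   198 Offline_Uncorrectable, 199 UDMA_CRC_Error_Count
smartctl -A /dev/sde | awk '$1==5 || $1==197 || $1==198 || $1==199'
```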
Cheers
Hello, I’m new to TrueNAS, so forgive me for my ignorance and also for jumping in and hijacking OP’s thread. I just set my unit up and implemented weekly and monthly SMART tests exactly like this, but I’m a little frustrated that there is no way to see the results without going to the command line. Is there ANY solution for easily viewing these results without resorting to a script or shell?
You could add the Scrutiny application, which is all GUI. I don’t know if it will report errors, but TrueNAS does report any errors that pop up in real time.
Hope this helps.
EDIT: TrueNAS does not currently test NVMe drives, just in case you have some of those.