Hello.
I finished my expansion job in early December.
Then, four days ago, my RAIDZ1 pool became degraded with lots of write errors.
One of my disks (Toshiba N300 7200 rpm/256 MB (HDWG21C, 12 TB)) reported lots of write errors. The strange thing is that this disk was not the new one I used for the expansion.
zpool status
pool: RAID5
state: DEGRADED
status: One or more devices are faulted in response to persistent errors.
Sufficient replicas exist for the pool to continue functioning in a
degraded state.
action: Replace the faulted device, or use 'zpool clear' to mark the device
repaired.
scan: scrub repaired 0B in 1 days 04:17:12 with 0 errors on Sat Dec 14 22:52:46 2024
expand: expanded raidz1-0 copied 25.0T in 20 days 05:35:44, on Fri Dec 13 18:35:34 2024
config:
NAME                                      STATE     READ WRITE CKSUM
RAID5                                     DEGRADED     0     0     0
  raidz1-0                                DEGRADED     0     0     0
    be64f67f-81d8-459c-a563-daffe3078fed  FAULTED      0    74     0  too many errors
    259a0b75-6c49-4561-8319-c920dbd33bbc  ONLINE       0     0     0
    59048391-9c54-4d6b-8c7d-54f4fe8b5bdf  ONLINE       0     0     0
    0da0b4a6-16fd-4a86-9141-821f846a3134  ONLINE       0     0     0
errors: No known data errors
However, my SMART long test (performed on Dec 25) reports no disk errors: smartctl -x /dev/sda
It also seems that powering off the system, re-seating the disk, and detaching/reattaching /dev/sd(blah) worked.
Hmm… but a passing SMART test still does not mean the disk is 100% okay, so I am preparing a spare disk. I know that resilvering onto that disk will take 20 days or more, though, so I want to recover from this situation.
Based on the indications provided, your error is with ZFS, not a drive failure.
Run a scrub on the pool (zpool scrub RAID5) and verify it passes. If it does, run zpool clear RAID5, then zpool status RAID5 to ensure the errors are cleared. If the problem occurs again, make another post.
The scrub is all you need. If there are no errors, all is good; you just clear the error flags and then verify they were cleared.
If the scrub identifies corrupt files, you will need to delete those files and restore them from a backup copy, then repeat the steps above to verify all is good.
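For reference, a minimal sketch of that sequence, assuming the pool name RAID5 from your output (wait for the scrub to finish before clearing):
# start the scrub and let it run to completion
zpool scrub RAID5
# check progress; with -v, any files with permanent errors are listed by path
zpool status -v RAID5
# once the scrub comes back clean, reset the READ/WRITE/CKSUM counters and the FAULTED state
zpool clear RAID5
# confirm the counters are back to zero and the pool is no longer DEGRADED
zpool status RAID5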
If the problem returns, do the following:
zpool status RAID5
lsblk -o +PARTUUID,NAME | grep "part"
lsblk -o +SERIAL,NAME | grep "disk"
or you could grab the entire list and visually sort through it: lsblk -o PARTUUID,NAME,SERIAL
Make sure the drive is the same one you suspect; verify using the serial number, not the drive ID/name (sda, sdb…), as the ID can change across a reboot.
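As a rough sketch of that cross-check (the device name below is just an example; yours may differ after a reboot):
# map each kernel name to its partition UUID and drive serial in one listing
lsblk -o NAME,PARTUUID,SERIAL
# read the serial straight from a specific drive to confirm it is the suspect one
smartctl -i /dev/sdb | grep -i "serial number"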
If you have a question, please ask. No one wants anyone to lose data if it can be prevented. Remember, while a failing drive can cause data corruption, it does not mean a drive failure was the cause. Scrub errors (read/write/cksum) are often not due to a physical drive failure, so never jump to conclusions without verifying. And you did the right thing by running a SMART Long test.
I highly recommend that you set up daily SMART Short tests and weekly SMART Long tests. If a weekly Long test feels excessive to you, run one at least monthly, though I prefer weekly. With 12 TB drives, I would schedule one Long test per day, for example Monday = sda, Tuesday = sdb, etc.
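If your platform does not already have a built-in S.M.A.R.T. test scheduler, a plain cron sketch would look something like this (device names and times are assumptions; stable /dev/disk/by-id/ paths are safer than sdX names, which can change across reboots):
# daily SHORT test on each pool member at 02:00
0 2 * * * for d in /dev/sd[a-d]; do smartctl -t short "$d"; done
# one LONG test per day, rotating through the drives so only one runs at a time
0 3 * * 1 smartctl -t long /dev/sda
0 3 * * 2 smartctl -t long /dev/sdb
0 3 * * 3 smartctl -t long /dev/sdc
0 3 * * 4 smartctl -t long /dev/sdd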
After I restarted my NAS, before making the changes you suggested (in other words, the scrub), it automatically started resilvering the disk (Toshiba 12 TB; /dev/sda) that had the write errors.
According to zpool status it will take more than 20 days, but I think stopping that process manually would be dangerous.
Is it okay to let it finish resilvering automatically and perform the scrub after that?
Absolutely, it is fine and, in my book, preferred. With that said, I would highly recommend that you back up any really important files you can. If you are like me, I have under 5 TB of data I must retain, and that can and does fit on a 5 TB removable archive drive. Doing this will bring you a little peace of mind.
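For that backup, a simple one-way copy of the must-keep data is enough. A sketch, assuming the archive drive is mounted at /mnt/archive and the important data lives under /mnt/RAID5/important (both paths are made up for the example):
# one-way copy of the must-keep data onto the removable archive drive
rsync -a --progress /mnt/RAID5/important/ /mnt/archive/important/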
Thank you. I didn't include the other half of the SMART result since I didn't find any anomalies in it. Here is the other half of the result for /dev/sda:
SMART Selective self-test log data structure revision number 1
SPAN MIN_LBA MAX_LBA CURRENT_TEST_STATUS
1 0 0 Not_testing
2 0 0 Not_testing
3 0 0 Not_testing
4 0 0 Not_testing
5 0 0 Not_testing
Selective self-test flags (0x0):
After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.
SCT Status Version: 3
SCT Version (vendor specific): 1 (0x0001)
Device State: Active (0)
Current Temperature: 31 Celsius
Power Cycle Min/Max Temperature: 21/37 Celsius
Lifetime Min/Max Temperature: 19/62 Celsius # this 62 Celsius is not a value I have experienced; maybe it is from the former owner. In my device, around 40 Celsius was the highest.
Specified Max Operating Temperature: 55 Celsius
Under/Over Temperature Limit Count: 0/0
SCT Temperature History Version: 2
Temperature Sampling Period: 1 minute
Temperature Logging Interval: 1 minute
Min/Max recommended Temperature: 5/55 Celsius
Min/Max Temperature Limit: -40/70 Celsius
Temperature History Size (Index): 478 (54)
Index Estimated Time Temperature Celsius
55 2024-12-26 13:51 35 ****************
... ..(175 skipped). .. ****************
231 2024-12-26 16:47 35 ****************
232 2024-12-26 16:48 36 *****************
... ..( 3 skipped). .. *****************
236 2024-12-26 16:52 36 *****************
237 2024-12-26 16:53 35 ****************
238 2024-12-26 16:54 36 *****************
239 2024-12-26 16:55 35 ****************
240 2024-12-26 16:56 36 *****************
... ..( 20 skipped). .. *****************
261 2024-12-26 17:17 36 *****************
262 2024-12-26 17:18 35 ****************
263 2024-12-26 17:19 35 ****************
264 2024-12-26 17:20 36 *****************
265 2024-12-26 17:21 36 *****************
266 2024-12-26 17:22 35 ****************
... ..( 2 skipped). .. ****************
269 2024-12-26 17:25 35 ****************
270 2024-12-26 17:26 36 *****************
271 2024-12-26 17:27 35 ****************
... ..( 28 skipped). .. ****************
300 2024-12-26 17:56 35 ****************
301 2024-12-26 17:57 34 ***************
... ..( 5 skipped). .. ***************
307 2024-12-26 18:03 34 ***************
308 2024-12-26 18:04 33 **************
... ..( 9 skipped). .. **************
318 2024-12-26 18:14 33 **************
319 2024-12-26 18:15 32 *************
320 2024-12-26 18:16 33 **************
... ..( 2 skipped). .. **************
323 2024-12-26 18:19 33 **************
324 2024-12-26 18:20 32 *************
... ..( 28 skipped). .. *************
353 2024-12-26 18:49 32 *************
354 2024-12-26 18:50 31 ************
... ..(143 skipped). .. ************
20 2024-12-26 21:14 31 ************
21 2024-12-26 21:15 30 ***********
22 2024-12-26 21:16 31 ************
... ..( 31 skipped). .. ************
54 2024-12-26 21:48 31 ************
SCT Error Recovery Control:
Read: Disabled
Write: Disabled
Device Statistics (GP Log 0x04)
Page Offset Size Value Flags Description
0x01 ===== = = === == General Statistics (rev 3) ==
0x01 0x008 4 22 --- Lifetime Power-On Resets
0x01 0x010 4 24551 --- Power-on Hours
0x01 0x018 6 74427863782 --- Logical Sectors Written
0x01 0x020 6 1026033388 --- Number of Write Commands
0x01 0x028 6 203765309018 --- Logical Sectors Read
0x01 0x030 6 1049888193 --- Number of Read Commands
0x01 0x038 6 88383600000 --- Date and Time TimeStamp
0x02 ===== = = === == Free-Fall Statistics (rev 1) ==
0x02 0x010 4 0 --- Overlimit Shock Events
0x03 ===== = = === == Rotating Media Statistics (rev 1) ==
0x03 0x008 4 1131 --- Spindle Motor Power-on Hours
0x03 0x010 4 1101 --- Head Flying Hours
0x03 0x018 4 1534 --- Head Load Events
0x03 0x020 4 0 --- Number of Reallocated Logical Sectors
0x03 0x028 4 9 --- Read Recovery Attempts
0x03 0x030 4 0 --- Number of Mechanical Start Failures
0x03 0x038 4 0 --- Number of Realloc. Candidate Logical Sectors
0x03 0x040 4 8 --- Number of High Priority Unload Events
0x04 ===== = = === == General Errors Statistics (rev 1) ==
0x04 0x008 4 0 --- Number of Reported Uncorrectable Errors
0x04 0x010 4 2 --- Resets Between Cmd Acceptance and Completion
0x05 ===== = = === == Temperature Statistics (rev 1) ==
0x05 0x008 1 31 --- Current Temperature
0x05 0x010 1 34 N-- Average Short Term Temperature
0x05 0x018 1 32 N-- Average Long Term Temperature
0x05 0x020 1 62 --- Highest Temperature
0x05 0x028 1 19 --- Lowest Temperature
0x05 0x030 1 61 N-- Highest Average Short Term Temperature
0x05 0x038 1 27 N-- Lowest Average Short Term Temperature
0x05 0x040 1 56 N-- Highest Average Long Term Temperature
0x05 0x048 1 31 N-- Lowest Average Long Term Temperature
0x05 0x050 4 194800 --- Time in Over-Temperature
0x05 0x058 1 55 --- Specified Maximum Operating Temperature
0x05 0x060 4 0 --- Time in Under-Temperature
0x05 0x068 1 5 --- Specified Minimum Operating Temperature
0x06 ===== = = === == Transport Statistics (rev 1) ==
0x06 0x008 4 195 --- Number of Hardware Resets
0x06 0x010 4 69 --- Number of ASR Events
0x06 0x018 4 0 --- Number of Interface CRC Errors
0x07 ===== = = === == Solid State Device Statistics (rev 1) ==
|||_ C monitored condition met
||__ D supports DSN
|___ N normalized value
Pending Defects log (GP Log 0x0c)
No Defects Logged
SATA Phy Event Counters (GP Log 0x11)
ID Size Value Description
0x0001 4 0 Command failed due to ICRC error
0x0002 4 0 R_ERR response for data FIS
0x0003 4 0 R_ERR response for device-to-host data FIS
0x0004 4 0 R_ERR response for host-to-device data FIS
0x0005 4 0 R_ERR response for non-data FIS
0x0006 4 0 R_ERR response for device-to-host non-data FIS
0x0007 4 0 R_ERR response for host-to-device non-data FIS
0x0008 4 0 Device-to-host non-data FIS retries
0x0009 4 6 Transition from drive PhyRdy to drive PhyNRdy
0x000a 4 6 Device-to-host register FISes sent due to a COMRESET
0x000b 4 0 CRC errors within host-to-device FIS
0x000d 4 0 Non-CRC errors within host-to-device FIS
0x000f 4 0 R_ERR response for host-to-device data FIS, CRC
0x0010 4 0 R_ERR response for host-to-device data FIS, non-CRC
0x0012 4 0 R_ERR response for host-to-device non-data FIS, CRC
0x0013 4 0 R_ERR response for host-to-device non-data FIS, non-CRC
For the serial list, is lsblk -bo NAME,MODEL,ROTA,PTTYPE,TYPE,START,SIZE,PARTTYPENAME,PARTUUID sufficient?
sda WDC WD140EDGZ-11B2DA2 1 gpt disk 14000519643136
└─sda1 1 gpt part 2048 14000518577664 Solaris /usr & Apple ZFS 0da0b4a6-16fd-4a86-9141-821f846a3134
sdb TOSHIBA HDWG21C 1 gpt disk 12000138625024
└─sdb1 1 gpt part 4096 12000136510976 Solaris /usr & Apple ZFS be64f67f-81d8-459c-a563-daffe3078fed
sdc ST12000VN0008-2PH103 1 gpt disk 12000138625024
└─sdc1 1 gpt part 4096 12000136510976 Solaris /usr & Apple ZFS 259a0b75-6c49-4561-8319-c920dbd33bbc
sdd ST12000VN0008-2PH103 1 gpt disk 12000138625024
└─sdd1 1 gpt part 4096 12000136510976 Solaris /usr & Apple ZFS 59048391-9c54-4d6b-8c7d-54f4fe8b5bdf
nvme0n1 SHGP31-500GM 0 gpt disk 500107862016
├─nvme0n1p1 0 gpt part 4096 1048576 BIOS boot f0c5003c-678f-4c25-8c38-5dc6bef11ec8
├─nvme0n1p2 0 gpt part 6144 536870912 EFI System a72994ef-da12-44a0-beac-0ee566c111d9
├─nvme0n1p3 0 gpt part 34609152 482387959296 Solaris /usr & Apple ZFS e58f1410-1215-407a-988e-d050f8e2a963
└─nvme0n1p4 0 gpt part 1054720 17179869184 Linux swap 072560bc-e4c4-4c2d-892d-7a92bdbe4e78
nvme1n1 SHGP31-500GM 0 gpt disk 500107862016
└─nvme1n1p1 0 gpt part 4096 500105740800 Solaris /usr & Apple ZFS 160079d5-e1f6-419f-bf09-a638629bf052
NOTE: after rebooting (I re-attached the disks and rebooted), sda and sdb were swapped (you can see that /dev/sdb is now the TOSHIBA disk). I did post the smartctl result for the correct disk, though.
This is not quite correct. There are a whole bunch of errors that are purely ZFS errors and not SMART errors at all, such as checksum errors. There are also a whole bunch of SMART errors that often are not ZFS errors (like reallocated sectors).
There are probably also some which are both types at the same time.