Unhealthy pool with failing disk - should I attempt to save my data via ZFS replication or rsync?

Hi,

My media pool (which doesn’t have a backup) consists of two identical disks configured as a mirror. One of them started throwing I/O errors a couple of days ago:

Pool XXYY state is ONLINE: One or more devices has experienced an unrecoverable error. An attempt was made to correct the error. Applications are unaffected.

Pool’s current status is Unhealthy, and the failing drive has 3 checksum errors. I don’t currently have a spare drive to restore the pool, and I won’t be able to get one since it’s the holiday season. I want to try to save my data, and I’m asking which would be my safest option:

  • Use a replication task

  • Copy the files manually with something like rsync

Should I offline the failing disk before attempting any operation?
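
For reference, if offlining does turn out to be the right call, my understanding (untested, and the disk identifier is just a placeholder for whatever zpool status reports) is that it would be something like:

zpool offline XXYY <failing-disk-id>   # stop ZFS from using the suspect disk
zpool online XXYY <failing-disk-id>    # bring it back later; ZFS resilvers the difference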

Thanks!

Are these configured in a mirror? Perhaps you could share the output of zpool status?

Yeah, I missed that critical part, lol. They are in a mirror:

root@truenas[~]# zpool status
  pool: XXYY
 state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
        attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-9P
  scan: resilvered 8.51G in 00:01:23 with 0 errors on Fri Dec 26 14:51:24 2025
config:

        NAME                                      STATE     READ WRITE CKSUM
        XXYYXXY                                   ONLINE       0     0     0
          mirror-0                                ONLINE       0     0     0
            e4d53faa-827a-4f16-9b8b-3ff821447d28  ONLINE       0     0     3
            38c9b9f3-3780-40c2-948e-ae98a8ced0e9  ONLINE       0     0     0

ZFS checksum errors are not always caused by a faulty drive. They can also stem from faulty RAM, faulty cables or connectors, a failing PSU, an overheating controller, etc.
Do you have SMART test output that you can post?
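
For example, something along these lines helps separate a cabling or controller problem from a genuinely dying disk (the device name is a placeholder for your pool member):

smartctl -x /dev/sdX                 # full attributes, error log and self-test history
smartctl -A /dev/sdX | grep -i crc   # a climbing UDMA_CRC_Error_Count usually points at cables, not platters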


While I wish that were the case, I’ve been using this drive for my media library for the past ~3 years with failing SMART results. I think it finally gave up. I’m open to suggestions for checking whether this was a one-time error or whether the drive is really dying, but my first priority is to avoid re-downloading terabytes of data and to save what I can from the still-living pool.

root@truenas[~]# smartctl -x /dev/sdf
smartctl 7.4 2023-08-01 r5530 [x86_64-linux-6.12.15-production+truenas] (local build)
Copyright (C) 2002-23, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Family: Western Digital Red Plus
Device Model: WDC WD40EFRX-68N32N0
Serial Number: WD-WCC7K2ESEDXF
LU WWN Device Id: 5 0014ee 20ee505f9
Firmware Version: 82.00A82
User Capacity: 4,000,787,030,016 bytes [4.00 TB]
Sector Sizes: 512 bytes logical, 4096 bytes physical
Rotation Rate: 5400 rpm
Form Factor: 3.5 inches
Device is: In smartctl database 7.3/6045
ATA Version is: ACS-3 T13/2161-D revision 5
SATA Version is: SATA 3.1, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is: Tue Dec 30 13:16:42 2025 EET
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
AAM feature is: Unavailable
APM feature is: Unavailable
Rd look-ahead is: Enabled
Write cache is: Enabled
DSN feature is: Unavailable
ATA Security is: Disabled, frozen [SEC2]
Wt Cache Reorder: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

General SMART Values:
Offline data collection status: (0x00) Offline data collection activity
was never started.
Auto Offline Data Collection: Disabled.
Self-test execution status: ( 113) The previous self-test completed having
the read element of the test failed.
Total time to complete Offline
data collection: (43440) seconds.
Offline data collection
capabilities: (0x7b) SMART execute Offline immediate.
Auto Offline data collection on/off support.
Suspend Offline collection upon new
command.
Offline surface scan supported.
Self-test supported.
Conveyance Self-test supported.
Selective Self-test supported.
SMART capabilities: (0x0003) Saves SMART data before entering
power-saving mode.
Supports SMART auto save timer.
Error logging capability: (0x01) Error logging supported.
General Purpose Logging supported.
Short self-test routine
recommended polling time: ( 2) minutes.
Extended self-test routine
recommended polling time: ( 462) minutes.
Conveyance self-test routine
recommended polling time: ( 5) minutes.
SCT capabilities: (0x303d) SCT Status supported.
SCT Error Recovery Control supported.
SCT Feature Control supported.
SCT Data Table supported.

SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAGS    VALUE WORST THRESH FAIL RAW_VALUE
  1 Raw_Read_Error_Rate     POSR-K   199   199   051    -    40
  3 Spin_Up_Time            POS--K   163   162   021    -    6841
  4 Start_Stop_Count        -O--CK   100   100   000    -    322
  5 Reallocated_Sector_Ct   PO--CK   200   200   140    -    0
  7 Seek_Error_Rate         -OSR-K   200   200   000    -    0
  9 Power_On_Hours          -O--CK   025   025   000    -    55304
 10 Spin_Retry_Count        -O--CK   100   100   000    -    0
 11 Calibration_Retry_Count -O--CK   100   100   000    -    0
 12 Power_Cycle_Count       -O--CK   100   100   000    -    104
192 Power-Off_Retract_Count -O--CK   200   200   000    -    12
193 Load_Cycle_Count        -O--CK   199   199   000    -    5786
194 Temperature_Celsius     -O---K   123   107   000    -    27
196 Reallocated_Event_Count -O--CK   200   200   000    -    0
197 Current_Pending_Sector  -O--CK   200   200   000    -    0
198 Offline_Uncorrectable   ----CK   100   253   000    -    0
199 UDMA_CRC_Error_Count    -O--CK   200   200   000    -    0
200 Multi_Zone_Error_Rate   ---R--   200   200   000    -    20
                            ||||||_ K auto-keep
                            |||||__ C event count
                            ||||___ R error rate
                            |||____ S speed/performance
                            ||_____ O updated online
                            |______ P prefailure warning

General Purpose Log Directory Version 1
SMART Log Directory Version 1 [multi-sector log support]
Address Access R/W Size Description
0x00 GPL,SL R/O 1 Log Directory
0x01 SL R/O 1 Summary SMART error log
0x02 SL R/O 5 Comprehensive SMART error log
0x03 GPL R/O 6 Ext. Comprehensive SMART error log
0x04 GPL,SL R/O 8 Device Statistics log
0x06 SL R/O 1 SMART self-test log
0x07 GPL R/O 1 Extended self-test log
0x09 SL R/W 1 Selective self-test log
0x10 GPL R/O 1 NCQ Command Error log
0x11 GPL R/O 1 SATA Phy Event Counters log
0x30 GPL,SL R/O 9 IDENTIFY DEVICE data log
0x80-0x9f GPL,SL R/W 16 Host vendor specific log
0xa0-0xa7 GPL,SL VS 16 Device vendor specific log
0xa8-0xb6 GPL,SL VS 1 Device vendor specific log
0xb7 GPL,SL VS 56 Device vendor specific log
0xbd GPL,SL VS 1 Device vendor specific log
0xc0 GPL,SL VS 1 Device vendor specific log
0xc1 GPL VS 93 Device vendor specific log
0xe0 GPL,SL R/W 1 SCT Command/Status
0xe1 GPL,SL R/W 1 SCT Data Transfer

SMART Extended Comprehensive Error Log Version: 1 (6 sectors)
Device Error Count: 30 (device log contains only the most recent 24 errors)
CR = Command Register
FEATR = Features Register
COUNT = Count (was: Sector Count) Register
LBA_48 = Upper bytes of LBA High/Mid/Low Registers ] ATA-8
LH = LBA High (was: Cylinder High) Register ] LBA
LM = LBA Mid (was: Cylinder Low) Register ] Register
LL = LBA Low (was: Sector Number) Register ]
DV = Device (was: Device/Head) Register
DC = Device Control Register
ER = Error register
ST = Status register
Powered_Up_Time is measured from power on, and printed as
DDd+hh:mm:SS.sss where DD=days, hh=hours, mm=minutes,
SS=sec, and sss=millisec. It “wraps” after 49.710 days.

Error 30 [5] occurred at disk power-on lifetime: 55204 hours (2300 days + 4 hours)
When the command that caused the error occurred, the device was active or idle.

After command completion occurred, registers were:
ER – ST COUNT LBA_48 LH LM LL DV DC
– – – == – == == == – – – – –
40 – 51 00 00 00 01 cf 18 33 e0 40 00 Error: UNC at LBA = 0x1cf1833e0 = 7769437152

Commands leading to the command that caused the error were:
CR FEATR COUNT LBA_48 LH LM LL DV DC Powered_Up_Time Command/Feature_Name
– == – == – == == == – – – – – --------------- --------------------
60 08 00 00 78 00 01 cf 18 2f e0 40 08 1d+18:40:48.487 READ FPDMA QUEUED
61 01 00 00 10 00 01 cf 18 16 e0 40 08 1d+18:40:48.481 WRITE FPDMA QUEUED
47 00 00 00 01 00 00 00 00 00 30 e0 08 1d+18:40:48.480 READ LOG DMA EXT
47 00 00 00 01 00 00 00 00 00 00 e0 08 1d+18:40:48.480 READ LOG DMA EXT
ef 00 10 00 02 00 00 00 00 00 00 a0 08 1d+18:40:48.480 SET FEATURES [Enable SATA feature]

Error 29 [4] occurred at disk power-on lifetime: 55204 hours (2300 days + 4 hours)
When the command that caused the error occurred, the device was active or idle.

After command completion occurred, registers were:
ER – ST COUNT LBA_48 LH LM LL DV DC
– – – == – == == == – – – – –
40 – 51 00 00 00 01 cf 18 2c b8 40 00 Error: WP at LBA = 0x1cf182cb8 = 7769435320

Commands leading to the command that caused the error were:
CR FEATR COUNT LBA_48 LH LM LL DV DC Powered_Up_Time Command/Feature_Name
– == – == – == == == – – – – – --------------- --------------------
61 08 00 00 b0 00 01 cf 18 0e e0 40 08 1d+18:40:41.906 WRITE FPDMA QUEUED
60 00 10 00 a8 00 01 d1 c0 bc 90 40 08 1d+18:40:41.905 READ FPDMA QUEUED
60 00 10 00 88 00 01 d1 c0 ba 90 40 08 1d+18:40:41.905 READ FPDMA QUEUED
60 00 10 00 80 00 00 00 40 02 90 40 08 1d+18:40:41.905 READ FPDMA QUEUED
60 08 00 00 78 00 01 cf 18 27 e0 40 08 1d+18:40:41.903 READ FPDMA QUEUED

Error 28 [3] occurred at disk power-on lifetime: 55204 hours (2300 days + 4 hours)
When the command that caused the error occurred, the device was active or idle.

After command completion occurred, registers were:
ER – ST COUNT LBA_48 LH LM LL DV DC
– – – == – == == == – – – – –
40 – 51 00 00 00 01 cf 18 25 70 40 00 Error: WP at LBA = 0x1cf182570 = 7769433456

Commands leading to the command that caused the error were:
CR FEATR COUNT LBA_48 LH LM LL DV DC Powered_Up_Time Command/Feature_Name
– == – == – == == == – – – – – --------------- --------------------
61 06 00 00 90 00 01 cf 18 00 b8 40 08 1d+18:40:34.836 WRITE FPDMA QUEUED
61 08 00 00 a0 00 01 cf 17 f0 30 40 08 1d+18:40:34.833 WRITE FPDMA QUEUED
60 08 00 00 98 00 01 cf 18 1f e0 40 08 1d+18:40:34.833 READ FPDMA QUEUED
47 00 00 00 01 00 00 00 00 00 30 e0 08 1d+18:40:34.833 READ LOG DMA EXT
47 00 00 00 01 00 00 00 00 00 00 e0 08 1d+18:40:34.832 READ LOG DMA EXT

Error 27 [2] occurred at disk power-on lifetime: 55204 hours (2300 days + 4 hours)
When the command that caused the error occurred, the device was active or idle.

After command completion occurred, registers were:
ER – ST COUNT LBA_48 LH LM LL DV DC
– – – == – == == == – – – – –
40 – 51 00 00 00 01 cf 18 25 68 40 00 Error: WP at LBA = 0x1cf182568 = 7769433448

Commands leading to the command that caused the error were:
CR FEATR COUNT LBA_48 LH LM LL DV DC Powered_Up_Time Command/Feature_Name
– == – == – == == == – – – – – --------------- --------------------
61 08 00 00 80 00 01 cf 17 f0 30 40 08 1d+18:40:30.168 WRITE FPDMA QUEUED
60 08 00 00 08 00 01 cf 18 1f e0 40 08 1d+18:40:30.166 READ FPDMA QUEUED
61 08 00 00 68 00 01 cf 17 f8 b8 40 08 1d+18:40:30.166 WRITE FPDMA QUEUED
61 00 10 00 00 00 00 00 40 02 90 40 08 1d+18:40:30.139 WRITE FPDMA QUEUED
61 08 00 00 88 00 01 cf 17 e8 30 40 08 1d+18:40:29.115 WRITE FPDMA QUEUED

Error 26 [1] occurred at disk power-on lifetime: 55204 hours (2300 days + 4 hours)
When the command that caused the error occurred, the device was active or idle.

After command completion occurred, registers were:
ER – ST COUNT LBA_48 LH LM LL DV DC
– – – == – == == == – – – – –
40 – 51 00 00 00 01 cf 18 16 90 40 00 Error: WP at LBA = 0x1cf181690 = 7769429648

Commands leading to the command that caused the error were:
CR FEATR COUNT LBA_48 LH LM LL DV DC Powered_Up_Time Command/Feature_Name
– == – == – == == == – – – – – --------------- --------------------
61 01 00 00 a8 00 01 cf 17 e4 a8 40 08 1d+18:40:22.063 WRITE FPDMA QUEUED
60 00 10 00 80 00 01 d1 c0 bc 90 40 08 1d+18:40:22.063 READ FPDMA QUEUED
60 00 10 00 78 00 01 d1 c0 ba 90 40 08 1d+18:40:22.063 READ FPDMA QUEUED
60 00 10 00 70 00 00 00 40 02 90 40 08 1d+18:40:22.062 READ FPDMA QUEUED
60 08 00 00 68 00 01 cf 18 0f e0 40 08 1d+18:40:22.062 READ FPDMA QUEUED

Error 25 [0] occurred at disk power-on lifetime: 55204 hours (2300 days + 4 hours)
When the command that caused the error occurred, the device was active or idle.

After command completion occurred, registers were:
ER – ST COUNT LBA_48 LH LM LL DV DC
– – – == – == == == – – – – –
40 – 51 00 00 00 01 cf 18 08 20 40 00 Error: WP at LBA = 0x1cf180820 = 7769425952

Commands leading to the command that caused the error were:
CR FEATR COUNT LBA_48 LH LM LL DV DC Powered_Up_Time Command/Feature_Name
– == – == – == == == – – – – – --------------- --------------------
61 00 10 00 b0 00 01 d1 c0 bc 90 40 08 1d+18:40:15.030 WRITE FPDMA QUEUED
61 00 10 00 a8 00 01 d1 c0 ba 90 40 08 1d+18:40:15.030 WRITE FPDMA QUEUED
60 08 00 00 70 00 01 cf 18 07 e0 40 08 1d+18:40:15.030 READ FPDMA QUEUED
61 00 10 00 a0 00 00 00 40 02 90 40 08 1d+18:40:15.030 WRITE FPDMA QUEUED
47 00 00 00 01 00 00 00 00 00 30 e0 08 1d+18:40:15.029 READ LOG DMA EXT

Error 24 [23] occurred at disk power-on lifetime: 55204 hours (2300 days + 4 hours)
When the command that caused the error occurred, the device was active or idle.

After command completion occurred, registers were:
ER – ST COUNT LBA_48 LH LM LL DV DC
– – – == – == == == – – – – –
40 – 51 00 00 00 01 cf 18 00 d8 40 00 Error: UNC at LBA = 0x1cf1800d8 = 7769424088

Commands leading to the command that caused the error were:
CR FEATR COUNT LBA_48 LH LM LL DV DC Powered_Up_Time Command/Feature_Name
– == – == – == == == – – – – – --------------- --------------------
60 07 28 00 40 00 01 cf 18 00 b8 40 08 1d+18:40:07.948 READ FPDMA QUEUED
60 00 10 00 38 00 00 00 40 02 90 40 08 1d+18:40:07.948 READ FPDMA QUEUED
60 00 10 00 30 00 01 d1 c0 bc 90 40 08 1d+18:40:07.948 READ FPDMA QUEUED
60 00 10 00 08 00 01 d1 c0 ba 90 40 08 1d+18:40:07.948 READ FPDMA QUEUED
b0 00 d5 00 01 00 00 00 c2 4f 09 00 08 1d+18:40:07.947 SMART READ LOG

Error 23 [22] occurred at disk power-on lifetime: 55204 hours (2300 days + 4 hours)
When the command that caused the error occurred, the device was active or idle.

After command completion occurred, registers were:
ER – ST COUNT LBA_48 LH LM LL DV DC
– – – == – == == == – – – – –
40 – 51 00 00 00 01 cf 17 f2 50 40 00 Error: UNC at LBA = 0x1cf17f250 = 7769420368

Commands leading to the command that caused the error were:
CR FEATR COUNT LBA_48 LH LM LL DV DC Powered_Up_Time Command/Feature_Name
– == – == – == == == – – – – – --------------- --------------------
60 08 00 00 f0 00 01 cf 17 f8 b8 40 08 1d+18:40:00.886 READ FPDMA QUEUED
60 08 00 00 e8 00 01 cf 17 f0 30 40 08 1d+18:40:00.886 READ FPDMA QUEUED
47 00 00 00 01 00 00 00 00 00 30 e0 08 1d+18:40:00.885 READ LOG DMA EXT
47 00 00 00 01 00 00 00 00 00 00 e0 08 1d+18:40:00.885 READ LOG DMA EXT
ef 00 10 00 02 00 00 00 00 00 00 a0 08 1d+18:40:00.885 SET FEATURES [Enable SATA feature]

SMART Extended Self-test Log Version: 1 (1 sectors)
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Extended offline    Completed: read failure       10%      55301         7769504088
# 2  Short offline       Completed: read failure       70%      55269         7813999392
# 3  Extended offline    Completed: read failure       10%      55217         7768433368
# 4  Extended offline    Completed: read failure       90%      55209         7769433448
# 5  Short offline       Completed: read failure       70%      55190         7813999392
# 6  Extended offline    Completed: read failure       10%      55134         7767129616
# 7  Short offline       Completed: read failure       70%      55102         7813999392
# 8  Extended offline    Completed: read failure       10%      54966         7813999392
# 9  Short offline       Completed: read failure       70%      54934         7813999392
#10  Extended offline    Completed: read failure       10%      54798         7813999392
#11  Short offline       Completed: read failure       60%      54766         7813999392
#12  Extended offline    Completed: read failure       10%      54630         7813999392
#13  Short offline       Completed: read failure       10%      54598         7813999392
#14  Extended offline    Completed: read failure       10%      54462         7813999392
#15  Short offline       Completed: read failure       40%      54431         7813999392
#16  Extended offline    Completed: read failure       10%      54295         7813999392
#17  Short offline       Completed: read failure       40%      54263         7813999392
#18  Extended offline    Completed: read failure       10%      54127         7813999392

SMART Selective self-test log data structure revision number 1
SPAN MIN_LBA MAX_LBA CURRENT_TEST_STATUS
1 0 0 Not_testing
2 0 0 Not_testing
3 0 0 Not_testing
4 0 0 Not_testing
5 0 0 Not_testing
Selective self-test flags (0x0):
After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.

SCT Status Version: 3
SCT Version (vendor specific): 258 (0x0102)
Device State: Active (0)
Current Temperature: 27 Celsius
Power Cycle Min/Max Temperature: 26/29 Celsius
Lifetime Min/Max Temperature: 14/43 Celsius
Under/Over Temperature Limit Count: 0/0
Vendor specific:
01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00

SCT Temperature History Version: 2
Temperature Sampling Period: 1 minute
Temperature Logging Interval: 1 minute
Min/Max recommended Temperature: 0/65 Celsius
Min/Max Temperature Limit: -41/85 Celsius
Temperature History Size (Index): 478 (40)

Index Estimated Time Temperature Celsius
41 2025-12-30 05:19 27 ********
… ..(108 skipped). .. ********
150 2025-12-30 07:08 27 ********
151 2025-12-30 07:09 28 *********
… ..(292 skipped). .. *********
444 2025-12-30 12:02 28 *********
445 2025-12-30 12:03 27 ********
… ..( 72 skipped). .. ********
40 2025-12-30 13:16 27 ********

SCT Error Recovery Control:
Read: 70 (7.0 seconds)
Write: 70 (7.0 seconds)

Device Statistics (GP Log 0x04)
Page Offset Size Value Flags Description
0x01 ===== = = === == General Statistics (rev 1) ==
0x01 0x008 4 104 — Lifetime Power-On Resets
0x01 0x010 4 55304 — Power-on Hours
0x01 0x018 6 125006517565 — Logical Sectors Written
0x01 0x020 6 1063927628 — Number of Write Commands
0x01 0x028 6 2110092057068 — Logical Sectors Read
0x01 0x030 6 12491630651 — Number of Read Commands
0x01 0x038 6 1525904384 — Date and Time TimeStamp
0x03 ===== = = === == Rotating Media Statistics (rev 1) ==
0x03 0x008 4 54536 — Spindle Motor Power-on Hours
0x03 0x010 4 52861 — Head Flying Hours
0x03 0x018 4 5799 — Head Load Events
0x03 0x020 4 0 — Number of Reallocated Logical Sectors
0x03 0x028 4 1942574 — Read Recovery Attempts
0x03 0x030 4 0 — Number of Mechanical Start Failures
0x03 0x038 4 0 — Number of Realloc. Candidate Logical Sectors
0x03 0x040 4 12 — Number of High Priority Unload Events
0x04 ===== = = === == General Errors Statistics (rev 1) ==
0x04 0x008 4 30 — Number of Reported Uncorrectable Errors
0x04 0x010 4 0 — Resets Between Cmd Acceptance and Completion
0x05 ===== = = === == Temperature Statistics (rev 1) ==
0x05 0x008 1 27 — Current Temperature
0x05 0x010 1 27 — Average Short Term Temperature
0x05 0x018 1 30 — Average Long Term Temperature
0x05 0x020 1 43 — Highest Temperature
0x05 0x028 1 17 — Lowest Temperature
0x05 0x030 1 39 — Highest Average Short Term Temperature
0x05 0x038 1 19 — Lowest Average Short Term Temperature
0x05 0x040 1 34 — Highest Average Long Term Temperature
0x05 0x048 1 25 — Lowest Average Long Term Temperature
0x05 0x050 4 0 — Time in Over-Temperature
0x05 0x058 1 65 — Specified Maximum Operating Temperature
0x05 0x060 4 0 — Time in Under-Temperature
0x05 0x068 1 0 — Specified Minimum Operating Temperature
0x06 ===== = = === == Transport Statistics (rev 1) ==
0x06 0x008 4 414 — Number of Hardware Resets
0x06 0x010 4 190 — Number of ASR Events
0x06 0x018 4 0 — Number of Interface CRC Errors
|||_ C monitored condition met
||__ D supports DSN
|___ N normalized value

Pending Defects log (GP Log 0x0c) not supported

SATA Phy Event Counters (GP Log 0x11)
ID Size Value Description
0x0001 2 0 Command failed due to ICRC error
0x0002 2 0 R_ERR response for data FIS
0x0003 2 0 R_ERR response for device-to-host data FIS
0x0004 2 0 R_ERR response for host-to-device data FIS
0x0005 2 0 R_ERR response for non-data FIS
0x0006 2 0 R_ERR response for device-to-host non-data FIS
0x0007 2 0 R_ERR response for host-to-device non-data FIS
0x0008 2 0 Device-to-host non-data FIS retries
0x0009 2 2 Transition from drive PhyRdy to drive PhyNRdy
0x000a 2 3 Device-to-host register FISes sent due to a COMRESET
0x000b 2 0 CRC errors within host-to-device FIS
0x000d 2 0 Non-CRC errors within host-to-device FIS
0x000f 2 0 R_ERR response for host-to-device data FIS, CRC
0x0012 2 0 R_ERR response for host-to-device non-data FIS, CRC
0x8000 4 339796 Vendor specific

root@truenas[~]#

OK, so one of the disks in your mirror is potentially on the way out. It’s not the end of the world, but if this data is critical you should really have a backup of it somewhere else regardless of the current situation. Personally, I’d get a replacement drive ASAP and proactively replace the failing disk. If you have another place to copy the data to in the meantime, I’d suggest doing that to be safe. Long term, you might want to consider backing up directly from your TrueNAS to a cloud provider such as Storj to avoid this worry the next time it happens.
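
When the replacement does arrive, the swap itself is straightforward. On TrueNAS you would normally do it through the web UI, but the underlying ZFS operation is roughly this (the old-disk GUID and new-device path are placeholders for your actual devices):

zpool replace XXYY <old-disk-guid> /dev/disk/by-id/<new-disk>
zpool status XXYY    # watch the resilver until it completes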

Thanks for the advice. I’ll repeat what I explained in the main post: I’m currently unable to get a replacement since it’s the holidays and stores are closed. So until everything opens again, I’m really trying to salvage what I can, and I’m looking for the safest way to do so - via a ZFS replication task or by rsyncing the files manually.

The drives contain entertainment media - nothing critical, but I don’t want to spend the next 3 weeks re-downloading everything.

So what I’m really asking for is advice on whether the safer approach is ZFS replication or rsync, and whether I should offline the failing disk or leave it as it is.

When did you last perform a scrub? If the data has been at rest, unread, for years and there are thousands of checksum errors, that would be a good thing to know before doing anything, particularly if the good drive has errors.

I can’t see any benefit in offlining the bad drive. Nor any reason to prefer one copy method over another, but I could be wrong on this last point.
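
If you want to know where you stand before copying anything, a scrub plus a verbose status check will show whether either disk is currently returning data that ZFS cannot repair (pool name taken from your zpool status above):

zpool scrub XXYY        # re-reads every block and verifies it against its checksum
zpool status -v XXYY    # -v lists any files affected by permanent (unrecoverable) errors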

Well, ZFS replication obviously needs another ZFS pool or system as the destination.
Do you have that ready?
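
For example, a quick capacity check on the intended destination:

zfs list -o name,used,avail    # confirm the target pool has enough free space for the copy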

Yes, I have another healthy pool with free space, ready to go. I’m just wondering which method would be the better option when it comes to file integrity, keeping in mind that one of the disks is most probably returning bad data.

Then I would use replication.
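
The replication task in the UI is the easiest way to set that up. For reference, what it does under the hood is essentially a snapshot plus zfs send/receive, along these lines (the dataset and destination names below are placeholders, not your actual layout):

zfs snapshot -r XXYY/media@rescue
zfs send -R XXYY/media@rescue | zfs recv -u OTHERPOOL/media-rescue

Either way the data is read through ZFS, so every block is checksum-verified (and, in a mirror, repaired from the healthy disk where possible) as it is copied; replication additionally transfers the dataset as a whole, with its properties and snapshots, from a single point-in-time snapshot.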


There are regular scrub tasks every ~2 weeks. At this point I’m pretty sure the drive is failing; I was just looking for a temporary solution until I can purchase a replacement.

Another reply to this post, in case someone lands on the same problem as me: I used replication as @Farout suggested - the dataset replicated successfully and the data seems intact. Then again, the “data” was just movies & TV shows.
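
One extra note for anyone doing the same: if you want additional peace of mind after the copy, a scrub of the destination pool will re-read and verify everything that was just written (pool name is a placeholder):

zpool scrub <destination-pool>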
