Constant checksum errors on pool. Moved array to new server, errors persist

Hi,

I am pulling my hair out over a recent issue I am facing and looking for some more help.

I had a failing array of 4 disks. One by one I replaced the disks and resilvered. Errors persisted. I moved the disks to a spare server and imported the pool; the errors continue. Scrubbing has had mixed results.

Long form.

  • I have 4 disks in RAIDZ1. One of them started throwing errors, then another.
  • When I ran zpool status -v tank, it complained of irreparable damage to files in a dataset (not important) and in snapshots (also not important).
  • I deleted the files (rm) and deleted the snapshots through the UI.
  • The disks, WD Reds, are ten years old, so I assumed it was about time.
  • I bought 4 new WD Reds from Scan, a reputable supplier. One by one I replaced them.
  • During the resilver, one of the old drives really struggled: slow I/O, loads of CRC errors, etc. Really bad, but it completed eventually.
  • I removed that drive next and replaced it.
  • Now the checksum errors started to build.
  • I finished the array rebuild and ran a scrub. No fix. Another scrub. Fixed. (Rough command equivalent below.)
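
For reference, what I did through the UI is roughly equivalent to this at the command line (the gptid values are placeholders, not my actual ones):

    zpool offline tank gptid/<old-disk>                     # take the failing disk offline
    zpool replace tank gptid/<old-disk> gptid/<new-disk>    # swap the drive, start the resilver
    zpool status tank                                       # watch until the resilver completes
    zpool scrub tank                                        # then verify the whole pool
    zpool clear tank                                        # reset the error counters before the next pass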

Sweet, think I, it’s done.
Alas, errors return.

I eventually pull the disks, move them to a new server and import the pool. Same errors!
I scrub; the errors change. I clear; the errors go away. I scrub again, and now I see this.
Note the perfectly aligned error counts…

I’ve deleted the dataset too.

I’m lost. Please help :slight_smile:

root@nas:~ # zpool status -v tank
pool: tank
state: ONLINE
status: One or more devices has experienced an error resulting in data
corruption. Applications may be affected.
action: Restore the file in question if possible. Otherwise restore the
entire pool from backup.
see: ZFS-8000-8A
scan: scrub repaired 0B in 02:53:33 with 25 errors on Sun Mar 9 09:32:27 2025
config:

    NAME                                            STATE     READ WRITE CKSUM
    tank                                            ONLINE       0     0     0
      raidz1-0                                      ONLINE       0     0     0
        gptid/f90f025e-f7a0-11ef-afce-941882371870  ONLINE       0     0    36
        gptid/6fc847a3-f743-11ef-8304-941882371870  ONLINE       0     0    36
        gptid/490c87b6-fa09-11ef-bf83-941882371870  ONLINE       0     0    36
        gptid/c3e0710d-f941-11ef-b91b-941882371870  ONLINE       0     0    36

errors: Permanent errors have been detected in the following files:

    <0xed>:<0xf158c>
    <0xffffffffffffffff>:<0xf158c>

root@nas:~ # zpool status -v tank
pool: tank
state: ONLINE
status: One or more devices has experienced an error resulting in data
corruption. Applications may be affected.
action: Restore the file in question if possible. Otherwise restore the
entire pool from backup.
see: ZFS-8000-8A
scan: scrub repaired 0B in 02:53:33 with 25 errors on Sun Mar 9 09:32:27 2025
config:

    NAME                                            STATE     READ WRITE CKSUM
    tank                                            ONLINE       0     0     0
      raidz1-0                                      ONLINE       0     0     0
        gptid/f90f025e-f7a0-11ef-afce-941882371870  ONLINE       0     0   136
        gptid/6fc847a3-f743-11ef-8304-941882371870  ONLINE       0     0   136
        gptid/490c87b6-fa09-11ef-bf83-941882371870  ONLINE       0     0   136
        gptid/c3e0710d-f941-11ef-b91b-941882371870  ONLINE       0     0   136

errors: Permanent errors have been detected in the following files:

    <0xed>:<0xf158c>
    <0xffffffffffffffff>:<0xf158c>

root@nas:~ #
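
In case it matters: as I understand it, those <0xed>:<0xf158c> style entries are raw object numbers that no longer resolve to a path, which would fit, since I deleted the affected files, the snapshots, and eventually the whole dataset. I believe something like the following would show whether that objset still exists; I have not tried it yet, and the object number may need to be given in decimal (0xf158c = 988556):

    zdb -d tank                      # list datasets and their objset IDs
    zdb -dddd tank/<dataset> 988556  # dump a specific object, if the dataset still existed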

What model of Reds do you have? It sounds like you have SMR drives, which do not work well with ZFS and die quickly. Also, give us the drive status with sudo smartctl -xi /dev/sd<?>
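
If you want to grab all of them in one go, a loop along these lines should do it (device names are a guess on my part; on CORE/FreeBSD the SATA disks usually appear as /dev/ada0, /dev/ada1, and so on, while on SCALE/Linux they are /dev/sdX):

    for d in /dev/ada0 /dev/ada1 /dev/ada2 /dev/ada3; do
        echo "=== $d ==="
        smartctl -xi "$d"
    done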

Thanks for the reply.

I’ll run the command this evening when I’m home again.

In the meantime, these are the disks I bought. I can’t post links.

WD60EFPX
Product Features

Ideal for Home Offices, Power Users, Small to Medium Businesses and Consumer/Commercial NAS systems
For RAID-optimized NAS systems with up to 8 bays
Rated for 180TB/year workload and 1M hours MTBF

They’re CMR disks, so I don’t think that’s an issue?

Hi and welcome to the forums.

It’s odd to see checksum errors against all of your drives simultaneously, especially in the exact same amount, so I wouldn’t be looking at the drives atm.

Instead, I think we need to delve deeper into your system. Hardware specs, please; be as comprehensive as possible.

Also what version of TrueNAS are you running?

PS: You mentioned moving the disks to a spare server. Was this a completely different system or did you move any parts with the disks? Include specs of both systems please.

Hey,

It’s TrueNAS CORE, all up to date.

The original server was an HP MicroServer Gen 8.
When I also saw intermittent issues with a USB disk (a backup disk; I’ve been using one this way for years, swapping in a new disk each year), I suspected PSU issues. I swapped the PSU; same overall issue.

I then dusted off the previous HP MicroServer Gen 7, swapped the disks over (including the boot disk) and booted up.
It came up OK, previous config as expected. All OK.
The errors on the pool persisted. Expected, as I hadn’t really changed anything that would fix them.

Now I’m on totally different hardware.

I then scrubbed the pool, fully expecting it to sort things out. Slightly dismayed at the sight of more errors.

I cleared the pool. Scrubbed again.
Now we’re caught up.

I’m lost :cry:

To add, both servers have ECC memory and support ECC. I did not transfer the DIMMs over.
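
If anyone wants me to double-check that ECC is actually active rather than just supported, I believe something like this should show it (assuming dmidecode is available on CORE):

    dmidecode -t memory | grep -i 'error correction'

If that reports something like "Multi-bit ECC" rather than "None", the board is at least claiming ECC is in use.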

Hi Theo, here is the output of one of the disks.

For reference, the disks now have 69.2k checksum errors on them, @Johnny_Fartpants. Identical counts on each disk…

smartctl 7.2 2021-09-14 r5236 [FreeBSD 13.1-RELEASE-p9 amd64] (local build)
Copyright (C) 2002-20, Bruce Allen, Christian Franke, smartmontools

=== START OF INFORMATION SECTION ===
Device Model: WDC WD60EFPX-68C5ZN0
Serial Number: WD-WX52D84J7TUK
LU WWN Device Id: 5 0014ee 2c130d507
Firmware Version: 81.00A81
User Capacity: 6,001,175,126,016 bytes [6.00 TB]
Sector Sizes: 512 bytes logical, 4096 bytes physical
Rotation Rate: 5400 rpm
Form Factor: 3.5 inches
Device is: Not in smartctl database [for details use: -P showall]
ATA Version is: ACS-3 T13/2161-D revision 5
SATA Version is: SATA 3.1, 6.0 Gb/s (current: 3.0 Gb/s)
Local Time is: Tue Mar 11 21:34:04 2025 GMT
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
AAM feature is: Unavailable
APM feature is: Unavailable
Rd look-ahead is: Enabled
Write cache is: Enabled
DSN feature is: Unavailable
ATA Security is: Disabled, NOT FROZEN [SEC1]
Wt Cache Reorder: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

General SMART Values:
Offline data collection status: (0x00) Offline data collection activity
was never started.
Auto Offline Data Collection: Disabled.
Self-test execution status: ( 0) The previous self-test routine completed
without error or no self-test has ever
been run.
Total time to complete Offline
data collection: (59220) seconds.
Offline data collection
capabilities: (0x7b) SMART execute Offline immediate.
Auto Offline data collection on/off support.
Suspend Offline collection upon new
command.
Offline surface scan supported.
Self-test supported.
Conveyance Self-test supported.
Selective Self-test supported.
SMART capabilities: (0x0003) Saves SMART data before entering
power-saving mode.
Supports SMART auto save timer.
Error logging capability: (0x01) Error logging supported.
General Purpose Logging supported.
Short self-test routine
recommended polling time: ( 2) minutes.
Extended self-test routine
recommended polling time: ( 614) minutes.
Conveyance self-test routine
recommended polling time: ( 5) minutes.
SCT capabilities: (0x3039) SCT Status supported.
SCT Error Recovery Control supported.
SCT Feature Control supported.
SCT Data Table supported.

SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAGS    VALUE WORST THRESH FAIL RAW_VALUE
  1 Raw_Read_Error_Rate     POSR-K   200   200   051    -    0
  3 Spin_Up_Time            POS--K   225   225   021    -    3733
  4 Start_Stop_Count        -O--CK   100   100   000    -    7
  5 Reallocated_Sector_Ct   PO--CK   200   200   140    -    0
  7 Seek_Error_Rate         -OSR-K   200   200   000    -    0
  9 Power_On_Hours          -O--CK   100   100   000    -    74
 10 Spin_Retry_Count        -O--CK   100   253   000    -    0
 11 Calibration_Retry_Count -O--CK   100   253   000    -    0
 12 Power_Cycle_Count       -O--CK   100   100   000    -    6
192 Power-Off_Retract_Count -O--CK   200   200   000    -    2
193 Load_Cycle_Count        -O--CK   200   200   000    -    10
194 Temperature_Celsius     -O---K   129   115   000    -    21
196 Reallocated_Event_Count -O--CK   200   200   000    -    0
197 Current_Pending_Sector  -O--CK   200   200   000    -    0
198 Offline_Uncorrectable   ----CK   100   253   000    -    0
199 UDMA_CRC_Error_Count    -O--CK   200   200   000    -    0
200 Multi_Zone_Error_Rate   ---R--   100   253   000    -    0
                            ||||||_ K auto-keep
                            |||||__ C event count
                            ||||___ R error rate
                            |||____ S speed/performance
                            ||_____ O updated online
                            |______ P prefailure warning

General Purpose Log Directory Version 1
SMART Log Directory Version 1 [multi-sector log support]
Address Access R/W Size Description
0x00 GPL,SL R/O 1 Log Directory
0x01 SL R/O 1 Summary SMART error log
0x02 SL R/O 5 Comprehensive SMART error log
0x03 GPL R/O 6 Ext. Comprehensive SMART error log
0x04 GPL R/O 256 Device Statistics log
0x04 SL R/O 255 Device Statistics log
0x06 SL R/O 1 SMART self-test log
0x07 GPL R/O 1 Extended self-test log
0x09 SL R/W 1 Selective self-test log
0x0c GPL R/O 2048 Pending Defects log
0x10 GPL R/O 1 NCQ Command Error log
0x11 GPL R/O 1 SATA Phy Event Counters log
0x24 GPL R/O 307 Current Device Internal Status Data log
0x30 GPL,SL R/O 9 IDENTIFY DEVICE data log
0x80-0x9f GPL,SL R/W 16 Host vendor specific log
0xa0-0xa7 GPL,SL VS 16 Device vendor specific log
0xa8-0xb6 GPL,SL VS 1 Device vendor specific log
0xb7 GPL,SL VS 78 Device vendor specific log
0xbd GPL,SL VS 1 Device vendor specific log
0xc0 GPL,SL VS 1 Device vendor specific log
0xc1 GPL VS 93 Device vendor specific log
0xe0 GPL,SL R/W 1 SCT Command/Status
0xe1 GPL,SL R/W 1 SCT Data Transfer

SMART Extended Comprehensive Error Log Version: 1 (6 sectors)
No Errors Logged

SMART Extended Self-test Log Version: 1 (1 sectors)
No self-tests have been logged. [To run self-tests, use: smartctl -t]

SMART Selective self-test log data structure revision number 1
SPAN MIN_LBA MAX_LBA CURRENT_TEST_STATUS
1 0 0 Not_testing
2 0 0 Not_testing
3 0 0 Not_testing
4 0 0 Not_testing
5 0 0 Not_testing
Selective self-test flags (0x0):
After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.

SCT Status Version: 3
SCT Version (vendor specific): 258 (0x0102)
Device State: Active (0)
Current Temperature: 21 Celsius
Power Cycle Min/Max Temperature: 20/32 Celsius
Lifetime Min/Max Temperature: 20/32 Celsius
Under/Over Temperature Limit Count: 0/0
Vendor specific:
01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00

SCT Temperature History Version: 2
Temperature Sampling Period: 1 minute
Temperature Logging Interval: 1 minute
Min/Max recommended Temperature: 0/65 Celsius
Min/Max Temperature Limit: -41/85 Celsius
Temperature History Size (Index): 478 (187)

Index Estimated Time Temperature Celsius
188 2025-03-11 13:37 22 ***
… …(448 skipped). … ***
159 2025-03-11 21:06 22 ***
160 2025-03-11 21:07 21 **
… …( 26 skipped). … **
187 2025-03-11 21:34 21 **

SCT Error Recovery Control:
Read: 70 (7.0 seconds)
Write: 70 (7.0 seconds)

Device Statistics (GP Log 0x04)
Page  Offset Size        Value Flags Description
0x01  =====  =               =  ===  == General Statistics (rev 3) ==
0x01  0x008  4               6  ---  Lifetime Power-On Resets
0x01  0x010  4              74  ---  Power-on Hours
0x01  0x018  6      3282386700  ---  Logical Sectors Written
0x01  0x020  6         6990039  ---  Number of Write Commands
0x01  0x028  6      5437531311  ---  Logical Sectors Read
0x01  0x030  6         6861166  ---  Number of Read Commands
0x01  0x038  6       266400000  ---  Date and Time TimeStamp
0x02  =====  =               =  ===  == Free-Fall Statistics (rev 1) ==
0x02  0x010  4               0  ---  Overlimit Shock Events
0x03  =====  =               =  ===  == Rotating Media Statistics (rev 1) ==
0x03  0x008  4              74  ---  Spindle Motor Power-on Hours
0x03  0x010  4              39  ---  Head Flying Hours
0x03  0x018  4              13  ---  Head Load Events
0x03  0x020  4               0  ---  Number of Reallocated Logical Sectors
0x03  0x028  4               0  ---  Read Recovery Attempts
0x03  0x030  4               0  ---  Number of Mechanical Start Failures
0x03  0x038  4               0  ---  Number of Realloc. Candidate Logical Sectors
0x03  0x040  4               2  ---  Number of High Priority Unload Events
0x04  =====  =               =  ===  == General Errors Statistics (rev 1) ==
0x04  0x008  4               0  ---  Number of Reported Uncorrectable Errors
0x04  0x010  4               0  ---  Resets Between Cmd Acceptance and Completion
0x05  =====  =               =  ===  == Temperature Statistics (rev 1) ==
0x05  0x008  1              21  ---  Current Temperature
0x05  0x010  1              22  ---  Average Short Term Temperature
0x05  0x018  1               -  ---  Average Long Term Temperature
0x05  0x020  1              32  ---  Highest Temperature
0x05  0x028  1              20  ---  Lowest Temperature
0x05  0x030  1              26  ---  Highest Average Short Term Temperature
0x05  0x038  1              22  ---  Lowest Average Short Term Temperature
0x05  0x040  1               -  ---  Highest Average Long Term Temperature
0x05  0x048  1               -  ---  Lowest Average Long Term Temperature
0x05  0x050  4               0  ---  Time in Over-Temperature
0x05  0x058  1              65  ---  Specified Maximum Operating Temperature
0x05  0x060  4               0  ---  Time in Under-Temperature
0x05  0x068  1               0  ---  Specified Minimum Operating Temperature
0x06  =====  =               =  ===  == Transport Statistics (rev 1) ==
0x06  0x008  4              31  ---  Number of Hardware Resets
0x06  0x010  4               7  ---  Number of ASR Events
0x06  0x018  4               0  ---  Number of Interface CRC Errors
0xff  =====  =               =  ===  == Vendor Specific Statistics (rev 1) ==
0xff  0x008  7               0  ---  Vendor Specific
0xff  0x010  7               0  ---  Vendor Specific
0xff  0x018  7               0  ---  Vendor Specific
                                |||_ C monitored condition met
                                ||__ D supports DSN
                                |___ N normalized value

Pending Defects log (GP Log 0x0c)
No Defects Logged

SATA Phy Event Counters (GP Log 0x11)
ID Size Value Description
0x0001 2 0 Command failed due to ICRC error
0x0002 2 0 R_ERR response for data FIS
0x0003 2 0 R_ERR response for device-to-host data FIS
0x0004 2 0 R_ERR response for host-to-device data FIS
0x0005 2 0 R_ERR response for non-data FIS
0x0006 2 0 R_ERR response for device-to-host non-data FIS
0x0007 2 0 R_ERR response for host-to-device non-data FIS
0x0008 2 0 Device-to-host non-data FIS retries
0x0009 2 5 Transition from drive PhyRdy to drive PhyNRdy
0x000a 2 9 Device-to-host register FISes sent due to a COMRESET
0x000b 2 0 CRC errors within host-to-device FIS
0x000d 2 0 Non-CRC errors within host-to-device FIS
0x000f 2 0 R_ERR response for host-to-device data FIS, CRC
0x0012 2 0 R_ERR response for host-to-device non-data FIS, CRC
0x8000 4 265482 Vendor specific

Did you move your HBA when you moved your drives? CMR drives should be fine, but this smells like a controller or cabling problem.
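
One more thing: your smartctl output shows that no self-tests have ever been logged, so while you investigate it might be worth kicking off a long test on each drive, something along these lines (device names will vary):

    smartctl -t long /dev/ada0    # repeat for each disk; the extended test is ~614 minutes on these drives
    smartctl -a /dev/ada0         # check the self-test log once it has finished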

Hello @CrispinB. I think what is happening is pretty similar to what I have seen on my system for several years. I opened a topic a long time ago (try looking for hp-hardware-or-hard-drive-issue.110601 on the legacy forum; sorry, I am not allowed to post links yet) and it was suggested that I change the cabling. I did: I bought a new connector, but I am still facing the same issue from time to time. I couldn’t identify the root cause. I can go several months with no issues and then a few happen. As in your case, I always get the same checksum error count on both disks (I am running a RAID1 array). Once I had to rebuild the pool; usually I deal with the affected file or files using a backup. Right now I am scrubbing the pool once again due to another failure event.

I think something on the HP motherboard/controller may be wrong.