Write errors during pool extend - what do I do now?

I have a system currently running SCALE ElectricEel-24.10.2, built about 6 months ago with 5x 4TB WD Red Plus drives (WD40EFPX) in a single pool. We are adding another 3 drives (same model) on an HBA (LSI 9200-8I), using the new “extend” feature in 24.10 to add these three drives to the pool.

Full system details:

Summary
  • Intel Core i7-3770, Asus Sabertooth Z77
  • 5x WD Red Plus 4TB (original pool “DOLLY-ARRAY”)
  • 2x 120GB SATA SSD (boot drives, mirrored; one Intel 320 Series, the other Patriot Burst Elite)
  • 240GB Kingston NVMe SSD (cache drive for pool)
  • LSI SAS HBA 9200-8i
    • 3x WD Red Plus 4TB (new drives for pool, attached to the HBA)

The first drive added to the pool without any issues and took about 18-20 hours. The second drive ran for a couple of hours, then paused - the TrueNAS web UI shows the job as “paused for resilver or clear” along with an alert:

Critical
Pool DOLLY-ARRAY state is ONLINE: One or more devices has experienced an unrecoverable error. An attempt was made to correct the error. Applications are unaffected.
2025-03-12 12:20:59 (America/Los_Angeles)

Did some digging online, ran zpool status, and found that one of the new drives had 6 write errors. I grabbed SMART info with smartctl and ran a short SMART test, but did not see anything amiss. I decided to zpool clear and let the expand job continue to see if it would alert again - it did, about an hour or two later, with one more write error.

Current output of zpool status:

admin@DOLLY[~]$ sudo zpool status -LP DOLLY-ARRAY
[sudo] password for admin: 
  pool: DOLLY-ARRAY
 state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
        attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-9P
  scan: scrub repaired 0B in 04:48:32 with 0 errors on Tue Mar 11 16:03:01 2025
expand: expansion of raidz2-0 in progress since Tue Mar 11 11:15:45 2025
        2.14T / 11.2T copied at 25.0M/s, 19.14% done, paused for resilver or clear
config:

        NAME           STATE     READ WRITE CKSUM
        DOLLY-ARRAY    ONLINE       0     0     0
          raidz2-0     ONLINE       0     0     0
            /dev/sde1  ONLINE       0     0     0
            /dev/sdg1  ONLINE       0     0     0
            /dev/sdi1  ONLINE       0     0     0
            /dev/sdc1  ONLINE       0     0     0
            /dev/sdh1  ONLINE       0     0     0
            /dev/sda1  ONLINE       0     0     0
            /dev/sdb1  ONLINE       0     1     0
        cache
          /dev/sdj1    ONLINE       0     0     0

errors: No known data errors
admin@DOLLY[~]$ 
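As a side note for anyone following along, picking the members with non-zero error counters out of a long `zpool status` listing is easy to script. This is a minimal sketch that filters a pasted sample of the status above; on the live system you would pipe in `sudo zpool status -LP DOLLY-ARRAY` instead of the sample variable:

```shell
# Sample lines pasted from the zpool status output above; substitute the real
# command's output on a live system.
status_sample='            /dev/sda1  ONLINE       0     0     0
            /dev/sdb1  ONLINE       0     1     0'

# Print any /dev member whose READ+WRITE+CKSUM counters are non-zero.
echo "$status_sample" | awk '
    $1 ~ /^\/dev\// && ($3 + $4 + $5) > 0 {
        print $1 " READ=" $3 " WRITE=" $4 " CKSUM=" $5
    }'
# prints: /dev/sdb1 READ=0 WRITE=1 CKSUM=0
```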

Current SMART status:

admin@DOLLY[~]$ sudo smartctl -a /dev/sdb   
smartctl 7.4 2023-08-01 r5530 [x86_64-linux-6.6.44-production+truenas] (local build)
Copyright (C) 2002-23, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Device Model:     WDC WD40EFPX-68C6CN0
Serial Number:    WD-WX22DA411C81
LU WWN Device Id: 5 0014ee 26be6e8b4
Firmware Version: 81.00A81
User Capacity:    4,000,787,030,016 bytes [4.00 TB]
Sector Sizes:     512 bytes logical, 4096 bytes physical
Rotation Rate:    5400 rpm
Form Factor:      3.5 inches
Device is:        Not in smartctl database 7.3/5528
ATA Version is:   ACS-3 T13/2161-D revision 5
SATA Version is:  SATA 3.1, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is:    Wed Mar 12 12:18:59 2025 PDT
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

General SMART Values:
Offline data collection status:  (0x00) Offline data collection activity
                                        was never started.
                                        Auto Offline Data Collection: Disabled.
Self-test execution status:      (   0) The previous self-test routine completed
                                        without error or no self-test has ever 
                                        been run.
Total time to complete Offline 
data collection:                (39840) seconds.
Offline data collection
capabilities:                    (0x7b) SMART execute Offline immediate.
                                        Auto Offline data collection on/off support.
                                        Suspend Offline collection upon new
                                        command.
                                        Offline surface scan supported.
                                        Self-test supported.
                                        Conveyance Self-test supported.
                                        Selective Self-test supported.
SMART capabilities:            (0x0003) Saves SMART data before entering
                                        power-saving mode.
                                        Supports SMART auto save timer.
Error logging capability:        (0x01) Error logging supported.
                                        General Purpose Logging supported.
Short self-test routine 
recommended polling time:        (   2) minutes.
Extended self-test routine
recommended polling time:        ( 414) minutes.
Conveyance self-test routine
recommended polling time:        (   5) minutes.
SCT capabilities:              (0x3039) SCT Status supported.
                                        SCT Error Recovery Control supported.
                                        SCT Feature Control supported.
                                        SCT Data Table supported.

SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x002f   200   200   051    Pre-fail  Always       -       0
  3 Spin_Up_Time            0x0027   204   204   021    Pre-fail  Always       -       2758
  4 Start_Stop_Count        0x0032   100   100   000    Old_age   Always       -       22
  5 Reallocated_Sector_Ct   0x0033   200   200   140    Pre-fail  Always       -       0
  7 Seek_Error_Rate         0x002e   200   200   000    Old_age   Always       -       0
  9 Power_On_Hours          0x0032   100   100   000    Old_age   Always       -       50
 10 Spin_Retry_Count        0x0032   100   253   000    Old_age   Always       -       0
 11 Calibration_Retry_Count 0x0032   100   253   000    Old_age   Always       -       0
 12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always       -       21
192 Power-Off_Retract_Count 0x0032   200   200   000    Old_age   Always       -       19
193 Load_Cycle_Count        0x0032   200   200   000    Old_age   Always       -       59
194 Temperature_Celsius     0x0022   112   108   000    Old_age   Always       -       35
196 Reallocated_Event_Count 0x0032   200   200   000    Old_age   Always       -       0
197 Current_Pending_Sector  0x0032   200   200   000    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0030   100   253   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x0032   200   200   000    Old_age   Always       -       0
200 Multi_Zone_Error_Rate   0x0008   100   253   000    Old_age   Offline      -       0

SMART Error Log Version: 1
No Errors Logged

SMART Self-test log structure revision number 1
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Short offline       Completed without error       00%        47         -
# 2  Extended offline    Aborted by host               90%        46         -

SMART Selective self-test log data structure revision number 1
 SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
    1        0        0  Not_testing
    2        0        0  Not_testing
    3        0        0  Not_testing
    4        0        0  Not_testing
    5        0        0  Not_testing
Selective self-test flags (0x0):
  After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.

The above only provides legacy SMART information - try 'smartctl -x' for more

admin@DOLLY[~]$ 

I think what I’d like to do now is stop the extend task (how do I do this? Is it even possible?) and run the extended SMART test on the two remaining drives. (I realize now I ought to have tested the drives before adding them to the pool.) Or, another option, per the “action” suggested by zpool: could I zpool replace the failing drive with the remaining unused drive, and would the “extend” job then move to the replacement?

Thank you so much in advance for any guidance you can offer!

Thanks for the detailed info, which gives us a fast start.

  • The pool is NOT degraded.

  • The SMART data doesn’t indicate any issues.

  • AFAIK you cannot stop an expansion once started.

So my advice is to wait until the expansion has finished, then run a scrub, and once it completes, run zpool status -v to see whether the unrecoverable write error was corrected.

If the scrub runs clear, then do zpool clear to clear the one error and wait to see whether it was just a one-off glitch or whether something recurs.
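The sequence above can be sketched as shell commands. They are echoed here rather than executed, since they act on a live pool; the pool name is the one from this thread:

```shell
# Sketch of the wait-scrub-inspect-clear sequence (echoed, not executed).
POOL=DOLLY-ARRAY

echo sudo zpool scrub "$POOL"       # 1. scrub once the expansion finishes
echo sudo zpool status -v "$POOL"   # 2. check whether errors were corrected
echo sudo zpool clear "$POOL"       # 3. if the scrub ran clean, clear the counter
```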


The expansion is “paused” right now because of the error - is there a way to tell it to continue anyway? Or would I have to zpool clear every time it hits an error?

Ah - I hadn’t spotted that.

If it were me I would run a SMART long test on /dev/sdb (the error drive) to see whether it completes cleanly. I might then do a scrub as well, and see whether that runs cleanly before doing a zpool clear and resuming the expansion.
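Checking whether a long test ever completed cleanly can be done non-interactively by grepping the self-test log (`sudo smartctl -l selftest /dev/sdb` on the live system). This sketch runs the same check against a pasted sample of the log shown earlier in the thread, where the extended test had been aborted:

```shell
# Sample self-test log lines pasted from the smartctl output above.
selftest_log='# 1  Short offline       Completed without error       00%        47         -
# 2  Extended offline    Aborted by host               90%        46         -'

if echo "$selftest_log" | grep -q 'Extended offline.*Completed without error'; then
    echo "a clean extended test is on record"
else
    echo "no clean extended test on record"   # true for this sample: it was aborted
fi
```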

Extended offline test for sdb came back with no errors, resuming expansion. Will update when complete.


Expansion stopped again, with 6 write errors on that same drive. I guess I don’t understand what a “write error” indicates - why do the errors show up when the SMART test is clean?
I am scrubbing the pool now - if I understand correctly, maybe there is an issue with the data being copied?

scrub repaired 0B in 06:17:49 with 0 errors on Tue Mar 18 18:26:00 2025

It’s about halfway through the expansion at this point.

The drive encountered an issue writing the data (check with smartctl) but managed to write it safely (possibly reallocating sectors in the process). It’s doing its best, but it is not healthy.
ZFS determined that the data was not corrupted, but you should plan for a further resilver when replacing the drive…

Okay - the latest from zpool status is a bit worse:

admin@DOLLY[~]$ sudo zpool status -LP DOLLY-ARRAY
[sudo] password for admin: 
  pool: DOLLY-ARRAY
 state: DEGRADED
status: One or more devices are faulted in response to persistent errors.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Replace the faulted device, or use 'zpool clear' to mark the device
        repaired.
  scan: scrub repaired 0B in 06:17:49 with 0 errors on Tue Mar 18 18:26:00 2025
expand: expansion of raidz2-0 in progress since Tue Mar 11 11:15:45 2025
        6.03T / 11.0T copied at 9.07M/s, 54.69% done, paused for resilver or clear
config:

        NAME           STATE     READ WRITE CKSUM
        DOLLY-ARRAY    DEGRADED     0     0     0
          raidz2-0     DEGRADED     0     0     0
            /dev/sde1  ONLINE       0     0     0
            /dev/sdg1  ONLINE       0     0     0
            /dev/sdi1  ONLINE       0     0     0
            /dev/sdc1  ONLINE       0     0     0
            /dev/sdh1  ONLINE       0     0     0
            /dev/sda1  ONLINE       0     0     0
            /dev/sdb1  FAULTED      0   257     0  too many errors
        cache
          /dev/sdj1    ONLINE       0     0     0

errors: No known data errors

So, at this point I’d definitely like to replace the drive with the spare I haven’t yet used. Since the drive is in the middle of being expanded onto, can I replace it in my pool without affecting the expand job?

As before, expansion will pause while replacing/resilvering and should automagically resume when it’s done.
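For reference, the replacement itself is a single command. This sketch echoes it rather than executing it; `/dev/sdb1` is the faulted member from the status above, and `/dev/sdX` is a placeholder - substitute the spare drive's actual device node:

```shell
# Sketch of replacing the faulted member with the unused spare (echoed, not run).
POOL=DOLLY-ARRAY
FAULTED=/dev/sdb1   # the member showing write errors in the status above
SPARE=/dev/sdX      # placeholder - substitute the spare's actual device node

echo sudo zpool replace "$POOL" "$FAULTED" "$SPARE"
# Afterwards `zpool status` should show a resilver; the paused expansion
# should resume on its own once the resilver completes.
```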


There is no way to stop an expansion. It will pause while you resolve your faulty disk issue.

I’ve marked @etorix response as the solution, since it answers my original question that this thread is technically about.
However, my array remains in a degraded state (should I move to a new thread?) - it has successfully replaced with the new drive (labeled sdd), but the expand job stopped again: 4 write errors now on sdd. Is it possible that the drives are fine and the HBA is to blame? Would explain why the smart reports are coming back clean. Again, there are three drives attached to the HBA, extending onto the first drive went fine, but the second and now the third have been encountering write errors.

Here’s the smart report from sdd, in case it’s helpful:

Summary
smartctl 7.4 2023-08-01 r5530 [x86_64-linux-6.6.44-production+truenas] (local build)
Copyright (C) 2002-23, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Family:     Western Digital Red (CMR)
Device Model:     WDC WD40EFPX-68C6CN0
Serial Number:    WD-WX22DA4114KH
LU WWN Device Id: 5 0014ee 2c13ce6d2
Firmware Version: 81.00A81
User Capacity:    4,000,787,030,016 bytes [4.00 TB]
Sector Sizes:     512 bytes logical, 4096 bytes physical
Rotation Rate:    5400 rpm
Form Factor:      3.5 inches
Device is:        In smartctl database 7.3/5660
ATA Version is:   ACS-3 T13/2161-D revision 5
SATA Version is:  SATA 3.1, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is:    Thu Mar 20 11:38:18 2025 PDT
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

General SMART Values:
Offline data collection status:  (0x00) Offline data collection activity
                                        was never started.
                                        Auto Offline Data Collection: Disabled.
Self-test execution status:      (  25) The self-test routine was aborted by
                                        the host.
Total time to complete Offline 
data collection:                (39600) seconds.
Offline data collection
capabilities:                    (0x7b) SMART execute Offline immediate.
                                        Auto Offline data collection on/off support.
                                        Suspend Offline collection upon new
                                        command.
                                        Offline surface scan supported.
                                        Self-test supported.
                                        Conveyance Self-test supported.
                                        Selective Self-test supported.
SMART capabilities:            (0x0003) Saves SMART data before entering
                                        power-saving mode.
                                        Supports SMART auto save timer.
Error logging capability:        (0x01) Error logging supported.
                                        General Purpose Logging supported.
Short self-test routine 
recommended polling time:        (   2) minutes.
Extended self-test routine
recommended polling time:        ( 413) minutes.
Conveyance self-test routine
recommended polling time:        (   5) minutes.
SCT capabilities:              (0x3039) SCT Status supported.
                                        SCT Error Recovery Control supported.
                                        SCT Feature Control supported.
                                        SCT Data Table supported.

SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x002f   100   253   051    Pre-fail  Always       -       0
  3 Spin_Up_Time            0x0027   205   204   021    Pre-fail  Always       -       2716
  4 Start_Stop_Count        0x0032   100   100   000    Old_age   Always       -       22
  5 Reallocated_Sector_Ct   0x0033   200   200   140    Pre-fail  Always       -       0
  7 Seek_Error_Rate         0x002e   200   200   000    Old_age   Always       -       0
  9 Power_On_Hours          0x0032   100   100   000    Old_age   Always       -       241
 10 Spin_Retry_Count        0x0032   100   253   000    Old_age   Always       -       0
 11 Calibration_Retry_Count 0x0032   100   253   000    Old_age   Always       -       0
 12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always       -       21
192 Power-Off_Retract_Count 0x0032   200   200   000    Old_age   Always       -       19
193 Load_Cycle_Count        0x0032   200   200   000    Old_age   Always       -       33
194 Temperature_Celsius     0x0022   112   106   000    Old_age   Always       -       35
196 Reallocated_Event_Count 0x0032   200   200   000    Old_age   Always       -       0
197 Current_Pending_Sector  0x0032   200   200   000    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0030   100   253   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x0032   200   200   000    Old_age   Always       -       0
200 Multi_Zone_Error_Rate   0x0008   100   253   000    Old_age   Offline      -       0

SMART Error Log Version: 1
No Errors Logged

SMART Self-test log structure revision number 1
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Extended offline    Aborted by host               90%       220         -
# 2  Short offline       Completed without error       00%       134         -

SMART Selective self-test log data structure revision number 1
 SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
    1        0        0  Not_testing
    2        0        0  Not_testing
    3        0        0  Not_testing
    4        0        0  Not_testing
    5        0        0  Not_testing
Selective self-test flags (0x0):
  After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.

The above only provides legacy SMART information - try 'smartctl -x' for more

If the drives pass SMART tests, it could be the HBA overheating or a damaged cable. The latter will be a pain to troubleshoot, especially in the middle of a resilver or raidz expansion.

Alright, I have a spare cable I can try - is it safe to power down the system while the expansion is paused? Otherwise, the expansion is 84% done, so I could probably zpool clear and let it continue, hope that it doesn’t encounter too many more errors, and then troubleshoot the HBA.
I doubt the card is overheating - there is a 200mm fan pointed directly at it. I have seen references to “clone” HBAs - I’m not sure how to tell if mine is legitimate.
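On the clone question: two non-destructive checks people commonly suggest are confirming the card enumerates correctly on the PCIe bus and reading its firmware/board identifiers with Broadcom/LSI's sas2flash utility (which works on SAS2008-based cards like the 9200-8i, but usually has to be installed separately, so take this as a sketch, not a verdict). The commands are echoed here rather than run, since they only make sense on the host with the card installed:

```shell
# Sketch of non-destructive HBA identity checks (echoed, not executed).
echo 'lspci -nn | grep -i -e LSI -e SAS2008'   # confirm the controller enumerates on the bus
echo 'sudo sas2flash -list'                    # firmware/BIOS versions and board identifiers
```

Counterfeit cards often report mismatched or blank board/assembly fields, though a clean listing does not prove the card is genuine.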

It could also be a PSU issue. A RAIDZ expansion is going to put a load on your system that it’s never seen before.

Hm. I could put a wattmeter on it, but again, that would mean shutting the system down, which I’m not sure I can do mid-expansion.

It should be safe to shut down a ZFS system mid-expansion or mid-resilver.


I’ve swapped in a spare SAS/SATA breakout cable and plugged in a wattmeter. The ZFS errors apparently cleared on reboot (which is … interesting), so the expansion is continuing. The system is sitting at about 100-150W during expansion.

(All of my drive letter assignments changed when I rebooted, so apologies in advance - this is going to get confusing. From here on, to keep things straight, I’m going to refer to the drives numerically: 1-5 are the original drives, 6-8 are the new drives in the order added.)
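Since sdX letters are not stable across reboots, a tip for anyone in the same spot: tie each pool member to its serial number instead. Either of these commands prints the mapping on a live system (echoed here, since there are no disks to enumerate in a pasted example):

```shell
# Sketch: stable ways to identify drives regardless of sdX shuffling (echoed, not run).
echo 'lsblk -o NAME,MODEL,SERIAL'   # current letter -> model/serial table
echo 'ls -l /dev/disk/by-id/'       # persistent by-id symlinks to the sdX nodes
```

Matching the serials against the smartctl output pasted earlier in the thread (e.g. WD-WX22DA411C81) removes any ambiguity about which physical drive is which.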
To refresh, so far:

  • Expanded array onto drive 6 without issues
  • Attempted expanding onto drive 7 (formerly sdb), threw write errors repeatedly
  • Replaced drive 7 with drive 8, still threw write errors
  • replaced SAS/SATA cable, rebooted, resumed expansion to see if that fixed the problem

Since then:

  • after replacing the SAS/SATA pigtail cable, expansion onto drive 8 completed without error
  • thinking the cable must have been the problem, I decided to test that theory by replacing one of the “good” drives already in the array (we’ll say drive 1) with drive 7 (the one that first started giving errors) to see if it would get any write errors. Remember, so far the SMART tests on every single drive have been clean - not even a single reallocated sector; the only thing that has generated any issue is the write errors showing up during expansion. I ran another long SMART test on drive 7 beforehand just to be sure, and it still shows up healthy - the only way I have of knowing whether anything is wrong is to try to move data onto it. (Dangerous, maybe, but worst case I should be able to rebuild the data on drive 1.) This replace action went fine, again with zero errors, suggesting to me that drive 7 is likely healthy.
  • at this point I have 7 of 8 drives in the array, including all of the new drives, and the only remaining drive is known good. In theory, I should be able to expand back onto it without issue.
  • I ran the expand onto drive 1, and the expansion paused again because of write errors on drive 7 - the same drive that generated no errors when I resilvered onto it just before.
zpool status
admin@DOLLY[~]$ sudo zpool status -LP DOLLY-ARRAY
[sudo] password for admin: 
  pool: DOLLY-ARRAY
 state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
        attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-9P
  scan: resilvered 1.83T in 05:00:41 with 0 errors on Mon Mar 24 13:37:12 2025
expand: expansion of raidz2-0 in progress since Mon Mar 24 15:30:16 2025
        2.45T / 10.8T copied at 39.9M/s, 22.61% done, paused for resilver or clear
config:

        NAME           STATE     READ WRITE CKSUM
        DOLLY-ARRAY    ONLINE       0     0     0
          raidz2-0     ONLINE       0     0     0
            /dev/sda1  ONLINE       0     5     0
            /dev/sdf1  ONLINE       0     0     0
            /dev/sdi1  ONLINE       0     0     0
            /dev/sdg1  ONLINE       0     0     0
            /dev/sdh1  ONLINE       0     0     0
            /dev/sdb1  ONLINE       0     0     0
            /dev/sdc1  ONLINE       0     0     0
            /dev/sde1  ONLINE       0     0     0
        cache
          /dev/sdk1    ONLINE       0     0     0

errors: No known data errors

It seems to me that the HBA must be the problem. What can I do to troubleshoot this? Power consumption is nowhere near the capacity of the PSU, and I doubt the card is overheating, though I’m not sure how I’d check.

To be more specific, is there some way to “test” the HBA card?

I’ve moved all 8 drives onto the motherboard directly, just to verify that the whole array is working. This meant sticking the boot drives on the HBA, but they will see significantly less I/O than the actual array, so I’m hoping they’ll be fine there temporarily until I source another card.

Running a scrub on the array for sanity, but it seems like that has fixed everything. Thanks for y’all’s help :+1:
