How to replace a degraded disk in a striped pool with minimal data loss?

Hi

First time poster here :wave::wave:

I’m generally a technical person, but the underlying details of ZFS and storage management are fairly new to me, so please excuse any questions that come across a little off.

I set up a simple NAS at home a little while back and decided to go for a setup with 2 disks (4 TB each) without any redundancy (I believe the correct term is a striped pool). The main reason for doing so is that the data on those disks is not critical and I don’t mind too much if I lose some of it.

It’s been a few years since I set everything up and besides adding a third disk (still striped, no redundancy) I haven’t touched the setup much.

I have recently discovered that TrueNAS is reporting the pool as degraded because of one disk (likely one of the old ones; I haven’t checked the exact details yet).

While I do not mind some lost data, I do hope to avoid total loss of the entire pool (a total of 12 TB).

So basically I am trying to figure out how to replace that disk with as little data loss as possible, and I’m looking for whatever advice I can get since this is the first time I’m doing this.

My first thought would be to add a fourth disk of the same size as the degrading disk to the pool and tell TrueNAS to essentially “evacuate” that disk (copying whatever relevant data it can from it to the new disk). Is such a thing possible at all? If not, can I somehow do it manually (and if so, how would I even know which files live on that disk to begin with)?

I also came upon a way in the TrueNAS web console to “replace” a disk, but I am not sure that would be correct as I am not familiar with how ZFS stores the data under the hood. Might it not leave files half spread across the old disks and then missing parts that should be on the new one?

And if this all sounds just wrong on my part, I’m thinking as a last resort to just bite the bullet and buy a whole new set of disks, build a new pool at least as big as the current one (with redundancy this time, so replacing a single disk becomes easier in the future), and then copy whatever data I can from the old pool to the new one. The question here is whether TrueNAS can help with this at all, or do I have to mount both pools and manually cp the data across?

In any case I’d appreciate some pointers on how to best go about this.

Based on what you have written, I’m assuming you have three single-drive vdevs, not one vdev made of 3 drives in a stripe.

Post the output of zpool status -v and we will see what you have going on.

While this would likely be the better way to go, we need to see what is going on to provide proper help.

Take a look at the links in my signature and click on Joe’s Rules. It is a short set of rules covering what data we need, at a minimum, for a given problem. And it is best if you do not let us assume anything, so we don’t go down the wrong path.

Thank you very much for the reply, Joe.

I think you are correct in terms of me having 3 vDevs.

Here are some command results which hopefully shed more light on the situation.

System specs are as follows:

  • Asus z390m-ITX Motherboard
  • 8 GB RAM
  • Intel(R) Pentium(R) Gold G5600 CPU @ 3.90GHz
  • 3 x Seagate IronWolf Pro ST4000NE001 256MB 4TB (pool disks)
  • 1 x Kingston SA400M8 (120G) SSD (TrueNAS boot disk)
  • TrueNAS Version - TrueNAS-12.0-U8.1
root@freenas[~]# ZPOOL_SCRIPTS_AS_ROOT=1 zpool status -vLtsc lsblk,serial,smartx,smart
  pool: freenas-boot
 state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
	still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
	the pool may no longer be accessible by software that does not support
	the features. See zpool-features(5) for details.
  scan: scrub repaired 0B in 00:07:32 with 0 errors on Sun Aug 31 03:52:32 2025
config:

	NAME          STATE     READ WRITE CKSUM  SLOW  size  vendor  model  serial  hours_on  pwr_cyc  temp  health  ata_err  realloc  rep_ucor  cmd_to  pend_sec  off_ucor
	freenas-boot  ONLINE       0     0     0     -
	  ada1p2      ONLINE       0     0     0     0     -       -      -       -         -        -     -       -        -        -         -       -         -         -  (untrimmed)

errors: No known data errors

  pool: mainpool
 state: DEGRADED
status: One or more devices has experienced an error resulting in data
	corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
	entire pool from backup.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-8A
  scan: scrub in progress since Sun Aug 10 00:00:01 2025
	1.28T scanned at 6.54M/s, 943G issued at 1.25M/s, 7.59T total
	0B repaired, 12.14% done, no estimated completion time
config:

	NAME                                          STATE     READ WRITE CKSUM  SLOW  size  vendor  model  serial  hours_on  pwr_cyc  temp  health  ata_err  realloc  rep_ucor  cmd_to  pend_sec  off_ucor
	mainpool                                      DEGRADED     0     0     0     -
	  gptid/4e2f9530-c489-11ea-8287-7085c2a71bf8  DEGRADED     9     0    31    82  too many errors     -       -      -       -         -        -     -       -        -        -         -       -         -         -  (trim unsupported)
	  gptid/c99f65c8-db50-11eb-9a48-7085c2a71bf8  ONLINE       0     0     0     0     -       -      -       -         -        -     -       -        -        -         -       -         -         -  (trim unsupported)
	  gptid/457ad27c-3886-11f0-b5cb-7085c2a71bf8  ONLINE       0     0     0     0     -       -      -       -         -        -     -       -        -        -         -       -         -         -  (trim unsupported)

errors: Permanent errors have been detected in the following files:
# Redacted to keep the post somewhat readable as the list is somewhat long and I am guessing it is not very relevant 
root@freenas[~]# zpool list
NAME           SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
freenas-boot  95.5G  2.20G  93.3G        -         -     0%     2%  1.00x    ONLINE  -
mainpool      10.9T  7.66T  3.22T        -         -    41%    70%  1.00x  DEGRADED  /mnt

The command below was run against /dev/ada2, as it is the disk reported as degraded.

root@freenas[~]# smartctl -x /dev/ada2
smartctl 7.2 2020-12-30 r5155 [FreeBSD 12.2-RELEASE-p14 amd64] (local build)
Copyright (C) 2002-20, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Family:     Seagate IronWolf Pro
Device Model:     ST4000NE001-2MA101
Serial Number:    WJG1CLBF
LU WWN Device Id: 5 000c50 0cfe836b4
Firmware Version: EN01
User Capacity:    4,000,787,030,016 bytes [4.00 TB]
Sector Sizes:     512 bytes logical, 4096 bytes physical
Rotation Rate:    7200 rpm
Form Factor:      3.5 inches
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   ACS-4 (minor revision not indicated)
SATA Version is:  SATA 3.3, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is:    Sun Aug 31 13:01:27 2025 CEST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
AAM feature is:   Unavailable
APM feature is:   Unavailable
Rd look-ahead is: Enabled
Write cache is:   Enabled
DSN feature is:   Disabled
ATA Security is:  Disabled, frozen [SEC2]


Write SCT (Get) Feature Control Command failed: Input/output error
Wt Cache Reorder: Unknown (SCT Feature Control command failed)

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

General SMART Values:
Offline data collection status:  (0x82)	Offline data collection activity
					was completed without error.
					Auto Offline Data Collection: Enabled.
Self-test execution status:      (  41)	The self-test routine was interrupted
					by the host with a hard or soft reset.
Total time to complete Offline
data collection: 		(  567) seconds.
Offline data collection
capabilities: 			 (0x7b) SMART execute Offline immediate.
					Auto Offline data collection on/off support.
					Suspend Offline collection upon new
					command.
					Offline surface scan supported.
					Self-test supported.
					Conveyance Self-test supported.
					Selective Self-test supported.
SMART capabilities:            (0x0003)	Saves SMART data before entering
					power-saving mode.
					Supports SMART auto save timer.
Error logging capability:        (0x01)	Error logging supported.
					General Purpose Logging supported.
Short self-test routine
recommended polling time: 	 (   1) minutes.
Extended self-test routine
recommended polling time: 	 ( 359) minutes.
Conveyance self-test routine
recommended polling time: 	 (   2) minutes.
SCT capabilities: 	       (0x50bd)	SCT Status supported.
					SCT Error Recovery Control supported.
					SCT Feature Control supported.
					SCT Data Table supported.

SMART Attributes Data Structure revision number: 10
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAGS    VALUE WORST THRESH FAIL RAW_VALUE
  1 Raw_Read_Error_Rate     POSR--   068   058   044    -    183465364
  3 Spin_Up_Time            PO----   100   100   000    -    0
  4 Start_Stop_Count        -O--CK   100   100   020    -    29
  5 Reallocated_Sector_Ct   PO--CK   099   099   010    -    4704
  7 Seek_Error_Rate         POSR--   100   253   045    -    77309648481
  9 Power_On_Hours          -O--CK   100   100   000    -    19
 10 Spin_Retry_Count        PO--C-   100   100   097    -    0
 12 Power_Cycle_Count       -O--CK   100   100   020    -    0
 18 Head_Health             PO-R--   100   100   050    -    0
187 Reported_Uncorrect      -O--CK   001   001   000    -    1335
188 Command_Timeout         -O--CK   014   014   000    -    110
190 Airflow_Temperature_Cel -O---K   052   052   040    -    48 (Min/Max 0/48)
192 Power-Off_Retract_Count -O--CK   100   100   000    -    0
193 Load_Cycle_Count        -O--CK   100   100   000    -    16
194 Temperature_Celsius     -O---K   048   048   000    -    48 (0 40 0 0 0)
195 Hardware_ECC_Recovered  -O-RC-   083   070   000    -    183465364
197 Current_Pending_Sector  -O--C-   099   099   000    -    576
198 Offline_Uncorrectable   ----C-   099   099   000    -    576
199 UDMA_CRC_Error_Count    -OSRCK   200   200   000    -    0
240 Head_Flying_Hours       ------   100   253   000    -    19h+12m+50.573s
241 Total_LBAs_Written      ------   100   253   000    -    108928
242 Total_LBAs_Read         ------   100   253   000    -    183356436
                            ||||||_ K auto-keep
                            |||||__ C event count
                            ||||___ R error rate
                            |||____ S speed/performance
                            ||_____ O updated online
                            |______ P prefailure warning

General Purpose Log Directory Version 1
SMART           Log Directory Version 1 [multi-sector log support]
Address    Access  R/W   Size  Description
0x00       GPL,SL  R/O      1  Log Directory
0x01           SL  R/O      1  Summary SMART error log
0x02           SL  R/O      5  Comprehensive SMART error log
0x03       GPL     R/O      5  Ext. Comprehensive SMART error log
0x04       GPL     R/O    256  Device Statistics log
0x04       SL      R/O      8  Device Statistics log
0x06           SL  R/O      1  SMART self-test log
0x07       GPL     R/O      1  Extended self-test log
0x08       GPL     R/O      2  Power Conditions log
0x09           SL  R/W      1  Selective self-test log
0x0a       GPL     R/W      8  Device Statistics Notification
0x0c       GPL     R/O   2048  Pending Defects log
0x10       GPL     R/O      1  NCQ Command Error log
0x11       GPL     R/O      1  SATA Phy Event Counters log
0x13       GPL     R/O      1  SATA NCQ Send and Receive log
0x15       GPL     R/W      1  Rebuild Assist log
0x21       GPL     R/O      1  Write stream error log
0x22       GPL     R/O      1  Read stream error log
0x24       GPL     R/O    768  Current Device Internal Status Data log
0x2f       GPL     -        1  Set Sector Configuration
0x30       GPL,SL  R/O      9  IDENTIFY DEVICE data log
0x80-0x9f  GPL,SL  R/W     16  Host vendor specific log
0xa1       GPL,SL  VS      24  Device vendor specific log
0xa2       GPL     VS   16320  Device vendor specific log
0xa4       GPL,SL  VS     160  Device vendor specific log
0xa6       GPL     VS     192  Device vendor specific log
0xa8-0xa9  GPL,SL  VS     136  Device vendor specific log
0xab       GPL     VS       1  Device vendor specific log
0xad       GPL     VS      16  Device vendor specific log
0xbe-0xbf  GPL     VS   65535  Device vendor specific log
0xc1       GPL,SL  VS       8  Device vendor specific log
0xc3       GPL,SL  VS      32  Device vendor specific log
0xc9       GPL,SL  VS       8  Device vendor specific log
0xca       GPL,SL  VS      16  Device vendor specific log
0xcd       GPL,SL  VS       8  Device vendor specific log
0xd1       GPL     VS     336  Device vendor specific log
0xd2       GPL     VS   10000  Device vendor specific log
0xd4       GPL     VS    2048  Device vendor specific log
0xda       GPL,SL  VS       1  Device vendor specific log
0xe0       GPL,SL  R/W      1  SCT Command/Status
0xe1       GPL,SL  R/W      1  SCT Data Transfer

SMART Extended Comprehensive Error Log Version: 1 (5 sectors)
Device Error Count: 21674 (device log contains only the most recent 20 errors)
	CR     = Command Register
	FEATR  = Features Register
	COUNT  = Count (was: Sector Count) Register
	LBA_48 = Upper bytes of LBA High/Mid/Low Registers ]  ATA-8
	LH     = LBA High (was: Cylinder High) Register    ]   LBA
	LM     = LBA Mid (was: Cylinder Low) Register      ] Register
	LL     = LBA Low (was: Sector Number) Register     ]
	DV     = Device (was: Device/Head) Register
	DC     = Device Control Register
	ER     = Error register
	ST     = Status register
Powered_Up_Time is measured from power on, and printed as
DDd+hh:mm:SS.sss where DD=days, hh=hours, mm=minutes,
SS=sec, and sss=millisec. It "wraps" after 49.710 days.

Error 21674 [13] occurred at disk power-on lifetime: 19 hours (0 days + 19 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER -- ST COUNT  LBA_48  LH LM LL DV DC
  -- -- -- == -- == == == -- -- -- -- --
  40 -- 51 00 00 00 00 b9 1c ba 50 00 00  Error: UNC at LBA = 0xb91cba50 = 3105667664

  Commands leading to the command that caused the error were:
  CR FEATR COUNT  LBA_48  LH LM LL DV DC  Powered_Up_Time  Command/Feature_Name
  -- == -- == -- == == == -- -- -- -- --  ---------------  --------------------
  60 00 00 00 10 00 00 97 ef d7 f8 40 00     19:19:24.046  READ FPDMA QUEUED
  60 00 00 00 08 00 00 98 28 c8 60 40 00     19:19:24.046  READ FPDMA QUEUED
  60 00 00 00 08 00 00 98 28 c8 78 40 00     19:19:24.038  READ FPDMA QUEUED
  60 00 00 00 08 00 00 98 28 c4 d8 40 00     19:19:24.029  READ FPDMA QUEUED
  60 00 00 00 08 00 00 98 28 c4 a8 40 00     19:19:24.023  READ FPDMA QUEUED

Error 21673 [12] occurred at disk power-on lifetime: 19 hours (0 days + 19 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER -- ST COUNT  LBA_48  LH LM LL DV DC
  -- -- -- == -- == == == -- -- -- -- --
  40 -- 51 00 00 00 00 59 e3 bd 20 00 00  Error: UNC at LBA = 0x59e3bd20 = 1508097312

  Commands leading to the command that caused the error were:
  CR FEATR COUNT  LBA_48  LH LM LL DV DC  Powered_Up_Time  Command/Feature_Name
  -- == -- == -- == == == -- -- -- -- --  ---------------  --------------------
  60 00 00 01 00 00 00 b9 1c 33 68 40 00     19:17:17.979  READ FPDMA QUEUED
  60 00 00 00 08 00 00 59 e3 bd 20 40 00     19:17:10.031  READ FPDMA QUEUED
  60 00 00 00 08 00 00 59 e3 bd 50 40 00     19:17:10.031  READ FPDMA QUEUED
  60 00 00 08 00 00 00 b9 1c 2b 68 40 00     19:17:10.027  READ FPDMA QUEUED
  60 00 00 08 00 00 00 b9 1c 23 68 40 00     19:17:10.022  READ FPDMA QUEUED

Error 21672 [11] occurred at disk power-on lifetime: 19 hours (0 days + 19 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER -- ST COUNT  LBA_48  LH LM LL DV DC
  -- -- -- == -- == == == -- -- -- -- --
  40 -- 51 00 00 00 00 59 e3 bc b8 00 00  Error: UNC at LBA = 0x59e3bcb8 = 1508097208

  Commands leading to the command that caused the error were:
  CR FEATR COUNT  LBA_48  LH LM LL DV DC  Powered_Up_Time  Command/Feature_Name
  -- == -- == -- == == == -- -- -- -- --  ---------------  --------------------
  60 00 00 08 00 00 00 b9 1b c5 68 40 00     19:15:43.775  READ FPDMA QUEUED
  60 00 00 00 08 00 00 59 e3 bc b8 40 00     19:15:36.002  READ FPDMA QUEUED
  60 00 00 08 00 00 00 b9 1b bd 68 40 00     19:15:35.973  READ FPDMA QUEUED
  60 00 00 08 00 00 00 b9 1b b5 68 40 00     19:15:35.960  READ FPDMA QUEUED
  60 00 00 00 08 00 00 aa e8 d9 20 40 00     19:15:35.960  READ FPDMA QUEUED

Error 21671 [10] occurred at disk power-on lifetime: 19 hours (0 days + 19 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER -- ST COUNT  LBA_48  LH LM LL DV DC
  -- -- -- == -- == == == -- -- -- -- --
  40 -- 51 00 00 00 00 59 e4 0f 28 00 00  Error: UNC at LBA = 0x59e40f28 = 1508118312

  Commands leading to the command that caused the error were:
  CR FEATR COUNT  LBA_48  LH LM LL DV DC  Powered_Up_Time  Command/Feature_Name
  -- == -- == -- == == == -- -- -- -- --  ---------------  --------------------
  60 00 00 01 00 00 00 b9 1b 8b 68 40 00     19:14:56.506  READ FPDMA QUEUED
  60 00 00 00 08 00 00 59 e4 0f 28 40 00     19:14:56.492  READ FPDMA QUEUED
  ea 00 00 00 00 00 00 00 00 00 00 40 00     19:14:56.464  FLUSH CACHE EXT
  61 00 00 00 08 00 01 d1 c0 be 50 40 00     19:14:48.817  WRITE FPDMA QUEUED
  61 00 00 00 08 00 01 d1 c0 bc 50 40 00     19:14:48.817  WRITE FPDMA QUEUED

Error 21670 [9] occurred at disk power-on lifetime: 19 hours (0 days + 19 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER -- ST COUNT  LBA_48  LH LM LL DV DC
  -- -- -- == -- == == == -- -- -- -- --
  40 -- 51 00 00 00 00 d5 0e fd a8 00 00  Error: WP at LBA = 0xd50efda8 = 3574529448

  Commands leading to the command that caused the error were:
  CR FEATR COUNT  LBA_48  LH LM LL DV DC  Powered_Up_Time  Command/Feature_Name
  -- == -- == -- == == == -- -- -- -- --  ---------------  --------------------
  61 00 00 00 08 00 01 d1 c0 be 20 40 00     19:11:16.722  WRITE FPDMA QUEUED
  61 00 00 00 08 00 01 d1 c0 bc 20 40 00     19:11:16.722  WRITE FPDMA QUEUED
  61 00 00 00 08 00 00 00 40 04 20 40 00     19:11:16.722  WRITE FPDMA QUEUED
  61 00 00 00 08 00 00 00 40 02 20 40 00     19:11:16.721  WRITE FPDMA QUEUED
  60 00 00 00 08 00 00 4b f7 f7 30 40 00     19:11:16.495  READ FPDMA QUEUED

Error 21669 [8] occurred at disk power-on lifetime: 19 hours (0 days + 19 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER -- ST COUNT  LBA_48  LH LM LL DV DC
  -- -- -- == -- == == == -- -- -- -- --
  40 -- 51 00 00 00 01 42 36 de 28 00 00  Error: UNC at LBA = 0x14236de28 = 5405859368

  Commands leading to the command that caused the error were:
  CR FEATR COUNT  LBA_48  LH LM LL DV DC  Powered_Up_Time  Command/Feature_Name
  -- == -- == -- == == == -- -- -- -- --  ---------------  --------------------
  60 00 00 00 08 00 00 5d 92 ec b8 40 00     19:10:05.486  READ FPDMA QUEUED
  60 00 00 00 08 00 01 42 36 de 28 40 00     19:10:05.461  READ FPDMA QUEUED
  60 00 00 08 00 00 00 b9 1a 80 68 40 00     19:10:05.442  READ FPDMA QUEUED
  60 00 00 08 00 00 00 b9 1a 78 68 40 00     19:10:05.434  READ FPDMA QUEUED
  60 00 00 00 08 00 00 ba 3c 58 68 40 00     19:10:05.422  READ FPDMA QUEUED

Error 21668 [7] occurred at disk power-on lifetime: 19 hours (0 days + 19 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER -- ST COUNT  LBA_48  LH LM LL DV DC
  -- -- -- == -- == == == -- -- -- -- --
  40 -- 51 00 00 00 00 ce 19 96 38 00 00  Error: WP at LBA = 0xce199638 = 3457783352

  Commands leading to the command that caused the error were:
  CR FEATR COUNT  LBA_48  LH LM LL DV DC  Powered_Up_Time  Command/Feature_Name
  -- == -- == -- == == == -- -- -- -- --  ---------------  --------------------
  61 00 00 00 08 00 01 d1 c0 be 10 40 00     19:09:34.590  WRITE FPDMA QUEUED
  61 00 00 00 08 00 01 d1 c0 bc 10 40 00     19:09:34.590  WRITE FPDMA QUEUED
  61 00 00 00 08 00 00 00 40 04 10 40 00     19:09:34.590  WRITE FPDMA QUEUED
  61 00 00 00 08 00 00 00 40 02 10 40 00     19:09:34.590  WRITE FPDMA QUEUED
  60 00 00 00 08 00 00 60 55 6d 78 40 00     19:09:34.334  READ FPDMA QUEUED

Error 21667 [6] occurred at disk power-on lifetime: 19 hours (0 days + 19 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER -- ST COUNT  LBA_48  LH LM LL DV DC
  -- -- -- == -- == == == -- -- -- -- --
  40 -- 51 00 00 00 00 d5 0e fe 18 00 00  Error: UNC at LBA = 0xd50efe18 = 3574529560

  Commands leading to the command that caused the error were:
  CR FEATR COUNT  LBA_48  LH LM LL DV DC  Powered_Up_Time  Command/Feature_Name
  -- == -- == -- == == == -- -- -- -- --  ---------------  --------------------
  60 00 00 00 08 00 01 b7 bd 2a f0 40 00     19:05:04.640  READ FPDMA QUEUED
  60 00 00 00 10 00 00 d5 0e fe 18 40 00     19:05:04.558  READ FPDMA QUEUED
  60 00 00 08 00 00 00 b9 19 76 b8 40 00     19:05:04.550  READ FPDMA QUEUED
  60 00 00 00 08 00 00 d5 0e fe 28 40 00     19:05:04.540  READ FPDMA QUEUED
  60 00 00 01 00 00 00 b9 19 75 b8 40 00     19:05:04.540  READ FPDMA QUEUED

SMART Extended Self-test Log Version: 1 (1 sectors)
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Extended offline    Interrupted (host reset)      00%         8         -

SMART Selective self-test log data structure revision number 1
 SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
    1        0        0  Not_testing
    2        0        0  Not_testing
    3        0        0  Not_testing
    4        0        0  Not_testing
    5        0        0  Not_testing
Selective self-test flags (0x0):
  After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.

SCT Status Version:                  3
SCT Version (vendor specific):       522 (0x020a)
Device State:                        Active (0)
Current Temperature:                    48 Celsius
Power Cycle Min/Max Temperature:     44/48 Celsius
Lifetime    Min/Max Temperature:     23/54 Celsius
Under/Over Temperature Limit Count:   0/18
SMART Status:                        0xc24f (PASSED)
Vendor specific:
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 01 00 00 00 00 00 00 00 00 00 00 00

SCT Temperature History Version:     2
Temperature Sampling Period:         3 minutes
Temperature Logging Interval:        59 minutes
Min/Max recommended Temperature:     10/40 Celsius
Min/Max Temperature Limit:            5/60 Celsius
Temperature History Size (Index):    128 (48)

Index    Estimated Time   Temperature Celsius
  49    2025-08-26 07:42    43  ************************
  50    2025-08-26 08:41    44  *************************
 ...    ..(  3 skipped).    ..  *************************
  54    2025-08-26 12:37    44  *************************
  55    2025-08-26 13:36    45  **************************
 ...    ..( 11 skipped).    ..  **************************
  67    2025-08-27 01:24    45  **************************
  68    2025-08-27 02:23    46  ***************************
  69    2025-08-27 03:22    46  ***************************
  70    2025-08-27 04:21    46  ***************************
  71    2025-08-27 05:20    47  ****************************
  72    2025-08-27 06:19    46  ***************************
  73    2025-08-27 07:18    47  ****************************
  74    2025-08-27 08:17    47  ****************************
  75    2025-08-27 09:16    47  ****************************
  76    2025-08-27 10:15    46  ***************************
  77    2025-08-27 11:14    47  ****************************
  78    2025-08-27 12:13    46  ***************************
  79    2025-08-27 13:12    46  ***************************
  80    2025-08-27 14:11    45  **************************
  81    2025-08-27 15:10    45  **************************
  82    2025-08-27 16:09    44  *************************
 ...    ..(  2 skipped).    ..  *************************
  85    2025-08-27 19:06    44  *************************
  86    2025-08-27 20:05    45  **************************
  87    2025-08-27 21:04    44  *************************
 ...    ..(  4 skipped).    ..  *************************
  92    2025-08-28 01:59    44  *************************
  93    2025-08-28 02:58    45  **************************
  94    2025-08-28 03:57    44  *************************
  95    2025-08-28 04:56    44  *************************
  96    2025-08-28 05:55    45  **************************
  97    2025-08-28 06:54    46  ***************************
  98    2025-08-28 07:53    46  ***************************
  99    2025-08-28 08:52    46  ***************************
 100    2025-08-28 09:51    47  ****************************
 ...    ..(  6 skipped).    ..  ****************************
 107    2025-08-28 16:44    47  ****************************
 108    2025-08-28 17:43    51  ********************************
 109    2025-08-28 18:42    49  ******************************
 110    2025-08-28 19:41    48  *****************************
 ...    ..(  3 skipped).    ..  *****************************
 114    2025-08-28 23:37    48  *****************************
 115    2025-08-29 00:36    49  ******************************
 116    2025-08-29 01:35    49  ******************************
 117    2025-08-29 02:34    50  *******************************
 118    2025-08-29 03:33    50  *******************************
 119    2025-08-29 04:32    49  ******************************
 120    2025-08-29 05:31    48  *****************************
 ...    ..(  5 skipped).    ..  *****************************
 126    2025-08-29 11:25    48  *****************************
 127    2025-08-29 12:24    47  ****************************
   0    2025-08-29 13:23    48  *****************************
   1    2025-08-29 14:22    48  *****************************
   2    2025-08-29 15:21    47  ****************************
   3    2025-08-29 16:20    48  *****************************
 ...    ..(  9 skipped).    ..  *****************************
  13    2025-08-30 02:10    48  *****************************
  14    2025-08-30 03:09    49  ******************************
 ...    ..(  2 skipped).    ..  ******************************
  17    2025-08-30 06:06    49  ******************************
  18    2025-08-30 07:05    48  *****************************
  19    2025-08-30 08:04    49  ******************************
  20    2025-08-30 09:03    49  ******************************
  21    2025-08-30 10:02    48  *****************************
  22    2025-08-30 11:01    49  ******************************
  23    2025-08-30 12:00    49  ******************************
  24    2025-08-30 12:59    49  ******************************
  25    2025-08-30 13:58    48  *****************************
  26    2025-08-30 14:57    48  *****************************
  27    2025-08-30 15:56    48  *****************************
  28    2025-08-30 16:55    47  ****************************
  29    2025-08-30 17:54    47  ****************************
  30    2025-08-30 18:53     ?  -
  31    2025-08-30 19:52    46  ***************************
  32    2025-08-30 20:51    48  *****************************
  33    2025-08-30 21:50    47  ****************************
  34    2025-08-30 22:49    47  ****************************
  35    2025-08-30 23:48    46  ***************************
  36    2025-08-31 00:47    46  ***************************
  37    2025-08-31 01:46    45  **************************
  38    2025-08-31 02:45    45  **************************
  39    2025-08-31 03:44    44  *************************
  40    2025-08-31 04:43    45  **************************
  41    2025-08-31 05:42    46  ***************************
  42    2025-08-31 06:41    47  ****************************
 ...    ..(  5 skipped).    ..  ****************************
  48    2025-08-31 12:35    47  ****************************

Read SCT Status failed: Input/output error
SCT (Get) Error Recovery Control command failed

Device Statistics (GP Log 0x04)
Page  Offset Size        Value Flags Description
0x01  =====  =               =  ===  == General Statistics (rev 1) ==
0x01  0x008  4               0  ---  Lifetime Power-On Resets
0x01  0x010  4              19  ---  Power-on Hours
0x01  0x018  6     42234751783  ---  Logical Sectors Written
0x01  0x020  6      1247645337  ---  Number of Write Commands
0x01  0x028  6   1316887053700  ---  Logical Sectors Read
0x01  0x030  6      5075166084  ---  Number of Read Commands
0x01  0x038  6               -  ---  Date and Time TimeStamp
0x03  =====  =               =  ===  == Rotating Media Statistics (rev 1) ==
0x03  0x008  4           44917  ---  Spindle Motor Power-on Hours
0x03  0x010  4           44917  ---  Head Flying Hours
0x03  0x018  4              16  ---  Head Load Events
0x03  0x020  4            4704  ---  Number of Reallocated Logical Sectors
0x03  0x028  4            9664  ---  Read Recovery Attempts
0x03  0x030  4               0  ---  Number of Mechanical Start Failures
0x03  0x038  4             576  ---  Number of Realloc. Candidate Logical Sectors
0x03  0x040  4              13  ---  Number of High Priority Unload Events
0x04  =====  =               =  ===  == General Errors Statistics (rev 1) ==
0x04  0x008  4            5693  ---  Number of Reported Uncorrectable Errors
0x04  0x010  4           16519  ---  Resets Between Cmd Acceptance and Completion
0x05  =====  =               =  ===  == Temperature Statistics (rev 1) ==
0x05  0x008  1              48  ---  Current Temperature
0x05  0x010  1              47  ---  Average Short Term Temperature
0x05  0x018  1              46  ---  Average Long Term Temperature
0x05  0x020  1              54  ---  Highest Temperature
0x05  0x028  1              28  ---  Lowest Temperature
0x05  0x030  1              52  ---  Highest Average Short Term Temperature
0x05  0x038  1              33  ---  Lowest Average Short Term Temperature
0x05  0x040  1              50  ---  Highest Average Long Term Temperature
0x05  0x048  1              35  ---  Lowest Average Long Term Temperature
0x05  0x050  4               0  ---  Time in Over-Temperature
0x05  0x058  1              60  ---  Specified Maximum Operating Temperature
0x05  0x060  4               0  ---  Time in Under-Temperature
0x05  0x068  1               5  ---  Specified Minimum Operating Temperature
0x06  =====  =               =  ===  == Transport Statistics (rev 1) ==
0x06  0x008  4           16601  ---  Number of Hardware Resets
0x06  0x010  4              12  ---  Number of ASR Events
0x06  0x018  4               0  ---  Number of Interface CRC Errors
0xff  =====  =               =  ===  == Vendor Specific Statistics (rev 1) ==
0xff  0x010  7               0  ---  Vendor Specific
0xff  0x018  7               0  ---  Vendor Specific
                                |||_ C monitored condition met
                                ||__ D supports DSN
                                |___ N normalized value

Pending Defects log (GP Log 0x0c)
Index                LBA    Hours
    0         5104431464    44570
    1         5104431465    44570
    2         5104431466    44570
    3         5104431467    44570
    4         5104431468    44570
    5         5104431469    44570
    6         5104431470    44570
    7         5104431471    44570
    8         7812649448    44888
    9         7812649449    44888
   10         7812649450    44888
   11         7812649451    44888
   12         7812649452    44888
   13         7812649453    44888
   14         7812649454    44888
   15         7812649455    44888
   16         7813033992    44887
   17         7813033993    44887
   18         7813033994    44887
   19         7813033995    44887
   20         7813033996    44887
   21         7813033997    44887
   22         7813033998    44887
   23         7813033999    44887
   24         7813034008    44887
   25         7813034009    44887
   26         7813034010    44887
   27         7813034011    44887
   28         7813034012    44887
   29         7813034013    44887
   30         7813034014    44887

SATA Phy Event Counters (GP Log 0x11)
ID      Size     Value  Description
0x000a  2          114  Device-to-host register FISes sent due to a COMRESET
0x0001  2            0  Command failed due to ICRC error
0x0003  2            0  R_ERR response for device-to-host data FIS
0x0004  2            0  R_ERR response for host-to-device data FIS
0x0006  2            0  R_ERR response for device-to-host non-data FIS
0x0007  2            0  R_ERR response for host-to-device non-data FIS

Is there anything else that could help here that I might have missed?

If you have room for another drive (an unused SATA port and power cable available) you can put the new drive in, select the failing disk and pick Replace from the GUI. That will attempt to resilver the data from the bad disk to the new disk.

Since that operation will stress the disk in question, there’s a non-zero risk it will degrade further, possibly becoming FAULTED. If that happens your pool is likely completely lost.
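
For reference, the shell equivalent of the GUI Replace is roughly the following. This is only a sketch: the new device name (/dev/ada3 here) is a placeholder, and the GUI is normally preferable because TrueNAS partitions and labels the new disk for you.

zpool status -v mainpool   # note the gptid of the DEGRADED member
zpool replace mainpool gptid/4e2f9530-c489-11ea-8287-7085c2a71bf8 /dev/ada3
zpool status mainpool      # watch the resilver progress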

With a stripe, losing ALL is exactly what will happen if any single drive fails.
For now, replace the drive from the GUI (without removing/offlining it, of course).
Then consider moving to a different pool layout with redundancy.

Thank you. The replace function sounds like there is some hope, then. If I understand correctly, I could add a new drive to the machine, use the “replace” function without offlining the current one, and it will attempt to copy whatever it can from the failing disk onto the new one?

Am I correct in assuming that if I want to change the layout to add some redundancy (for example so I can tolerate the loss of one disk and make this type of problem much simpler), I would still be looking at data loss (if I am unable to first copy the data off the disks), or is it possible to change the layout of a pool on the fly? If it isn’t possible, I am seriously considering just setting up a brand new pool and copying whatever I can off the old one instead.

You can change “on the fly” if you go for striped mirrors: add three drives and “Extend” each of the original drives.
To go for raidz#, you have to make a new pool and replicate, or back up and restore.
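
For what it’s worth, the “Extend” operation corresponds roughly to zpool attach at the shell. A sketch, where the first device is an existing single-disk vdev member (one of your gptids) and /dev/ada4 is a placeholder for the blank drive:

zpool attach mainpool gptid/c99f65c8-db50-11eb-9a48-7085c2a71bf8 /dev/ada4
zpool status mainpool   # the vdev now shows as mirror-N and resilvers onto the new disk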

Correct.

As Etorix notes, you can convert to and from mirrors on the fly. You can also add and remove single-disk and mirrored vdevs on the fly. So if you wanted, e.g., to change your pool to two 20 TB disks mirrored, removing all the others, you could do that, though you would need a healthy pool to start IIRC.
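
Roughly, vdev removal looks like this at the shell (a sketch; broadly it only works for pools made of single-disk and mirror vdevs, and the remaining vdevs need enough free space to absorb the evacuated data):

zpool remove mainpool mirror-1   # or a single-disk vdev, named by its gptid
zpool status mainpool            # shows the evacuation/removal progress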

Understood. Mirroring my current setup is not something I am planning on, as it would essentially require me to buy 3 new disks, which would not only be costly but also exhaust my SATA ports, leaving me no room for future expansion (finding a consumer-grade motherboard with 6 ports instead of 4 was not easy to begin with).

Given all this information, I’m thinking it might be better to go ahead and get 3 x 8 TB disks and set up a RAIDZ1 pool on the side, which would give me a large enough pool to move my data into. If I understand correctly, should I run into disk trouble like this again in the future, I just replace the bad disk with a new one and there is no more hassle.

Correct. You can now also use raidz expansion to widen the pool.
But the wider the vdev and the larger the drives, the more you’d want to have created the pool as raidz2 rather than raidz1.
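
Note that raidz expansion needs a recent OpenZFS (it landed in OpenZFS 2.3), so a current TrueNAS release rather than the 12.0 train you are on now. A sketch, assuming a hypothetical pool “newpool” with a raidz1 vdev named raidz1-0 and a blank disk /dev/ada5:

zpool attach newpool raidz1-0 /dev/ada5   # widens the raidz vdev by one disk
zpool status newpool                      # shows the expansion progress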

Welcome to the forum! You’re thinking about this the right way: striped pools are risky because any single disk failure affects the whole pool. TrueNAS does allow you to replace a disk in a pool, but since striped pools have no redundancy, it can’t fully protect you from data loss. Your safest approach might be to back up as much data as possible before attempting a replacement. Adding a new disk and trying to “evacuate” data manually isn’t really supported in a striped setup. Many users in your situation end up creating a new pool, ideally with some redundancy like mirrors or RAIDZ, and then copy over whatever data they can from the old pool. That way, future disk failures are much less stressful.
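
For the actual copy between pools, ZFS replication is usually easier and more faithful than cp/rsync, since it preserves datasets, properties and snapshots. A minimal sketch, assuming the new pool is called newpool (both pool and target dataset names are examples); on TrueNAS the same thing can be configured in the GUI as a local Replication Task:

zfs snapshot -r mainpool@migrate
zfs send -R mainpool@migrate | zfs recv -F newpool/old-data
# Datasets with unreadable blocks on the failing disk will make the send abort,
# so expect to fall back to rsync/cp for whatever is affected.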

If the list shows actual files, you’ve lost data.
If the list shows entries as pairs of hex numbers (<0x0>:<0xabcd>), you have corrupted ZFS metadata. Sometimes this may be fixed by deleting all snapshots where there are corrupted files. Sometimes it remains unfixable, other than by destroying the pool and restoring what was rescued to a new pool.
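
If you want to try the snapshot route, a sketch of the commands involved (dataset and snapshot names below are hypothetical; the affected paths are the ones listed under “Permanent errors” in zpool status -v):

zpool status -v mainpool                    # lists the corrupted files/objects
zfs list -t snapshot -r mainpool            # find the snapshots that still reference them
zfs destroy mainpool/somedataset@some-snap  # repeat for each offending snapshot
zpool scrub mainpool                        # a subsequent scrub (sometimes two) clears the error list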

Can you elaborate on this? Why would I want to configure Raidz2 instead? As I understand it, Raidz1 vs Raidz2 is an upfront choice that I cannot change without recreating the pool in the future, right? So I’m curious to dig a little into your thoughts on the matter.

Elaborate on what? Since you’re apparently responding to yourself.

The key is how paranoid you are and how much resiliency you want on your main pool before you have to resort to restoring from backup. (You HAVE a backup, right?)

Basic assumptions: Shit happens. Drives fail.
Raidz1 can lose one drive… but then any further incident during resilver loses some files (URE) or loses everything. So raidz1 is not “safe from losing one drive”, it is “at risk when losing one drive”.
Raidz2 can lose one drive and resilver safely (because it still has one degree of redundancy left). Raidz2 is truly safe from losing one drive, and only gets at risk after losing two, which implies that one must react at the first incident.
The wider the vdev, and the larger the drives, the longer it takes to resilver and the higher the risk of losing something.

Thank you for all the help.

I’ve been doing some hard thinking about the solution to my predicament and I think I have arrived at something I am happy with, assuming it works.

After getting a better understanding of how RAID and the whole resilvering process work, I’m thinking that a simple mirrored setup might be the way to go after all. If a disk fails there is no stress on the existing disks to re-spread data, and not a lot of time needed for the resilver either.

So I’m thinking I can buy two 8 TB disks and, if I understand the terminology here, create a mirrored vdev with them. I’d then use the “replace” function to replace the degraded disk with this mirrored vdev. If this works, fine; if the pool is lost and I have to create it from scratch, I’ll then turn the two healthy 4 TB disks into another mirrored vdev and create a new pool consisting of one vdev with 2 mirrored 4 TB disks and another with 2 mirrored 8 TB disks, for a total of 12 TB of storage, giving me a much better base going forward.

Would that work or have I misunderstood something?

What’s unclear to me is how exactly to go about things once I start running out of space on this new pool. Because I only have a total of 6 SATA ports on the machine, if I simply add another mirrored vdev I’ll be stuck once I need more space again, so I’m thinking those last 2 ports can be used as a temporary staging area to let me grow the original pool.

What I mean by the above is, for example, that I add another set of mirrored 8 TB disks and then “replace” the 4 TB mirrored vdev, at which point I remove the 4 TB disks from the machine, leaving me once again with 2 SATA ports free to repeat this process when needed. My main worry here is that this means we’re back to stressing disks in the “replace” process, though this time it would be done on healthy disks, so in theory maybe not so much of a problem?

Does that all sound too crazy?

Your understanding is good. With 6 ports for currently 2 drives, you have some room to manoeuvre before you have to bring in an HBA.
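
If you end up doing the 8 TB mirror conversion from the shell rather than the GUI, the sequence would look roughly like this (device names are placeholders; as far as I know you cannot replace a disk with a whole mirror in one step, so you replace with one 8 TB disk first and then attach the second):

zpool replace mainpool gptid/4e2f9530-c489-11ea-8287-7085c2a71bf8 /dev/ada3   # first 8 TB disk
# wait for the resilver to finish, then turn that single disk into a 2-way mirror:
zpool attach mainpool /dev/ada3 /dev/ada4                                     # second 8 TB disk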