Device: /dev/ada0, 504 currently unreadable (pending) sectors

I am beside myself that someone set up a TrueNAS server with a non-redundant, single-drive vdev.

Why go with TrueNAS, then? Why even bother with ZFS?

If the pool was a single disk, you're deep in trouble: first a hardware failure, and now logical corruption on top of it.
Klennet ZFS Recovery might be able to recover some data, but the price tag for actual recovery is unlikely to be palatable.
You can try zpool import -f, as suggested, and potentially escalate from there.
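
For reference, a hedged sketch of what that escalation usually looks like (assuming the pool really is named Volume and the disk still responds; all of these avoid writing to the pool):

# Try a read-only import first, so nothing more gets written to the device:
zpool import -f -o readonly=on Volume
# If that fails, dry-run the rewind/recovery import; -n only reports whether
# discarding the last few transactions might allow an import:
zpool import -f -F -n Volume
# Only if the dry run looks plausible, attempt the actual rewind, still read-only:
zpool import -f -F -o readonly=on Volume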

Have another drive at hand to immediately copy everything you can access!

The hard drive is 3TB x 10 making about 30TB


root@TrueNAS[~]# zpool import -f
   pool: Volume
     id: 10277565860005492699
  state: FAULTED
status: The pool metadata is corrupted.
 action: The pool cannot be imported due to damaged devices or data.
        The pool may be active on another system, but can be imported using
        the '-f' flag.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-72
 config:

        Volume                                        FAULTED  corrupted data
          gptid/7d557156-e4c1-11ee-9009-6805ca0a988e  ONLINE

The zpool import output shows a single-drive pool - or possibly 10 drives hidden behind a hardware RAID 5 card.

Either way the pool is toast.

Please post the results from lsblk -o name,model,serial,partuuid,fstype,size.

Also please post a full description of the hardware for this system, especially any SATA interface cards, the exact models of the hard drives, and (of course) the full output from the smartctl -x command previously requested.

It may not be that simple a cock-up - it may be a way, way worse setup than this, e.g. use of hardware RAID (by someone who seriously didn't know what cock-ups they were making).


lsblk command not found

The system CPU spec:

root@TrueNAS[~]# dmesg | grep -i cpu
CPU: Intel(R) Xeon(R) CPU E3-1220 v5 @ 3.00GHz (3000.00-MHz K8-class CPU)
FreeBSD/SMP: Multiprocessor System Detected: 4 CPUs
cpu0: <ACPI CPU> on acpi0
coretemp0: <CPU On-Die Thermal Sensors> on cpu0
hwpstate_intel0: <Intel Speed Shift> on cpu0
hwpstate_intel1: <Intel Speed Shift> on cpu1
hwpstate_intel2: <Intel Speed Shift> on cpu2
hwpstate_intel3: <Intel Speed Shift> on cpu3
hw.physmem: 17069703168
CPU: Intel(R) Xeon(R) CPU E3-1220 v5 @ 3.00GHz (2999.99-MHz K8-class CPU)

root@TrueNAS[~]# smartctl -x
smartctl 7.2 2021-09-14 r5236 [FreeBSD 13.1-RELEASE-p9 amd64] (local build)
Copyright (C) 2002-20, Bruce Allen, Christian Franke, www.smartmontools.org

ERROR: smartctl requires a device name as the final command-line argument.


Use smartctl -h to get a usage summary

Then what's the 500 GB drive you've posted the SMART report for?

This is post #29 and you've still not answered that. All we know is the CPU, which is of limited relevance.
What’s the server? Pre-built system? Which model?
Home build? Please list all parts: Motherboard, RAM, any card/controller.
And all drives.

If you have rebooted, ada0 may have got a different number.
With CORE, use camcontrol devlist instead of lsblk,
and then smartctl -x /dev/adaN (or /dev/daN) for the failing drive. If in doubt, get SMART reports for all drives.
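
A hedged sketch of the "all drives" route on CORE/FreeBSD, run from a Bourne-style shell (sh or zsh); kern.disks lists whatever disk devices the kernel currently sees:

# Pull a full SMART report from every disk device the kernel knows about.
for d in $(sysctl -n kern.disks); do
  echo "===== /dev/$d ====="
  smartctl -x /dev/$d
done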

Wouldn’t that be a da device rather than ada then?


And when I said to run smartctl -x and didn't tell them to append the device name, and it gave an error saying "you need to add a device name", they couldn't work out to do that.

And as for why lsblk doesn’t work, I hadn’t spotted that this is a Core question, and I have no idea what the FreeBSD equivalent is.

Since I don’t know Core, I am out of here (though given the lack of engagement by the OP I might well have been out of here for other reasons).


Can get you something fairly close:

sudo camcontrol devlist

There's also gpart list, which will give you the most comprehensive information, though the OP might not paste the full text.
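
A small hedged addition: glabel status maps gptid/... labels back to their backing devices, which would settle what the single device in the zpool import output actually is.

glabel status     # gptid label -> backing partition/device
gpart list        # full partition detail (gpart show for the compact view)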


root@TrueNAS[~]# camcontrol devlist
<LSI 9750-4i    DISK 5.12>         at scbus0 target 0 lun 0 (pass0,da0)
<LSI SAS2X28 0e12>                 at scbus0 target 64 lun 0 (ses0,pass1)
<HGST HTS545050A7E680 GR2OA230>    at scbus1 target 0 lun 0 (ada0,pass2)
<AHCI SGPIO Enclosure 2.00 0001>   at scbus9 target 0 lun 0 (ses1,pass3)



root@TrueNAS[~]# smartctl -x /dev/ada0
smartctl 7.2 2021-09-14 r5236 [FreeBSD 13.1-RELEASE-p9 amd64] (local build)
Copyright (C) 2002-20, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Family:     Hitachi/HGST Travelstar Z5K500
Device Model:     HGST HTS545050A7E680
Serial Number:    RB250AM5HHZ9PP
LU WWN Device Id: 5 000cca 7e3d55adc
Firmware Version: GR2OA230
User Capacity:    500,107,862,016 bytes [500 GB]
Sector Sizes:     512 bytes logical, 4096 bytes physical
Rotation Rate:    5400 rpm
Form Factor:      2.5 inches
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   ATA8-ACS T13/1699-D revision 6
SATA Version is:  SATA 2.6, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is:    Tue Sep 17 09:36:54 2024 PDT
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
AAM feature is:   Unavailable
APM feature is:   Disabled
Rd look-ahead is: Enabled
Write cache is:   Enabled
DSN feature is:   Unavailable
ATA Security is:  Disabled, NOT FROZEN [SEC1]
Wt Cache Reorder: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

General SMART Values:
Offline data collection status:  (0x00) Offline data collection activity
                                        was never started.
                                        Auto Offline Data Collection: Disabled.
Self-test execution status:      (   0) The previous self-test routine completed
                                        without error or no self-test has ever
                                        been run.
Total time to complete Offline
data collection:                (   45) seconds.
Offline data collection
capabilities:                    (0x5b) SMART execute Offline immediate.
                                        Auto Offline data collection on/off support.
                                        Suspend Offline collection upon new
                                        command.
                                        Offline surface scan supported.
                                        Self-test supported.
                                        No Conveyance Self-test supported.
                                        Selective Self-test supported.
SMART capabilities:            (0x0003) Saves SMART data before entering
                                        power-saving mode.
                                        Supports SMART auto save timer.
Error logging capability:        (0x01) Error logging supported.
                                        General Purpose Logging supported.
Short self-test routine
recommended polling time:        (   2) minutes.
Extended self-test routine
recommended polling time:        ( 121) minutes.
SCT capabilities:              (0x003d) SCT Status supported.
                                        SCT Error Recovery Control supported.
                                        SCT Feature Control supported.
                                        SCT Data Table supported.

SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAGS    VALUE WORST THRESH FAIL RAW_VALUE
  1 Raw_Read_Error_Rate     PO-R--   100   100   062    -    0
  2 Throughput_Performance  P-S---   100   100   040    -    0
  3 Spin_Up_Time            POS---   215   215   033    -    1
  4 Start_Stop_Count        -O--C-   100   100   000    -    527
  5 Reallocated_Sector_Ct   PO--CK   100   100   005    -    0
  7 Seek_Error_Rate         PO-R--   100   100   067    -    0
  8 Seek_Time_Performance   P-S---   100   100   040    -    0
  9 Power_On_Hours          -O--C-   044   044   000    -    24963
 10 Spin_Retry_Count        PO--C-   100   100   060    -    0
 12 Power_Cycle_Count       -O--CK   100   100   000    -    527
191 G-Sense_Error_Rate      -O-R--   100   100   000    -    0
192 Power-Off_Retract_Count -O--CK   099   099   000    -    207
193 Load_Cycle_Count        -O--C-   001   001   000    -    2756045
194 Temperature_Celsius     -O----   250   250   000    -    24 (Min/Max 13/47)
196 Reallocated_Event_Count -O--CK   100   100   000    -    38
197 Current_Pending_Sector  -O---K   088   088   000    -    504
198 Offline_Uncorrectable   ---R--   100   100   000    -    0
199 UDMA_CRC_Error_Count    -O-R--   200   200   000    -    0
223 Load_Retry_Count        -O-R--   100   100   000    -    0
                            ||||||_ K auto-keep
                            |||||__ C event count
                            ||||___ R error rate
                            |||____ S speed/performance
                            ||_____ O updated online
                            |______ P prefailure warning

General Purpose Log Directory Version 1
SMART           Log Directory Version 1 [multi-sector log support]
Address    Access  R/W   Size  Description
0x00       GPL,SL  R/O      1  Log Directory
0x01           SL  R/O      1  Summary SMART error log
0x02           SL  R/O      1  Comprehensive SMART error log
0x03       GPL     R/O      1  Ext. Comprehensive SMART error log
0x06           SL  R/O      1  SMART self-test log
0x07       GPL     R/O      1  Extended self-test log
0x09           SL  R/W      1  Selective self-test log
0x10       GPL     R/O      1  NCQ Command Error log
0x11       GPL     R/O      1  SATA Phy Event Counters log
0x80-0x9f  GPL,SL  R/W     16  Host vendor specific log
0xe0       GPL,SL  R/W      1  SCT Command/Status
0xe1       GPL,SL  R/W      1  SCT Data Transfer

SMART Extended Comprehensive Error Log Version: 1 (1 sectors)
No Errors Logged

SMART Extended Self-test Log Version: 1 (1 sectors)
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Extended offline    Completed: read failure       90%     22162         8201416

SMART Selective self-test log data structure revision number 1
 SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
    1        0        0  Not_testing
    2        0        0  Not_testing
    3        0        0  Not_testing
    4        0        0  Not_testing
    5        0        0  Not_testing
Selective self-test flags (0x0):
  After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.

SCT Status Version:                  3
SCT Version (vendor specific):       256 (0x0100)
Device State:                        Active (0)
Current Temperature:                    23 Celsius
Power Cycle Min/Max Temperature:     23/32 Celsius
Lifetime    Min/Max Temperature:     13/47 Celsius
Specified Max Operating Temperature:    24 Celsius
Under/Over Temperature Limit Count:   0/0

SCT Temperature History Version:     2
Temperature Sampling Period:         1 minute
Temperature Logging Interval:        1 minute
Min/Max recommended Temperature:      0/60 Celsius
Min/Max Temperature Limit:           -40/65 Celsius
Temperature History Size (Index):    128 (87)

Index    Estimated Time   Temperature Celsius
  88    2024-09-17 07:29    23  ****
  89    2024-09-17 07:30    23  ****
  90    2024-09-17 07:31    24  *****
 ...    ..(  2 skipped).    ..  *****
  93    2024-09-17 07:34    24  *****
  94    2024-09-17 07:35    23  ****
  95    2024-09-17 07:36    23  ****
  96    2024-09-17 07:37    24  *****
  97    2024-09-17 07:38    23  ****
 ...    ..(  2 skipped).    ..  ****
 100    2024-09-17 07:41    23  ****
 101    2024-09-17 07:42    24  *****
 102    2024-09-17 07:43    23  ****
 103    2024-09-17 07:44    23  ****
 104    2024-09-17 07:45    24  *****
 105    2024-09-17 07:46    24  *****
 106    2024-09-17 07:47    23  ****
 107    2024-09-17 07:48    24  *****
 108    2024-09-17 07:49    23  ****
 109    2024-09-17 07:50    24  *****
 110    2024-09-17 07:51    23  ****
 ...    ..( 21 skipped).    ..  ****
   4    2024-09-17 08:13    23  ****
   5    2024-09-17 08:14    24  *****
 ...    ..( 11 skipped).    ..  *****
  17    2024-09-17 08:26    24  *****
  18    2024-09-17 08:27    23  ****
  19    2024-09-17 08:28    24  *****
  20    2024-09-17 08:29    23  ****
  21    2024-09-17 08:30    24  *****
 ...    ..( 12 skipped).    ..  *****
  34    2024-09-17 08:43    24  *****
  35    2024-09-17 08:44    23  ****
  36    2024-09-17 08:45    24  *****
  37    2024-09-17 08:46    24  *****
  38    2024-09-17 08:47    23  ****
  39    2024-09-17 08:48    24  *****
  40    2024-09-17 08:49    23  ****
  41    2024-09-17 08:50    24  *****
  42    2024-09-17 08:51    23  ****
  43    2024-09-17 08:52    23  ****
  44    2024-09-17 08:53    24  *****
  45    2024-09-17 08:54    23  ****
  46    2024-09-17 08:55    23  ****
  47    2024-09-17 08:56    23  ****
  48    2024-09-17 08:57    24  *****
 ...    ..(  3 skipped).    ..  *****
  52    2024-09-17 09:01    24  *****
  53    2024-09-17 09:02    23  ****
  54    2024-09-17 09:03    24  *****
 ...    ..(  2 skipped).    ..  *****
  57    2024-09-17 09:06    24  *****
  58    2024-09-17 09:07    23  ****
  59    2024-09-17 09:08    24  *****
 ...    ..(  3 skipped).    ..  *****
  63    2024-09-17 09:12    24  *****
  64    2024-09-17 09:13    23  ****
  65    2024-09-17 09:14    24  *****
  66    2024-09-17 09:15    24  *****
  67    2024-09-17 09:16    23  ****
  68    2024-09-17 09:17    23  ****
  69    2024-09-17 09:18    23  ****
  70    2024-09-17 09:19    24  *****
  71    2024-09-17 09:20    23  ****
  72    2024-09-17 09:21    24  *****
  73    2024-09-17 09:22    23  ****
  74    2024-09-17 09:23    24  *****
  75    2024-09-17 09:24    24  *****
  76    2024-09-17 09:25    24  *****
  77    2024-09-17 09:26    23  ****
 ...    ..(  8 skipped).    ..  ****
  86    2024-09-17 09:35    23  ****
  87    2024-09-17 09:36    24  *****

SCT Error Recovery Control:
           Read: Disabled
          Write: Disabled

Device Statistics (GP/SMART Log 0x04) not supported

Pending Defects log (GP Log 0x0c) not supported

SATA Phy Event Counters (GP Log 0x11)
ID      Size     Value  Description
0x0001  2            0  Command failed due to ICRC error
0x0002  2            0  R_ERR response for data FIS
0x0003  2            0  R_ERR response for device-to-host data FIS
0x0004  2            0  R_ERR response for host-to-device data FIS
0x0005  2            0  R_ERR response for non-data FIS
0x0006  2            0  R_ERR response for device-to-host non-data FIS
0x0007  2            0  R_ERR response for host-to-device non-data FIS
0x0009  2            2  Transition from drive PhyRdy to drive PhyNRdy
0x000a  2            3  Device-to-host register FISes sent due to a COMRESET
0x000b  2            0  CRC errors within host-to-device FIS
0x000d  2            0  Non-CRC errors within host-to-device FIS

root@TrueNAS[~]#                                                               

The 500 GB Hitachi (which the console told you has SMART errors) is very likely your boot-pool device.

This is a red herring: if you lose your boot pool, you can still import your data pool(s) into another ZFS server or TrueNAS system. (Reinstalling TrueNAS to a good boot device and then uploading your config file will also restore you to a working condition, if your data pool is truly intact.)

Besides your boot device, there are no individual data drives listed in your output: only host bus adapters (HBAs).

This matches an early suspicion that TrueNAS (and really, ZFS) was not given the drives directly, but rather fed some sort of "hardware RAID" volume as the sole device on which to construct a pool. You might have a bigger issue than "one of my drives failed", considering that you're dealing with some sort of hardware RAID setup.


Further evidence of the above is the output from your zpool import command from earlier (assuming you didn’t cut out any text from the bottom):

Non-importable data pool named "Volume"
zpool import
   pool: Volume
     id: 10277565860005492699
  state: FAULTED
status: The pool metadata is corrupted.
 action: The pool cannot be imported due to damaged devices or data.
        The pool may be active on another system, but can be imported using
        the '-f' flag.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-72
 config:

        Volume                                        FAULTED  corrupted data
          gptid/7d557156-e4c1-11ee-9009-6805ca0a988e  ONLINE

There is only a single device in a single "striped" vdev. This "device" may not be a hard drive or a partition, but rather the volume supplied as da0, which is your LSI 9750-4i.
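
A hedged sketch of how to check that without writing anything to the device (da0p2 is only a guess at the data partition; adjust to whatever gpart actually shows):

diskinfo -v /dev/da0    # reported capacity; roughly 27 TB would match 10x 3TB in RAID 5
gpart show da0          # partition layout on the RAID volume, if any
zdb -l /dev/da0p2       # dump the ZFS labels from the likely data partition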


Although I have no experience with CORE, I do have some knowledge of ZFS, so here is my current take on this (speaking pretty bluntly):

  1. This TrueNAS Core system appears to have been set up by someone who had almost no knowledge of TrueNAS or ZFS. They set it up completely incorrectly.

  2. In particular, the LSI 9750-4i should have been flashed to IT mode so that it acts as a plain pass-through HBA rather than a RAID controller, and thus exposes the disks individually as ZFS requires. Unfortunately, they did not do this, instead leaving it in RAID mode and (presumably) configuring the 10x 3TB disks to appear to FreeBSD and ZFS as a single ~27TB device.

  3. Clearly something has happened under the covers of the RAID device such that the RAID controller can no longer present the virtual disk correctly, even though hardware RAID should tolerate some degradation of the underlying physical disks. (I suspect that one drive failed, and then a second drive failed, and thus the RAID controller can no longer deliver the data - but who knows without any information.)

  4. You may be able to see more information from the BIOS RAID configuration screens, or by running an LSI 9750 utility that can pierce the RAID controller veil and see what is happening underneath (see the sketch below), but I doubt that the data will be recoverable that way either.
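
A hedged sketch of that check, assuming tw_cli (the 3ware/LSI command-line tool for the 9750 series) is available or can be installed on this system; /c0 is only a guess at the controller number:

tw_cli show          # list the controllers the tool can see
tw_cli /c0 show      # unit (RAID volume) status and per-port drive status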

Unfortunately I think you need to assume that the data on the 10x 3TB drives is irrecoverably lost.

I would now suggest that the steps from here are:

  1. Flash the LSI 9750-4i into IT mode to expose the underlying disks. Then post the output of camcontrol devlist again.
  2. Run SMART SHORT tests on all drives shown in step 1 (in parallel) to confirm whether they are working at all; a sketch of steps 2-4 follows after this list.
  3. For all disks that pass the SHORT test, run a SMART LONG test (in parallel) and wait for these to complete.
  4. Post the results here from smartctl -x /dev/xxx, run individually for all drives shown in step 1.
  5. Create a RAIDZ2 pool from the remaining 3TB disks (giving you c. 24TB of usable space).
  6. Recover data from backups.
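
A hedged sketch of steps 2-4, assuming the disks show up as da0 through da9 after the reflash (substitute whatever camcontrol devlist actually reports):

# Kick off short tests on all ten drives; the tests run in parallel on the drives themselves.
for d in da0 da1 da2 da3 da4 da5 da6 da7 da8 da9; do
  smartctl -t short /dev/$d
done
# After a few minutes, check the results with smartctl -x, then start the long
# tests the same way with smartctl -t long /dev/$d and allow several hours.

For step 5, the pool itself should be created from the TrueNAS web UI rather than with zpool create at the command line, so that the middleware knows about it.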

10-wide RAID 5 would be bad, even without throwing ZFS on top of hardware RAID. But “3TB x 10 making about 30TB” suggests it might even have been 10-wide RAID 0. :scream:


This should be the first step. @soonytech, it appears that your single disk is a RAID volume from your LSI 9750 controller. You'll want to get into the LSI BIOS (watch during the boot process for a prompt to press Ctrl+C or similar) and check the status of the volume.

But if @etorix is correct with their assumption here:

Then you should prepare yourself for the possibility of restoring from backup.

If the hardware RAID were merely degraded, it would still work in TN. Since it doesn't work at all, to me that implies that the underlying RAID itself is toast.

So, IMO, prepare yourself for the almost certainty of restoring from backup.

Or prepare to send the drives, the controller, and a briefcase with enough cash to purchase a nice water-skiing boat to a very competent recovery company, if you really, really want to try to recover the data.

For whatever reason, the forum rules lack content relative to the old forum. In particular, the latter contained specific advice on how to ask questions in a way that increases your chance of getting a helpful response.

So I would recommend that @soonytech have a good look at Forum Rules | TrueNAS Community.

In addition, as others have indicated, it is not possible to give good advice with the information provided so far. There are a lot of articles on the internet about how to ask such questions; perhaps reading up on those will help as well.

That is very specific, but likely an underestimate for a hardware RAID of 10 disks. Perhaps a luxury 50' yacht or a Lamborghini?
