Accidentally started install on wrong HDD -> help appreciated for data recovery

Hello

I’ve been using FreeNAS, and later TrueNAS, on an old Thecus N2810 for some time now, and it has always worked well. It used to have 2x 2TB HDDs, until one failed last year and I bought an SSD to replace it. That SSD rather quickly failed as well and was not replaced (I know: no redundancy). Hence, the FreeNAS_data pool was left on just the 2TB HDD (Seagate BarraCuda), and the system has been powered off since.

Today I decided to do a fresh install of TrueNAS Core on a 128GB SSD, but during the installation I unfortunately selected the 2TB HDD instead of the 128GB SSD (I know…). I realized the error shortly after the disk started spinning and cut the power, but I’m afraid the damage is done.

I finished the install of TrueNAS-13.0-U6.7 on the 128GB SSD and imported the old configuration. It now shows the FreeNAS_data pool as OFFLINE under Storage/Pools, while both the 2TB HDD and the 2TB SSD show up under Storage/Disks. The system does not want to boot with the HDD connected.

What I’ve tried so far:

  1. Import an existing pool via the GUI: the FreeNAS_data pool does not show up.
  2. gpart show /dev/ada1 returns:
    root@freenas:~ # gpart show /dev/ada1
    =>        34  3907029101  ada1  GPT  (1.8T)
              34        4062        - free -  (2.0M)
            4096        2048     1  bios-boot  (1.0M)
            6144     1048576     2  efi  (512M)
         1054720  3905974415     3  apple-zfs  (1.8T)

I truly hope that someone can help me recover from this stupidity (I have some hope, as the apple-zfs partition is still there), as there are quite a few old family pictures on the drive…

Thanks in advance!

Unfortunately, the apple-zfs partition may be the one for the boot pool, not the old data partition, which would be accompanied by the 2 GB padding partition…
Maybe some BSD experts could help recreate a suitable partition table. I’d suggest trying Klennet ZFS Recovery to see if the data is still there.

If the 2x 2TB was a mirror, and you only installed onto one of the 2TB drives, can’t you open up the unharmed 2TB on another TrueNAS installation?

Failing that, you could always try PhotoRec.

Wow, that does not look good to me, but I’m not claiming to be an expert.

What is drive ada1, the HDD or the SSD? You do not specify, which leads to assumptions. Please do not leave us to assume anything; assuming just slows things down.

What is the output of glabel list and zpool status?
And finish the data dump: gpart show /dev/ada0 and gpart show /dev/ada2.

While we are at it, smartctl -a /dev/ada1, and if you have any other drives, smartctl -a /dev/ada0 and smartctl -a /dev/ada2 as well; that should be it if you only have 3 drives installed in total.

Was the pool a mirror? I assume so, but that is an assumption. If both drives were operational and you installed TrueNAS on one of the mirrored drives, your data “should be” intact.

Before trying to mount the ZFS drive, let’s wait and see what the data you provide tells us. Below is a possible action you could take. This is only to show you that it may be possible; I don’t really want you to jump in and do it. Wait for specific instructions telling you to do something like this.

Let’s assume the listing gives you the apple-zfs partition (all 1.8TB) and a gptid for it. You might be able to use the following command to manually mount it, assuming the data is intact.

zpool import -d /dev/gptid/<your_gptid>
You may need to use force (odds are good) but baby steps.
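
For reference, and only once we get to that point, the fuller form would look something like this (a rough sketch, assuming the pool is still named FreeNAS_data):

zpool import -d /dev/gptid
zpool import -R /mnt -d /dev/gptid FreeNAS_data

The first line only scans /dev/gptid and lists any importable pools without touching them; the second actually imports by name, with -R /mnt so the datasets mount where TrueNAS expects them.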

It was indeed a mirror, but I’m afraid that the other disk (2 TB SSD) had already failed. I’ll post the output of gpart on this disk further along.

Thanks for the suggestion on Photorec, will try this later on for sure if recovery of the disk fails!

OK, I didn’t follow that part. Gotcha.

Since it was a mirror, the data is at least all on one disk rather than striped across several. PhotoRec generally works really well with standard filesystems; I’ve no experience with ZFS at that sort of block level, though. GL!

You probably don’t want to hear it, but… a) you won’t ever again want to install an OS with other drives plugged in, and b) I doubt you’ll ever use a single device, no matter the redundancy, as a single source of safety ever again.


You are absolutely right about not leaving anything to assumptions, sorry!

  1. ada1 is the 2TB HDD, ada0 is the 2TB SSD. The pool was indeed a mirror. I’m afraid that the pool was already degraded, so recovering from the SSD might not be an option. Then there is da0, which is a 128GB SSD for the operating system.
  2. Output of glabel list:
root@freenas:~ # glabel list
Geom name: ada0p1
Providers:
1. Name: gptid/ad6a362d-2a70-11f0-a967-0014fd190deb
   Mediasize: 272629760 (260M)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 20480
   Mode: r0w0e0
   secoffset: 0
   offset: 0
   seclength: 532480
   length: 272629760
   index: 0
Consumers:
1. Name: ada0p1
   Mediasize: 272629760 (260M)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 20480
   Mode: r0w0e0

Geom name: ada0p2
Providers:
1. Name: gptid/ad7a47fb-2a70-11f0-a967-0014fd190deb
   Mediasize: 2030949105664 (1.8T)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 17452519424
   Mode: r0w0e0
   secoffset: 0
   offset: 0
   seclength: 3966697472
   length: 2030949105664
   index: 0
Consumers:
1. Name: ada0p2
   Mediasize: 2030949105664 (1.8T)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 17452519424
   Mode: r0w0e0

Geom name: da0p1
Providers:
1. Name: gptid/8432812e-2a76-11f0-957c-0014fd190deb
   Mediasize: 272629760 (260M)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r0w0e0
   secoffset: 0
   offset: 0
   seclength: 532480
   length: 272629760
   index: 0
Consumers:
1. Name: da0p1
   Mediasize: 272629760 (260M)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r0w0e0

Geom name: ada0p3
Providers:
1. Name: gptid/ad729985-2a70-11f0-a967-0014fd190deb
   Mediasize: 17179869184 (16G)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 272650240
   Mode: r0w0e0
   secoffset: 0
   offset: 0
   seclength: 33554432
   length: 17179869184
   index: 0
Consumers:
1. Name: ada0p3
   Mediasize: 17179869184 (16G)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 272650240
   Mode: r0w0e0
  3. Output of zpool status:
root@freenas:~ # zpool status
  pool: boot-pool
 state: ONLINE
config:

        NAME        STATE     READ WRITE CKSUM
        boot-pool   ONLINE       0     0     0
          da0p2     ONLINE       0     0     0

errors: No known data errors
  4. Output of gpart show /dev/ada0 (SSD):
root@freenas:~ # gpart show /dev/ada0
=>        40  4000797280  ada0  GPT  (1.9T)
          40      532480     1  efi  (260M)
      532520    33554432     3  freebsd-swap  (16G)
    34086952  3966697472     2  freebsd-zfs  (1.8T)
  4000784424       12896        - free -  (6.3M)
  5. Output of smartctl -a /dev/ada1 (HDD):
root@freenas:~ # smartctl -a /dev/ada1
smartctl 7.2 2021-09-14 r5236 [FreeBSD 13.1-RELEASE-p9 amd64] (local build)
Copyright (C) 2002-20, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Family:     Seagate BarraCuda 3.5 (CMR)
Device Model:     ST2000DM006-2DM164
Serial Number:    Z4Z88YZB
LU WWN Device Id: 5 000c50 0a311e3bb
Firmware Version: CC26
User Capacity:    2,000,398,934,016 bytes [2.00 TB]
Sector Sizes:     512 bytes logical, 4096 bytes physical
Rotation Rate:    7200 rpm
Form Factor:      3.5 inches
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   ACS-2, ACS-3 T13/2161-D revision 3b
SATA Version is:  SATA 3.1, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is:    Tue May  6 18:20:28 2025 CEST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

General SMART Values:
Offline data collection status:  (0x00) Offline data collection activity
                                        was never started.
                                        Auto Offline Data Collection: Disabled.
Self-test execution status:      (   0) The previous self-test routine completed
                                        without error or no self-test has ever
                                        been run.
Total time to complete Offline
data collection:                (   89) seconds.
Offline data collection
capabilities:                    (0x73) SMART execute Offline immediate.
                                        Auto Offline data collection on/off support.
                                        Suspend Offline collection upon new
                                        command.
                                        No Offline surface scan supported.
                                        Self-test supported.
                                        Conveyance Self-test supported.
                                        Selective Self-test supported.
SMART capabilities:            (0x0003) Saves SMART data before entering
                                        power-saving mode.
                                        Supports SMART auto save timer.
Error logging capability:        (0x01) Error logging supported.
                                        General Purpose Logging supported.
Short self-test routine
recommended polling time:        (   1) minutes.
Extended self-test routine
recommended polling time:        ( 218) minutes.
Conveyance self-test routine
recommended polling time:        (   2) minutes.
SCT capabilities:              (0x1085) SCT Status supported.

SMART Attributes Data Structure revision number: 10
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x000f   096   092   006    Pre-fail  Always       -       94864701
  3 Spin_Up_Time            0x0003   096   096   000    Pre-fail  Always       -       0
  4 Start_Stop_Count        0x0032   052   052   020    Old_age   Always       -       49560
  5 Reallocated_Sector_Ct   0x0033   098   098   010    Pre-fail  Always       -       1912
  7 Seek_Error_Rate         0x000f   079   060   030    Pre-fail  Always       -       100072244
  9 Power_On_Hours          0x0032   040   040   000    Old_age   Always       -       53220
 10 Spin_Retry_Count        0x0013   100   100   097    Pre-fail  Always       -       0
 12 Power_Cycle_Count       0x0032   100   100   020    Old_age   Always       -       232
183 Runtime_Bad_Block       0x0032   100   100   000    Old_age   Always       -       0
184 End-to-End_Error        0x0032   100   100   099    Old_age   Always       -       0
187 Reported_Uncorrect      0x0032   001   001   000    Old_age   Always       -       268
188 Command_Timeout         0x0032   100   096   000    Old_age   Always       -       3 3 7
189 High_Fly_Writes         0x003a   057   057   000    Old_age   Always       -       43
190 Airflow_Temperature_Cel 0x0022   077   050   045    Old_age   Always       -       23 (Min/Max 23/23)
191 G-Sense_Error_Rate      0x0032   100   100   000    Old_age   Always       -       0
192 Power-Off_Retract_Count 0x0032   100   100   000    Old_age   Always       -       101
193 Load_Cycle_Count        0x0032   001   001   000    Old_age   Always       -       343063
194 Temperature_Celsius     0x0022   023   050   000    Old_age   Always       -       23 (0 17 0 0 0)
197 Current_Pending_Sector  0x0012   001   001   000    Old_age   Always       -       58192
198 Offline_Uncorrectable   0x0010   001   001   000    Old_age   Offline      -       58192
199 UDMA_CRC_Error_Count    0x003e   200   200   000    Old_age   Always       -       0
240 Head_Flying_Hours       0x0000   100   253   000    Old_age   Offline      -       39230h+47m+57.653s
241 Total_LBAs_Written      0x0000   100   253   000    Old_age   Offline      -       124345827939
242 Total_LBAs_Read         0x0000   100   253   000    Old_age   Offline      -       587082243014

SMART Error Log Version: 1
ATA Error Count: 268 (device log contains only the most recent five errors)
        CR = Command Register [HEX]
        FR = Features Register [HEX]
        SC = Sector Count Register [HEX]
        SN = Sector Number Register [HEX]
        CL = Cylinder Low Register [HEX]
        CH = Cylinder High Register [HEX]
        DH = Device/Head Register [HEX]
        DC = Device Command Register [HEX]
        ER = Error register [HEX]
        ST = Status register [HEX]
Powered_Up_Time is measured from power on, and printed as
DDd+hh:mm:SS.sss where DD=days, hh=hours, mm=minutes,
SS=sec, and sss=millisec. It "wraps" after 49.710 days.

Error 268 occurred at disk power-on lifetime: 53189 hours (2216 days + 5 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER ST SC SN CL CH DH
  -- -- -- -- -- -- --
  40 53 00 ff ff ff 0f  Error: UNC at LBA = 0x0fffffff = 268435455

  Commands leading to the command that caused the error were:
  CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
  -- -- -- -- -- -- -- --  ----------------  --------------------
  60 00 08 ff ff ff 4f 00      00:08:02.023  READ FPDMA QUEUED
  60 00 88 ff ff ff 4f 00      00:08:02.011  READ FPDMA QUEUED
  60 00 08 ff ff ff 4f 00      00:08:02.003  READ FPDMA QUEUED
  60 00 88 ff ff ff 4f 00      00:08:02.003  READ FPDMA QUEUED
  60 00 08 ff ff ff 4f 00      00:08:01.998  READ FPDMA QUEUED

Error 267 occurred at disk power-on lifetime: 53189 hours (2216 days + 5 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER ST SC SN CL CH DH
  -- -- -- -- -- -- --
  40 53 00 ff ff ff 0f  Error: UNC at LBA = 0x0fffffff = 268435455

  Commands leading to the command that caused the error were:
  CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
  -- -- -- -- -- -- -- --  ----------------  --------------------
  60 00 08 ff ff ff 4f 00      00:07:58.308  READ FPDMA QUEUED
  60 00 08 ff ff ff 4f 00      00:07:58.308  READ FPDMA QUEUED
  60 00 08 ff ff ff 4f 00      00:07:58.304  READ FPDMA QUEUED
  60 00 08 ff ff ff 4f 00      00:07:58.301  READ FPDMA QUEUED
  60 00 30 ff ff ff 4f 00      00:07:58.290  READ FPDMA QUEUED

Error 266 occurred at disk power-on lifetime: 53189 hours (2216 days + 5 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER ST SC SN CL CH DH
  -- -- -- -- -- -- --
  40 53 00 ff ff ff 0f  Error: UNC at LBA = 0x0fffffff = 268435455

  Commands leading to the command that caused the error were:
  CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
  -- -- -- -- -- -- -- --  ----------------  --------------------
  60 00 48 ff ff ff 4f 00      00:07:53.912  READ FPDMA QUEUED
  60 00 08 ff ff ff 4f 00      00:07:53.904  READ FPDMA QUEUED
  60 00 30 ff ff ff 4f 00      00:07:53.899  READ FPDMA QUEUED
  60 00 10 ff ff ff 4f 00      00:07:53.898  READ FPDMA QUEUED
  61 00 10 ff ff ff 4f 00      00:07:53.898  WRITE FPDMA QUEUED

Error 265 occurred at disk power-on lifetime: 53189 hours (2216 days + 5 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER ST SC SN CL CH DH
  -- -- -- -- -- -- --
  40 53 00 ff ff ff 0f  Error: UNC at LBA = 0x0fffffff = 268435455

  Commands leading to the command that caused the error were:
  CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
  -- -- -- -- -- -- -- --  ----------------  --------------------
  60 00 10 ff ff ff 4f 00      00:07:50.224  READ FPDMA QUEUED
  60 00 28 ff ff ff 4f 00      00:07:50.216  READ FPDMA QUEUED
  60 00 28 ff ff ff 4f 00      00:07:50.216  READ FPDMA QUEUED
  60 00 30 ff ff ff 4f 00      00:07:50.210  READ FPDMA QUEUED
  60 00 08 ff ff ff 4f 00      00:07:50.208  READ FPDMA QUEUED

Error 264 occurred at disk power-on lifetime: 53189 hours (2216 days + 5 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER ST SC SN CL CH DH
  -- -- -- -- -- -- --
  40 53 00 ff ff ff 0f  Error: UNC at LBA = 0x0fffffff = 268435455

  Commands leading to the command that caused the error were:
  CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
  -- -- -- -- -- -- -- --  ----------------  --------------------
  60 00 08 ff ff ff 4f 00      00:07:45.972  READ FPDMA QUEUED
  60 00 30 ff ff ff 4f 00      00:07:45.972  READ FPDMA QUEUED
  60 00 38 ff ff ff 4f 00      00:07:45.962  READ FPDMA QUEUED
  60 00 30 ff ff ff 4f 00      00:07:45.941  READ FPDMA QUEUED
  61 00 10 ff ff ff 4f 00      00:07:45.937  WRITE FPDMA QUEUED

SMART Self-test log structure revision number 1
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Extended offline    Completed: read failure       30%     52301         2923913056

SMART Selective self-test log data structure revision number 1
 SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
    1        0        0  Not_testing
    2        0        0  Not_testing
    3        0        0  Not_testing
    4        0        0  Not_testing
    5        0        0  Not_testing
Selective self-test flags (0x0):
  After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.
  6. Output of smartctl -a /dev/ada0 (the SSD that was the mirror of ada1; if this could be recovered, it would also be a solution):
root@freenas:~ # smartctl -a /dev/ada0
smartctl 7.2 2021-09-14 r5236 [FreeBSD 13.1-RELEASE-p9 amd64] (local build)
Copyright (C) 2002-20, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Device Model:     TIMETEC SATA 2TB
Serial Number:    AT231020A0327
Firmware Version: W0714A0
User Capacity:    2,048,408,248,320 bytes [2.04 TB]
Sector Size:      512 bytes logical/physical
Rotation Rate:    Solid State Device
Form Factor:      mSATA
TRIM Command:     Available
Device is:        Not in smartctl database [for details use: -P showall]
ATA Version is:   ACS-2 T13/2015-D revision 3
SATA Version is:  SATA 3.2, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is:    Tue May  6 18:22:28 2025 CEST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

General SMART Values:
Offline data collection status:  (0x00) Offline data collection activity
                                        was never started.
                                        Auto Offline Data Collection: Disabled.
Self-test execution status:      (   0) The previous self-test routine completed
                                        without error or no self-test has ever
                                        been run.
Total time to complete Offline
data collection:                (  120) seconds.
Offline data collection
capabilities:                    (0x11) SMART execute Offline immediate.
                                        No Auto Offline data collection support.
                                        Suspend Offline collection upon new
                                        command.
                                        No Offline surface scan supported.
                                        Self-test supported.
                                        No Conveyance Self-test supported.
                                        No Selective Self-test supported.
SMART capabilities:            (0x0002) Does not save SMART data before
                                        entering power-saving mode.
                                        Supports SMART auto save timer.
Error logging capability:        (0x01) Error logging supported.
                                        General Purpose Logging supported.
Short self-test routine
recommended polling time:        (   2) minutes.
Extended self-test routine
recommended polling time:        (  10) minutes.
SCT capabilities:              (0x0001) SCT Status supported.

SMART Attributes Data Structure revision number: 1
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x0032   100   100   050    Old_age   Always       -       0
  5 Reallocated_Sector_Ct   0x0032   100   100   050    Old_age   Always       -       0
  9 Power_On_Hours          0x0032   100   100   050    Old_age   Always       -       1665
 12 Power_Cycle_Count       0x0032   100   100   050    Old_age   Always       -       74
160 Unknown_Attribute       0x0032   100   100   050    Old_age   Always       -       0
161 Unknown_Attribute       0x0033   100   100   050    Pre-fail  Always       -       100
163 Unknown_Attribute       0x0032   100   100   050    Old_age   Always       -       89
164 Unknown_Attribute       0x0032   100   100   050    Old_age   Always       -       7950
165 Unknown_Attribute       0x0032   100   100   050    Old_age   Always       -       13
166 Unknown_Attribute       0x0032   100   100   050    Old_age   Always       -       1
167 Unknown_Attribute       0x0032   100   100   050    Old_age   Always       -       4
168 Unknown_Attribute       0x0032   100   100   050    Old_age   Always       -       5050
169 Unknown_Attribute       0x0032   100   100   050    Old_age   Always       -       100
175 Program_Fail_Count_Chip 0x0032   100   100   050    Old_age   Always       -       0
176 Erase_Fail_Count_Chip   0x0032   100   100   050    Old_age   Always       -       0
177 Wear_Leveling_Count     0x0032   100   100   050    Old_age   Always       -       0
178 Used_Rsvd_Blk_Cnt_Chip  0x0032   100   100   050    Old_age   Always       -       0
181 Program_Fail_Cnt_Total  0x0032   100   100   050    Old_age   Always       -       0
182 Erase_Fail_Count_Total  0x0032   100   100   050    Old_age   Always       -       0
192 Power-Off_Retract_Count 0x0032   100   100   050    Old_age   Always       -       67
194 Temperature_Celsius     0x0022   100   100   050    Old_age   Always       -       32
195 Hardware_ECC_Recovered  0x0032   100   100   050    Old_age   Always       -       0
196 Reallocated_Event_Count 0x0032   100   100   050    Old_age   Always       -       0
197 Current_Pending_Sector  0x0032   100   100   050    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0032   100   100   050    Old_age   Always       -       0
199 UDMA_CRC_Error_Count    0x0032   100   100   050    Old_age   Always       -       0
232 Available_Reservd_Space 0x0032   100   100   050    Old_age   Always       -       100
241 Total_LBAs_Written      0x0030   100   100   050    Old_age   Offline      -       85171
242 Total_LBAs_Read         0x0030   100   100   050    Old_age   Offline      -       91263
245 Unknown_Attribute       0x0032   100   100   050    Old_age   Always       -       49536

SMART Error Log Version: 1
No Errors Logged

SMART Self-test log structure revision number 1
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Short offline       Completed without error       00%      1663         -
# 2  Extended offline    Completed without error       00%       639         -
# 3  Extended offline    Completed without error       00%       471         -
# 4  Extended offline    Completed without error       00%       303         -
# 5  Extended offline    Completed without error       00%       135         -
# 6  Extended offline    Completed without error       00%         0         -

Selective Self-tests/Logging not supported

I hope this helps. I appreciate the time and effort all of you are putting into this!

This drive is dying.

If you have another drive lying around, make a full sector copy of ada1. Then you could try copying the partition table from ada0.
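
Purely to illustrate the mechanics, not as a go-ahead: assuming the spare shows up as ada2 (a hypothetical device name here; double-check with gpart show or camcontrol devlist first, since both commands overwrite their target), the two steps would look roughly like this:

dd if=/dev/ada1 of=/dev/ada2 bs=1m conv=noerror,sync
gpart backup ada0 | gpart restore -F ada1

conv=noerror,sync keeps dd going past the HDD’s unreadable sectors and pads them with zeros, and gpart backup/restore rewrites only the partition table on ada1, not the partition contents, so the 1.8T ZFS area itself is left alone.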

Looks much more promising. From your first post, I thought this drive was dead and long removed. (Assumptions…)
How was the mirror degraded? What’s the issue with ada0?

What’s the output of zpool import?


I should buy another drive. Any quick recommendations? Should I stick with an HDD or go for an SSD?

Unfortunately, it’s been more than 6 months since I last looked at the system, so I don’t know what the issue with ada0 is.

Output of zpool import is the following:

root@freenas:~ # zpool import
   pool: boot-pool
     id: 10387515358846651739
  state: ONLINE
status: Some supported features are not enabled on the pool.
        (Note that they may be intentionally disabled if the
        'compatibility' property is set.)
 action: The pool can be imported using its name or numeric identifier, though
        some features will not be available without an explicit 'zpool upgrade'.
 config:

        boot-pool                                     ONLINE
          gptid/151a8c08-9a1e-4176-a45e-0009df2805fe  ONLINE

   pool: boot-pool
     id: 7896787326943506794
  state: ONLINE
 action: The pool can be imported using its name or numeric identifier.
 config:

        boot-pool   ONLINE
          ada0p2    ONLINE

Whatever you find… You need some safe space to copy whatever could be recovered.

ada0, which should be the (data) SSD, is seen as holding a boot pool.
Either you have rebooted and ada0 is now the reformatted HDD, or… I don’t know what’s happening.
Maybe try disconnecting the HDD, leaving only the SSD, and run zpool import again.

I still have the HDD that failed lying around and inserted it to see what’s going on. This is the output of gpart show /dev/ada1 (it might be confusing, but the old failed HDD is now the one on ada1) and zpool import:

root@freenas:~ # gpart show /dev/ada1
=>        34  3907029101  ada1  GPT  (1.8T)
          34        4062        - free -  (2.0M)
        4096     4194304     1  freebsd-swap  (2.0G)
     4198400  3902828544     2  freebsd-zfs  (1.8T)
  3907026944        2191        - free -  (1.1M)

root@freenas:~ # zpool import
   pool: FreeNAS_data
     id: 13274349292440223490
  state: DEGRADED
status: The pool was last accessed by another system.
 action: The pool can be imported despite missing or damaged devices.  The
        fault tolerance of the pool may be compromised if imported.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-EY
 config:

        FreeNAS_data                                    DEGRADED
          mirror-0                                      DEGRADED
            gptid/8402ecbd-8025-11e7-9cc7-0014fd190deb  UNAVAIL  cannot open
            gptid/a5bab846-874d-11e6-baa1-0014fd190deb  ONLINE

   pool: boot-pool
     id: 7896787326943506794
  state: ONLINE
 action: The pool can be imported using its name or numeric identifier.
 config:

        boot-pool   ONLINE
          ada0p2    ONLINE

What does this mean? Can this drive be used to restore a part of the data (current data minus the changes since the drive was replaced minus the damaged data)?

zpool import with only the 128GB SSD and the 2TB SSD connected:

root@freenas:~ # zpool import
   pool: boot-pool
     id: 7896787326943506794
  state: ONLINE
 action: The pool can be imported using its name or numeric identifier.
 config:

        boot-pool   ONLINE
          ada0p2    ONLINE

When I take out the 2TB SSD and run zpool import again, this is the output:

root@freenas:~ # zpool import
no pools available to import

Yes, and this looks like the best and easiest option.

Always track drives by serial number. That was one of the reasons I asked for the smartctl data, not just to find out how badly the Barracuda drive was doing.
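
A quick way to map device names to serials on the CLI, so the right disk gets pulled or wiped (just a sketch; either command works):

geom disk list | grep -E 'Geom name|ident'
smartctl -i /dev/ada1 | grep -i serial

geom disk list prints an ident field for each disk, which is the serial number, and smartctl -i reports the same for a single device.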

@Tom_Van_Acker With the old drive installed, have you tried to import FreeNAS_data? You may need to force it, but as @etorix said, this looks like the option you have, and I’m thinking the only option.

You may or may not have corrupt data, I don’t quite know, but you have no other option right now.

As for recovery “AFTER” you have copied the data you want off the drive, you can replace the UNAVAIL drive with the SSD. You will need to wipe the SSD (remove the partition data) and then add it to the pool. If the drive has enough space, it should work.
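
To make that last step concrete, a rough sketch of the CLI version, assuming the pool has been imported, the data copied off, and ada0 really is the SSD being sacrificed (in TrueNAS the Replace action on the pool status page is usually the safer route, since it also recreates the usual swap partition):

gpart destroy -F ada0
zpool replace FreeNAS_data gptid/8402ecbd-8025-11e7-9cc7-0014fd190deb /dev/ada0

gpart destroy -F wipes the SSD’s partition table, and zpool replace then resilvers the mirror onto it in place of the missing member. If zpool replace complains about an old label on the SSD, -f on the replace (or zpool labelclear first) deals with it.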

As I said, I’m not an expert but we can get someone like @etorix to join in and provide you some good advice.

This is the output of zpool import -f. Please advise how to proceed.

root@freenas:~ # zpool import -f
   pool: FreeNAS_data
     id: 13274349292440223490
  state: DEGRADED
status: The pool was last accessed by another system.
 action: The pool can be imported despite missing or damaged devices.  The
        fault tolerance of the pool may be compromised if imported.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-EY
 config:

        FreeNAS_data                                    DEGRADED
          mirror-0                                      DEGRADED
            gptid/8402ecbd-8025-11e7-9cc7-0014fd190deb  UNAVAIL  cannot open
            gptid/a5bab846-874d-11e6-baa1-0014fd190deb  ONLINE

It looks like one of the mirror disks is good?


The pool doesn’t show up in the GUI, so I’m not sure how I can import it safely through the CLI. Should I use the following command?

zpool import -F -d /dev/gptid/a5bab846-874d-11e6-baa1-0014fd190deb

I think that’s wrong and you would use the pool name, not the disk id, but wait for someone else to chime in since you don’t have much room for error.

That’s the idea BUT:

  • If possible, import from GUI. Do you have an ‘Import’ button, and then a choice to import “FreeNAS_data”?
  • If using CLI, always add -R /mnt
  • Be careful with -f, -F, -fF, -fFX and try these in order until one succeeds; add -n to do a dry run before potentially destructive imports including -F (see the sketch after this list).
  • Drive ID is fine, but it seems you could just go with pool name or ID.
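
Concretely, and only as a sketch using the names from this thread, the escalation would look something like:

zpool import -R /mnt FreeNAS_data
zpool import -f -R /mnt FreeNAS_data
zpool import -fF -n -R /mnt FreeNAS_data
zpool import -fF -R /mnt FreeNAS_data

The -n run only reports whether the -F rewind would work without actually doing it; -fFX is the last resort, since -X can roll the pool back further than you’d like, so hold that one until someone confirms it’s needed.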