TrueNAS SCALE - How do I recover a disk from a broken system into a new one? No import disk option

Hi, I’m relatively new to TrueNAS, please bear with me.

I had a system running TrueNAS SCALE 24.04. I was having issues with the OS drive, so I got a new OS drive and decided to start fresh. (Also, TrueNAS was running in a VM, and I went to bare metal for it.)

I got the OS set up, and I imported my configuration with no problem. But none of my drives are showing up as being part of any storage pool.

On the old system, I had two 4TB drives in a mirror pool called DataStorage. I have those same drives in the new system. I have searched high and low, and I cannot figure out how the hell to get them imported and connected back to the DataStorage pool.

TrueNAS detects these drives in the Disks list, but the Storage page shows the devices as offline. Clicking Manage Devices shows me an option to Add VDEV, but if I do that and select my drives, it says it will wipe the data on them.

I’m clearly fundamentally misunderstanding something, because if I can’t recover a drive like this, what would be the purpose of even making a backup? Every source I’ve found has said “just put it in another machine and use SMB to copy the data”. There’s no way that’s the best solution when I have the data on the drive in the NAS and I just want to start using it again.

Please help, I’m pulling my hair out! :slight_smile:

You shouldn’t be looking to import a disk, you should be looking to import a pool. Storage → Import Pool.
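
If the Import Pool screen comes up empty, the same check can be done from the shell. The first command below only scans and lists importable pools; the second actually imports one by name (DataStorage being the pool name from your old system):

sudo zpool import              # scan the attached disks and list any pools that could be imported
sudo zpool import DataStorage  # import the pool by name once it appears in the list above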


We need to know some details:

  1. Your hardware - motherboard, memory, storage controller(s), disks, and how they are connected.

  2. The output from the following commands:

  • lsblk -bo NAME,MODEL,ROTA,PTTYPE,TYPE,START,SIZE,PARTTYPENAME,PARTUUID
  • lspci
  • sas2flash -list
  • sas3flash -list
  • sudo zpool status -v
  • sudo zpool import
  • for disk in /dev/sd?; do; sudo hdparm -W $disk; done
  • for disk in /dev/sd?; do; sudo smartctl -x $disk; done

Please use the </> button in the post editor to create a separate block for the output of each command.

Slight typo correction (note the “l” ell at the beginning). Also, it is meant to be one line, but the forum breaks it up.
lsblk -bo NAME,MODEL,ROTA,PTTYPE,TYPE,START,SIZE,PARTTYPENAME,PARTUUID

AFK for now; I’ll get the details in a bit. But Import Pool showed no pool options.

admin@truenas[~]$ lsblk -bo NAME,MODEL,ROTA,PTTYPE,TYPE,START,SIZE,PARTTYPENAME,PARTUUID 
NAME          MODEL        ROTA PTTYPE TYPE     START           SIZE PARTTYPENAME     PARTUUID
sda           ST12000DM000    1        disk           12000138625024                  
sdb           ST12000DM000    1        disk           12000138625024                  
sdc           ST4000DM004-    1 gpt    disk            4000787030016                  
└─sdc1                        1 gpt    part      2048  4000785964544 Linux filesystem e8c89cef-af46-4b02-a54e-5aa0e30ec731
sdd           ST4000DM004-    1 gpt    disk            4000787030016                  
└─sdd1                        1 gpt    part      2048  4000785964544 Linux filesystem 25e7b70d-bfc4-444c-9eb7-193cf45ba51e
nvme0n1       CT500P3PSSD8    0 gpt    disk             500107862016                  
├─nvme0n1p1                   0 gpt    part        40      272629760 EFI System       fa675dca-b989-11ef-89c4-244bfee0295f
├─nvme0n1p2                   0 gpt    part  34086952   482646949888 FreeBSD ZFS      fa68d229-b989-11ef-89c4-244bfee0295f
└─nvme0n1p3                   0 gpt    part    532520    17179869184 FreeBSD swap     fa684b6b-b989-11ef-89c4-244bfee0295f
  └─nvme0n1p3                 0        crypt             17179869184                  
admin@truenas[~]$ 
admin@truenas[~]$ lspci
00:00.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Root Complex
00:00.2 IOMMU: Advanced Micro Devices, Inc. [AMD] Starship/Matisse IOMMU
00:01.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
00:01.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge
00:01.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge
00:02.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
00:03.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
00:03.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge
00:04.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
00:05.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
00:07.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
00:07.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B]
00:08.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
00:08.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B]
00:14.0 SMBus: Advanced Micro Devices, Inc. [AMD] FCH SMBus Controller (rev 61)
00:14.3 ISA bridge: Advanced Micro Devices, Inc. [AMD] FCH LPC Bridge (rev 51)
00:18.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse/Vermeer Data Fabric: Device 18h; Function 0
00:18.1 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse/Vermeer Data Fabric: Device 18h; Function 1
00:18.2 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse/Vermeer Data Fabric: Device 18h; Function 2
00:18.3 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse/Vermeer Data Fabric: Device 18h; Function 3
00:18.4 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse/Vermeer Data Fabric: Device 18h; Function 4
00:18.5 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse/Vermeer Data Fabric: Device 18h; Function 5
00:18.6 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse/Vermeer Data Fabric: Device 18h; Function 6
00:18.7 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse/Vermeer Data Fabric: Device 18h; Function 7
01:00.0 Non-Volatile memory controller: Micron Technology Inc Device 5416 (rev 01)
02:00.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Matisse Switch Upstream
03:05.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Matisse PCIe GPP Bridge
03:08.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Matisse PCIe GPP Bridge
03:09.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Matisse PCIe GPP Bridge
03:0a.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Matisse PCIe GPP Bridge
04:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller (rev 15)
05:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Reserved SPP
05:00.1 USB controller: Advanced Micro Devices, Inc. [AMD] Matisse USB 3.0 Host Controller
05:00.3 USB controller: Advanced Micro Devices, Inc. [AMD] Matisse USB 3.0 Host Controller
06:00.0 SATA controller: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] (rev 51)
07:00.0 SATA controller: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] (rev 51)
08:00.0 VGA compatible controller: NVIDIA Corporation GM204 [GeForce GTX 980] (rev a1)
08:00.1 Audio device: NVIDIA Corporation GM204 High Definition Audio Controller (rev a1)
09:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Function
0a:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Reserved SPP
0a:00.1 Encryption controller: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Cryptographic Coprocessor PSPCPP
0a:00.3 USB controller: Advanced Micro Devices, Inc. [AMD] Matisse USB 3.0 Host Controller
0a:00.4 Audio device: Advanced Micro Devices, Inc. [AMD] Starship/Matisse HD Audio Controller

Second batch of commands:

admin@truenas[~]$ for disk in /dev/sd?; do; hdparm -W $disk; done
zsh: command not found: hdparm
zsh: command not found: hdparm
zsh: command not found: hdparm
zsh: command not found: hdparm
admin@truenas[~]$ for disk in /dev/sd?; do; smartctl -x $disk; done
zsh: command not found: smartctl
zsh: command not found: smartctl
zsh: command not found: smartctl
zsh: command not found: smartctl
admin@truenas[~]$ sudo zpool status -v
  pool: boot-pool
 state: ONLINE
status: One or more features are enabled on the pool despite not being
        requested by the 'compatibility' property.
action: Consider setting 'compatibility' to an appropriate value, or
        adding needed features to the relevant file in
        /etc/zfs/compatibility.d or /usr/share/zfs/compatibility.d.
config:

        NAME         STATE     READ WRITE CKSUM
        boot-pool    ONLINE       0     0     0
          nvme0n1p2  ONLINE       0     0     0

errors: No known data errors
admin@truenas[~]$ sudo zpool import   
no pools available to import
admin@truenas[~]$ for disk in /dev/sd?; do; hdparm -W $disk; done
zsh: command not found: hdparm
zsh: command not found: hdparm
zsh: command not found: hdparm
zsh: command not found: hdparm
admin@truenas[~]$ for disk in /dev/sd?; do; smartctl -x $disk; done
zsh: command not found: smartctl
zsh: command not found: smartctl
zsh: command not found: smartctl
zsh: command not found: smartctl

Oops - thanks. I will correct it for the record. Weird - I did a copy and paste and it copied characters both before and after the missing one. Possibly the cat walking across my keyboard pressed delete. :wink:

There also needed to be a sudo in the hdparm and smartctl commands:

  • for disk in /dev/sd?; do; sudo hdparm -W $disk; done
  • for disk in /dev/sd?; do; sudo smartctl -x $disk; done

@honeybadger can you help here?

Which drives are the ones from your old system? If they’re sdc and sdd (and those look to be two 4 TB disks), those don’t look like they were part of a ZFS pool; rather, they look like they were partitioned and used in a generic Linux system in some way.
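
A quick, read-only way to confirm that, using the partition names from the lsblk output above:

sudo zdb -l /dev/sdc1            # prints a ZFS label if this partition was ever a pool member, otherwise "failed to unpack label"
sudo zdb -l /dev/sdd1
sudo blkid /dev/sdc1 /dev/sdd1   # shows what filesystem signature is actually on each partition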

Yes I suspect that @dan is correct.

My suggestion would be to create a mirrored pool using the 2x 12TB disks, then try to mount the 2x 4TB disks from the command line with a Linux sudo mount command, and then cp the data from those drives to your new pool.
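
A rough sketch of that approach, assuming the old partitions hold a plain Linux filesystem (e.g. ext4) and that the new 12TB mirror ends up mounted at /mnt/NewPool; both names are placeholders, so adjust them to match your system:

sudo mkdir -p /mnt/old_disk
sudo mount -o ro /dev/sdc1 /mnt/old_disk                  # read-only, so nothing on the old disk can be changed
ls /mnt/old_disk                                          # sanity-check that the expected data is actually there
sudo rsync -avh /mnt/old_disk/ /mnt/NewPool/recovered/    # copy it over (cp -a also works; rsync can resume if interrupted)
sudo umount /mnt/old_disk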

How was that set up? Did TrueNAS have full access to the SATA/HBA?


@prez02 is on the right path for troubleshooting this one.

@DrLeh please describe the previous configuration of your hypervisor, and how the storage was presented to your TrueNAS VM. Based on the lsblk output I would assume that you were using a Linux-based hypervisor (Proxmox/KVM) and had formatted your disks as ext4 or similar, and then created a virtual disk on each.

In addition:

ST4000DM004

SMR Alert :rotating_light: These drives are not recommended for use with ZFS.
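
If the ext4-plus-virtual-disk guess above is right, a couple of read-only checks from the current bare-metal install should confirm it (the mount point here is just a placeholder):

sudo file -s /dev/sdc1 /dev/sdd1                    # "ext4 filesystem data" here would support the virtual-disk theory
sudo mkdir -p /mnt/check
sudo mount -o ro /dev/sdc1 /mnt/check
find /mnt/check -name '*.qcow2' -o -name '*.raw'    # a Proxmox directory storage keeps VM disks as image files like these
sudo umount /mnt/check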

Based on the lsblk output I would assume that you were using a Linux-based hypervisor (Proxmox/KVM) and had formatted your disks as ext4 or similar, and then created a virtual disk on each.

OK, this was the piece I was missing in my brain. Yes, I was hosting TrueNAS in a Proxmox VM. As a result, the drives my TrueNAS instance had were virtual drives, and thus the physical disks aren’t formatted in a way a bare-metal TrueNAS instance can read directly. Which explains why they wouldn’t show up as a pool to be imported.

Luckily my other drive is still technically functional, so I could boot back into Proxmox > TrueNAS to transfer data. I was hoping to avoid it, but I might have to bite the bullet and just transfer the files onto another system and back to the bare-metal TrueNAS again.

Thanks for all the replies, I’ll report back. If anyone has any suggestions on how to better transfer the data off the drives, let me know! But I’m guessing the best way will be to get back into the VM instance.

Bummer, I’m having trouble getting my Proxmox instance to start up again, as I’m getting I/O errors from the boot drive (the one I’m trying to replace). Unfortunately, I don’t have a backup of the VM or Proxmox configuration, and I don’t know the exact configuration of the drive to attempt to replicate it in a new instance of Proxmox. I might have to look around for data recovery options to see if I can boot from a flash drive and pull the Proxmox metadata off the boot drive to recreate it with a new instance. If anyone happens to have any tips on that, please let me know!

As long as you don’t reformat the physical drives that had the Proxmox data stored on them, you should be able to re-import the virtual disks into a new virtual TrueNAS installation, and then transfer the data out at the file level to another physical disk or two.
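
A sketch of that recovery path, assuming the old 4TB partitions were directory-backed Proxmox storage holding qcow2 or raw images; every VMID and path below is a placeholder, and attaching an image by absolute path (as is commonly done for disk passthrough) is an assumption about your setup (the same attach can also be done through the Proxmox UI):

# On a fresh Proxmox install, with the old 4TB disks attached and NOT reformatted:
lsblk -f                                          # find the old data partitions again
mkdir -p /mnt/old1 && mount /dev/sdX1 /mnt/old1   # mount one of the old storage partitions
find /mnt/old1 -name 'vm-*-disk-*'                # locate the old TrueNAS data-disk images
# Create a new TrueNAS VM (say VMID 101), then attach the old images to it:
qm set 101 --scsi1 /mnt/old1/images/100/vm-100-disk-1.qcow2
qm set 101 --scsi2 /mnt/old1/images/100/vm-100-disk-2.qcow2
# Boot the VM and use Storage -> Import Pool inside TrueNAS to bring the old mirror back,
# then copy the data out over the network (SMB/rsync) to the bare-metal system.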
