My pool got destroyed after upgrading from CORE to SCALE, can I get any data back?

Hi,
Well, I messed something up: after I upgraded to SCALE, one of my pools somehow got corrupted and I cannot bring it back online.
I had 2 pools, TrueNAS and temp-share.
temp-share is online, 1 disk, no mirror.
TrueNAS is offline, 2 disks, mirrored.

The Storage tab shows the TrueNAS pool as offline with no disks attached to it, and there are 2 unassigned disks.

root@truenas[~]# zpool status
  pool: boot-pool
 state: ONLINE
status: One or more features are enabled on the pool despite not being
        requested by the 'compatibility' property.
action: Consider setting 'compatibility' to an appropriate value, or
        adding needed features to the relevant file in
        /etc/zfs/compatibility.d or /usr/share/zfs/compatibility.d.
  scan: scrub repaired 0B in 00:00:07 with 0 errors on Thu May 15 03:45:07 2025
config:

        NAME                                        STATE     READ WRITE CKSUM
        boot-pool                                   ONLINE       0     0     0
          ata-PLEXTOR_PX-128M5S_P02343118241-part2  ONLINE       0     0     0

errors: No known data errors

  pool: temp-share
 state: ONLINE
status: Some supported and requested features are not enabled on the pool.
        The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(7) for details.
  scan: scrub repaired 0B in 01:48:32 with 0 errors on Sun May 11 01:48:32 2025
config:

        NAME        STATE     READ WRITE CKSUM
        temp-share  ONLINE       0     0     0
          sdd2      ONLINE       0     0     0

errors: No known data errors
root@truenas[~]# lsblk
NAME     MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINTS
sda        8:0    0   3.6T  0 disk  
├─sda1     8:1    0     2G  0 part  
└─sda2     8:2    0   3.6T  0 part  
sdb        8:16   0   3.6T  0 disk  
├─sdb1     8:17   0     2G  0 part  
└─sdb2     8:18   0   3.6T  0 part  
sdc        8:32   0 119.2G  0 disk  
├─sdc1     8:33   0   260M  0 part  
├─sdc2     8:34   0   103G  0 part  
└─sdc3     8:35   0    16G  0 part  
  └─sdc3 253:0    0    16G  0 crypt 
sdd        8:48   0   1.8T  0 disk  
├─sdd1     8:49   0     2G  0 part  
└─sdd2     8:50   0   1.8T  0 part  
root@truenas[~]# 

Both 3.6T disks are from the pool that is missing. Neither of them shows any ZFS labels.

root@truenas[~]# zdb -l /dev/sda                            
failed to unpack label 0
failed to unpack label 1
failed to unpack label 2
failed to unpack label 3
root@truenas[~]# zdb -l /dev/sda1
failed to unpack label 0
failed to unpack label 1
failed to unpack label 2
failed to unpack label 3
root@truenas[~]# zdb -l /dev/sda2
failed to unpack label 0
failed to unpack label 1
failed to unpack label 2
failed to unpack label 3
root@truenas[~]# zdb -l /dev/sdb 
failed to unpack label 0
failed to unpack label 1
failed to unpack label 2
failed to unpack label 3
root@truenas[~]# zdb -l /dev/sdb1
failed to unpack label 0
failed to unpack label 1
failed to unpack label 2
failed to unpack label 3
root@truenas[~]# zdb -l /dev/sdb2 
failed to unpack label 0
failed to unpack label 1
failed to unpack label 2
failed to unpack label 3
root@truenas[~]# zpool status TrueNAS
cannot open 'TrueNAS': no such pool

zpool status says there’s no such pool, but it still shows up on the GUI Storage tab.
What can I even do in this situation?

Were you using GELI encryption on Core?

No, no GELI encryption.

Additional data that might be important:

root@truenas[~]# sfdisk -d /dev/sda
label: gpt
label-id: 7D091752-C582-11EC-B1CC-408D5C86FBB1
device: /dev/sda
unit: sectors
first-lba: 40
last-lba: 7814037127
sector-size: 512

/dev/sda1 : start=         128, size=     4194304, type=516E7CB5-6ECF-11D6-8FF8-00022D09712B, uuid=7D21A9F8-C582-11EC-B1CC-408D5C86FBB1
/dev/sda2 : start=     4194432, size=  7809842696, type=516E7CBA-6ECF-11D6-8FF8-00022D09712B, uuid=A138AE0F-C582-11EC-B1CC-408D5C86FBB1
root@truenas[~]# sfdisk -d /dev/sdb
label: gpt
label-id: 7CF94059-C582-11EC-B1CC-408D5C86FBB1
device: /dev/sdb
unit: sectors
first-lba: 40
last-lba: 7814037127
sector-size: 512

/dev/sdb1 : start=         128, size=     4194304, type=516E7CB5-6ECF-11D6-8FF8-00022D09712B, uuid=7D1053B6-C582-11EC-B1CC-408D5C86FBB1
/dev/sdb2 : start=     4194432, size=  7809842696, type=516E7CBA-6ECF-11D6-8FF8-00022D09712B, uuid=7D2D0ED2-C582-11EC-B1CC-408D5C86FBB1

The fact that zdb finds no ZFS labels on any of the partitions is concerning, especially since your sfdisk output shows the GPT layout is still the standard CORE one: a 2 GiB freebsd-swap partition plus a freebsd-zfs data partition on each disk. The partition table survived; only the ZFS labels seem to be gone.

This could be due to GELI, but you said you didn’t use it on Core.

I can’t even tell you where to begin looking, as this problem is deeper than expected.

You could try specifying the path in your import command, but I doubt it will find anything, since no labels were detected on any partitions.

zpool import -d /dev/disk/by-partuuid

Does this show an available pool?
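
If not, it may also be worth pointing the scan at the other device directories, in case the labels are only visible under a different path. A long shot given the zdb output, but the scan is read-only and cannot make things worse:

zpool import -d /dev/disk/by-id
zpool import -d /dev/disk/by-path

If one of those does list the pool, don’t import it read-write yet; import with -o readonly=on first, so nothing gets written before the data is safe.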


EDIT: You should also share your hardware.

No, importing disks from by-partuuid shows no available pools to import.

What is the hardware?

I doubt that upgrading to SCALE would cause this, unless for some reason the upgrade decided to issue a “wipe” on your partitions’ filesystem labels.
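
One way to sanity-check the “wipe” theory: ZFS keeps four 256 KiB labels per vdev, two at the start of the data partition and two at the end. A rough, read-only sketch that just looks for any label-like strings in those regions (device name taken from your lsblk output; nothing is written):

# front pair: first 512 KiB of the data partition
dd if=/dev/sda2 bs=256K count=2 2>/dev/null | strings | grep -m 10 -E 'name|pool|vdev'
# back pair: last 512 KiB of the partition
dd if=/dev/sda2 bs=512 skip=$(( $(blockdev --getsize64 /dev/sda2) / 512 - 1024 )) count=1024 2>/dev/null | strings | grep -m 10 -E 'name|pool|vdev'

If both come back empty, the labels really have been overwritten rather than just hidden, which would match what zdb -l reports.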

What happens if you boot CORE again?

How can I do that?

System → Boot → activate the CORE boot environment (better back up your configuration first; I have not done this myself).

Tried that; it didn’t work. It still boots into SCALE, even with CORE selected.

Do you have a monitor and keyboard attached to the server?

Can you hold down ESC or SHIFT while the system is booting? That should bring up the GRUB boot menu.

Just to be sure: selecting the boot environment isn’t enough, you specifically need to click the “Activate” button.

No, it doesn’t work, neither from the web interface (and yes, I did click Activate) nor from the boot menu. I can select TrueNAS CORE in GRUB, but it still boots into SCALE.

My hardware is:
MB: Gigabyte 970A-DS3P
CPU: AMD FX-8320
16GB RAM

SCALE overwrites the FreeBSD bootloader with GRUB, so the way to revert to CORE would be to install it on a new boot device.
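
Once you have a fresh CORE boot device, the non-destructive check would be something along these lines (a sketch; add -f only if import complains the pool was last accessed by another system):

zpool import
zpool import -o readonly=on TrueNAS

A bare zpool import scans all devices and lists anything importable without touching it, and readonly=on keeps the import from writing to the disks. Given that the labels appear to be gone, I wouldn’t expect it to find the pool, but it costs nothing to check.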

I just checked one of the mirrored disks with UFS Explorer and I can see my whole folder structure intact, so the data is still there.

UFS? From which CORE version did you try to sidegrade?

UFS Explorer, a data recovery tool that can read ZFS.
I upgraded from 13.0-U6.7.

Did you try to create a new CORE boot and use it to inspect/import the pool?

No; at this point I didn’t want to risk any more damage to the filesystems, so I bought a license for UFS Explorer and recovered the files. They’re finishing copying now; once they’re safe I’ll install CORE on a new drive and do some checks.
