Core to Scale migration removed partition tables

Hello everyone,

I upgraded from TrueNAS CORE 13.0-U6.8 to TrueNAS SCALE Dragonfish 24.04. Since the upgrade the media1 zpool is offline; everything else looks OK.

lsblk showed that /dev/sda had lost its partition table, so I rebuilt it by copying the table from another disk (they're all TOSHIBA HDWD130 3 TB drives) and rebooted, hoping the offsets are the same on each disk.

The disks are still offline and the zpool won't import, and it looks like /dev/sdd also has problems with its partition type, GPTID, and labels. Can anyone help me correct these?

(sdc is the boot/OS SSD; I've filtered it out of the detailed lsblk listing below.)

NAME     MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINTS
sda        8:0    0   2.7T  0 disk  
├─sda1     8:1    0     2G  0 part  
└─sda2     8:2    0   2.7T  0 part  
sdb        8:16   0   2.7T  0 disk  
├─sdb1     8:17   0     2G  0 part  
└─sdb2     8:18   0   2.7T  0 part  
sdc        8:32   0 223.6G  0 disk  
├─sdc1     8:33   0   512K  0 part  
├─sdc2     8:34   0 207.6G  0 part  
└─sdc3     8:35   0    16G  0 part  
  └─sdc3 253:0    0    16G  0 crypt 
sdd        8:48   0   2.7T  0 disk  
├─sdd1     8:49   0     2G  0 part  
└─sdd2     8:50   0   2.7T  0 part  
sde        8:64   0   2.7T  0 disk  
├─sde1     8:65   0     2G  0 part  
└─sde2     8:66   0   2.7T  0 part

# lsblk -bo NAME,LABEL,MAJ:MIN,MODEL,SERIAL,PARTUUID,START,SIZE,PARTFLAGS,PARTLABEL,PARTTYPENAME,PTTYPE | grep -v sdc 
NAME     LABEL        MAJ:MIN MODEL           SERIAL       PARTUUID                                START          SIZE PARTFLAGS PARTLABEL PARTTYPENAME             PTTYPE
sda                     8:0   TOSHIBA HDWD130 Z093BBLAS                                                  3000592982016                                              gpt
├─sda1                  8:1                                70164c40-a5cf-11eb-b03d-94188238a39c      128    2147483648                     FreeBSD swap             gpt
└─sda2   media1         8:2                                703a7836-a5cf-11eb-b03d-94188238a39c  4194432 2998445412352                     FreeBSD ZFS              gpt
sdb                     8:16  TOSHIBA HDWD130 Z093BDZAS                                                  3000592982016                                              gpt
├─sdb1                  8:17                               7027a08e-a5cf-11eb-b03d-94188238a39c      128    2147483648                     FreeBSD swap             gpt
└─sdb2   media1         8:18                               705255db-a5cf-11eb-b03d-94188238a39c  4194432 2998445412352                     FreeBSD ZFS              gpt
sdd                     8:48  TOSHIBA HDWD130 96L9DB7AS                                                  3000592982016                                              gpt
├─sdd1                  8:49                               291ff7e8-6cf7-410d-8c45-b0e4c829f5b2      127    2147935232           primary   Linux filesystem         gpt
└─sdd2                  8:50                               0cc81c37-7607-4712-98aa-ca87fa6498c1  4195313 2998444964864           primary   Solaris /usr & Apple ZFS gpt
sde                     8:64  TOSHIBA HDWD130 96L9DHHAS                                                  3000592982016                                              gpt
├─sde1                  8:65                               74ef0f45-66ed-11ea-8461-94188238a39c      128    2147483648                     FreeBSD swap             gpt
└─sde2   media1         8:66                               75036ac6-66ed-11ea-8461-94188238a39c  4194432 2998445412352                     FreeBSD ZFS              gpt
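Comparing the START column above, sdd's partitions don't line up with the healthy members. A quick sanity check of the numbers (values copied straight from the lsblk output; nothing here touches the disks):

```shell
#!/bin/sh
# Start sectors from the lsblk output above.
# Healthy members (sda/sdb/sde): swap at 128, ZFS partition at 4194432.
good_swap=128
good_zfs=4194432
# sdd as it stands now:
sdd_swap=127
sdd_zfs=4195313

echo "sdd1 offset error: $((sdd_swap - good_swap)) sectors"   # prints -1
echo "sdd2 offset error: $((sdd_zfs - good_zfs)) sectors"     # prints 881
```

So sdd2 starts 881 sectors (about 440 KiB) past where the other members' ZFS partitions begin, which would explain why ZFS can't find its labels on that disk; the partition types (Linux filesystem / Solaris & Apple ZFS instead of FreeBSD swap / FreeBSD ZFS) also suggest the table was recreated with a different tool's defaults rather than copied exactly.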


# zpool import
   pool: media1
     id: 1795259462610817336
  state: UNAVAIL
status: The pool was last accessed by another system.
 action: The pool cannot be imported due to damaged devices or data.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-EY
 config:

	media1                                        UNAVAIL  insufficient replicas
	  gptid/0a0e2a38-ee42-11e6-b96a-94188238a39c  UNAVAIL
	  sde2                                        ONLINE
	  sda2                                        ONLINE
	  sdb2                                        ONLINE
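Based on what I've read elsewhere, I'm considering something like the following to put sdd's GPT back the way the other members have it, then re-import. This is only a sketch I have NOT run yet (device letters taken from the output above, and they can shift between boots, so I'd re-check with lsblk first). Does this look right?

```shell
# Copy the GPT from a healthy member (sda) onto sdd, then give sdd
# fresh GUIDs so the two disks don't share partition identifiers:
sgdisk --backup=/tmp/sda.gpt /dev/sda
sgdisk --load-backup=/tmp/sda.gpt /dev/sdd
sgdisk --randomize-guids /dev/sdd

# Re-read the partition table and check whether ZFS labels are now
# visible at the expected offset on the rebuilt partition:
partprobe /dev/sdd
zdb -l /dev/sdd2

# If all four labels show up, force the import -- the pool was last
# written by the old CORE install, hence the ZFS-8000-EY status:
zpool import -f media1
```

My understanding is that the import scans devices for ZFS labels, so the changed PARTUUIDs shouldn't matter as long as the partition boundaries match the original layout, but I'd appreciate confirmation before touching anything.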

Thanks,

John.