Need help: pool offline after restart

Hi everyone,
I need help here. I'm running TrueNAS-24.10.0.2 (Electric Eel) on a DIY home NAS built with several old disks.
It had 8 disks, all in mirror vdevs. My 2 new disks just arrived, so I happily plugged them in and added them to the pool.
They were added successfully and I could see the pool capacity increase, but the pool page still said 2 new disks were available to add, so I thought a restart would fix that.
I gave it a restart, and suddenly my pool is offline and everything is gone.

I looked at the disks; they are still there, but all of them now end with (exported).
I tried sudo zpool import and saw this:

It seemed information about the pool was still available, so I checked this:

I tried gdisk to repair the GPT (some of the tables were previously missing):

But I still couldn't import my pool.

Please, I need help here…

Hello,

Could you try the zpool status -v command? It seems that a vdev is missing, so you can't import the pool.

Best Regards,
Antonio

What made you think it would repair anything rather than destroying access to whatever was left?

:scream:

Let’s hope you haven’t “fixed” the other drive.

We seem to have some kind of epidemic of corrupted drive labels. @HoneyBadger, any clue?


Can you post some detailed hardware specifications? Particularly the system motherboard, storage controller, and models of disks that were recently added.

What exactly was done with gdisk here?

Some of the disks had damaged or missing GPT tables, so I ran gdisk hoping to repair them.

An LSI 2308-based old Dell SAS card, on an old Intel 3rd-gen motherboard.

They were 2x 2TB Seagate Exos drives.

You might have overwritten the partition table.


You ran zdb against the whole disk, not the partition. Your previous output shows that every drive in the pool is made from a partition, as indicated by the PARTUUID identifiers in the zpool import output.

That’s why your attempt to check the ZFS labels appeared to have failed with a “bad” result.
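For example (sdX here is just a placeholder), the first form reads the ZFS label from the data partition, while the second looks at the raw disk, where no label exists at the expected offsets on a partitioned member:

sudo zdb -l /dev/sdX1   # reads the ZFS label from the data partition
sudo zdb -l /dev/sdX    # whole disk: no label at the expected offsets, so it reports a failure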

Assuming this is a SCALE system, please show us lsblk output.

If it is CORE, then gpart list

This will help us confirm the partition layout, or at least what it believes to be the case.

I have used gdisk to fix GPT tables, and I would always recommend that you LIST the partitions BEFORE you rewrite the GPT table.

However, when you started gdisk it found a valid GPT partition table (it didn't give you a message about a corrupt table), and all you did was write the same table back out again, so fortunately I doubt that this will have done any harm. (For the future, though: when things go wrong with ZFS, please ask here BEFORE you run any commands that might make things worse.)
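For reference, gdisk has a read-only listing mode that prints the table and exits without writing anything (sdX is a placeholder here); that is the safe way to inspect a GPT before deciding whether to rewrite it:

sudo gdisk -l /dev/sdX   # list the partition table only; nothing is written to the disk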

I have a standard set of commands I ask people to run to provide a detailed breakdown of the hardware, so please run these and post the output here (with the output of each command inside a separate </> preformatted text box) so that we can all see the details:

  • lsblk -bo NAME,MODEL,ROTA,PTTYPE,TYPE,START,SIZE,PARTTYPENAME,PARTUUID
  • sudo zpool status -v
  • sudo zpool import
  • lspci
  • sudo storcli show all
  • sudo sas2flash -list
  • sudo sas3flash -list

These should tell us what disks are visible to Linux, give us the partuuid mappings we will need and tell us everything we might need to know about your controllers. We can then decide what zdb commands we need to run to examine the zpool labels.


Thanks @Protopia, @HoneyBadger, @winnielinnie and @etorix for your input.
And apologies, this is a peasant-level home-built NAS: I just put random old parts I had lying around into one box and installed TrueNAS. It worked pretty well for me for the past few years, until I decided to expand the pool with 2x 2TB Exos drives. My profile picture shows how janky it is :frowning:

I think I made several mistakes when I added them both to the pool:

  1. I forgot to wipe them before adding them to the pool.
  2. After adding them, I could see the capacity had increased, but TrueNAS still said I had 2 disks available to add to the pool.
  3. I thought restarting the machine might fix the visual bug. I restarted it around 1-2 minutes after adding them to the pool; that was probably too fast and it was still processing something :frowning:

OS: TrueNAS SCALE 24.10.0.2 (Electric Eel)
Upgraded from TrueNAS CORE, so there are probably still some FreeBSD leftovers or jails remaining.

Old Dell SAS 2308 card, 8 ports total.
The boot pool and 2 other drives are connected directly to the motherboard.

lsblk -bo NAME,MODEL,ROTA,PTTYPE,TYPE,START,SIZE,PARTTYPENAME,PARTUUID
NAME   MODEL                   ROTA PTTYPE TYPE   START          SIZE PARTTYPENAME             PARTUUID
sda    ST3320418AS                1 gpt    disk          320072933376                          
├─sda1                            1 gpt    part     128    2147483648 FreeBSD swap             355a05e0-62ad-11e2-8a80-bc5ff4761b41
└─sda2                            1 gpt    part 4194432  317925363712 Solaris /usr & Apple ZFS 356a8d2c-62ad-11e2-8a80-bc5ff4761b41
sdb    WDC WD5000AAVS-00ZTB0      1 gpt    disk          500107862016                          
└─sdb1                            1 gpt    part    2048  500106788864 Solaris /usr & Apple ZFS 53355e30-8558-46d8-aba4-d5e87294d77e
sdc    ST3320418AS                1 gpt    disk          320072933376                          
├─sdc1                            1 gpt    part     128    2147483648 FreeBSD swap             35511176-62ad-11e2-8a80-bc5ff4761b41
└─sdc2                            1 gpt    part 4194432  317925363712 Solaris /usr & Apple ZFS 35646f27-62ad-11e2-8a80-bc5ff4761b41
sdd    ST2000NM0055-1V4104        1 gpt    disk         2000398934016                          
└─sdd1                            1 gpt    part    2048 2000397795328                          
sde    ST2000NM0055-1V4104        1 gpt    disk         2000398934016                          
└─sde1                            1 gpt    part    2048 2000397795328                          
sdf    ST9500420AS                1 gpt    disk          500107862016                          
└─sdf1                            1 gpt    part    2048  500106788864 Solaris /usr & Apple ZFS aaa3546b-71bd-4db5-8d8c-a3bf692db621
sdg    HGST HTS725050A7E630       1 gpt    disk          500107862016                          
├─sdg1                            1 gpt    part     128    2147483648 FreeBSD swap             597b6475-ddc4-11ec-a0bd-fc4dd4f406d5
└─sdg2                            1 gpt    part 4194432  497960292352 Solaris /usr & Apple ZFS 5a87f27b-ddc4-11ec-a0bd-fc4dd4f406d5
sdh    HITACHI HTS725050A7E630    1 gpt    disk          500107862016                          
├─sdh1                            1 gpt    part     128    2147483648 FreeBSD swap             59ff1eb4-ddc4-11ec-a0bd-fc4dd4f406d5
└─sdh2                            1 gpt    part 4194432  497960292352 Solaris /usr & Apple ZFS 5b9080f4-ddc4-11ec-a0bd-fc4dd4f406d5
sdi    KINGSTON SA400S37/120GB    0 gpt    disk          120034123776                          
├─sdi1                            0 gpt    part      40        524288 BIOS boot                52fddeeb-ddfe-11ec-9967-3417ebabd595
└─sdi2                            0 gpt    part    1064  120024203264 FreeBSD ZFS              531ebb0b-ddfe-11ec-9967-3417ebabd595
sdj    HITACHI HTS545050A7E380    1 gpt    disk          500107862016                          
└─sdj1                            1 gpt    part    2048  500106788864 Solaris /usr & Apple ZFS 6283f70a-c754-4be2-a98e-e4746cba54c6
sdk    TOSHIBA MQ01ACF032         1 gpt    disk          320072933376                          
└─sdk1                            1 gpt    part    2048  320071532544 Solaris /usr & Apple ZFS 0c51f52c-4198-4d24-89b7-91016149f3f2
root@truenas[~]# sudo zpool status -v
  pool: boot-pool
 state: ONLINE
status: One or more features are enabled on the pool despite not being
        requested by the 'compatibility' property.
action: Consider setting 'compatibility' to an appropriate value, or
        adding needed features to the relevant file in
        /etc/zfs/compatibility.d or /usr/share/zfs/compatibility.d.
  scan: scrub repaired 0B in 00:00:44 with 0 errors on Wed Feb 12 03:45:45 2025
config:

        NAME                                                      STATE     READ WRITE CKSUM
        boot-pool                                                 ONLINE       0     0     0
          ata-KINGSTON_SA400S37_120GB_AA333000000000001535-part2  ONLINE       0     0     0

errors: No known data errors
root@truenas[~]# sudo zpool import
  pool: poolmirror
    id: 10787887209231892375
 state: UNAVAIL
status: One or more devices contains corrupted data.
action: The pool cannot be imported due to damaged devices or data.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-5E
config:

        poolmirror                                UNAVAIL  insufficient replicas
          mirror-0                                ONLINE
            sdh2                                  ONLINE
            sdg2                                  ONLINE
          mirror-1                                ONLINE
            sda2                                  ONLINE
            sdc2                                  ONLINE
          mirror-2                                ONLINE
            53355e30-8558-46d8-aba4-d5e87294d77e  ONLINE
            aaa3546b-71bd-4db5-8d8c-a3bf692db621  ONLINE
          mirror-3                                ONLINE
            0c51f52c-4198-4d24-89b7-91016149f3f2  ONLINE
            6283f70a-c754-4be2-a98e-e4746cba54c6  ONLINE
          mirror-4                                UNAVAIL  insufficient replicas
            02bbbfeb-8561-4f9e-bdb6-eed7b29f8b47  UNAVAIL
            270fd384-a744-4d88-bcfb-0c7b132805e9  UNAVAIL
root@truenas[~]# 
root@truenas[~]# lspci 
00:00.0 Host bridge: Intel Corporation Xeon E3-1200 v2/3rd Gen Core processor DRAM Controller (rev 09)
00:01.0 PCI bridge: Intel Corporation Xeon E3-1200 v2/3rd Gen Core processor PCI Express Root Port (rev 09)
00:02.0 VGA compatible controller: Intel Corporation Xeon E3-1200 v2/3rd Gen Core processor Graphics Controller (rev 09)
00:14.0 USB controller: Intel Corporation 7 Series/C210 Series Chipset Family USB xHCI Host Controller (rev 04)
00:1a.0 USB controller: Intel Corporation 7 Series/C216 Chipset Family USB Enhanced Host Controller #2 (rev 04)
00:1b.0 Audio device: Intel Corporation 7 Series/C216 Chipset Family High Definition Audio Controller (rev 04)
00:1c.0 PCI bridge: Intel Corporation 7 Series/C216 Chipset Family PCI Express Root Port 1 (rev c4)
00:1c.5 PCI bridge: Intel Corporation 7 Series/C210 Series Chipset Family PCI Express Root Port 6 (rev c4)
00:1d.0 USB controller: Intel Corporation 7 Series/C216 Chipset Family USB Enhanced Host Controller #1 (rev 04)
00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev a4)
00:1f.0 ISA bridge: Intel Corporation Q77 Express Chipset LPC Controller (rev 04)
00:1f.2 SATA controller: Intel Corporation 7 Series/C210 Series Chipset Family 6-port SATA Controller [AHCI mode] (rev 04)
00:1f.3 SMBus: Intel Corporation 7 Series/C216 Chipset Family SMBus Controller (rev 04)
01:00.0 Serial Attached SCSI controller: Broadcom / LSI SAS2308 PCI-Express Fusion-MPT SAS-2 (rev 05)
03:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller (rev 06)
root@truenas[~]# 
root@truenas[~]# sudo storcli show all
CLI Version = 007.2807.0000.0000 Dec 22, 2023
Operating system = Linux 6.6.44-production+truenas
Status Code = 0
Status = Success
Description = None

Number of Controllers = 0
Host Name = truenas
Operating System  = Linux 6.6.44-production+truenas
StoreLib IT Version = 07.2900.0200.0100


root@truenas[~]# 
root@truenas[~]# sudo sas2flash -list
LSI Corporation SAS2 Flash Utility
Version 20.00.00.00 (2014.09.18) 
Copyright (c) 2008-2014 LSI Corporation. All rights reserved 

        Adapter Selected is a LSI SAS: SAS2308_2(D1) 

        Controller Number              : 0
        Controller                     : SAS2308_2(D1) 
        PCI Address                    : 00:01:00:00
        SAS Address                    : 500605b-0-0657-c1a0
        NVDATA Version (Default)       : 14.01.00.06
        NVDATA Version (Persistent)    : 14.01.00.06
        Firmware Product ID            : 0x2214 (IT)
        Firmware Version               : 20.00.07.00
        NVDATA Vendor                  : LSI
        NVDATA Product ID              : SAS9207-8i
        BIOS Version                   : 07.39.02.00
        UEFI BSD Version               : 07.27.01.01
        FCODE Version                  : N/A
        Board Name                     : SAS9207-8i
        Board Assembly                 : N/A
        Board Tracer Number            : N/A

        Finished Processing Commands Successfully.
        Exiting SAS2Flash.
root@truenas[~]# 
root@truenas[~]# sudo sas3flash -list
Avago Technologies SAS3 Flash Utility
Version 16.00.00.00 (2017.05.02) 
Copyright 2008-2017 Avago Technologies. All rights reserved.

        No Avago SAS adapters found! Limited Command Set Available!
        ERROR: Command Not allowed without an adapter!
        ERROR: Couldn't Create Command -list
        Exiting Program.
root@truenas[~]# 
  1. Firstly, it seems that the boot-pool has been upgraded, because it has features enabled that are supposed to be excluded by the compatibility setting. I am not sure how that may have happened, but anecdotal reports suggest that this can cause problems.

  2. Matching the partuuids across lsblk and zpool status gives the following mapping:

        poolmirror                                UNAVAIL  insufficient replicas
          mirror-0                                ONLINE
sdh         sdh2                                  ONLINE
sdg         sdg2                                  ONLINE
          mirror-1                                ONLINE
sda         sda2                                  ONLINE
sdc         sdc2                                  ONLINE
          mirror-2                                ONLINE
sdb         53355e30-8558-46d8-aba4-d5e87294d77e  ONLINE
sdf         aaa3546b-71bd-4db5-8d8c-a3bf692db621  ONLINE
          mirror-3                                ONLINE
sdk         0c51f52c-4198-4d24-89b7-91016149f3f2  ONLINE
sdj         6283f70a-c754-4be2-a98e-e4746cba54c6  ONLINE
          mirror-4                                UNAVAIL  insufficient replicas
            02bbbfeb-8561-4f9e-bdb6-eed7b29f8b47  UNAVAIL
            270fd384-a744-4d88-bcfb-0c7b132805e9  UNAVAIL

That leaves sdd, sde and sdi. sdi is the boot pool, so sdd and sde seem to be the missing disks, and they don't seem to have partuuids. (If you used gdisk on these devices, it is possible that you removed the partuuids, but I think it more likely that for some reason they were never written.)

Note: drive letters can change on a reboot, so any further comments will not apply if you have rebooted since then without re-checking what the mapping is.
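If the mapping does need to be re-checked, one quick read-only way (in addition to the lsblk command above) is to look at the by-partuuid symlinks:

ls -l /dev/disk/by-partuuid/   # each symlink shows which partuuid currently points at which sdXn partition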

Further information about drive labels might be useful so please run sudo zdb -l /dev/sdd and sudo zdb -l /dev/sde and post the output.

I think it should be possible to fix this using one of the following methods, but I am not expert enough to know whether this is sound advice, so I would ask @HoneyBadger to comment please:

  1. Set the partuuids on /dev/sdd1 and /dev/sde1, using the information from the zdb commands above to get them the right way around, and then try to import.

  2. Run sudo zpool import with parameters that explicitly list the devices needed for the import, and see whether ZFS can sort out the labels itself (a rough sketch of this is below).
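A minimal sketch of option 2, assuming the missing members are still /dev/sdd1 and /dev/sde1. The first command is only a scan and imports nothing; the second is a read-only test import, and on TrueNAS any final import should normally be done through the GUI so the middleware knows about it:

# scan only the listed locations and report whether poolmirror looks importable (nothing is imported yet)
sudo zpool import -d /dev/disk/by-partuuid -d /dev/sdd1 -d /dev/sde1

# only if the scan looks healthy: import read-only as a test, then export again before doing anything else
sudo zpool import -d /dev/disk/by-partuuid -d /dev/sdd1 -d /dev/sde1 -o readonly=on poolmirror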

Over to others more expert than me to comment…


I’m curious whether the drives are all visible under Storage → Disks, including the newly added sdd and sde, as that may give some clues as to what happened. A “new” 2TB Exos drive is probably either old stock or “new to you”, which might mean we’re bumping into an upstream interaction between blkid mapping and ZFS.
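For a quick, read-only look at what blkid itself reports for those two partitions (device names assume the mapping above still holds):

sudo blkid /dev/sdd1 /dev/sde1   # prints any signatures libblkid recognises (TYPE, UUID, PARTUUID); no output for a device means nothing was recognised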

The zdb commands suggested above should be sudo zdb -l /dev/sdd1 and sudo zdb -l /dev/sde1 - it’s important to target the ZFS partition and not the entire disk here, or we’ll get a false negative result - but yes, this is a good step to take.

While we’re probing disks, let’s see the results of this as well; note the --no-act flag on the wipefs command, so it won’t do anything to your disks.

sudo wipefs --no-act -J /dev/sdd
sudo wipefs --no-act -J /dev/sde

This is assuming that those two names are the “missing” drives shown.

Yes - absolutely - I knew that but somehow failed to type it (due to senility setting in I fear).

Hi @Protopia, @HoneyBadger,
Apologies for the delayed response; I was caught up in a nightmare of office tasks and didn't have any chance to look at my NAS.
This weekend I finally got some spare time to take a closer look, and thanks as always for your responses.

The disk names changed after the reboot. Here are today's lsblk output, the results of sudo zdb -l /dev/sda1 and /dev/sdh1 (both are the 2TB Exos drives), and the output of sudo wipefs --no-act -J for both of those drives.

I’d like to try the next step and set the partuuids based on the zdb results. Please, would you mind showing me how?

root@truenas[~]# lsblk -bo NAME,MODEL,ROTA,PTTYPE,TYPE,START,SIZE,PARTTYPENAME,PARTUUID
NAME   MODEL                   ROTA PTTYPE TYPE   START          SIZE PARTTYPENAME             PARTUUID
sda    ST2000NM0055-1V4104        1 gpt    disk         2000398934016                          
└─sda1                            1 gpt    part    2048 2000397795328                          
sdb    ST3320418AS                1 gpt    disk          320072933376                          
├─sdb1                            1 gpt    part     128    2147483648 FreeBSD swap             355a05e0-62ad-11e2-8a80-bc5ff4761b41
└─sdb2                            1 gpt    part 4194432  317925363712 Solaris /usr & Apple ZFS 356a8d2c-62ad-11e2-8a80-bc5ff4761b41
sdc    ST3320418AS                1 gpt    disk          320072933376                          
├─sdc1                            1 gpt    part     128    2147483648 FreeBSD swap             35511176-62ad-11e2-8a80-bc5ff4761b41
└─sdc2                            1 gpt    part 4194432  317925363712 Solaris /usr & Apple ZFS 35646f27-62ad-11e2-8a80-bc5ff4761b41
sdd    WDC WD5000AAVS-00ZTB0      1 gpt    disk          500107862016                          
└─sdd1                            1 gpt    part    2048  500106788864 Solaris /usr & Apple ZFS 53355e30-8558-46d8-aba4-d5e87294d77e
sde    HGST HTS725050A7E630       1 gpt    disk          500107862016                          
├─sde1                            1 gpt    part     128    2147483648 FreeBSD swap             597b6475-ddc4-11ec-a0bd-fc4dd4f406d5
└─sde2                            1 gpt    part 4194432  497960292352 Solaris /usr & Apple ZFS 5a87f27b-ddc4-11ec-a0bd-fc4dd4f406d5
sdf    HITACHI HTS725050A7E630    1 gpt    disk          500107862016                          
├─sdf1                            1 gpt    part     128    2147483648 FreeBSD swap             59ff1eb4-ddc4-11ec-a0bd-fc4dd4f406d5
└─sdf2                            1 gpt    part 4194432  497960292352 Solaris /usr & Apple ZFS 5b9080f4-ddc4-11ec-a0bd-fc4dd4f406d5
sdg    ST9500420AS                1 gpt    disk          500107862016                          
└─sdg1                            1 gpt    part    2048  500106788864 Solaris /usr & Apple ZFS aaa3546b-71bd-4db5-8d8c-a3bf692db621
sdh    ST2000NM0055-1V4104        1 gpt    disk         2000398934016                          
└─sdh1                            1 gpt    part    2048 2000397795328                          
sdi    KINGSTON SA400S37/120GB    0 gpt    disk          120034123776                          
├─sdi1                            0 gpt    part      40        524288 BIOS boot                52fddeeb-ddfe-11ec-9967-3417ebabd595
└─sdi2                            0 gpt    part    1064  120024203264 FreeBSD ZFS              531ebb0b-ddfe-11ec-9967-3417ebabd595
sdj    HITACHI HTS545050A7E380    1 gpt    disk          500107862016                          
└─sdj1                            1 gpt    part    2048  500106788864 Solaris /usr & Apple ZFS 6283f70a-c754-4be2-a98e-e4746cba54c6
sdk    TOSHIBA MQ01ACF032         1 gpt    disk          320072933376                          
└─sdk1                            1 gpt    part    2048  320071532544 Solaris /usr & Apple ZFS 0c51f52c-4198-4d24-89b7-91016149f3f2
root@truenas[~]# sudo zdb -l /dev/sdh1
------------------------------------
LABEL 0 
------------------------------------
    version: 5000
    name: 'poolmirror'
    state: 0
    txg: 536797
    pool_guid: 10787887209231892375
    errata: 0
    hostid: 875705910
    hostname: 'truenas'
    top_guid: 260730707970784072
    guid: 8585525625932385404
    vdev_children: 5
    vdev_tree:
        type: 'mirror'
        id: 4
        guid: 260730707970784072
        metaslab_array: 972
        metaslab_shift: 34
        ashift: 12
        asize: 2000393076736
        is_log: 0
        create_txg: 536795
        children[0]:
            type: 'disk'
            id: 0
            guid: 2704095785525825329
            path: '/dev/disk/by-partuuid/02bbbfeb-8561-4f9e-bdb6-eed7b29f8b47'
            whole_disk: 0
            create_txg: 536795
        children[1]:
            type: 'disk'
            id: 1
            guid: 8585525625932385404
            path: '/dev/disk/by-partuuid/270fd384-a744-4d88-bcfb-0c7b132805e9'
            whole_disk: 0
            create_txg: 536795
    features_for_read:
        com.delphix:hole_birth
        com.delphix:embedded_data
    labels = 0 1 2 3 
root@truenas[~]# sudo zdb -l /dev/sda1
------------------------------------
LABEL 0 
------------------------------------
    version: 5000
    name: 'poolmirror'
    state: 0
    txg: 536797
    pool_guid: 10787887209231892375
    errata: 0
    hostid: 875705910
    hostname: 'truenas'
    top_guid: 260730707970784072
    guid: 2704095785525825329
    vdev_children: 5
    vdev_tree:
        type: 'mirror'
        id: 4
        guid: 260730707970784072
        metaslab_array: 972
        metaslab_shift: 34
        ashift: 12
        asize: 2000393076736
        is_log: 0
        create_txg: 536795
        children[0]:
            type: 'disk'
            id: 0
            guid: 2704095785525825329
            path: '/dev/disk/by-partuuid/02bbbfeb-8561-4f9e-bdb6-eed7b29f8b47'
            whole_disk: 0
            create_txg: 536795
        children[1]:
            type: 'disk'
            id: 1
            guid: 8585525625932385404
            path: '/dev/disk/by-partuuid/270fd384-a744-4d88-bcfb-0c7b132805e9'
            whole_disk: 0
            create_txg: 536795
    features_for_read:
        com.delphix:hole_birth
        com.delphix:embedded_data
    labels = 0 1 2 3 
root@truenas[~]# 
root@truenas[~]# sudo wipefs --no-act -J /dev/sda
{
   "signatures": [
      {
         "device": "sda",
         "offset": "0x200",
         "type": "gpt",
         "uuid": null,
         "label": null
      },{
         "device": "sda",
         "offset": "0x1d1c1115e00",
         "type": "gpt",
         "uuid": null,
         "label": null
      },{
         "device": "sda",
         "offset": "0x1fe",
         "type": "PMBR",
         "uuid": null,
         "label": null
      }
   ]
}
root@truenas[~]# sudo wipefs --no-act -J /dev/sdh
{
   "signatures": [
      {
         "device": "sdh",
         "offset": "0x200",
         "type": "gpt",
         "uuid": null,
         "label": null
      },{
         "device": "sdh",
         "offset": "0x1d1c1115e00",
         "type": "gpt",
         "uuid": null,
         "label": null
      },{
         "device": "sdh",
         "offset": "0x1fe",
         "type": "PMBR",
         "uuid": null,
         "label": null
      }
   ]
}
root@truenas[~]# 

Summary as I see it:

  1. The GPT partition tables on /dev/sda and /dev/sdh still seem to have entries for the ZFS partitions, but they appear to have lost the partition types and partuuids.

  2. The ZFS labels in /dev/sda1 and /dev/sdh1 appear to be OK, and these give us the partuuids that should exist, but AFAICS they don't tell us which partuuid should be given to which partition.

  3. The wipefs output does show a gpt signature at offset 0x200 (which I believe is the GPT header rather than ZFS data), but I don't know exactly what it should look like. For comparison, here is the output from one of my disks: your drives appear to be missing all the zfs_member magic strings you might expect, and I have no idea whether that means the partition contents are gone or not.

sudo wipefs --no-act -J /dev/sda
{
   "signatures": [
      {
         "device": "sda",
         "offset": "0x3f000",
         "type": "zfs_member",
         "uuid": null,
         "label": null
      },{
         "device": "sda",
         "offset": "0x3e000",
         "type": "zfs_member",
         "uuid": null,
         "label": null
      },{
         "device": "sda",
         "offset": "0x3d000",
         "type": "zfs_member",
         "uuid": null,
         "label": null
      },{
         "device": "sda",
         "offset": "0x3c000",
         "type": "zfs_member",
         "uuid": null,
         "label": null
      },{
         "device": "sda",
         "offset": "0x3b000",
         "type": "zfs_member",
         "uuid": null,
         "label": null
      },{
         "device": "sda",
         "offset": "0x3a000",
         "type": "zfs_member",
         "uuid": null,
         "label": null
      },{
         "device": "sda",
         "offset": "0x39000",
         "type": "zfs_member",
         "uuid": null,
         "label": null
      },{
         "device": "sda",
         "offset": "0x38000",
         "type": "zfs_member",
         "uuid": null,
         "label": null
      },{
         "device": "sda",
         "offset": "0x37000",
         "type": "zfs_member",
         "uuid": null,
         "label": null
      },{
         "device": "sda",
         "offset": "0x36000",
         "type": "zfs_member",
         "uuid": null,
         "label": null
      },{
         "device": "sda",
         "offset": "0x35000",
         "type": "zfs_member",
         "uuid": null,
         "label": null
      },{
         "device": "sda",
         "offset": "0x34000",
         "type": "zfs_member",
         "uuid": null,
         "label": null
      },{
         "device": "sda",
         "offset": "0x33000",
         "type": "zfs_member",
         "uuid": null,
         "label": null
      },{
         "device": "sda",
         "offset": "0x32000",
         "type": "zfs_member",
         "uuid": null,
         "label": null
      },{
         "device": "sda",
         "offset": "0x31000",
         "type": "zfs_member",
         "uuid": null,
         "label": null
      },{
         "device": "sda",
         "offset": "0x30000",
         "type": "zfs_member",
         "uuid": null,
         "label": null
      },{
         "device": "sda",
         "offset": "0x2f000",
         "type": "zfs_member",
         "uuid": null,
         "label": null
      },{
         "device": "sda",
         "offset": "0x2e000",
         "type": "zfs_member",
         "uuid": null,
         "label": null
      },{
         "device": "sda",
         "offset": "0x2d000",
         "type": "zfs_member",
         "uuid": null,
         "label": null
      },{
         "device": "sda",
         "offset": "0x2c000",
         "type": "zfs_member",
         "uuid": null,
         "label": null
      },{
         "device": "sda",
         "offset": "0x2b000",
         "type": "zfs_member",
         "uuid": null,
         "label": null
      },{
         "device": "sda",
         "offset": "0x2a000",
         "type": "zfs_member",
         "uuid": null,
         "label": null
      },{
         "device": "sda",
         "offset": "0x29000",
         "type": "zfs_member",
         "uuid": null,
         "label": null
      },{
         "device": "sda",
         "offset": "0x28000",
         "type": "zfs_member",
         "uuid": null,
         "label": null
      },{
         "device": "sda",
         "offset": "0x200",
         "type": "gpt",
         "uuid": null,
         "label": null
      },{
         "device": "sda",
         "offset": "0x3a3817d5e00",
         "type": "gpt",
         "uuid": null,
         "label": null
      },{
         "device": "sda",
         "offset": "0x1fe",
         "type": "PMBR",
         "uuid": null,
         "label": null
      }
   ]
}
  4. We know what the partition type should be and we know what the partuuids are, so if we can work out which way around they should go, we can presumably use a utility to put these back in place and then try to import the pool to see if it works. A rough, unverified sketch of that is below.
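For illustration only, here is what restoring the partition type and partuuids might look like with sgdisk. This assumes the current names sda/sdh still apply, and it assumes the mapping can be read from the zdb output above (the guid field in each label appears to match one of the children entries, which would put 02bbbfeb-… on sda1 and 270fd384-… on sdh1). Please do not run anything like this until someone more expert has confirmed both the approach and the mapping:

# set the partition type back to Solaris /usr & Apple ZFS (sgdisk type code BF01) on both drives
sudo sgdisk --typecode=1:BF01 /dev/sda
sudo sgdisk --typecode=1:BF01 /dev/sdh

# write the partuuids back, assuming sda1 is children[0] and sdh1 is children[1] as read from zdb
sudo sgdisk --partition-guid=1:02bbbfeb-8561-4f9e-bdb6-eed7b29f8b47 /dev/sda
sudo sgdisk --partition-guid=1:270fd384-a744-4d88-bcfb-0c7b132805e9 /dev/sdh

# verify before attempting any import
lsblk -o NAME,PARTTYPENAME,PARTUUID /dev/sda /dev/sdh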

Perhaps @HoneyBadger will be able to give better and more expert insight. (But it is the weekend and as a TrueNAS employee he may only respond during working hours.)