URGENT: RAIDZ1 Pool UNAVAIL After Replace Attempt — All Disks Healthy, Labels Intact, Need Help Assembling Pool

EDIT:
Platform: Generic
Edition: Community
Version: 25.04.2.4
Hostname: NAS01


Hi everyone —
Long-time TrueNAS user here, and I’m in a deeply uncomfortable spot and hoping the ZFS gurus can help me recover a RAIDZ1 pool that refuses to import.

I’ll try to present this cleanly, with full logs and no guesswork.


:warning: Background

System:

  • TrueNAS SCALE
  • Dell R620 (LSI 9300 HBA flashed to IT Mode)
  • 10 × 4TB SAS (Seagate ST4000NM0034) in a single RAIDZ1 vdev
  • 2 × SSD mirror boot pool (PERC controller)
  • All drives show in lsblk with correct sizes

The pool is called: Storage


:warning: What happened

One disk in the RAIDZ1 started showing write errors and was marked REMOVED by ZFS.

I physically replaced the disk with a known-good spare (Seagate IronWolf 6TB).
I attempted to do a Replace operation in the SCALE UI.

During that process, the pool entered SUSPENDED state due to I/O errors.

After a reboot, the pool no longer imports and is stuck in:

Storage  UNAVAIL  insufficient replicas

Even though:

  • all 10 original SAS disks are present
  • the two “UNAVAIL” disks respond perfectly to SMART
  • all ZFS labels are intact on all 10 members

I have not destroyed the pool, re-created it, wiped any disks, or run any destructive commands.


:warning: Current State

Running zpool import shows:

pool: Storage
  id: 15453394492121721749
 state: UNAVAIL
status: One or more devices contains corrupted data.
action: The pool cannot be imported due to damaged devices or data.
config:

    Storage                                   UNAVAIL  insufficient replicas
      raidz1-0                                UNAVAIL  insufficient replicas
        446ef2de-...                          ONLINE
        d4b65d26-...                          ONLINE
        9f3d9be8-...                          ONLINE
        34b50bb7-...                          ONLINE
        12356ecb-...                          ONLINE
        fac65638-...                          UNAVAIL
        4aa4ffb0-...                          ONLINE
        c1bb301d-...                          ONLINE
        a7ea0820-...                          UNAVAIL
        c6a49122-...                          ONLINE

Attempts to import:

zpool import -f Storage
zpool import -fF Storage
zpool import -fFX Storage
zpool import -o readonly=on -F -d /dev Storage

All return:

cannot import 'Storage': no such pool or dataset
Destroy and re-create the pool from a backup source.
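
One thing I have not yet tried is pointing the import scan at the /dev/disk/by-partuuid directory that the labels reference. A minimal sketch of that (assuming the partuuid symlinks are present; with no pool name given it only lists what it finds and imports nothing):

zpool import -d /dev/disk/by-partuuid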

:warning: SMART Results for the two “UNAVAIL” devices

Both disks (sdk and sdp) report:

  • SMART Health: OK
  • No reallocated sectors
  • No pending sectors
  • No uncorrected read errors
  • Full SMART logs readable
  • Normal age for SAS drives

Example excerpt:

Elements in grown defect list: 0
Non-medium error count: 5
SMART Health Status: OK

So neither disk is actually failed at a hardware level.
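
For completeness, figures like the excerpt above come from smartctl run against each device, roughly:

smartctl -a /dev/sdk
smartctl -a /dev/sdp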


:warning: ZFS Label Mapping (from zdb -l)

sdk1 maps to:

guid: 17687009939021634546
(partuuid fac65638-95e1-4b7b-9add-1a60a9b3a52e)

sdp1 maps to:

guid: 11757704506353456861
(partuuid a7ea0820-a3c0-40c1-8ebd-96a10a4ccee3)

Both labels are intact and readable.
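
For reference, the guid/partuuid mapping above can be reproduced read-only with something like this (the device's own guid sits in the top block of the label, and blkid gives the matching partuuid):

zdb -l /dev/sdk1 | head -n 15
blkid -s PARTUUID -o value /dev/sdk1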


:warning: My goal

I need to recover this pool.
It contains critical personal data that is not duplicated elsewhere.

I’m hoping someone experienced with:

  • manual vdev assembly
  • Uberblock recovery
  • txg rewind
  • zpool import -c workflows

can help me determine:

  1. Whether the remaining good members of the RAIDZ1 can be assembled manually
  2. Whether the two “UNAVAIL” devices contain usable data but ZFS has flagged them incorrectly
  3. Whether a readonly import is possible
  4. What the next safe steps are before resorting to imaging the drives and professional recovery

I have not run:

  • zpool destroy
  • zpool labelclear
  • any writes to these disks
  • or anything destructive whatsoever.

Everything is still in original on-disk condition.


:floppy_disk: Full system is now powered OFF to prevent further changes.

I can power it back on to run diagnostics under guidance.


:folded_hands: Any help would be massively appreciated.

I will happily provide:

  • full zdb -l for every member
  • zpool history (if recoverable)
  • udevadm info
  • controller details
  • photos of backplane & cabling

I’m trying to avoid a professional lab unless absolutely necessary, but I’m committed to doing whatever is needed to get this pool imported.

Thanks in advance.

That will certainly be useful, for a start. Hoping that @HoneyBadger can have a look at it, but the crux is that you need to get at least one of the drives back online.

Yes, sas2flash -list please.

Thanks etorix.

I have the Storage pool drives pulled out slightly, so they wouldn’t be touched while waiting for replies.

I will shut the system down, reseat all 10 Storage pool disks fully, boot back up, and then collect:

  • fresh zpool status
  • fresh zdb output
  • any additional commands you advise

I will post those here shortly.

Thanks again for taking a look, really appreciate your help!

Give me a few minutes.

Two drives missing in a RAIDZ1 is bad.

With the R620 being 1U and you having ten 3.5" drives, I assume there’s a shelf and a SAS expander in play here.

Have you checked that these cables are firmly seated, not damaged, etc? Also, is this setup using a single SFF cable for attachment or two? Concerned about potential for multipath interfering if the latter.


Yeah. I have learned this!

So, I have an LSI 9300 SAS controller and an external 12-bay disk caddy.

I have more drives and another 12-bay caddy on order, so when I get my current pool back online I am going to build a new pool with a better design and move the content over to it.

And then build a backup pool for my most critical data!!

It may just be a silly idea, and based on everything you described you clearly know your way around ZFS… but is it possible that the wrong disk was accidentally removed physically?
As far as I know, it’s generally recommended to keep the failing disk connected (SATA/SAS/USB…anything temporary is fine) while attaching the replacement drive. This helps avoid accidentally pulling a healthy disk and ensures ZFS can still access all remaining replicas during the replace/resilver process.

It might also have been worth trying to bring the disk back online first, in case the “removed” state was only caused by a temporary issue.

If the failing disk is still online or at least partially readable, it can even speed up the resilvering, since ZFS can copy whatever data it can still read directly instead of reconstructing everything from parity.
(Once a disk is marked REMOVED, though, ZFS won’t use it for resilvering…)
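
For anyone following along later, the commands involved look roughly like this (only a sketch; it assumes the pool is imported, and the names are examples taken from this thread):

# Try to bring a REMOVED member back online before replacing anything:
zpool online Storage fac65638-95e1-4b7b-9add-1a60a9b3a52e

# Replace while the old disk is still attached, so ZFS can read from it
# during the resilver instead of rebuilding everything from parity:
zpool replace Storage <old-member> <new-disk>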

Thanks. I am learning fast and ChatGPT has been helping me through diagnosis so far, but it got to a point where it said “I am not confident now, go post this on the forum to get some proper expert advice before we make it worse”!

@etorix @HoneyBadger

Quick correction/update:

All 9 × 4TB ST4000NM0034 SAS drives belonging to the Storage pool are now visible and enumerating.

I previously thought one of the SAS drives was dead (Z4F134Z0), but after reseating and slot adjustments I now have all 9 original pool members online, plus the IronWolf replacement (ZAD6JBLN).

So ZFS has access to all 10 vdev members (9 SAS + 1 Ironwolf replacement).
Only the labels/topology are inconsistent.

truenas_admin@NAS01:~$ lsblk -o NAME,SIZE,SERIAL,MODEL
NAME          SIZE SERIAL                           MODEL
loop1       365.3M                                  
sda           931G 00706c980b519dc02f006b6cdce03f08 PERC H710P
└─sda1        931G                                  
sdb           931G 00b8698f0d719dc02f006b6cdce03f08 PERC H710P
└─sdb1        931G                                  
sdc           931G 00f4b3bf0e859dc02f006b6cdce03f08 PERC H710P
└─sdc1        931G                                  
sdd           931G 0049adc214ea9dc02f006b6cdce03f08 PERC H710P
└─sdd1        931G                                  
sde           931G 00fae8cd160d9ec02f006b6cdce03f08 PERC H710P
└─sde1        931G                                  
sdf           931G 001737c2171d9ec02f006b6cdce03f08 PERC H710P
└─sdf1        931G                                  
sdg         237.9G 00a9204f13729ac12f006b6cdce03f08 PERC H710P
├─sdg1          1M                                  
├─sdg2        512M                                  
└─sdg3      237.4G                                  
sdh           3.6T Z4F0RQ890000R633SJFD             ST4000NM0034
└─sdh1        3.6T                                  
sdi           3.6T Z4F0RZ560000R633RF5C             ST4000NM0034
└─sdi1        3.6T                                  
sdj           3.6T Z4F13LT50000R650BLK3             ST4000NM0034
└─sdj1        3.6T                                  
sdk           3.6T Z4F0YRAG0000R642C1R7             ST4000NM0034
└─sdk1        3.6T                                  
sdl           5.5T ZAD6JBLN                         ST6000VN0033-2EE110
└─sdl1        3.6T                                  
sdm           3.6T Z4F0S02M0000R633RCYT             ST4000NM0034
└─sdm1        3.6T                                  
sdn           3.6T Z4F0JX2B0000R632ZXB6             ST4000NM0034
└─sdn1        3.6T                                  
sdo           3.6T Z4F12ZBH0000R5225DPM             ST4000NM0034
└─sdo1        3.6T                                  
sdp           3.6T Z4F13N960000C6489W7H             ST4000NM0034
└─sdp1        3.6T                                  
sdq           3.6T Z4F0NL9N0000R628MC1V             ST4000NM0034
└─sdq1        3.6T                                  
zd0            40G                                  
zd16          100G                                  
zd32           40G                                  
zd48           60G                                  
nvme0n1       1.8T CVPF636200UE2P0KGN               INTEL SSDPEDMX020T7
└─nvme0n1p1   931G

NO. Please no AI-inspired commands, unless you really want to lose your data for good.

Drive letters can reshuffle at each restart. Track drives by serial.
Your first post had the drives attached to 9207. Then there was a 9300. And now a “PERC H710P” which may well be a RAID controller—although it appears to be another pool. A complete and accurate description of your setup is in order.


@etorix

My apologies for the confusion. Let me lay out the setup clearly (with no em-dashes!).

I have an R620 that is powering my home lab. I have been using TrueNAS for a while and have an extensive IT background, but NAS, ZFS, and TrueNAS are most certainly not core skills of mine, so I am very much learning on the job while trying to build a robust home lab.

Here is my full setup:

I am running a Dell R620 (dual Xeon and 128GB RAM) which has an integrated PERC controller. On that controller I have 7 SSDs that I am using for my Applications pool, i.e. the storage for my TrueNAS apps. I also have two SSDs running as a mirror for my boot pool. On the PERC controller I have a virtual disk per SSD so that the OS can see them. I have now learned that this is less than ideal, and when I get additional storage I will change it. However, everything on the PERC controller is functioning 100% correctly for now, so that's not part of my issue, hence I omitted it from my original post.

I also have a plug-in LSI 9300 SAS card with a SAS cable running out of the back to an external 12-bay HDD caddy. In here I originally had 10 × 4TB ST4000NM0034 drives for my general Storage pool: personal files, media files, Adobe Premiere / Lightroom archive, etc. These are visible directly in the OS with no abstraction layers at the hardware level.

A few days ago my Storage pool became degraded, and one of my drives completely died and left the pool. As I was researching the best way to fix it, another disk started showing errors and became degraded.

My solution was to put in a spare 6TB Seagate Ironwolf drive to get it stable whilst I ordered a couple of new 4TB ST4000NM0034 drives.

I took out the dead drive, inserted the Seagate drive and tried to Replace the dead drive. The Seagate drive had been used in a previous TrueNAS server which I had set up before I got the R620. It was part of a pool called Pool1 on the old server. There was never a Pool1 on this server.

Because of that, the UI told me to use the Force checkbox to add the new drive. I assumed (probably naively) that it would just wipe the drive and add it to the Storage pool. It was at this point that the Storage pool went Invalid and I panicked.

And that pretty much gets me up to where we are now.

Where I stand now:

  • All 9 remaining original Seagate ST4000NM0034 drives are now online and readable

  • The Ironwolf is also visible but was never resilvered into Storage

  • The only actually dead disk is the original Seagate that failed in Bay 5

  • ZFS sees all 9 remaining real members, but the pool cannot import due to inconsistent labels and the suspended state from the forced replace

Does that give you a proper overview?

Why are you posting direct output of an LLM here? I'm just speaking for myself, but I prefer talking to a human.

No

I am a human. I used an LLM to try and articulate my problem more clearly in my original post.

Well, it didn't work.


I am sorry if anything was unclear. I am not here to have an argument; I am here to try and recover my data. If you have any help to offer, I appreciate it, but if you are just commenting to troll, then I will happily wait for people who are willing to help me fix my system.


So in the Replace process is where it threw up the prompt to use Force to replace the disk (because the target replacement disk had existing data)?

I don’t see the Z4F134Z0 serial in the lsblk output above. Is this the disk that was thought to be failed, and was physically removed and replaced with the 6TB ZAD6JBLN drive?

Question - is there an empty slot in your 12-bay JBOD that you can use to insert the previously FAULTED drive?

I’d like to see the output of the labels, specifically for any drives showing INVALID or that were physically swapped/missing/reinserted. You can attach them as text files here if that’s easier, or dump them as preformatted text with the </> button.


So in the Replace process is where it threw up the prompt to use Force to replace the disk (because the target replacement disk had existing data)?

Yes, exactly then.

I don’t see the Z4F134Z0 serial in the lsblk output above. Is this the disk that was thought to be failed, and was physically removed and replaced with the 6TB ZAD6JBLN drive?

This is also correct.

And yes, I have two empty bays in the JBOD so I can put it back in and get any information you need.

I’m just sorting the kids’ bedtime, so I’ll get this to you later this evening (I’m UK based).

Are there any specific commands I should run?

tia.

Put any physically removed drives back into the system, and show the previous lsblk output along with the zdb -ul /dev/sdh1 for each of the drives that has an INVALID label, as well as at least one that shows ONLINE.
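
If it is quicker, a loop along these lines (read-only; adjust the device list to whatever lsblk shows after reinserting) will dump every label to a file you can attach:

for d in sdh sdi sdj sdk sdl sdm sdn sdo sdp sdq sdr; do
  zdb -ul "/dev/${d}1" > "/root/zdb-${d}.txt" 2>&1
done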


Oh…

Now here is a twist in the story. I just put the “dead” disk back in, and TrueNAS has identified it.

root@NAS01[~]# lsblk -o NAME,SIZE,SERIAL,MODEL
NAME          SIZE SERIAL                           MODEL
loop1       365.3M                                  
sda           931G 00706c980b519dc02f006b6cdce03f08 PERC H710P
└─sda1        931G                                  
sdb           931G 00b8698f0d719dc02f006b6cdce03f08 PERC H710P
└─sdb1        931G                                  
sdc           931G 00f4b3bf0e859dc02f006b6cdce03f08 PERC H710P
└─sdc1        931G                                  
sdd           931G 0049adc214ea9dc02f006b6cdce03f08 PERC H710P
└─sdd1        931G                                  
sde           931G 00fae8cd160d9ec02f006b6cdce03f08 PERC H710P
└─sde1        931G                                  
sdf           931G 001737c2171d9ec02f006b6cdce03f08 PERC H710P
└─sdf1        931G                                  
sdg         237.9G 00a9204f13729ac12f006b6cdce03f08 PERC H710P
├─sdg1          1M                                  
├─sdg2        512M                                  
└─sdg3      237.4G                                  
sdh           3.6T Z4F0RQ890000R633SJFD             ST4000NM0034
└─sdh1        3.6T                                  
sdi           3.6T Z4F0RZ560000R633RF5C             ST4000NM0034
└─sdi1        3.6T                                  
sdj           3.6T Z4F13LT50000R650BLK3             ST4000NM0034
└─sdj1        3.6T                                  
sdk           3.6T Z4F0YRAG0000R642C1R7             ST4000NM0034
└─sdk1        3.6T                                  
sdl           3.6T Z4F134Z00000R650BPWJ             ST4000NM0034
└─sdl1        3.6T                                  
sdm           3.6T Z4F0S02M0000R633RCYT             ST4000NM0034
└─sdm1        3.6T                                  
sdn           3.6T Z4F0JX2B0000R632ZXB6             ST4000NM0034
└─sdn1        3.6T                                  
sdo           3.6T Z4F12ZBH0000R5225DPM             ST4000NM0034
└─sdo1        3.6T                                  
sdp           3.6T Z4F13N960000C6489W7H             ST4000NM0034
└─sdp1        3.6T                                  
sdq           3.6T Z4F0NL9N0000R628MC1V             ST4000NM0034
└─sdq1        3.6T                                  
sdr           5.5T ZAD6JBLN                         ST6000VN0033-2EE110
└─sdr1        3.6T                                  
zd0            40G                                  
zd16          100G                                  
zd32           40G                                  
zd48           60G                                  
nvme0n1       1.8T CVPF636200UE2P0KGN               INTEL SSDPEDMX020T7
└─nvme0n1p1   931G

I ran zdb against all the drives and the ONLY one that is coming back as INVALID now is the Ironwolf drive, which makes sense because it was never actually added to the Pool.

root@NAS01[~]# zdb -ul /dev/sdr1
failed to unpack label 0
failed to unpack label 1
failed to unpack label 2
failed to unpack label 3

This is an example of one of the other drives. I can give you more if you need.

root@NAS01[~]# zdb -ul /dev/sdh1
------------------------------------
LABEL 0 
------------------------------------
    version: 5000
    name: 'Storage'
    state: 0
    txg: 3080813
    pool_guid: 15453394492121721749
    errata: 0
    hostid: 501342666
    hostname: 'NAS01'
    top_guid: 16497267782320508150
    guid: 8972301939548831838
    vdev_children: 1
    vdev_tree:
        type: 'raidz'
        id: 0
        guid: 16497267782320508150
        nparity: 1
        metaslab_array: 141
        metaslab_shift: 34
        ashift: 12
        asize: 40007803863040
        is_log: 0
        create_txg: 4
        children[0]:
            type: 'disk'
            id: 0
            guid: 18345868647548731482
            path: '/dev/disk/by-partuuid/446ef2de-40ea-4abb-b0f3-e71997a07126'
            whole_disk: 0
            DTL: 8993
            create_txg: 4
        children[1]:
            type: 'disk'
            id: 1
            guid: 8972301939548831838
            path: '/dev/disk/by-partuuid/d4b65d26-3c51-418f-8fa2-bce96fec2b17'
            whole_disk: 0
            DTL: 8992
            create_txg: 4
        children[2]:
            type: 'disk'
            id: 2
            guid: 16988671770009383476
            path: '/dev/disk/by-partuuid/9f3d9be8-1fa9-4e50-843b-2b5ff8a2fc7b'
            whole_disk: 0
            DTL: 8991
            create_txg: 4
        children[3]:
            type: 'disk'
            id: 3
            guid: 375005367646577382
            path: '/dev/disk/by-partuuid/34b50bb7-9977-411d-b87a-16c7bdc4dccf'
            whole_disk: 0
            DTL: 8990
            create_txg: 4
        children[4]:
            type: 'disk'
            id: 4
            guid: 6547671983581350257
            path: '/dev/disk/by-partuuid/12356ecb-983f-42af-a58c-7451808bce77'
            whole_disk: 0
            DTL: 8989
            create_txg: 4
        children[5]:
            type: 'disk'
            id: 5
            guid: 10788248099747593764
            path: '/dev/disk/by-partuuid/fac65638-95e1-4b7b-9add-1a60a9b3a52e'
            whole_disk: 0
            DTL: 8988
            create_txg: 4
        children[6]:
            type: 'disk'
            id: 6
            guid: 17687009939021634546
            path: '/dev/disk/by-partuuid/4aa4ffb0-a380-44a8-9f8c-c9b80c5793b8'
            whole_disk: 0
            DTL: 8987
            create_txg: 4
        children[7]:
            type: 'disk'
            id: 7
            guid: 5178184648467546815
            path: '/dev/disk/by-partuuid/c1bb301d-b32b-4800-9d22-f77cc5bdae59'
            whole_disk: 0
            DTL: 8986
            create_txg: 4
        children[8]:
            type: 'disk'
            id: 8
            guid: 6664518652191294436
            path: '/dev/disk/by-partuuid/a7ea0820-a3c0-40c1-8ebd-96a10a4ccee3'
            whole_disk: 0
            DTL: 8985
            create_txg: 4
            removed: 1
        children[9]:
            type: 'disk'
            id: 9
            guid: 11757704506353456861
            path: '/dev/disk/by-partuuid/c6a49122-bb54-4a6a-b647-e7d7cbdc0594'
            whole_disk: 0
            DTL: 8982
            create_txg: 4
    features_for_read:
        com.delphix:hole_birth
        com.delphix:embedded_data
        com.klarasystems:vdev_zaps_v2
    labels = 0 1 2 3 
    Uberblock[0]
	magic = 0000000000bab10c
	version = 5000
	txg = 3080704
	guid_sum = 6128639312591769444
	timestamp = 1764671156 UTC = Tue Dec  2 10:25:56 2025
	bp = DVA[0]=<0:17e4d06f4000:2000> DVA[1]=<0:1ee90298c000:2000> DVA[2]=<0:15f3002b6000:2000> [L0 DMU objset] fletcher4 uncompressed unencrypted LE contiguous unique triple size=1000L/1000P birth=3080704L/3080704P fill=2431 cksum=00000003602580ed:00000d2688061b10:0019a4bcaeb82bab:215f5f1ffa191251
	mmp_magic = 00000000a11cea11
	mmp_delay = 0
	mmp_valid = 0
	checkpoint_txg = 0
	raidz_reflow state=0 off=0
        labels = 0 1 2 3 
    Uberblock[1]
	magic = 0000000000bab10c
	version = 5000
	txg = 3080705
	guid_sum = 6128639312591769444
	timestamp = 1764671159 UTC = Tue Dec  2 10:25:59 2025
	bp = DVA[0]=<0:19c659e5a000:2000> DVA[1]=<0:18dd71572000:2000> DVA[2]=<0:15f300302000:2000> [L0 DMU objset] fletcher4 uncompressed unencrypted LE contiguous unique triple size=1000L/1000P birth=3080705L/3080705P fill=2430 cksum=000000053f7b5377:00001453895f7eb1:0027686a5c41debc:32fdfb2c6b535caf
	mmp_magic = 00000000a11cea11
	mmp_delay = 0
	mmp_valid = 0
	checkpoint_txg = 0
	raidz_reflow state=0 off=0
        labels = 0 1 2 3 
    Uberblock[2]
	magic = 0000000000bab10c
	version = 5000
	txg = 3080674
	guid_sum = 6128639312591769444
	timestamp = 1764671002 UTC = Tue Dec  2 10:23:22 2025
	bp = DVA[0]=<0:17e4c532c000:2000> DVA[1]=<0:1ee9027b6000:2000> DVA[2]=<0:1ccc51f56000:2000> [L0 DMU objset] fletcher4 uncompressed unencrypted LE contiguous unique triple size=1000L/1000P birth=3080674L/3080674P fill=2438 cksum=000000040dcd39e5:00000fb5e2b32dc7:001e7a04bfdb86e7:277576d42adeae20
	mmp_magic = 00000000a11cea11
	mmp_delay = 0
	mmp_valid = 0
	checkpoint_txg = 0
	raidz_reflow state=0 off=0
        labels = 0 1 2 3 
    Uberblock[3]
	magic = 0000000000bab10c
	version = 5000
	txg = 3080675
	guid_sum = 6128639312591769444
	timestamp = 1764671008 UTC = Tue Dec  2 10:23:28 2025
	bp = DVA[0]=<0:17e4c5334000:2000> DVA[1]=<0:1ee9027be000:2000> DVA[2]=<0:1ccc51fa2000:2000> [L0 DMU objset] fletcher4 uncompressed unencrypted LE contiguous unique triple size=1000L/1000P birth=3080675L/3080675P fill=2433 cksum=00000002e0e2a469:00000b358daf3ae9:0015d963d09ff649:1c6c75995135ff30
	mmp_magic = 00000000a11cea11
	mmp_delay = 0
	mmp_valid = 0
	checkpoint_txg = 0
	raidz_reflow state=0 off=0
        labels = 0 1 2 3 
    Uberblock[4]
	magic = 0000000000bab10c
	version = 5000
	txg = 3080676
	guid_sum = 6128639312591769444
	timestamp = 1764671013 UTC = Tue Dec  2 10:23:33 2025
	bp = DVA[0]=<0:17f032858000:2000> DVA[1]=<0:18ea70490000:2000> DVA[2]=<0:1ccc52006000:2000> [L0 DMU objset] fletcher4 uncompressed unencrypted LE contiguous unique triple size=1000L/1000P birth=3080676L/3080676P fill=2432 cksum=000000043f3d120c:0000107262e4afd4:001fe1c1d4d75d1c:293f9e327aad272c
	mmp_magic = 00000000a11cea11
	mmp_delay = 0
	mmp_valid = 0
	checkpoint_txg = 0
	raidz_reflow state=0 off=0
        labels = 0 1 2 3 
    Uberblock[5]
	magic = 0000000000bab10c
	version = 5000
	txg = 3080805
	guid_sum = 6128639312591769444
	timestamp = 1764671657 UTC = Tue Dec  2 10:34:17 2025
	bp = DVA[0]=<0:19c659eda000:2000> DVA[1]=<0:18dd715e2000:2000> DVA[2]=<0:15f300416000:2000> [L0 DMU objset] fletcher4 uncompressed unencrypted LE contiguous unique triple size=1000L/1000P birth=3080805L/3080805P fill=2431 cksum=000000043b1d2820:0000106ba7450495:001fe68db80e9d71:295cd3577426ffc8
	mmp_magic = 00000000a11cea11
	mmp_delay = 0
	mmp_valid = 0
	checkpoint_txg = 0
	raidz_reflow state=0 off=0
        labels = 0 1 2 3 
    Uberblock[6]
	magic = 0000000000bab10c
	version = 5000
	txg = 3080806
	guid_sum = 6128639312591769444
	timestamp = 1764671657 UTC = Tue Dec  2 10:34:17 2025
	bp = DVA[0]=<0:17e4d073a000:2000> DVA[1]=<0:1ee9029d2000:2000> DVA[2]=<0:15f30045c000:2000> [L0 DMU objset] fletcher4 uncompressed unencrypted LE contiguous unique triple size=1000L/1000P birth=3080806L/3080806P fill=2431 cksum=0000000583e756ed:0000155911e04108:00295cc15e38a14f:357cc1bd53267220
	mmp_magic = 00000000a11cea11
	mmp_delay = 0
	mmp_valid = 0
	checkpoint_txg = 0
	raidz_reflow state=0 off=0
        labels = 0 1 2 3 
    Uberblock[7]
	magic = 0000000000bab10c
	version = 5000
	txg = 3080679
	guid_sum = 6128639312591769444
	timestamp = 1764671027 UTC = Tue Dec  2 10:23:47 2025
	bp = DVA[0]=<0:17e4c9752000:2000> DVA[1]=<0:1ee902826000:2000> DVA[2]=<0:1ccc520ee000:2000> [L0 DMU objset] fletcher4 uncompressed unencrypted LE contiguous unique triple size=1000L/1000P birth=3080679L/3080679P fill=2431 cksum=0000000540d2733f:0000144d58e7d524:00274685fe4148ec:32b606b878bec5c5
	mmp_magic = 00000000a11cea11
	mmp_delay = 0
	mmp_valid = 0
	checkpoint_txg = 0
	raidz_reflow state=0 off=0
        labels = 0 1 2 3 
    Uberblock[8]
	magic = 0000000000bab10c
	version = 5000
	txg = 3080680
	guid_sum = 6128639312591769444
	timestamp = 1764671032 UTC = Tue Dec  2 10:23:52 2025
	bp = DVA[0]=<0:17e4c9760000:2000> DVA[1]=<0:1ee902834000:2000> DVA[2]=<0:1ccc52140000:2000> [L0 DMU objset] fletcher4 uncompressed unencrypted LE contiguous unique triple size=1000L/1000P birth=3080680L/3080680P fill=2432 cksum=0000000425dea08e:0000101437b3a037:001f32fcd1baf246:286767153ada16b7
	mmp_magic = 00000000a11cea11
	mmp_delay = 0
	mmp_valid = 0
	checkpoint_txg = 0
	raidz_reflow state=0 off=0
        labels = 0 1 2 3 
    Uberblock[9]
	magic = 0000000000bab10c
	version = 5000
	txg = 3080809
	guid_sum = 6128639312591769444
	timestamp = 1764671663 UTC = Tue Dec  2 10:34:23 2025
	bp = DVA[0]=<0:17e4d074e000:2000> DVA[1]=<0:1ee9029e6000:2000> DVA[2]=<0:15f3004c2000:2000> [L0 DMU objset] fletcher4 uncompressed unencrypted LE contiguous unique triple size=1000L/1000P birth=3080809L/3080809P fill=2431 cksum=00000004a768740d:0000120cc0c5130e:00230a957a9ed413:2d671cf56622467c
	mmp_magic = 00000000a11cea11
	mmp_delay = 0
	mmp_valid = 0
	checkpoint_txg = 0
	raidz_reflow state=0 off=0
        labels = 0 1 2 3 
    Uberblock[10]
	magic = 0000000000bab10c
	version = 5000
	txg = 3080810
	guid_sum = 6128639312591769444
	timestamp = 1764673759 UTC = Tue Dec  2 11:09:19 2025
	bp = DVA[0]=<0:17f0168a0000:2000> DVA[1]=<0:1ee80000a000:2000> DVA[2]=<0:1ccc003dc000:2000> [L0 DMU objset] fletcher4 uncompressed unencrypted LE contiguous unique triple size=1000L/1000P birth=3080810L/3080810P fill=2431 cksum=0000000403e2ec85:00000f8a91b472ec:001e1c896bfe9274:26f03da30336816f
	mmp_magic = 00000000a11cea11
	mmp_delay = 0
	mmp_valid = 0
	checkpoint_txg = 0
	raidz_reflow state=0 off=0
        labels = 0 1 2 3 
    Uberblock[11]
	magic = 0000000000bab10c
	version = 5000
	txg = 3080811
	guid_sum = 6128639312591769444
	timestamp = 1764673760 UTC = Tue Dec  2 11:09:20 2025
	bp = DVA[0]=<0:d00000e000:2000> DVA[1]=<0:17e4057de000:2000> DVA[2]=<0:1ccc2eabc000:2000> [L0 DMU objset] fletcher4 uncompressed unencrypted LE contiguous unique triple size=1000L/1000P birth=3080811L/3080811P fill=2430 cksum=00000004353eaeb5:00001047dc95a430:001f87e0c0f97cc0:28c1a8a7a34b3061
	mmp_magic = 00000000a11cea11
	mmp_delay = 0
	mmp_valid = 0
	checkpoint_txg = 0
	raidz_reflow state=0 off=0
        labels = 0 1 2 3 
    Uberblock[12]
	magic = 0000000000bab10c
	version = 5000
	txg = 3080684
	guid_sum = 6128639312591769444
	timestamp = 1764671053 UTC = Tue Dec  2 10:24:13 2025
	bp = DVA[0]=<0:1bd1c6ed2000:2000> DVA[1]=<0:8614f4f6000:2000> DVA[2]=<0:1ccc5223c000:2000> [L0 DMU objset] fletcher4 uncompressed unencrypted LE contiguous unique triple size=1000L/1000P birth=3080684L/3080684P fill=2434 cksum=000000047dd30190:0000116243796dea:0021adf01c499e6b:2b8cc683140188d2
	mmp_magic = 00000000a11cea11
	mmp_delay = 0
	mmp_valid = 0
	checkpoint_txg = 0
	raidz_reflow state=0 off=0
        labels = 0 1 2 3 
    Uberblock[13]
	magic = 0000000000bab10c
	version = 5000
	txg = 3080813
	guid_sum = 6128639312591769444
	timestamp = 1764673761 UTC = Tue Dec  2 11:09:21 2025
	bp = DVA[0]=<0:d000016000:2000> DVA[1]=<0:17e40580e000:2000> DVA[2]=<0:181c000ea000:2000> [L0 DMU objset] fletcher4 uncompressed unencrypted LE contiguous unique triple size=1000L/1000P birth=3080813L/3080813P fill=2431 cksum=000000043841c9b5:0000104b775eeeef:001f7f67168dc988:28a2e67a0ecf6f36
	mmp_magic = 00000000a11cea11
	mmp_delay = 0
	mmp_valid = 0
	checkpoint_txg = 0
	raidz_reflow state=0 off=0
        labels = 0 1 2 3 
    Uberblock[14]
	magic = 0000000000bab10c
	version = 5000
	txg = 3080686
	guid_sum = 6128639312591769444
	timestamp = 1764671063 UTC = Tue Dec  2 10:24:23 2025
	bp = DVA[0]=<0:17e4c97fe000:2000> DVA[1]=<0:1ee90289a000:2000> DVA[2]=<0:1ccc522ba000:2000> [L0 DMU objset] fletcher4 uncompressed unencrypted LE contiguous unique triple size=1000L/1000P birth=3080686L/3080686P fill=2436 cksum=0000000449f3dc3e:0000109e2aa3cb5e:00203af3cbb581c7:29b8797f8cdac758
	mmp_magic = 00000000a11cea11
	mmp_delay = 0
	mmp_valid = 0
	checkpoint_txg = 0
	raidz_reflow state=0 off=0
        labels = 0 1 2 3 
    Uberblock[15]
	magic = 0000000000bab10c
	version = 5000
	txg = 3080815
	guid_sum = 6128639312591769444
	timestamp = 1764673767 UTC = Tue Dec  2 11:09:27 2025
	bp = DVA[0]=<0:1ccc40c9e000:2000> DVA[1]=<0:15f030b6a000:2000> DVA[2]=<0:181c0025a000:2000> [L0 DMU objset] fletcher4 uncompressed unencrypted LE contiguous unique triple size=1000L/1000P birth=3080815L/3080815P fill=2431 cksum=000000045878942b:000010caa955eb48:00207a8a1401639e:29ed8466ec6c82e2
	mmp_magic = 00000000a11cea11
	mmp_delay = 0
	mmp_valid = 0
	checkpoint_txg = 0
	raidz_reflow state=0 off=0
        labels = 0 1 2 3 
    Uberblock[16]
	magic = 0000000000bab10c
	version = 5000
	txg = 3080688
	guid_sum = 6128639312591769444
	timestamp = 1764671074 UTC = Tue Dec  2 10:24:34 2025
	bp = DVA[0]=<0:17e4c9832000:2000> DVA[1]=<0:1ee9028be000:2000> DVA[2]=<0:1ccc5232c000:2000> [L0 DMU objset] fletcher4 uncompressed unencrypted LE contiguous unique triple size=1000L/1000P birth=3080688L/3080688P fill=2436 cksum=00000004a9b4d16f:0000120e17d19b9b:0022fe87ee5469a9:2d4493dc8db04bcd
	mmp_magic = 00000000a11cea11
	mmp_delay = 0
	mmp_valid = 0
	checkpoint_txg = 0
	raidz_reflow state=0 off=0
        labels = 0 1 2 3 
    Uberblock[17]
	magic = 0000000000bab10c
	version = 5000
	txg = 3080817
	guid_sum = 6128639312591769444
	timestamp = 1764673777 UTC = Tue Dec  2 11:09:37 2025
	bp = DVA[0]=<0:19c536eb6000:2000> DVA[1]=<0:860000b8000:2000> DVA[2]=<0:181c004ec000:2000> [L0 DMU objset] fletcher4 uncompressed unencrypted LE contiguous unique triple size=1000L/1000P birth=3080817L/3080817P fill=2431 cksum=000000037ba3ce7b:00000d79d3c7e21e:001a1983bc6ad65d:21bcf3b8f3fba384
	mmp_magic = 00000000a11cea11
	mmp_delay = 0
	mmp_valid = 0
	checkpoint_txg = 0
	raidz_reflow state=0 off=0
        labels = 0 1 2 3 
    Uberblock[18]
	magic = 0000000000bab10c
	version = 5000
	txg = 3080690
	guid_sum = 6128639312591769444
	timestamp = 1764671085 UTC = Tue Dec  2 10:24:45 2025
	bp = DVA[0]=<0:17e4c9840000:2000> DVA[1]=<0:1ee9028cc000:2000> DVA[2]=<0:1ccc523a4000:2000> [L0 DMU objset] fletcher4 uncompressed unencrypted LE contiguous unique triple size=1000L/1000P birth=3080690L/3080690P fill=2436 cksum=000000047a674fad:00001155cc002094:0021973d77dfaeca:2b7140f0de3ad125
	mmp_magic = 00000000a11cea11
	mmp_delay = 0
	mmp_valid = 0
	checkpoint_txg = 0
	raidz_reflow state=0 off=0
        labels = 0 1 2 3 
    Uberblock[19]
	magic = 0000000000bab10c
	version = 5000
	txg = 3080691
	guid_sum = 6128639312591769444
	timestamp = 1764671088 UTC = Tue Dec  2 10:24:48 2025
	bp = DVA[0]=<0:1bd292276000:2000> DVA[1]=<0:8614f5b8000:2000> DVA[2]=<0:1ccc523e2000:2000> [L0 DMU objset] fletcher4 uncompressed unencrypted LE contiguous unique triple size=1000L/1000P birth=3080691L/3080691P fill=2435 cksum=00000003c89e2e64:00000eac015b7ccb:001c7a9c301a9882:24e508df7ff2f8a6
	mmp_magic = 00000000a11cea11
	mmp_delay = 0
	mmp_valid = 0
	checkpoint_txg = 0
	raidz_reflow state=0 off=0
        labels = 0 1 2 3 
    Uberblock[20]
	magic = 0000000000bab10c
	version = 5000
	txg = 3080692
	guid_sum = 6128639312591769444
	timestamp = 1764671093 UTC = Tue Dec  2 10:24:53 2025
	bp = DVA[0]=<0:17f032938000:2000> DVA[1]=<0:18ea70552000:2000> DVA[2]=<0:1ccc5242e000:2000> [L0 DMU objset] fletcher4 uncompressed unencrypted LE contiguous unique triple size=1000L/1000P birth=3080692L/3080692P fill=2434 cksum=00000004b3e2217e:0000123094baddce:0023380a05c28b25:2d8324e93622ef2b
	mmp_magic = 00000000a11cea11
	mmp_delay = 0
	mmp_valid = 0
	checkpoint_txg = 0
	raidz_reflow state=0 off=0
        labels = 0 1 2 3 
    Uberblock[21]
	magic = 0000000000bab10c
	version = 5000
	txg = 3080693
	guid_sum = 6128639312591769444
	timestamp = 1764671101 UTC = Tue Dec  2 10:25:01 2025
	bp = DVA[0]=<0:17f03296a000:2000> DVA[1]=<0:18ea70584000:2000> DVA[2]=<0:1ccc5247a000:2000> [L0 DMU objset] fletcher4 uncompressed unencrypted LE contiguous unique triple size=1000L/1000P birth=3080693L/3080693P fill=2433 cksum=00000004e0c322ce:000012dc31b6b45d:0024807d1d70d82e:2f26a593c6f567ba
	mmp_magic = 00000000a11cea11
	mmp_delay = 0
	mmp_valid = 0
	checkpoint_txg = 0
	raidz_reflow state=0 off=0
        labels = 0 1 2 3 
    Uberblock[22]
	magic = 0000000000bab10c
	version = 5000
	txg = 3080694
	guid_sum = 6128639312591769444
	timestamp = 1764671107 UTC = Tue Dec  2 10:25:07 2025
	bp = DVA[0]=<0:17f03299a000:2000> DVA[1]=<0:18ea705b4000:2000> DVA[2]=<0:1ccc524c2000:2000> [L0 DMU objset] fletcher4 uncompressed unencrypted LE contiguous unique triple size=1000L/1000P birth=3080694L/3080694P fill=2433 cksum=000000044bb77aeb:000010a212944eeb:00203cf7652cee36:29b4098e58702005
	mmp_magic = 00000000a11cea11
	mmp_delay = 0
	mmp_valid = 0
	checkpoint_txg = 0
	raidz_reflow state=0 off=0
        labels = 0 1 2 3 
    Uberblock[23]
	magic = 0000000000bab10c
	version = 5000
	txg = 3080695
	guid_sum = 6128639312591769444
	timestamp = 1764671108 UTC = Tue Dec  2 10:25:08 2025
	bp = DVA[0]=<0:17e4cefd0000:2000> DVA[1]=<0:1ee9028f8000:2000> DVA[2]=<0:15f30002e000:2000> [L0 DMU objset] fletcher4 uncompressed unencrypted LE contiguous unique triple size=1000L/1000P birth=3080695L/3080695P fill=2434 cksum=00000005f7767cac:0000171315c68c9a:002caafa33b62ce3:39b5e6dab0889ce0
	mmp_magic = 00000000a11cea11
	mmp_delay = 0
	mmp_valid = 0
	checkpoint_txg = 0
	raidz_reflow state=0 off=0
        labels = 0 1 2 3 
    Uberblock[24]
	magic = 0000000000bab10c
	version = 5000
	txg = 3080696
	guid_sum = 6128639312591769444
	timestamp = 1764671114 UTC = Tue Dec  2 10:25:14 2025
	bp = DVA[0]=<0:19c657764000:2000> DVA[1]=<0:18dd71448000:2000> DVA[2]=<0:15f30006e000:2000> [L0 DMU objset] fletcher4 uncompressed unencrypted LE contiguous unique triple size=1000L/1000P birth=3080696L/3080696P fill=2432 cksum=00000005716617bb:0000150f5e288f78:0028ca0e8179dd37:34ba2b826d59f56d
	mmp_magic = 00000000a11cea11
	mmp_delay = 0
	mmp_valid = 0
	checkpoint_txg = 0
	raidz_reflow state=0 off=0
        labels = 0 1 2 3 
    Uberblock[25]
	magic = 0000000000bab10c
	version = 5000
	txg = 3080697
	guid_sum = 6128639312591769444
	timestamp = 1764671120 UTC = Tue Dec  2 10:25:20 2025
	bp = DVA[0]=<0:17e4cf00e000:2000> DVA[1]=<0:1ee902936000:2000> DVA[2]=<0:15f3000c0000:2000> [L0 DMU objset] fletcher4 uncompressed unencrypted LE contiguous unique triple size=1000L/1000P birth=3080697L/3080697P fill=2432 cksum=0000000402550c46:00000f938ed1551a:001e4af0e61ebc21:27519de29b0b3e43
	mmp_magic = 00000000a11cea11
	mmp_delay = 0
	mmp_valid = 0
	checkpoint_txg = 0
	raidz_reflow state=0 off=0
        labels = 0 1 2 3 
    Uberblock[26]
	magic = 0000000000bab10c
	version = 5000
	txg = 3080762
	guid_sum = 6128639312591769444
	timestamp = 1764671451 UTC = Tue Dec  2 10:30:51 2025
	bp = DVA[0]=<0:17f035aa2000:2000> DVA[1]=<0:18ea70658000:2000> DVA[2]=<0:15f300344000:2000> [L0 DMU objset] fletcher4 uncompressed unencrypted LE contiguous unique triple size=1000L/1000P birth=3080762L/3080762P fill=2431 cksum=00000005b48478fc:000016146b6bd053:002ac61efd2a8bee:374de45a463b105d
	mmp_magic = 00000000a11cea11
	mmp_delay = 0
	mmp_valid = 0
	checkpoint_txg = 0
	raidz_reflow state=0 off=0
        labels = 0 1 2 3 
    Uberblock[27]
	magic = 0000000000bab10c
	version = 5000
	txg = 3080763
	guid_sum = 6128639312591769444
	timestamp = 1764671452 UTC = Tue Dec  2 10:30:52 2025
	bp = DVA[0]=<0:19c659e98000:2000> DVA[1]=<0:18dd715a8000:2000> DVA[2]=<0:15f300388000:2000> [L0 DMU objset] fletcher4 uncompressed unencrypted LE contiguous unique triple size=1000L/1000P birth=3080763L/3080763P fill=2431 cksum=000000049cbf9ba6:000011e3054a4e6a:0022b8d85b4bca61:2cfc51717e5d98d4
	mmp_magic = 00000000a11cea11
	mmp_delay = 0
	mmp_valid = 0
	checkpoint_txg = 0
	raidz_reflow state=0 off=0
        labels = 0 1 2 3 
    Uberblock[28]
	magic = 0000000000bab10c
	version = 5000
	txg = 3080700
	guid_sum = 6128639312591769444
	timestamp = 1764671134 UTC = Tue Dec  2 10:25:34 2025
	bp = DVA[0]=<0:19c659d40000:2000> DVA[1]=<0:18dd714b8000:2000> DVA[2]=<0:15f30018e000:2000> [L0 DMU objset] fletcher4 uncompressed unencrypted LE contiguous unique triple size=1000L/1000P birth=3080700L/3080700P fill=2431 cksum=00000004911bdde7:000011b5a6d3d079:0022605d60549b47:2c893310dae29d8a
	mmp_magic = 00000000a11cea11
	mmp_delay = 0
	mmp_valid = 0
	checkpoint_txg = 0
	raidz_reflow state=0 off=0
        labels = 0 1 2 3 
    Uberblock[29]
	magic = 0000000000bab10c
	version = 5000
	txg = 3080701
	guid_sum = 6128639312591769444
	timestamp = 1764671140 UTC = Tue Dec  2 10:25:40 2025
	bp = DVA[0]=<0:17e4cf04c000:2000> DVA[1]=<0:1ee902974000:2000> DVA[2]=<0:15f3001d6000:2000> [L0 DMU objset] fletcher4 uncompressed unencrypted LE contiguous unique triple size=1000L/1000P birth=3080701L/3080701P fill=2430 cksum=0000000568a34b17:000014f12938504e:002896937ee17f2f:34808a60ffa598ae
	mmp_magic = 00000000a11cea11
	mmp_delay = 0
	mmp_valid = 0
	checkpoint_txg = 0
	raidz_reflow state=0 off=0
        labels = 0 1 2 3 
    Uberblock[30]
	magic = 0000000000bab10c
	version = 5000
	txg = 3080766
	guid_sum = 6128639312591769444
	timestamp = 1764671458 UTC = Tue Dec  2 10:30:58 2025
	bp = DVA[0]=<0:1ccc532ce000:2000> DVA[1]=<0:8614f740000:2000> DVA[2]=<0:15f3003ee000:2000> [L0 DMU objset] fletcher4 uncompressed unencrypted LE contiguous unique triple size=1000L/1000P birth=3080766L/3080766P fill=2432 cksum=0000000488b43906:00001195c056a7b6:002223c54188d33e:2c3c65a19197afba
	mmp_magic = 00000000a11cea11
	mmp_delay = 0
	mmp_valid = 0
	checkpoint_txg = 0
	raidz_reflow state=0 off=0
        labels = 0 1 2 3 
    Uberblock[31]
	magic = 0000000000bab10c
	version = 5000
	txg = 3080703
	guid_sum = 6128639312591769444
	timestamp = 1764671150 UTC = Tue Dec  2 10:25:50 2025
	bp = DVA[0]=<0:1ccc52be8000:2000> DVA[1]=<0:8614f6d0000:2000> DVA[2]=<0:15f30026c000:2000> [L0 DMU objset] fletcher4 uncompressed unencrypted LE contiguous unique triple size=1000L/1000P birth=3080703L/3080703P fill=2431 cksum=00000005100bd194:0000139ddde8a88b:00260c2cf3e11d1a:31407fdcbb004014
	mmp_magic = 00000000a11cea11
	mmp_delay = 0
	mmp_valid = 0
	checkpoint_txg = 0
	raidz_reflow state=0 off=0
        labels = 0 1 2 3 

Mark.

Does zpool import only show one disk as INVALID now? I would attempt to re-import the pool (from the webUI) if so.
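
If the webUI import balks, a cautious fallback sketch (read-only and without mounting datasets, so nothing gets written) would be:

zpool import -o readonly=on -N -f Storage
zpool status Storage
zpool export Storage    # export again before retrying through the UI

That said, importing through the webUI first is generally the cleaner path on TrueNAS, since it keeps the middleware aware of the pool.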