Need help with missing pool

Hi everyone, sorry for my English first of all )
I have a bare-metal instance of TrueNAS-SCALE-24.04.2.2
on an ASUS P10S-I.
One day I found out that I had a lot of checksum errors on all 4 disks and some files were corrupted.
After troubleshooting I came to the conclusion that it was either the power supply or the cables. After replacing them my pool disappeared. So, at this point I have

root@truenas[/home/admin]# lsblk -o name,size,type,partuuid
NAME         SIZE TYPE  PARTUUID
sda          3.6T disk
├─sda1         2G part  a4722b12-5b0f-476d-a704-7f384653ad21
│ └─md127      2G raid1
│   └─md127    2G crypt
└─sda2       3.6T part  78768fa5-a92a-4548-9557-0f42401ccccf
sdb          3.6T disk
├─sdb1         2G part  02b02130-2f0b-415f-ae8c-85d849261c33
└─sdb2       3.6T part  83c95c75-7fe2-4fec-9845-d974fc3f637f
sdc          3.6T disk
├─sdc1         2G part  9ee250cd-dccf-494c-aab3-1f2cc2fbd84c
│ └─md127      2G raid1
│   └─md127    2G crypt
└─sdc2       3.6T part  7148f5b2-5cf3-4f20-8f67-3e13d6350289
sdd        223.6G disk
├─sdd1         1M part  0bfe2fd4-c7c6-48f7-95c3-9ac97455137b
├─sdd2       512M part  c2e55895-c266-48e3-a249-8574d9502604
├─sdd3     207.1G part  a8ca35d1-8aef-4b47-8784-68848999cfe0
└─sdd4        16G part  ef366a6b-d331-4d0b-a08b-9c6e73abe626
sde          3.6T disk
├─sde1         2G part  9f7ed069-5e4e-4b7f-a806-5f9e40047067
│ └─md127      2G raid1
│   └─md127    2G crypt
└─sde2       3.6T part  af598827-99d2-46cc-bb20-e3821e34ca31
root@truenas[/home/admin]#

root@truenas[/home/admin]# zpool import
   pool: pool
     id: 13997916364281612929
  state: ONLINE
status: Some supported features are not enabled on the pool.
        (Note that they may be intentionally disabled if the
        'compatibility' property is set.)
 action: The pool can be imported using its name or numeric identifier, though
        some features will not be available without an explicit 'zpool upgrade'.
 config:

    pool                                      ONLINE
      raidz2-0                                ONLINE
        7148f5b2-5cf3-4f20-8f67-3e13d6350289  ONLINE
        78768fa5-a92a-4548-9557-0f42401ccccf  ONLINE
        af598827-99d2-46cc-bb20-e3821e34ca31  ONLINE
        83c95c75-7fe2-4fec-9845-d974fc3f637f  ONLINE

root@truenas[/home/admin]#

root@truenas[/home/admin]# zpool import pool
cannot import 'pool': insufficient replicas
        Destroy and re-create the pool from
        a backup source.

So, the heart of the matter is getting the data back. I have a backup, but I’ll only consider that as a last resort.

On the old forum I saw a solution like this:
sysctl vfs.zfs.max_missing_tvds=1
sysctl vfs.zfs.spa.load_verify_metadata=0
sysctl vfs.zfs.spa.load_verify_data=0

zpool import -f -o readonly=on DATA

but there is no /proc/sys/vfs… and so on.
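
From what I can tell, the Linux counterparts would be OpenZFS module parameters under /sys/module/zfs/parameters rather than sysctls. A minimal sketch of what the equivalents might look like (parameter names assumed from the OpenZFS docs; I am not sure these are safe to touch):

# Rough Linux equivalents of the FreeBSD sysctls above
echo 1 | sudo tee /sys/module/zfs/parameters/zfs_max_missing_tvds
echo 0 | sudo tee /sys/module/zfs/parameters/spa_load_verify_metadata
echo 0 | sudo tee /sys/module/zfs/parameters/spa_load_verify_data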

How can I solve this puzzle? Any help?

The good news is that the disks and partitions all seem lined up with the pool definitions.

Try the following and see whether any of them work; if not, note whether you get any different error messages (and if you get no message at all, that might indicate it worked):

  • sudo zpool import -d /dev/disk/by-partuuid -R /mnt pool
  • sudo zpool import -R /mnt -f pool
  • sudo zpool import -d /dev/disk/by-partuuid -R /mnt -f pool

P.S. Don’t try random stuff you read off web pages in case it makes things worse. The above commands will either work or they won’t, and they should not cause further damage.


@svag Welcome here, and thanks for giving a reasonably good description of your setup and issue. But please use the formatted text button </> when pasting terminal output: it makes things easier to follow.

cannot import 'pool': insufficient replicas
        Destroy and re-create the pool from
        a backup source.
root@truenas[/home/admin]#

All three commands give the same output.

btw

root@truenas[/home/admin]# zpool import -f -o readonly=on pool
cannot import 'pool': I/O error
Destroy and re-create the pool from
a backup source.
root@truenas[/home/admin]#

If you can, please supply the output from the following for all 4 disks in separate CODE tags:

zdb -l /dev/sda2
zdb -l /dev/sdb2
zdb -l /dev/sdc2
zdb -l /dev/sde2

With 2 disks’ worth of redundancy, it is unlikely that you have a totally corrupt pool. However, if one disk got detached before the errors on the other disks got worse, it is possible that ZFS can’t assemble the pool with common TXGs.

What I am looking for is the txg: field. If there are one or 2 disks that are different, you should be okay. Or if they are all different, but not too far apart, you can roll your pool’s transaction group back to a common one. But there is a limited number of rollbacks possible.
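
If it helps, here is a quick way to pull just that field from every label (a minimal sketch, assuming the same partition names as in your lsblk output):

# Print the active label txg for each pool member
for d in sda2 sdb2 sdc2 sde2; do
  printf '/dev/%s: ' "$d"
  sudo zdb -l "/dev/$d" | grep -m1 'txg:'
done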

Anyway, I await your update.

admin@truenas[~]$ zdb -l /dev/sda2
zsh: command not found: zdb
admin@truenas[~]$ sudo zdb -l /dev/sda2
[sudo] password for admin: 
------------------------------------
LABEL 0 
------------------------------------
    version: 5000
    name: 'pool'
    state: 0
    txg: 8847875
    pool_guid: 13997916364281612929
    errata: 0
    hostid: 1798924897
    hostname: 'truenas'
    top_guid: 11219338610695144943
    guid: 13217704900569720053
    vdev_children: 1
    vdev_tree:
        type: 'raidz'
        id: 0
        guid: 11219338610695144943
        nparity: 2
        metaslab_array: 134
        metaslab_shift: 34
        ashift: 12
        asize: 15994538950656
        is_log: 0
        create_txg: 4
        children[0]:
            type: 'disk'
            id: 0
            guid: 13217704900569720053
            path: '/dev/disk/by-partuuid/7148f5b2-5cf3-4f20-8f67-3e13d6350289'
            whole_disk: 0
            DTL: 23579
            create_txg: 4
            degraded: 1
            aux_state: 'err_exceeded'
        children[1]:
            type: 'disk'
            id: 1
            guid: 10540558960207961743
            path: '/dev/disk/by-partuuid/78768fa5-a92a-4548-9557-0f42401ccccf'
            whole_disk: 0
            DTL: 3554
            create_txg: 4
            degraded: 1
            aux_state: 'err_exceeded'
        children[2]:
            type: 'disk'
            id: 2
            guid: 9143061293094038254
            path: '/dev/disk/by-partuuid/af598827-99d2-46cc-bb20-e3821e34ca31'
            whole_disk: 0
            not_present: 1
            DTL: 3703
            create_txg: 4
            degraded: 1
        children[3]:
            type: 'disk'
            id: 3
            guid: 2924254637925524493
            path: '/dev/disk/by-partuuid/83c95c75-7fe2-4fec-9845-d974fc3f637f'
            whole_disk: 0
            DTL: 23578
            create_txg: 4
            degraded: 1
            aux_state: 'err_exceeded'
    features_for_read:
        com.delphix:hole_birth
        com.delphix:embedded_data
    labels = 0 1 2 3 
admin@truenas[~]$ 



admin@truenas[~]$ sudo zdb -l /dev/sdb2
------------------------------------
LABEL 0 
------------------------------------
    version: 5000
    name: 'pool'
    state: 0
    txg: 8847849
    pool_guid: 13997916364281612929
    errata: 0
    hostid: 1798924897
    hostname: 'truenas'
    top_guid: 11219338610695144943
    guid: 9143061293094038254
    vdev_children: 1
    vdev_tree:
        type: 'raidz'
        id: 0
        guid: 11219338610695144943
        nparity: 2
        metaslab_array: 134
        metaslab_shift: 34
        ashift: 12
        asize: 15994538950656
        is_log: 0
        create_txg: 4
        children[0]:
            type: 'disk'
            id: 0
            guid: 13217704900569720053
            path: '/dev/disk/by-partuuid/7148f5b2-5cf3-4f20-8f67-3e13d6350289'
            whole_disk: 0
            DTL: 23579
            create_txg: 4
            degraded: 1
            aux_state: 'err_exceeded'
        children[1]:
            type: 'disk'
            id: 1
            guid: 10540558960207961743
            path: '/dev/disk/by-partuuid/78768fa5-a92a-4548-9557-0f42401ccccf'
            whole_disk: 0
            DTL: 3554
            create_txg: 4
            degraded: 1
            aux_state: 'err_exceeded'
        children[2]:
            type: 'disk'
            id: 2
            guid: 9143061293094038254
            path: '/dev/disk/by-partuuid/af598827-99d2-46cc-bb20-e3821e34ca31'
            whole_disk: 0
            DTL: 3703
            create_txg: 4
            degraded: 1
            aux_state: 'err_exceeded'
        children[3]:
            type: 'disk'
            id: 3
            guid: 2924254637925524493
            path: '/dev/disk/by-partuuid/83c95c75-7fe2-4fec-9845-d974fc3f637f'
            whole_disk: 0
            not_present: 1
            DTL: 23578
            create_txg: 4
            degraded: 1
    features_for_read:
        com.delphix:hole_birth
        com.delphix:embedded_data
    labels = 0 1 2 3 
admin@truenas[~]$ 


admin@truenas[~]$ sudo zdb -l /dev/sdc2
------------------------------------
LABEL 0 
------------------------------------
    version: 5000
    name: 'pool'
    state: 0
    txg: 8847875
    pool_guid: 13997916364281612929
    errata: 0
    hostid: 1798924897
    hostname: 'truenas'
    top_guid: 11219338610695144943
    guid: 2924254637925524493
    vdev_children: 1
    vdev_tree:
        type: 'raidz'
        id: 0
        guid: 11219338610695144943
        nparity: 2
        metaslab_array: 134
        metaslab_shift: 34
        ashift: 12
        asize: 15994538950656
        is_log: 0
        create_txg: 4
        children[0]:
            type: 'disk'
            id: 0
            guid: 13217704900569720053
            path: '/dev/disk/by-partuuid/7148f5b2-5cf3-4f20-8f67-3e13d6350289'
            whole_disk: 0
            DTL: 23579
            create_txg: 4
            degraded: 1
            aux_state: 'err_exceeded'
        children[1]:
            type: 'disk'
            id: 1
            guid: 10540558960207961743
            path: '/dev/disk/by-partuuid/78768fa5-a92a-4548-9557-0f42401ccccf'
            whole_disk: 0
            DTL: 3554
            create_txg: 4
            degraded: 1
            aux_state: 'err_exceeded'
        children[2]:
            type: 'disk'
            id: 2
            guid: 9143061293094038254
            path: '/dev/disk/by-partuuid/af598827-99d2-46cc-bb20-e3821e34ca31'
            whole_disk: 0
            not_present: 1
            DTL: 3703
            create_txg: 4
            degraded: 1
        children[3]:
            type: 'disk'
            id: 3
            guid: 2924254637925524493
            path: '/dev/disk/by-partuuid/83c95c75-7fe2-4fec-9845-d974fc3f637f'
            whole_disk: 0
            DTL: 23578
            create_txg: 4
            degraded: 1
            aux_state: 'err_exceeded'
    features_for_read:
        com.delphix:hole_birth
        com.delphix:embedded_data
    labels = 0 1 2 3 
admin@truenas[~]$ 
admin@truenas[~]$ zdb -l /dev/sde2
zsh: command not found: zdb
admin@truenas[~]$ sudo zdb -l /dev/sde2
------------------------------------
LABEL 0 
------------------------------------
    version: 5000
    name: 'pool'
    state: 0
    txg: 8847875
    pool_guid: 13997916364281612929
    errata: 0
    hostid: 1798924897
    hostname: 'truenas'
    top_guid: 11219338610695144943
    guid: 10540558960207961743
    vdev_children: 1
    vdev_tree:
        type: 'raidz'
        id: 0
        guid: 11219338610695144943
        nparity: 2
        metaslab_array: 134
        metaslab_shift: 34
        ashift: 12
        asize: 15994538950656
        is_log: 0
        create_txg: 4
        children[0]:
            type: 'disk'
            id: 0
            guid: 13217704900569720053
            path: '/dev/disk/by-partuuid/7148f5b2-5cf3-4f20-8f67-3e13d6350289'
            whole_disk: 0
            DTL: 23579
            create_txg: 4
            degraded: 1
            aux_state: 'err_exceeded'
        children[1]:
            type: 'disk'
            id: 1
            guid: 10540558960207961743
            path: '/dev/disk/by-partuuid/78768fa5-a92a-4548-9557-0f42401ccccf'
            whole_disk: 0
            DTL: 3554
            create_txg: 4
            degraded: 1
            aux_state: 'err_exceeded'
        children[2]:
            type: 'disk'
            id: 2
            guid: 9143061293094038254
            path: '/dev/disk/by-partuuid/af598827-99d2-46cc-bb20-e3821e34ca31'
            whole_disk: 0
            not_present: 1
            DTL: 3703
            create_txg: 4
            degraded: 1
        children[3]:
            type: 'disk'
            id: 3
            guid: 2924254637925524493
            path: '/dev/disk/by-partuuid/83c95c75-7fe2-4fec-9845-d974fc3f637f'
            whole_disk: 0
            DTL: 23578
            create_txg: 4
            degraded: 1
            aux_state: 'err_exceeded'
    features_for_read:
        com.delphix:hole_birth
        com.delphix:embedded_data
    labels = 0 1 2 3 
admin@truenas[~]$ 

I don’t know what others think, but the labels appear good and consistent. What is worrying is that three of the partitions are shown in the labels as aux_state: 'err_exceeded'.

I guess we need to see the SMART attributes. Please run the following commands and post the output:

  • sudo smartctl -x /dev/sda
  • sudo smartctl -x /dev/sdb
  • sudo smartctl -x /dev/sdc
  • sudo smartctl -x /dev/sdd
  • sudo smartctl -x /dev/sde

Thanks.

Yes, it looks like 1 disk has a last TXG (ZFS Transaction Group number) of 8847849, different from the others, which are at 8847875.

This means you can’t import the pool as is. Yet, since it is a redundant pool, you can survive without that 1 disk.

From what I can tell, you have 2 choices:

  1. Disconnect or remove “sdb” and try importing the pool again. It should complain about the missing disk, but you should be able to import it, potentially with additional options to the zpool import command. This would mean zero data lost.
  2. Try rolling back the pool to the latest TXG common to all disks, 8847849. This would mean some of the most recently written data could be lost.

But, I am not sure option 2 will work because it is possible that the 3 good disks have gone past the ring buffer size for TXGs.
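
For reference, hedged sketches of what the two import attempts might look like (read-only for safety while testing; note that -T is an undocumented option in some builds, so treat option 2 as a last resort):

# Option 1: with the odd disk physically disconnected
sudo zpool import -R /mnt -f pool

# Option 2: read-only rewind to the common TXG
sudo zpool import -o readonly=on -f -T 8847849 -R /mnt pool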

Someone else may have additional suggestions, so you might wait and see what others say first.

That was a good spot by @arwen which I had missed.

I think that their suggestion to remove the disk with the lower txg is a sensible one. As they said, you will end up with a temporarily degraded pool; however, assuming that sdb is not failing, you should be able to resilver to it.

Before trying this import, I think it would be useful to get the smart attributes anyway so we can see what the state of the drives are and whether they have any other problems that might cause things to go wrong.

(The difference in TXG numbers is 26. With ashift=12 I think you get 32 uberblocks, in which case a recovery to txg 8847849 should hopefully be possible. But I still think an attempt to recover as a degraded pool would be better, as @arwen suggested.)
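
If someone wants to check what is actually left in the ring, a hedged sketch: zdb can dump the uberblocks from a label, and each uberblock carries the txg it belongs to:

# List labels plus uberblocks; every txg still present shows up here
sudo zdb -ul /dev/sdc2 | grep txg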

So, according to the first choice I need to
zpool offline pool gptid/83c95c75-7fe2-4fec-9845-d974fc3f637f
OR
just disconnect it from the bay physically.
Correct?

I’ll try the “battle spell” first, if you don’t mind )

Sorry - but I have no idea what that means.

But clearly something went wrong when writing to /dev/sdb and it might be sensible to see whether there is a hardware issue with this drive, and check that there aren’t hardware issues with the other drives.

I doubt you can offline a disk on an exported pool.

Yes, you would have to physically disconnect the drive from either data or power.
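
To make sure the right physical drive gets pulled, it may help to map the sdX names to serial numbers first (a minimal sketch; lsblk has a SERIAL column, and smartctl -i prints the serial too):

lsblk -o NAME,SIZE,SERIAL
sudo smartctl -i /dev/sdb | grep -i serial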


Finally I found out which sdX contains the wrong txg. The sdX names getting reassigned every time is quite annoying.
So at this point I’ve got this

root@truenas[/home/admin]# lsblk -o name,size,type,partuuid
NAME       SIZE TYPE  PARTUUID
sda        3.6T disk  
├─sda1       2G part  a4722b12-5b0f-476d-a704-7f384653ad21
└─sda2     3.6T part  78768fa5-a92a-4548-9557-0f42401ccccf
sdb      223.6G disk  
├─sdb1       1M part  0bfe2fd4-c7c6-48f7-95c3-9ac97455137b
├─sdb2     512M part  c2e55895-c266-48e3-a249-8574d9502604
├─sdb3   207.1G part  a8ca35d1-8aef-4b47-8784-68848999cfe0
└─sdb4      16G part  ef366a6b-d331-4d0b-a08b-9c6e73abe626
  └─sdb4    16G crypt 
sdc        3.6T disk  
├─sdc1       2G part  9ee250cd-dccf-494c-aab3-1f2cc2fbd84c
└─sdc2     3.6T part  7148f5b2-5cf3-4f20-8f67-3e13d6350289
sdd        3.6T disk  
├─sdd1       2G part  02b02130-2f0b-415f-ae8c-85d849261c33
└─sdd2     3.6T part  83c95c75-7fe2-4fec-9845-d974fc3f637f
root@truenas[/home/admin]# 
root@truenas[/home/admin]# zpool import
   pool: pool
     id: 13997916364281612929
  state: DEGRADED
status: One or more devices contains corrupted data.
 action: The pool can be imported despite missing or damaged devices.  The
        fault tolerance of the pool may be compromised if imported.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-4J
 config:

        pool                                      DEGRADED
          raidz2-0                                DEGRADED
            7148f5b2-5cf3-4f20-8f67-3e13d6350289  ONLINE
            78768fa5-a92a-4548-9557-0f42401ccccf  ONLINE
            9143061293094038254                   UNAVAIL
            83c95c75-7fe2-4fec-9845-d974fc3f637f  ONLINE
root@truenas[/home/admin]# 
root@truenas[/home/admin]# 
root@truenas[/home/admin]# 
root@truenas[/home/admin]# zpool status -v
  pool: boot-pool
 state: ONLINE
  scan: scrub repaired 0B in 00:00:19 with 0 errors on Fri May 30 03:45:20 2025
config:

        NAME        STATE     READ WRITE CKSUM
        boot-pool   ONLINE       0     0     0
          sdb3      ONLINE       0     0     0

errors: No known data errors
root@truenas[/home/admin]# 

but I still get this

root@truenas[/home/admin]# 
root@truenas[/home/admin]# zpool import pool
cannot import 'pool': insufficient replicas
        Destroy and re-create the pool from
        a backup source.
root@truenas[/home/admin]# 

The txg part:

root@truenas[/home/admin]# zdb -l /dev/sda2       
------------------------------------
LABEL 0 
------------------------------------
    version: 5000
    name: 'pool'
    state: 0
    txg: 8847875
    pool_guid: 13997916364281612929
    errata: 0
    hostid: 1798924897
    hostname: 'truenas'
    top_guid: 11219338610695144943
    guid: 10540558960207961743
    vdev_children: 1
    vdev_tree:
        type: 'raidz'
        id: 0
        guid: 11219338610695144943
        nparity: 2
        metaslab_array: 134
        metaslab_shift: 34
        ashift: 12
        asize: 15994538950656
        is_log: 0
        create_txg: 4
        children[0]:
            type: 'disk'
            id: 0
            guid: 13217704900569720053
            path: '/dev/disk/by-partuuid/7148f5b2-5cf3-4f20-8f67-3e13d6350289'
            whole_disk: 0
            DTL: 23579
            create_txg: 4
            degraded: 1
            aux_state: 'err_exceeded'
        children[1]:
            type: 'disk'
            id: 1
            guid: 10540558960207961743
            path: '/dev/disk/by-partuuid/78768fa5-a92a-4548-9557-0f42401ccccf'
            whole_disk: 0
            DTL: 3554
            create_txg: 4
            degraded: 1
            aux_state: 'err_exceeded'
        children[2]:
            type: 'disk'
            id: 2
            guid: 9143061293094038254
            path: '/dev/disk/by-partuuid/af598827-99d2-46cc-bb20-e3821e34ca31'
            whole_disk: 0
            not_present: 1
            DTL: 3703
            create_txg: 4
            degraded: 1
        children[3]:
            type: 'disk'
            id: 3
            guid: 2924254637925524493
            path: '/dev/disk/by-partuuid/83c95c75-7fe2-4fec-9845-d974fc3f637f'
            whole_disk: 0
            DTL: 23578
            create_txg: 4
            degraded: 1
            aux_state: 'err_exceeded'
    features_for_read:
        com.delphix:hole_birth
        com.delphix:embedded_data
    labels = 0 1 2 3 
root@truenas[/home/admin]# zdb -l /dev/sdc2 
------------------------------------
LABEL 0 
------------------------------------
    version: 5000
    name: 'pool'
    state: 0
    txg: 8847875
    pool_guid: 13997916364281612929
    errata: 0
    hostid: 1798924897
    hostname: 'truenas'
    top_guid: 11219338610695144943
    guid: 13217704900569720053
    vdev_children: 1
    vdev_tree:
        type: 'raidz'
        id: 0
        guid: 11219338610695144943
        nparity: 2
        metaslab_array: 134
        metaslab_shift: 34
        ashift: 12
        asize: 15994538950656
        is_log: 0
        create_txg: 4
        children[0]:
            type: 'disk'
            id: 0
            guid: 13217704900569720053
            path: '/dev/disk/by-partuuid/7148f5b2-5cf3-4f20-8f67-3e13d6350289'
            whole_disk: 0
            DTL: 23579
            create_txg: 4
            degraded: 1
            aux_state: 'err_exceeded'
        children[1]:
            type: 'disk'
            id: 1
            guid: 10540558960207961743
            path: '/dev/disk/by-partuuid/78768fa5-a92a-4548-9557-0f42401ccccf'
            whole_disk: 0
            DTL: 3554
            create_txg: 4
            degraded: 1
            aux_state: 'err_exceeded'
        children[2]:
            type: 'disk'
            id: 2
            guid: 9143061293094038254
            path: '/dev/disk/by-partuuid/af598827-99d2-46cc-bb20-e3821e34ca31'
            whole_disk: 0
            not_present: 1
            DTL: 3703
            create_txg: 4
            degraded: 1
        children[3]:
            type: 'disk'
            id: 3
            guid: 2924254637925524493
            path: '/dev/disk/by-partuuid/83c95c75-7fe2-4fec-9845-d974fc3f637f'
            whole_disk: 0
            DTL: 23578
            create_txg: 4
            degraded: 1
            aux_state: 'err_exceeded'
    features_for_read:
        com.delphix:hole_birth
        com.delphix:embedded_data
    labels = 0 1 2 3 
root@truenas[/home/admin]# 
root@truenas[/home/admin]# zdb -l /dev/sdd2
------------------------------------
LABEL 0 
------------------------------------
    version: 5000
    name: 'pool'
    state: 0
    txg: 8847875
    pool_guid: 13997916364281612929
    errata: 0
    hostid: 1798924897
    hostname: 'truenas'
    top_guid: 11219338610695144943
    guid: 2924254637925524493
    vdev_children: 1
    vdev_tree:
        type: 'raidz'
        id: 0
        guid: 11219338610695144943
        nparity: 2
        metaslab_array: 134
        metaslab_shift: 34
        ashift: 12
        asize: 15994538950656
        is_log: 0
        create_txg: 4
        children[0]:
            type: 'disk'
            id: 0
            guid: 13217704900569720053
            path: '/dev/disk/by-partuuid/7148f5b2-5cf3-4f20-8f67-3e13d6350289'
            whole_disk: 0
            DTL: 23579
            create_txg: 4
            degraded: 1
            aux_state: 'err_exceeded'
        children[1]:
            type: 'disk'
            id: 1
            guid: 10540558960207961743
            path: '/dev/disk/by-partuuid/78768fa5-a92a-4548-9557-0f42401ccccf'
            whole_disk: 0
            DTL: 3554
            create_txg: 4
            degraded: 1
            aux_state: 'err_exceeded'
        children[2]:
            type: 'disk'
            id: 2
            guid: 9143061293094038254
            path: '/dev/disk/by-partuuid/af598827-99d2-46cc-bb20-e3821e34ca31'
            whole_disk: 0
            not_present: 1
            DTL: 3703
            create_txg: 4
            degraded: 1
        children[3]:
            type: 'disk'
            id: 3
            guid: 2924254637925524493
            path: '/dev/disk/by-partuuid/83c95c75-7fe2-4fec-9845-d974fc3f637f'
            whole_disk: 0
            DTL: 23578
            create_txg: 4
            degraded: 1
            aux_state: 'err_exceeded'
    features_for_read:
        com.delphix:hole_birth
        com.delphix:embedded_data
    labels = 0 1 2 3 

About SMART:

root@truenas[/home/admin]# sudo smartctl -x /dev/sda
smartctl 7.4 2023-08-01 r5530 [x86_64-linux-6.6.32-production+truenas] (local build)
Copyright (C) 2002-23, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Family:     Seagate BarraCuda 3.5 (SMR)
Device Model:     ST4000DM004-2CV104
Serial Number:    ZFN4V3XE
LU WWN Device Id: 5 000c50 0e721b60f
Firmware Version: 0001
User Capacity:    4,000,787,030,016 bytes [4.00 TB]
Sector Sizes:     512 bytes logical, 4096 bytes physical
Rotation Rate:    5425 rpm
Form Factor:      3.5 inches
Device is:        In smartctl database 7.3/5528
ATA Version is:   ACS-3 T13/2161-D revision 5
SATA Version is:  SATA 3.1, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is:    Sun Jun  8 00:02:32 2025 MSK
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
AAM feature is:   Unavailable
APM feature is:   Unavailable
Rd look-ahead is: Enabled
Write cache is:   Enabled
DSN feature is:   Unavailable
ATA Security is:  Disabled, frozen [SEC2]
Wt Cache Reorder: Unavailable

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

General SMART Values:
Offline data collection status:  (0x00) Offline data collection activity
                                        was never started.
                                        Auto Offline Data Collection: Disabled.
Self-test execution status:      (   0) The previous self-test routine completed
                                        without error or no self-test has ever 
                                        been run.
Total time to complete Offline 
data collection:                (    0) seconds.
Offline data collection
capabilities:                    (0x73) SMART execute Offline immediate.
                                        Auto Offline data collection on/off support.
                                        Suspend Offline collection upon new
                                        command.
                                        No Offline surface scan supported.
                                        Self-test supported.
                                        Conveyance Self-test supported.
                                        Selective Self-test supported.
SMART capabilities:            (0x0003) Saves SMART data before entering
                                        power-saving mode.
                                        Supports SMART auto save timer.
Error logging capability:        (0x01) Error logging supported.
                                        General Purpose Logging supported.
Short self-test routine 
recommended polling time:        (   1) minutes.
Extended self-test routine
recommended polling time:        ( 493) minutes.
Conveyance self-test routine
recommended polling time:        (   2) minutes.
SCT capabilities:              (0x30a5) SCT Status supported.
                                        SCT Data Table supported.

SMART Attributes Data Structure revision number: 10
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAGS    VALUE WORST THRESH FAIL RAW_VALUE
  1 Raw_Read_Error_Rate     POSR--   064   064   006    -    2510363
  3 Spin_Up_Time            PO----   096   096   000    -    0
  4 Start_Stop_Count        -O--CK   081   081   020    -    19479
  5 Reallocated_Sector_Ct   PO--CK   100   100   010    -    0
  7 Seek_Error_Rate         POSR--   093   060   045    -    2054917245
  9 Power_On_Hours          -O--CK   085   085   000    -    13631h+27m+53.638s
 10 Spin_Retry_Count        PO--C-   100   100   097    -    0
 12 Power_Cycle_Count       -O--CK   081   081   020    -    19479
183 Runtime_Bad_Block       -O--CK   100   100   000    -    0
184 End-to-End_Error        -O--CK   100   100   099    -    0
187 Reported_Uncorrect      -O--CK   100   100   000    -    0
188 Command_Timeout         -O--CK   100   100   000    -    0 0 0
189 High_Fly_Writes         -O-RCK   100   100   000    -    0
190 Airflow_Temperature_Cel -O---K   065   050   040    -    35 (Min/Max 33/35)
191 G-Sense_Error_Rate      -O--CK   100   100   000    -    0
192 Power-Off_Retract_Count -O--CK   091   091   000    -    19878
193 Load_Cycle_Count        -O--CK   090   090   000    -    20124
194 Temperature_Celsius     -O---K   035   050   000    -    35 (0 14 0 0 0)
195 Hardware_ECC_Recovered  -O-RC-   064   064   000    -    2510363
197 Current_Pending_Sector  -O--C-   100   100   000    -    0
198 Offline_Uncorrectable   ----C-   100   100   000    -    0
199 UDMA_CRC_Error_Count    -OSRCK   200   200   000    -    0
240 Head_Flying_Hours       ------   100   253   000    -    13456h+31m+40.621s
241 Total_LBAs_Written      ------   100   253   000    -    23712020194
242 Total_LBAs_Read         ------   100   253   000    -    85536509067
                            ||||||_ K auto-keep
                            |||||__ C event count
                            ||||___ R error rate
                            |||____ S speed/performance
                            ||_____ O updated online
                            |______ P prefailure warning

General Purpose Log Directory Version 1
SMART           Log Directory Version 1 [multi-sector log support]
Address    Access  R/W   Size  Description
0x00       GPL,SL  R/O      1  Log Directory
0x01           SL  R/O      1  Summary SMART error log
0x02           SL  R/O      5  Comprehensive SMART error log
0x03       GPL     R/O      5  Ext. Comprehensive SMART error log
0x04       GPL,SL  R/O      8  Device Statistics log
0x06           SL  R/O      1  SMART self-test log
0x07       GPL     R/O      1  Extended self-test log
0x08       GPL     R/O      2  Power Conditions log
0x09           SL  R/W      1  Selective self-test log
0x0c       GPL     R/O   2048  Pending Defects log
0x10       GPL     R/O      1  NCQ Command Error log
0x11       GPL     R/O      1  SATA Phy Event Counters log
0x21       GPL     R/O      1  Write stream error log
0x22       GPL     R/O      1  Read stream error log
0x24       GPL     R/O    512  Current Device Internal Status Data log
0x30       GPL,SL  R/O      9  IDENTIFY DEVICE data log
0x80-0x9f  GPL,SL  R/W     16  Host vendor specific log
0xa1       GPL,SL  VS      24  Device vendor specific log
0xa2       GPL     VS    8160  Device vendor specific log
0xa6       GPL     VS     192  Device vendor specific log
0xa8-0xa9  GPL,SL  VS     136  Device vendor specific log
0xab       GPL     VS       1  Device vendor specific log
0xb0       GPL     VS    9048  Device vendor specific log
0xbd       GPL     VS       8  Device vendor specific log
0xbe-0xbf  GPL     VS   65535  Device vendor specific log
0xc0       GPL,SL  VS       1  Device vendor specific log
0xc1       GPL,SL  VS      16  Device vendor specific log
0xc3       GPL,SL  VS       8  Device vendor specific log
0xc4       GPL,SL  VS      24  Device vendor specific log
0xd1       GPL     VS     264  Device vendor specific log
0xd3       GPL     VS    1920  Device vendor specific log
0xe0       GPL,SL  R/W      1  SCT Command/Status
0xe1       GPL,SL  R/W      1  SCT Data Transfer

SMART Extended Comprehensive Error Log Version: 1 (5 sectors)
No Errors Logged

SMART Extended Self-test Log Version: 1 (1 sectors)
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Short offline       Completed without error       00%     13626         -
# 2  Short offline       Interrupted (host reset)      00%     13611         -
# 3  Short offline       Completed without error       00%     13587         -
# 4  Short offline       Interrupted (host reset)      00%     13564         -
# 5  Short offline       Completed without error       00%     13540         -
# 6  Short offline       Completed without error       00%     13516         -
# 7  Short offline       Completed without error       00%     13492         -
# 8  Short offline       Completed without error       00%     13468         -
# 9  Short offline       Completed without error       00%     13444         -
#10  Short offline       Completed without error       00%     13420         -
#11  Short offline       Interrupted (host reset)      00%     13396         -
#12  Short offline       Completed without error       00%     13372         -
#13  Short offline       Completed without error       00%     13348         -
#14  Short offline       Completed without error       00%     13324         -
#15  Short offline       Completed without error       00%     13300         -
#16  Short offline       Completed without error       00%     13276         -
#17  Short offline       Completed without error       00%     13252         -
#18  Short offline       Completed without error       00%     13228         -
#19  Short offline       Completed without error       00%     13204         -

SMART Selective self-test log data structure revision number 1
 SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
    1        0        0  Not_testing
    2        0        0  Not_testing
    3        0        0  Not_testing
    4        0        0  Not_testing
    5        0        0  Not_testing
Selective self-test flags (0x0):
  After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.

SCT Status Version:                  3
SCT Version (vendor specific):       522 (0x020a)
Device State:                        Active (0)
Current Temperature:                    35 Celsius
Power Cycle Min/Max Temperature:     33/35 Celsius
Lifetime    Min/Max Temperature:     14/50 Celsius
Under/Over Temperature Limit Count:   0/0

SCT Temperature History Version:     2
Temperature Sampling Period:         3 minutes
Temperature Logging Interval:        59 minutes
Min/Max recommended Temperature:     14/55 Celsius
Min/Max Temperature Limit:           10/60 Celsius
Temperature History Size (Index):    128 (80)

Index    Estimated Time   Temperature Celsius
  81    2025-06-02 19:06    41  **********************
  82    2025-06-02 20:05     ?  -
  83    2025-06-02 21:04    41  **********************
  84    2025-06-02 22:03     ?  -
  85    2025-06-02 23:02    41  **********************
  86    2025-06-03 00:01     ?  -
  87    2025-06-03 01:00    41  **********************
  88    2025-06-03 01:59     ?  -
  89    2025-06-03 02:58    41  **********************
  90    2025-06-03 03:57     ?  -
  91    2025-06-03 04:56    41  **********************
  92    2025-06-03 05:55     ?  -
  93    2025-06-03 06:54    41  **********************
  94    2025-06-03 07:53     ?  -
  95    2025-06-03 08:52    41  **********************
  96    2025-06-03 09:51     ?  -
  97    2025-06-03 10:50    41  **********************
  98    2025-06-03 11:49     ?  -
  99    2025-06-03 12:48    38  *******************
 100    2025-06-03 13:47     ?  -
 101    2025-06-03 14:46    38  *******************
 102    2025-06-03 15:45     ?  -
 103    2025-06-03 16:44    38  *******************
 104    2025-06-03 17:43     ?  -
 105    2025-06-03 18:42    38  *******************
 106    2025-06-03 19:41     ?  -
 107    2025-06-03 20:40    38  *******************
 108    2025-06-03 21:39     ?  -
 109    2025-06-03 22:38    38  *******************
 110    2025-06-03 23:37     ?  -
 111    2025-06-04 00:36    38  *******************
 112    2025-06-04 01:35     ?  -
 113    2025-06-04 02:34    38  *******************
 114    2025-06-04 03:33     ?  -
 115    2025-06-04 04:32    38  *******************
 116    2025-06-04 05:31     ?  -
 117    2025-06-04 06:30    38  *******************
 118    2025-06-04 07:29     ?  -
 119    2025-06-04 08:28    38  *******************
 120    2025-06-04 09:27     ?  -
 121    2025-06-04 10:26    38  *******************
 122    2025-06-04 11:25     ?  -
 123    2025-06-04 12:24    38  *******************
 124    2025-06-04 13:23     ?  -
 125    2025-06-04 14:22    38  *******************
 126    2025-06-04 15:21     ?  -
 127    2025-06-04 16:20    27  ********
   0    2025-06-04 17:19     ?  -
   1    2025-06-04 18:18    28  *********
   2    2025-06-04 19:17     ?  -
   3    2025-06-04 20:16    30  ***********
   4    2025-06-04 21:15     ?  -
   5    2025-06-04 22:14    30  ***********
   6    2025-06-04 23:13     ?  -
   7    2025-06-05 00:12    30  ***********
   8    2025-06-05 01:11     ?  -
   9    2025-06-05 02:10    30  ***********
  10    2025-06-05 03:09     ?  -
  11    2025-06-05 04:08    30  ***********
  12    2025-06-05 05:07     ?  -
  13    2025-06-05 06:06    30  ***********
  14    2025-06-05 07:05     ?  -
  15    2025-06-05 08:04    30  ***********
  16    2025-06-05 09:03     ?  -
  17    2025-06-05 10:02    30  ***********
  18    2025-06-05 11:01     ?  -
  19    2025-06-05 12:00    30  ***********
  20    2025-06-05 12:59     ?  -
  21    2025-06-05 13:58    31  ************
  22    2025-06-05 14:57     ?  -
  23    2025-06-05 15:56    31  ************
  24    2025-06-05 16:55     ?  -
  25    2025-06-05 17:54    31  ************
  26    2025-06-05 18:53     ?  -
  27    2025-06-05 19:52    31  ************
  28    2025-06-05 20:51     ?  -
  29    2025-06-05 21:50    31  ************
  30    2025-06-05 22:49     ?  -
  31    2025-06-05 23:48    31  ************
  32    2025-06-06 00:47     ?  -
  33    2025-06-06 01:46    31  ************
  34    2025-06-06 02:45     ?  -
  35    2025-06-06 03:44    32  *************
  36    2025-06-06 04:43     ?  -
  37    2025-06-06 05:42    32  *************
  38    2025-06-06 06:41     ?  -
  39    2025-06-06 07:40    32  *************
  40    2025-06-06 08:39     ?  -
  41    2025-06-06 09:38    32  *************
  42    2025-06-06 10:37     ?  -
  43    2025-06-06 11:36    33  **************
  44    2025-06-06 12:35     ?  -
  45    2025-06-06 13:34    33  **************
  46    2025-06-06 14:33     ?  -
  47    2025-06-06 15:32    33  **************
  48    2025-06-06 16:31     ?  -
  49    2025-06-06 17:30    33  **************
  50    2025-06-06 18:29     ?  -
  51    2025-06-06 19:28    33  **************
  52    2025-06-06 20:27     ?  -
  53    2025-06-06 21:26    33  **************
  54    2025-06-06 22:25     ?  -
  55    2025-06-06 23:24    33  **************
  56    2025-06-07 00:23     ?  -
  57    2025-06-07 01:22    30  ***********
  58    2025-06-07 02:21     ?  -
  59    2025-06-07 03:20    33  **************
  60    2025-06-07 04:19     ?  -
  61    2025-06-07 05:18    36  *****************
  62    2025-06-07 06:17    37  ******************
  63    2025-06-07 07:16     ?  -
  64    2025-06-07 08:15    37  ******************
  65    2025-06-07 09:14     ?  -
  66    2025-06-07 10:13    34  ***************
  67    2025-06-07 11:12     ?  -
  68    2025-06-07 12:11    26  *******
  69    2025-06-07 13:10     ?  -
  70    2025-06-07 14:09    28  *********
  71    2025-06-07 15:08     ?  -
  72    2025-06-07 16:07    30  ***********
  73    2025-06-07 17:06     ?  -
  74    2025-06-07 18:05    35  ****************
  75    2025-06-07 19:04     ?  -
  76    2025-06-07 20:03    35  ****************
  77    2025-06-07 21:02     ?  -
  78    2025-06-07 22:01    32  *************
  79    2025-06-07 23:00     ?  -
  80    2025-06-07 23:59    33  **************

SCT Error Recovery Control command not supported

Device Statistics (GP Log 0x04)
Page  Offset Size        Value Flags Description
0x01  =====  =               =  ===  == General Statistics (rev 1) ==
0x01  0x008  4           19479  ---  Lifetime Power-On Resets
0x01  0x010  4           13631  ---  Power-on Hours
0x01  0x018  6     23678801569  ---  Logical Sectors Written
0x01  0x020  6       597691496  ---  Number of Write Commands
0x01  0x028  6     82067919016  ---  Logical Sectors Read
0x01  0x030  6       278521266  ---  Number of Read Commands
0x01  0x038  6               -  ---  Date and Time TimeStamp
0x03  =====  =               =  ===  == Rotating Media Statistics (rev 1) ==
0x03  0x008  4           13488  ---  Spindle Motor Power-on Hours
0x03  0x010  4           13446  ---  Head Flying Hours
0x03  0x018  4           20124  ---  Head Load Events
0x03  0x020  4               0  ---  Number of Reallocated Logical Sectors
0x03  0x028  4               0  ---  Read Recovery Attempts
0x03  0x030  4               0  ---  Number of Mechanical Start Failures
0x03  0x038  4               0  ---  Number of Realloc. Candidate Logical Sectors
0x03  0x040  4           19878  ---  Number of High Priority Unload Events
0x04  =====  =               =  ===  == General Errors Statistics (rev 1) ==
0x04  0x008  4               0  ---  Number of Reported Uncorrectable Errors
0x04  0x010  4               0  ---  Resets Between Cmd Acceptance and Completion
0x05  =====  =               =  ===  == Temperature Statistics (rev 1) ==
0x05  0x008  1              35  ---  Current Temperature
0x05  0x010  1              39  ---  Average Short Term Temperature
0x05  0x018  1              38  ---  Average Long Term Temperature
0x05  0x020  1              50  ---  Highest Temperature
0x05  0x028  1               0  ---  Lowest Temperature
0x05  0x030  1              49  ---  Highest Average Short Term Temperature
0x05  0x038  1              33  ---  Lowest Average Short Term Temperature
0x05  0x040  1              45  ---  Highest Average Long Term Temperature
0x05  0x048  1              35  ---  Lowest Average Long Term Temperature
0x05  0x050  4               0  ---  Time in Over-Temperature
0x05  0x058  1              60  ---  Specified Maximum Operating Temperature
0x05  0x060  4               0  ---  Time in Under-Temperature
0x05  0x068  1               0  ---  Specified Minimum Operating Temperature
0x06  =====  =               =  ===  == Transport Statistics (rev 1) ==
0x06  0x008  4             121  ---  Number of Hardware Resets
0x06  0x010  4              12  ---  Number of ASR Events
0x06  0x018  4               0  ---  Number of Interface CRC Errors
                                |||_ C monitored condition met
                                ||__ D supports DSN
                                |___ N normalized value

Pending Defects log (GP Log 0x0c)
No Defects Logged

SATA Phy Event Counters (GP Log 0x11)
ID      Size     Value  Description
0x000a  2            4  Device-to-host register FISes sent due to a COMRESET
0x0001  2            0  Command failed due to ICRC error
0x0003  2            0  R_ERR response for device-to-host data FIS
0x0004  2            0  R_ERR response for host-to-device data FIS
0x0006  2            0  R_ERR response for device-to-host non-data FIS
0x0007  2            0  R_ERR response for host-to-device non-data FIS

Seagate FARM log (GP Log 0xa6) supported [try: -l farm]

The others are quite similar; I can’t post everything at once because of the character limit. Maybe you need a specific part of the output?

Tried this:

admin@truenas[~]$ sudo zpool import -F pool
cannot import 'pool': insufficient replicas
        Destroy and re-create the pool from
        a backup source.
admin@truenas[~]$ sudo zpool import -f pool
cannot import 'pool': insufficient replicas
        Destroy and re-create the pool from
        a backup source.
admin@truenas[~]$ sudo zpool import -f -m pool
cannot import 'pool': insufficient replicas
        Destroy and re-create the pool from
        a backup source.
admin@truenas[~]$ 

Not sure whether using -T/-X or maybe -N will help.

This is an issue. You should look into replacing these with CMR drives, or rebuilding the pool on new drives.
Even though it may not be the (sole) reason for your trouble, as it seems that 24.10.2 and/or 25.04 are perfectly capable of eating pools without SMR drives being involved.

But first, your data!
Check before trying potentially dangerous commands.
Since -f and -F have already failed, your next attempts would be -X, or -T 8847849 with the fourth drive.
Try sudo zpool import -FXn pool in a tmux session, as this could take a long time to return… and the desirable result is actually nothing (no error). This should be safe due to the -n option. Wait for confirmation by @Arwen or @HoneyBadger before attempting real recovery (without -n but with -R /mnt).
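
A minimal sketch of that tmux workflow, so a dropped SSH or web session does not kill the attempt:

tmux new -s recovery           # start a named session
sudo zpool import -FXn pool    # dry run; no output (no error) is the good result
# detach with Ctrl-b then d; after a disconnect, reattach with:
tmux attach -t recovery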

I agree SMR drives are not good, but we should focus on getting the pool back online first. However, I would personally advise against trying to resilver onto the same SMR drive if you get the pool online with one drive extracted.

We only have SMART data for one drive, and it isn’t showing any major defects but there are some oddities:

  1. 1.5 start/stops PER HOUR???!!! Without APM??? Could be an aspect of this being a drive with firmware designed for ad-hoc desktop usage rather than NAS usage.

  2. SMART short tests every 24 hours (too frequent?) but no long tests at all? (See the sketch after this list.)
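
If you want to kick off a long test manually once the recovery attempts are finished (a minimal sketch; the drive above estimates an extended test at about 493 minutes):

sudo smartctl -t long /dev/sda
# check progress and the result later with:
sudo smartctl -l selftest /dev/sda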

@etorix’s advice sounds right to me. Try doing a trial import with -FXn and see what the result is. But don’t try it for real until you have posted the results of the trial and the real experts have taken a look.

It took about 10 min. During this time the ethernet was disconnected several times and the web UI was reset, so I don’t have any output.