Pool is Offline, not found

Hello:

I have lost my only pool and hope someone can help me recover the data.

Equipment
OS = 24.10-RC2
MBO = MSI B550 GEN3 Gaming Motherboard
CPU = AMD Ryzen 7 5700G
RAM = 32 GB non-ECC DDR4, 2x 16 GB, 3200 MT/s, CL16-18-18-38
Power Supply = Corsair RM750e, 750 watt
Boot Drive = Crucial P3 1TB PCIe NVMe M.2 SSD plugged into MBO slot
Array Drives = 5x WD Red Pro WD6003FFBX 6TB 7200 RPM 256MB Cache SATA 6.0Gb/s 3.5"
NIC = Built-in NIC on MBO and Dual Port PCIe x4 Intel 82576
I built the PC about a week ago and installed all 5 WD drives in one pool named “home”. All was working fine. I installed DDNS Updater, Web File Manager, and Frigate (I had not yet gotten Frigate working). I shut down the unit to install the 2-port NIC in order to set up a bonded LAGG port, and had to reboot several times to get the bond working properly. Once the NIC was working, I noticed other things were not working and ran into a problem with storage.

GUI
According to the GUI, the pool panel named home had an offline VDEV, and the ZFS Health panel said the Pool Status was offline. The Disk Health was good. At the top it said, “Disks with exported pools 6”. If you select “Add to Pool” and “Existing Pool”, the drop-down menu for Existing Pool is empty.

CLI
zpool import says “no pools available to import”
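For reference, zpool import can also be told which device directory to scan; a minimal sketch, assuming the standard Linux by-id links (this is not something I had tried at this point):

sudo zpool import                      # default scan of the device nodes
sudo zpool import -d /dev/disk/by-id   # scan the persistent by-id links instead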

lsblk says
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
sda 8:0 0 5.5T 0 disk
└─sda1 8:1 0 5.5T 0 part
sdb 8:16 0 5.5T 0 disk
└─sdb1 8:17 0 5.5T 0 part
sdc 8:32 0 5.5T 0 disk
└─sdc1 8:33 0 5.5T 0 part
sdd 8:48 0 5.5T 0 disk
└─sdd1 8:49 0 5.5T 0 part
sde 8:64 0 5.5T 0 disk
└─sde1 8:65 0 5.5T 0 part
nvme0n1 259:0 0 931.5G 0 disk
├─nvme0n1p1 259:1 0 1M 0 part
├─nvme0n1p2 259:2 0 512M 0 part
└─nvme0n1p3 259:3 0 931G 0 part

It seems odd to me that the GUI still shows the pool named home, albeit offline, and yet the pool is not found in the dropdown or by the zpool command.
What can I do?

Thanks for any help.

Ricke

What does sudo zpool status -v say?

When posting an output, please use the code brackets “</>” above. This maintains the formatting of the output.

Sorry for mistakenly thinking the output of lsblk would be readable without the HTML tags. Here it is again with tags:

</>truenas_admin@truenas[~]$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
sda 8:0 0 5.5T 0 disk
└─sda1 8:1 0 5.5T 0 part
sdb 8:16 0 5.5T 0 disk
└─sdb1 8:17 0 5.5T 0 part
sdc 8:32 0 5.5T 0 disk
└─sdc1 8:33 0 5.5T 0 part
sdd 8:48 0 5.5T 0 disk
└─sdd1 8:49 0 5.5T 0 part
sde 8:64 0 5.5T 0 disk
└─sde1 8:65 0 5.5T 0 part
nvme0n1 259:0 0 931.5G 0 disk
├─nvme0n1p1 259:1 0 1M 0 part
├─nvme0n1p2 259:2 0 512M 0 part
└─nvme0n1p3 259:3 0 931G 0 part
</>

Thanks for the instruction.

truenas_admin@truenas[~]$ sudo zpool status -v
  pool: boot-pool
 state: ONLINE
config:

        NAME         STATE     READ WRITE CKSUM
        boot-pool    ONLINE       0     0     0
          nvme0n1p3  ONLINE       0     0     0

I hope I am doing the HTML tag correctly. When I look at my reply to an earlier post about using the tags I get the impression I am not using them properly.

Ricke


I think I now know what happened but not how to fix it. When I added the new NIC card and rebooted, it changed the /dev/sd? order of the drives. I can see before and after in syslog by comparing the serial numbers. Is there a means of putting them back in the right order? Because of the syslog, I know which dev device should go with which HDD.
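For reference, the current name-to-serial mapping can be checked with something like this (a minimal sketch; the SERIAL column should match what syslog shows):

lsblk -o NAME,SERIAL,SIZE,TYPE     # current /dev/sdX to serial number mapping
ls -l /dev/disk/by-id/             # persistent links that embed model and serial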

Thanks

Ricke


ZFS should not care about that.
How are the disks connected? Directly to the mainboard SATA ports?

The drive recognition order, as already stated, is not a factor.

In the TrueNAS GUI, do you have the option to Import Pool? If yes, click the button and hopefully you will see your pool there.

If not, then post the output of zpool import, and this time click the </> icon above. The edit screen will display what looks like a line of three dots and, below that, another line of three dots; they are really the ` (backtick) symbol. Paste your data between the two lines, so the top line is the three backticks and the bottom line is three as well.

The disks are all connected with SATA cables directly to the ports on the MBO.

Nothing appears in the dropdown box to select a pool when you try to import from within the GUI.

Here is the output you requested

sudo zpool import
no pools available to import

Output of zpool history please.

truenas_admin@truenas[~]$ sudo zpool history
History for 'boot-pool':
2024-10-23.17:51:36 zpool create -f -o ashift=12 -o cachefile=none -o compatibility=grub2 -O acltype=off -O canmount=off -O compression=on -O devices=off -O mountpoint=none -O normalization=formD -O relatime=on -O xattr=sa boot-pool /dev/nvme0n1p3
2024-10-23.17:51:36 zfs create -o canmount=off boot-pool/ROOT
2024-10-23.17:51:36 zfs create -o canmount=off -o mountpoint=legacy boot-pool/grub
2024-10-23.17:51:36 zfs create -o mountpoint=legacy -o truenas:kernel_version=6.6.44-production+truenas -o zectl:keep=False boot-pool/ROOT/24.10-RC.2
2024-10-23.17:51:37 zfs create -u -o mountpoint=legacy -o canmount=noauto -o setuid=off -o devices=off -o exec=off -o acltype=off -o aclmode=discard -o atime=off boot-pool/ROOT/24.10-RC.2/audit
2024-10-23.17:51:37 zfs create -u -o mountpoint=legacy -o canmount=noauto -o setuid=off -o devices=off -o exec=off -o acltype=off -o aclmode=discard boot-pool/ROOT/24.10-RC.2/conf
2024-10-23.17:51:37 zfs create -u -o mountpoint=legacy -o canmount=noauto -o setuid=off -o devices=off -o exec=off -o acltype=off -o aclmode=discard -o atime=off boot-pool/ROOT/24.10-RC.2/data
2024-10-23.17:51:37 zfs create -u -o mountpoint=legacy -o canmount=noauto -o setuid=off -o devices=off -o exec=off -o acltype=off -o aclmode=discard -o atime=off boot-pool/ROOT/24.10-RC.2/mnt
2024-10-23.17:51:37 zfs create -u -o mountpoint=legacy -o canmount=noauto -o setuid=off -o devices=off -o acltype=off -o aclmode=discard boot-pool/ROOT/24.10-RC.2/etc
2024-10-23.17:51:37 zfs create -u -o mountpoint=legacy -o canmount=noauto -o setuid=off -o devices=off -o exec=off -o acltype=off -o aclmode=discard boot-pool/ROOT/24.10-RC.2/home
2024-10-23.17:51:37 zfs create -u -o mountpoint=legacy -o canmount=noauto -o setuid=off -o devices=off -o acltype=off -o aclmode=discard boot-pool/ROOT/24.10-RC.2/opt
2024-10-23.17:51:37 zfs create -u -o mountpoint=legacy -o canmount=noauto -o setuid=off -o devices=off -o acltype=off -o aclmode=discard boot-pool/ROOT/24.10-RC.2/root
2024-10-23.17:51:37 zfs create -u -o mountpoint=legacy -o canmount=noauto -o acltype=off -o aclmode=discard -o atime=off boot-pool/ROOT/24.10-RC.2/usr
2024-10-23.17:51:37 zfs create -u -o mountpoint=legacy -o canmount=noauto -o setuid=off -o devices=off -o acltype=off -o aclmode=discard -o atime=off boot-pool/ROOT/24.10-RC.2/var
2024-10-23.17:51:37 zfs create -u -o mountpoint=legacy -o canmount=noauto -o setuid=off -o devices=off -o exec=off -o acltype=off -o aclmode=discard boot-pool/ROOT/24.10-RC.2/var/ca-certificates
2024-10-23.17:51:37 zfs create -u -o mountpoint=legacy -o canmount=noauto -o setuid=off -o devices=off -o exec=off -o acltype=off -o aclmode=discard -o atime=off boot-pool/ROOT/24.10-RC.2/var/log
2024-10-23.17:51:37 zfs create -u -o mountpoint=legacy -o canmount=noauto -o setuid=off -o devices=off -o exec=off -o acltype=posixacl -o aclmode=discard -o atime=off boot-pool/ROOT/24.10-RC.2/var/log/journal
2024-10-23.17:52:23 zpool set bootfs=boot-pool/ROOT/24.10-RC.2 boot-pool
2024-10-23.17:52:23 zfs set readonly=off boot-pool/ROOT/24.10-RC.2/usr
2024-10-23.17:52:23 zfs set readonly=on boot-pool/ROOT/24.10-RC.2/usr
2024-10-23.17:52:25 zfs set readonly=off boot-pool/ROOT/24.10-RC.2/audit
2024-10-23.17:52:25 zfs set mountpoint=/audit boot-pool/ROOT/24.10-RC.2/audit
2024-10-23.17:52:25 zfs set org.zectl:bootloader="" boot-pool/ROOT/24.10-RC.2/audit
2024-10-23.17:52:25 zfs set readonly=on boot-pool/ROOT/24.10-RC.2/conf
2024-10-23.17:52:25 zfs snapshot boot-pool/ROOT/24.10-RC.2/conf@pristine
2024-10-23.17:52:25 zfs set mountpoint=/conf boot-pool/ROOT/24.10-RC.2/conf
2024-10-23.17:52:25 zfs set org.zectl:bootloader="" boot-pool/ROOT/24.10-RC.2/conf
2024-10-23.17:52:25 zfs set readonly=off boot-pool/ROOT/24.10-RC.2/data
2024-10-23.17:52:25 zfs set mountpoint=/data boot-pool/ROOT/24.10-RC.2/data
2024-10-23.17:52:25 zfs set org.zectl:bootloader="" boot-pool/ROOT/24.10-RC.2/data
2024-10-23.17:52:25 zfs set readonly=off boot-pool/ROOT/24.10-RC.2/mnt
2024-10-23.17:52:25 zfs set mountpoint=/mnt boot-pool/ROOT/24.10-RC.2/mnt
2024-10-23.17:52:25 zfs set org.zectl:bootloader="" boot-pool/ROOT/24.10-RC.2/mnt
2024-10-23.17:52:25 zfs set readonly=off boot-pool/ROOT/24.10-RC.2/etc
2024-10-23.17:52:25 zfs snapshot boot-pool/ROOT/24.10-RC.2/etc@pristine
2024-10-23.17:52:25 zfs set mountpoint=/etc boot-pool/ROOT/24.10-RC.2/etc
2024-10-23.17:52:25 zfs set org.zectl:bootloader="" boot-pool/ROOT/24.10-RC.2/etc
2024-10-23.17:52:25 zfs set readonly=off boot-pool/ROOT/24.10-RC.2/home
2024-10-23.17:52:25 zfs set mountpoint=/home boot-pool/ROOT/24.10-RC.2/home
2024-10-23.17:52:25 zfs set org.zectl:bootloader="" boot-pool/ROOT/24.10-RC.2/home
2024-10-23.17:52:25 zfs set readonly=on boot-pool/ROOT/24.10-RC.2/opt
2024-10-23.17:52:25 zfs snapshot boot-pool/ROOT/24.10-RC.2/opt@pristine
2024-10-23.17:52:25 zfs set mountpoint=/opt boot-pool/ROOT/24.10-RC.2/opt
2024-10-23.17:52:25 zfs set org.zectl:bootloader="" boot-pool/ROOT/24.10-RC.2/opt
2024-10-23.17:52:25 zfs set readonly=off boot-pool/ROOT/24.10-RC.2/root
2024-10-23.17:52:25 zfs set mountpoint=/root boot-pool/ROOT/24.10-RC.2/root
2024-10-23.17:52:25 zfs set org.zectl:bootloader="" boot-pool/ROOT/24.10-RC.2/root
2024-10-23.17:52:25 zfs set readonly=on boot-pool/ROOT/24.10-RC.2/usr
2024-10-23.17:52:25 zfs snapshot boot-pool/ROOT/24.10-RC.2/usr@pristine
2024-10-23.17:52:25 zfs set mountpoint=/usr boot-pool/ROOT/24.10-RC.2/usr
2024-10-23.17:52:25 zfs set org.zectl:bootloader="" boot-pool/ROOT/24.10-RC.2/usr
2024-10-23.17:52:25 zfs set readonly=off boot-pool/ROOT/24.10-RC.2/var
2024-10-23.17:52:25 zfs snapshot boot-pool/ROOT/24.10-RC.2/var@pristine
2024-10-23.17:52:26 zfs set mountpoint=/var boot-pool/ROOT/24.10-RC.2/var
2024-10-23.17:52:26 zfs set org.zectl:bootloader="" boot-pool/ROOT/24.10-RC.2/var
2024-10-23.17:52:26 zfs set readonly=off boot-pool/ROOT/24.10-RC.2/var/ca-certificates
2024-10-23.17:52:26 zfs set mountpoint=/var/local/ca-certificates boot-pool/ROOT/24.10-RC.2/var/ca-certificates
2024-10-23.17:52:26 zfs set org.zectl:bootloader="" boot-pool/ROOT/24.10-RC.2/var/ca-certificates
2024-10-23.17:52:26 zfs set readonly=off boot-pool/ROOT/24.10-RC.2/var/log
2024-10-23.17:52:26 zfs set mountpoint=/var/log boot-pool/ROOT/24.10-RC.2/var/log
2024-10-23.17:52:26 zfs set org.zectl:bootloader="" boot-pool/ROOT/24.10-RC.2/var/log
2024-10-23.17:52:26 zfs set readonly=off boot-pool/ROOT/24.10-RC.2/var/log/journal
2024-10-23.17:52:26 zfs set mountpoint=/var/log/journal boot-pool/ROOT/24.10-RC.2/var/log/journal
2024-10-23.17:52:26 zfs set org.zectl:bootloader="" boot-pool/ROOT/24.10-RC.2/var/log/journal
2024-10-23.17:52:26 zfs set readonly=on boot-pool/ROOT/24.10-RC.2
2024-10-23.17:52:26 zfs snapshot boot-pool/ROOT/24.10-RC.2@pristine
2024-10-23.17:52:26 zfs set org.zectl:bootloader=grub boot-pool/ROOT
2024-10-23.17:52:26 zpool export -f boot-pool
2024-10-23.17:53:47 zpool import -N -f boot-pool
2024-10-23.17:54:07 py-libzfs: zfs create -o mountpoint=legacy -o readonly=off -o snapdir=hidden -o xattr=sa boot-pool/.system
2024-10-23.17:54:07 py-libzfs: zfs create -o mountpoint=legacy -o readonly=off -o snapdir=hidden -o quota=1G -o xattr=sa boot-pool/.system/cores
2024-10-23.17:54:07 py-libzfs: zfs create -o mountpoint=legacy -o readonly=off -o snapdir=hidden -o xattr=sa boot-pool/.system/nfs
2024-10-23.17:54:07 py-libzfs: zfs create -o mountpoint=legacy -o readonly=off -o snapdir=hidden -o xattr=sa boot-pool/.system/samba4
2024-10-23.17:54:07 py-libzfs: zfs create -o mountpoint=legacy -o readonly=off -o snapdir=hidden -o xattr=sa boot-pool/.system/configs-ae32c386e13840b2bf9c0083275e7941
2024-10-23.17:54:09 py-libzfs: zfs create -o mountpoint=legacy -o readonly=off -o snapdir=hidden -o canmount=noauto -o xattr=sa boot-pool/.system/netdata-ae32c386e13840b2bf9c0083275e7941
2024-10-23.17:54:59 py-libzfs: zfs snapshot  boot-pool/.system/samba4@wbc-1729720499
2024-10-23.18:31:08 zfs destroy -r boot-pool/.system
2024-10-23.19:02:34 py-libzfs: zfs create -o mountpoint=legacy -o readonly=off -o snapdir=hidden -o xattr=sa boot-pool/.system
2024-10-23.19:02:34 py-libzfs: zfs create -o mountpoint=legacy -o readonly=off -o snapdir=hidden -o quota=1G -o xattr=sa boot-pool/.system/cores
2024-10-23.19:02:35 py-libzfs: zfs create -o mountpoint=legacy -o readonly=off -o snapdir=hidden -o xattr=sa boot-pool/.system/nfs
2024-10-23.19:02:35 py-libzfs: zfs create -o mountpoint=legacy -o readonly=off -o snapdir=hidden -o xattr=sa boot-pool/.system/samba4
2024-10-23.19:02:35 py-libzfs: zfs create -o mountpoint=legacy -o readonly=off -o snapdir=hidden -o xattr=sa boot-pool/.system/configs-ae32c386e13840b2bf9c0083275e7941
2024-10-23.19:02:35 py-libzfs: zfs create -o mountpoint=legacy -o readonly=off -o snapdir=hidden -o canmount=noauto -o xattr=sa boot-pool/.system/netdata-ae32c386e13840b2bf9c0083275e7941
2024-10-23.19:34:37 zpool import -N -f boot-pool
2024-10-23.19:40:38 zpool import -N -f boot-pool
2024-10-23.20:37:11 zfs destroy -r boot-pool/.system
2024-10-26.18:34:32 zpool import -N -f boot-pool
2024-10-26.18:34:50 py-libzfs: zfs create -o mountpoint=legacy -o readonly=off -o snapdir=hidden -o xattr=sa boot-pool/.system
2024-10-26.18:34:50 py-libzfs: zfs create -o mountpoint=legacy -o readonly=off -o snapdir=hidden -o quota=1G -o xattr=sa boot-pool/.system/cores
2024-10-26.18:34:50 py-libzfs: zfs create -o mountpoint=legacy -o readonly=off -o snapdir=hidden -o xattr=sa boot-pool/.system/nfs
2024-10-26.18:34:50 py-libzfs: zfs create -o mountpoint=legacy -o readonly=off -o snapdir=hidden -o xattr=sa boot-pool/.system/samba4
2024-10-26.18:34:50 py-libzfs: zfs create -o mountpoint=legacy -o readonly=off -o snapdir=hidden -o xattr=sa boot-pool/.system/configs-ae32c386e13840b2bf9c0083275e7941
2024-10-26.18:34:50 py-libzfs: zfs create -o mountpoint=legacy -o readonly=off -o snapdir=hidden -o canmount=noauto -o xattr=sa boot-pool/.system/netdata-ae32c386e13840b2bf9c0083275e7941
2024-10-26.18:35:27 py-libzfs: zfs snapshot  boot-pool/.system/samba4@wbc-1729982127
2024-10-26.19:54:38 zpool import -N -f boot-pool
2024-10-26.20:19:28 zpool import -N -f boot-pool
2024-10-26.20:42:03 zpool import -N -f boot-pool
2024-10-26.21:11:48 zpool import -N -f boot-pool
2024-10-26.22:23:52 zpool import -N -f boot-pool

The offline pool is named “home”

FWIW, here is the top of the output of the command sudo zdb -lu /dev/sda1

------------------------------------
LABEL 0
------------------------------------
    version: 5000
    name: 'home'
    state: 0
    txg: 433
    pool_guid: 2982698518651645984
    errata: 0
    hostid: 1786724611
    hostname: 'truenas'
    top_guid: 18276733039937115261
    guid: 7873373802631840263
    vdev_children: 1
    vdev_tree:
        type: 'raidz'
        id: 0
        guid: 18276733039937115261
        nparity: 1
        metaslab_array: 136
        metaslab_shift: 34
        ashift: 12
        asize: 30005843722240
        is_log: 0
        create_txg: 4
        children[0]:
            type: 'disk'
            id: 0
            guid: 3581623171961698179
            path: '/dev/sdd1'
            whole_disk: 0
            create_txg: 4
        children[1]:
            type: 'disk'
            id: 1
            guid: 7873373802631840263
            path: '/dev/sdc1'
            whole_disk: 0
            create_txg: 4
        children[2]:
            type: 'disk'
            id: 2
            guid: 7588397027135036287
            path: '/dev/sdb1'
            whole_disk: 0
            create_txg: 4
        children[3]:
            type: 'disk'
            id: 3
            guid: 9823480473949456002
            path: '/dev/sda1'
            whole_disk: 0
            create_txg: 4
        children[4]:
            type: 'disk'
            id: 4
            guid: 742717879853238253
            path: '/dev/sde1'
            whole_disk: 0

If you can’t figure it out after another exchange of messages, I recommend you shut down, remove the NIC you installed, put the hardware back in the exact same configuration it previously was in, and power up. Hopefully your pool will begin to work.

If that fails, well, I don’t have a lot of personal experience with manually forcing a pool to import, especially when it is not listed.

You didn’t do a software update too, did you? If so, maybe roll back to the previous boot environment after putting the hardware back in the original configuration.
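If it comes to that, the existing boot environments on the boot pool can be listed with something like this (a sketch; the actual rollback is done from the Boot screen in the GUI):

zfs list -r -o name,used,creation boot-pool/ROOT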

Good luck.

Latest update:

Using the command

sudo zdb -lu /dev/sda

we found that the paths for the drives were recorded by device name rather than by ID (e.g. /dev/sda1 instead of ‘/dev/disk/by-id/ata-WDC_WD6003FFBX-68MU3N0_V9G5A33L-part1’). To our understanding, this would mean that when the OS changed the sd? entries on the disks during reboot, the pool would not come up.

So we ran

ls -Alh /dev/disk/by-id 

to get the disk IDs, and then

sudo zpool import -a -o altroot=/mnt -d /dev/disk/by-id/ata-WDC_WD6003FFBX-68MU3N0_V7GDPSEH-part1 -d /dev/disk/by-id/ata-WDC_WD6003FFBX-68MU3N0_V9G5A33L-part1 -d /dev/disk/by-id/ata-WDC_WD6003FFBX-68MU3N0_V9G7S3RL-part1 -d /dev/disk/by-id/ata-WDC_WD6003FFBX-68MU3N0_V9G83MEL-part1 -d /dev/disk/by-id/ata-WDC_WD6003FFBX-68MU3N0_V9GAXPEL-part1

using those disk IDs. This brought the pool back online. We had to do quite a bit of changing mount points after this to get everything working, as the mount points had all been changed to “/” instead of /mnt. We finally got it all working except for one thing: Samba. That was because the System Dataset Pool was pointing at boot-pool. When we changed that to my pool, home, in System/Advanced, everything worked. Yeah!!!
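For reference, -d can also be pointed at the whole directory instead of listing every disk; a shorter form of roughly the same import would be something like this (a sketch, not what we actually ran):

sudo zpool export home                                       # if it is currently imported
sudo zpool import -o altroot=/mnt -d /dev/disk/by-id home    # scan only the by-id links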

Until… I did another reboot to get AD working correctly and it went back to pool offline. I looked in syslog at the shutdown and restart. In the shutdown I see it successfully unmount the pool file systems, but in the restart I see “No ZFS Pool found” when it tries to remount. Running the zpool import command shown above and pointing the System Dataset Pool back to my pool has it all running again.
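One thing worth checking after the next import (a sketch; I believe SCALE keeps its pool cache at /data/zfs/zpool.cache, but treat that path as an assumption) is whether the pool is actually landing in the cache file that gets read at boot:

sudo zpool get cachefile home     # should not report "none"
ls -l /data/zfs/zpool.cache       # assumed location of the cache TrueNAS reads at boot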

Since we don’t know what is causing it to lose the pool at reboot, unless someone here has more suggestions, I guess it’s time to start with a fresh install.

Oh, btw, I did remove the new NIC card.

Even if I don’t get this working, I learned a lot about ZFS and TrueNAS.

Ricke


Great job troubleshooting the issue and thanks for posting the details on how you were able to get it working again. Sounds like a real headache!

I am a little confused: you said “when the OS changed” the device entries, but you previously said all that was done was adding the NIC.

Yes, that was confusing. I think the addition of the NIC was a red herring. The real cause was the reboot needed to install the NIC; it is very possible that was the first reboot since I built the NAS. As you know, one of the beauties of Linux is the lack of need to reboot so often.

What I see now is that every time I reboot, the pool goes offline and is invisible in the GUI. Then I need to run that import command using the disk IDs.
I am still trying to figure out exactly why this is happening at reboot, as running that command after every reboot is not a long-term option. A fresh build is still on my option list, although it will mean re-copying a lot of files. Thankfully I do have backups.
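One idea I have not tried yet (just a sketch of the theory): export the pool cleanly from the CLI and then import it through Storage > Import Pool in the GUI, on the theory that a CLI-only import never gets recorded by the TrueNAS middleware and so is not auto-imported at boot:

sudo zpool export home     # clean export first
# then use Storage > Import Pool in the GUI and select "home"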

Ricke

Was that a stab in Windoze’s back? :rofl: