Can't recover metadata VDEV

Hi,

I'm completely lost and need your help!

I just experienced a boot drive failure on my TrueNAS Scale system. After replacing the boot drive and reinstalling TrueNAS Scale, I’m facing the following issues:

  • Unable to access a previously existing metadata pool.
  • My HomeAssistant VM fails to start with error: “[EFAULT] VM will not start as DISK Device: /dev/zvol/metadata/haos device(s) are not available.”
  • The dashboard shows a “metadata” pool, but it’s marked as offline.
  • There’s an unassigned disk in the system, which is the former metadata disk.
  • Standard ZFS import commands (zpool import metadata, zpool import -f metadata) fail to recognize or import the pool.
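
A broader, still read-only scan one can try (a sketch: the by-id path is the stock Linux one, and the snippet is guarded so it is a no-op on systems without the ZFS userland):

```shell
# Sketch: scan the stable /dev/disk/by-id nodes for importable pools,
# including pools marked destroyed (-D). Both commands only list what
# they find; nothing is imported or written.
if command -v zpool >/dev/null 2>&1; then
  zpool import -d /dev/disk/by-id || true
  zpool import -D -d /dev/disk/by-id || true
fi
```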

This is the view of my storage dashboard. The unassigned disk IS the metadata disk.


If I try to add a metadata pool or vdev, I get a warning that this will erase all data. That would be pretty bad…

How can I fix this? :confused:

Did you try rebooting?
How is the drive connected to the system?

Hi,
Yes, rebooting unfortunately doesn't help.

Drive is connected directly to the motherboard via SATA cable.

Output of zpool status and zpool import please.

root@truenas[~]# zpool status
  pool: Dateien
 state: ONLINE
status: Some supported and requested features are not enabled on the pool.
        The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(7) for details.
  scan: scrub repaired 0B in 02:11:27 with 0 errors on Sun Aug  4 02:11:30 2024
config:

        NAME                                      STATE     READ WRITE CKSUM
        Dateien                                   ONLINE       0     0     0
          mirror-0                                ONLINE       0     0     0
            50ee83c7-3962-11eb-a124-a8a1592c14f0  ONLINE       0     0     0
            sdd2                                  ONLINE       0     0     0

errors: No known data errors

  pool: Filme
 state: ONLINE
status: Some supported and requested features are not enabled on the pool.
        The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(7) for details.
  scan: scrub repaired 0B in 05:22:48 with 0 errors on Sun Jul 28 05:22:50 2024
config:

        NAME                                    STATE     READ WRITE CKSUM
        Filme                                   ONLINE       0     0     0
          72ced9c8-3962-11eb-a124-a8a1592c14f0  ONLINE       0     0     0

errors: No known data errors

  pool: boot-pool
 state: ONLINE
config:

        NAME        STATE     READ WRITE CKSUM
        boot-pool   ONLINE       0     0     0
          sda3      ONLINE       0     0     0

errors: No known data errors
root@truenas[~]# zpool import
no pools available to import

Please clear your browser’s cache and reboot, then see if something changed.
If you have one (and you should), upload your configuration backup to the new TN install.

Did so. Even tried different browsers. Still not working.

Output of zpool import -D and lsblk?

root@truenas[~]# zpool import -D
no pools available to import
root@truenas[~]# lsblk
NAME        MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINTS
sda           8:0    0   2.7T  0 disk  
├─sda1        8:1    0     2G  0 part  
└─sda2        8:2    0   2.7T  0 part  
sdb           8:16   0 223.6G  0 disk  
├─sdb1        8:17   0    16M  0 part  
└─sdb2        8:18   0 223.6G  0 part  
sdc           8:32   0   1.8T  0 disk  
├─sdc1        8:33   0     2G  0 part  
│ └─md127     9:127  0     2G  0 raid1 
│   └─md127 253:0    0     2G  0 crypt 
└─sdc2        8:34   0   1.8T  0 part  
sdd           8:48   0   1.8T  0 disk  
├─sdd1        8:49   0     2G  0 part  
│ └─md127     9:127  0     2G  0 raid1 
│   └─md127 253:0    0     2G  0 crypt 
└─sdd2        8:50   0   1.8T  0 part  
sde           8:64   0 465.8G  0 disk  
├─sde1        8:65   0     1M  0 part  
├─sde2        8:66   0   512M  0 part  
└─sde3        8:67   0 465.3G  0 part  
root@truenas[~]#

If zpool import -d /dev/sdb -R root gives no result, I’m out of options.

root@truenas[~]# zpool import -d /dev/sdb -R root
no pools available to import

it does not :confused:
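
Worth noting: pointing zpool import -d at the whole-disk node only scans that node, while a surviving ZFS label would live on a partition. A hedged per-partition check (partition names taken from the lsblk output above; read-only, and a no-op where the ZFS tools are absent):

```shell
# Sketch: dump any surviving ZFS labels on the suspect disk's partitions.
# zdb -l is read-only; a clean pool member prints its label here.
if command -v zdb >/dev/null 2>&1; then
  for part in /dev/sdb1 /dev/sdb2; do
    echo "== $part =="
    zdb -l "$part" || echo "no ZFS label found on $part"
  done
fi
```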

Is it possible you mistakenly reinstalled on the metadata pool instead of the previous boot-pool drive?


FML… I think that could be it… I'm not 100% sure, but it could be the case that I selected the wrong drive…

Any idea how to quickly validate that?

Nope. Maybe @HoneyBadger or @Whattteva can contribute, being amongst the most knowledgeable users on this forum re: pool recovery.

First, please try to fix your terminology.
There is no such thing as a “metadata pool”. There are metadata vdevs, but they are part of a pool; if you were missing a special vdev, pool “Dateien” would not import at all.
It seems you are missing a single-drive pool which was named “metadata”. This does not make it a “metadata pool”, and it is totally obscure to us.

Second, please provide a complete description of your system and how it was set up.
I see that one member of pool “Dateien” is listed as sdd2 instead of a GPTID; this is not normal.

I also see that the missing drive is an unusual 223.57 GiB, which does not fit with typical sizes. Were you playing with partitions so as not to “waste space” on the boot drive, by any (mis)chance? :scream:
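
A quick way to answer the partition question, as a sketch (the device name /dev/sdb is taken from the lsblk output above; adjust it to the actual unassigned disk):

```shell
# Read-only: list partition sizes and GPT partition-type names.
# On a TrueNAS SCALE data disk one typically sees a 2 GiB swap partition
# plus one large ZFS partition; a boot disk shows BIOS-boot/EFI partitions
# plus one large boot-pool partition. "|| true" keeps the sketch harmless
# where the device does not exist.
lsblk -o NAME,SIZE,PARTTYPENAME /dev/sdb || true
```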


Hi @etorix,
I don't know exactly what you want to know about how I set it up. But I guess this could be irrelevant anyway, because I'm pretty certain that I fucked up the metadata SSD while installing TrueNAS on it…

Regarding the small SSD: no, I didn't play around with anything, it's just an unusual SSD :wink:
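
For what it's worth, 223.6 GiB is close to what a nominal 240 GB SSD (a common size class) reports once the vendor's decimal gigabytes are converted to the binary GiB that lsblk uses, so the size alone need not mean the drive was repartitioned:

```shell
# Convert a vendor-quoted 240 GB (decimal bytes) into GiB as lsblk reports:
awk 'BEGIN { printf "%.1f GiB\n", 240e9 / (1024^3) }'
# prints: 223.5 GiB
```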

“Assumption is the mother of all fuckups.”

A complete list of the hardware (motherboard, CPU, RAM, drives), as required by the rules of the old forum :cry:, would be nice, together with an explanation of how it was set up (pool1: x drives in mirror/raidz#, pool2: etc.) and what you were trying to achieve.
Some of that may not be relevant, but it is not nice to have us guess what's there from two screenshots and an incomplete description in post 1, zpool status in post 5, and lsblk in post 9.

WHERE do you see that? I don't.
And I have no idea what you mean by “metadata pool” or “metadata SSD”. If the missing device were a single-drive special vdev that was part of pool “Dateien”, this pool would not import.
“Can't recover metadata VDEV” is not quite the same problem as “Can't recover (single-drive) pool”.

Have you LITERALLY named a pool “metadata”, all lowercase and not even “Metadateien”??? If so, please state it explicitly!

Well, my guess is there was a single-disk pool called “metadata”, made from a circa 250 GB SSD, with a zvol on it called “haos”.

How did it get erased?

If you don’t look closely it’s likely to be missed.

It took me a while to understand, though.

Agreed, but we don’t know about the 250 GB SSD, see below.

Might have been during the SCALE reinstall. There is a 250-ish GB drive and a 500-ish GB one; we know he reinstalled SCALE, that a device is unassigned, and that a pool is missing and not showing in zpool status or zpool import.

I would think the user reinstalled on the “metadata pool” SSD, the larger one of the two, instead of the designated boot drive, the smaller one. Then imported all the pools and uploaded the previous configuration backup before realizing the issue.

That being said, most of this is deduction work; I feel like a dentist pulling a tooth. Back on topic: my reinstall-mishap theory should be taken with due caution.

This might be. But the use of the English keyword “metadata” as a name for a pool intended for VMs is, in not necessarily equal parts, misleading, flabbergasting, and inconsistent with the German names “Dateien” and “Filme” for the other pools.
So reading metadata pool rather than "metadata" pool, with quotes around the reserved word, resulted in a parsing error. Hey, after all, I'm a resident AI…

Does lsblk -f reveal more?
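
Spelled out, the suggestion above would be (read-only; the interesting column is FSTYPE, where a surviving pool member shows up as zfs_member with the pool name as its LABEL):

```shell
# Show filesystem signatures per partition. "|| true" keeps the sketch
# harmless in environments without block devices.
lsblk -f || true
```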
