Unexpected metadata vdev configuration

Hi,

I used the GUI to add a new drive to the pool's metadata vdev, intending to configure a triple mirror, but it is giving me a warning:

When looking at the configuration, I see this:

I’m not sure what this configuration represents.

Does the metadata vdev now consist of a single drive plus a two-drive mirror?
I intended to have a single mirror vdev with three drives.

This is what zpool status shows:

> zpool status tank
  pool: tank
 state: ONLINE
  scan: resilvered 51.7G in 00:04:09 with 0 errors on Fri Jul 18 20:43:36 2025
config:

	NAME                                      STATE     READ WRITE CKSUM
	tank                                      ONLINE       0     0     0
	  raidz2-0                                ONLINE       0     0     0
	    f3f29762-0a30-437b-a470-b9a1b424ed4d  ONLINE       0     0     0
	    bddde7a8-3058-4c2c-8332-37c7a4699d8c  ONLINE       0     0     0
	    2d89b0ba-6b51-4fcc-8ca2-aa123e02082f  ONLINE       0     0     0
	    f1366d66-361d-42ab-85d3-2f6acdf51022  ONLINE       0     0     0
	    dff6aa50-25be-476d-ac88-50b81331df60  ONLINE       0     0     0
	special	
	  mirror-1                                ONLINE       0     0     0
	    dfdcfa56-0d6e-4088-b570-bffc0447b58c  ONLINE       0     0     0
	    ae85eee3-daf1-4d09-b7fb-595ba8ddfba2  ONLINE       0     0     0
	  6cc159e5-719a-4ab2-94cb-5963bfbfe7cc    ONLINE       0     0     0
	spares
	  cf6a231b-ebb5-4b0d-9343-561dd20036c3    AVAIL   

errors: No known data errors

Any help to understand this configuration and how to configure a three-drive mirror vdev will be appreciated.

Thank you.

Do you have a good backup of your pool? I don’t know if you can fix this without destroying the entire pool and rebuilding it.


I have a remote replica of the entire pool, but where is the problem? What happened?

It looks like, as you said, you have an sVDEV that consists of a mirror pair plus a single device.

Because it involves sVDEVs and a RAIDZ2 pool, destroying the pool and rebuilding may be the only option. What version of TrueNAS are you on? I don’t know whether attempting to remove the single sVDEV device would kill your pool.

Unfortunately that’s not quite what happened. You unknowingly added another disk as a stripe alongside your special vdev. This puts your pool at great risk: if that single device fails, you will lose the entire pool. As @SmallBarky said, I think you’ll need to back up the data, destroy the pool, and recreate it.
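For reference, the difference comes down to "add" versus "attach". Roughly speaking, and only as a sketch using the device IDs from your zpool status (the GUI issues its own equivalents under the hood):

# What effectively happened: a new single-disk special vdev was *added*,
# striping it alongside mirror-1 (zpool itself warns about the mismatched
# redundancy and requires -f for this)
zpool add -f tank special /dev/disk/by-partuuid/6cc159e5-719a-4ab2-94cb-5963bfbfe7cc

# What a three-way special mirror would have required instead: *attach*
# the new disk to an existing member of mirror-1
zpool attach tank dfdcfa56-0d6e-4088-b570-bffc0447b58c /dev/disk/by-partuuid/6cc159e5-719a-4ab2-94cb-5963bfbfe7cc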


Not intending to add insult to injury, but a friendly reminder that one can create a pool checkpoint before any topology-changing operation, just in case (like this one).
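In case it helps next time, a minimal sketch of how that looks on the command line (checkpoints require a reasonably recent OpenZFS):

# take a checkpoint before the risky change
zpool checkpoint tank

# if everything went fine, discard it to release the held space
zpool checkpoint --discard tank

# if it went wrong, export and rewind to the checkpoint
zpool export tank
zpool import --rewind-to-checkpoint tank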


You’re not going to like it…
Backup. Destroy. Restore.

If you have a fourth slot, you could evolve to a less unsafe geometry by extending nvme2 (the single special device) into a 2-way mirror, so that you at least have redundancy on all metadata. But this is still not the setup you intended.
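A sketch of what that would look like, assuming the new drive shows up as /dev/nvme3n1 (a hypothetical name; TrueNAS normally partitions drives and references them by partuuid, so doing this through the GUI's extend action is the safer route):

# attach a new disk to the lone special device, turning it into a 2-way mirror
zpool attach tank 6cc159e5-719a-4ab2-94cb-5963bfbfe7cc /dev/nvme3n1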


thank you

I didn’t know this. Thanks, this is very useful information …
… next time :slight_smile:

Did I lose all the data?
Just in case, I had made three replicas of the dataset on three different systems.
I deleted the pool, recreated it without the sVDEV, and restored one of the replicas.
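(For reference, a restore like this boils down to a ZFS send/receive. A rough sketch with placeholder names; backuppool/mydata and tank/mydata are stand-ins, and the actual transfer could just as well go through a TrueNAS replication task:)

# on the backup system: snapshot the replica and send it to the rebuilt pool
zfs snapshot -r backuppool/mydata@restore
zfs send -R backuppool/mydata@restore | ssh truenas zfs recv -F tank/mydata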

Now, when choosing the Apps pool, I get the following error:

[Errno 2] No such file or directory: '/mnt/.ix-apps/app_configs'

Does this mean that TrueNAS stores the application configuration data in a hidden dataset that cannot be replicated or backed up?

Sadly yes, that’s what it means.

Oh!

But how are you supposed to protect your data?

Say you have remote replicas of all your datasets and copies of your configuration and keys, and then something happens: I don’t know, your home burns to the ground and you lose all your hardware.

What are you supposed to do when you buy a new server?

Ideally you should use a host path for both the config storage and the data storage.
I’ve created a single dataset called “app-config”, and each app has its own folder inside that dataset. That way, if I screw up, I only have to point a new install at the old folder and the app is back up in a few minutes.
But then again, I only use a single app from the app catalogue, which is Scrutiny. Every other app I’ve deployed is via my own compose file and Portainer.
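A minimal sketch of that layout, with placeholder names and paths (someapp, someimage and the /mnt/tank/... paths are just examples; the container-side paths depend on the image):

# one dataset for configs, one folder per app, bind-mounted into the container
docker run -d --name someapp \
  -v /mnt/tank/app-config/someapp:/config \
  -v /mnt/tank/data/someapp:/data \
  someimage:latest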

That’s another reason (apart from the Kubernetes → Docker migration and the inability to update Apps on TrueNAS <24.10.2.2) why the only TrueNAS app I am using is Portainer… which then actually manages my Apps.

I know pointing that out won’t bring your data back, of course, so… sorry for your loss there.

I understand that, but I don’t think that is the issue.
All my applications already use host paths for everything: data, DB, configuration, etc.
The problem is that I cannot restore the Apps pool itself, essentially the Docker configuration space, as it seems to be a hidden place that nobody can access, copy, back up, or replicate. It does not make any sense!

Hidden in plain sight in ‘Resources’:
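(If it helps anyone else searching, one way to spot the hidden apps dataset so it can be included in replication; the dataset name can vary between TrueNAS versions, so this is just a sketch:)

zfs list -o name,mountpoint | grep -i ix-apps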

Thanks @etorix, that solved the problem. I had to run this command for each dataset after replication:

zfs set readonly=off newpool/<dataset>
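(For what it’s worth, replicated datasets are often left read-only on the destination; something like the following shows which ones still have the flag set before changing it:)

zfs get -r readonly newpool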
