Safely remove recently added Metadata vdev

I recently migrated from FreeNAS to TrueNAS SCALE. After the migration completed, I decided to install some NVMe drives and ended up setting up a metadata vdev. Fast forward a month of no real use on the server, and I am now having second thoughts about why I added it, and whether I will even see any performance increase.

Is there a way to check if there is even a single byte of data written to it? If it's empty, can I delete it? I remember reading it can't be removed later, as it would have the same consequence as removing any other vdev from a pool, which is the death of the entire pool.

But… if I have yet to write anything to the new metadata vdev, can I maybe remove it without any consequences? See the attached screenshot of what the pool looks like. Let me know if any other information is needed before you can answer my question. Thanks!

It won't let me attach an image, so here is the pool status instead.

  pool: mediastore
 state: ONLINE
  scan: scrub repaired 0B in 4 days 02:53:11 with 0 errors on Wed Aug 27 02:07:08 2025
config:

        NAME                                              STATE     READ WRITE CKSUM
        mediastore                                        ONLINE       0     0     0
          raidz1-0                                        ONLINE       0     0     0
            ata-WDC_WD80EMAZ-xxxxxxx_xxxxxxxx             ONLINE       0     0     0
            ata-WDC_WD80EMAZ-xxxxxxx_xxxxxxxx             ONLINE       0     0     0
            ata-WDC_WD80EMAZ-xxxxxxx_xxxxxxxx             ONLINE       0     0     0
            ata-WDC_WD80EMAZ-xxxxxxx_xxxxxxxx             ONLINE       0     0     0
            ata-WDC_WD80EMAZ-xxxxxxx_xxxxxxxx             ONLINE       0     0     0
          raidz1-1                                        ONLINE       0     0     0
            ata-WDC_WD80EMAZ-xxxxxxx_xxxxxxxx             ONLINE       0     0     0
            ata-WDC_WD80EFZX-xxxxxxx_xxxxxxxx             ONLINE       0     0     0
            ata-WDC_WD80EFZX-xxxxxxx_xxxxxxxx             ONLINE       0     0     0
            ata-WDC_WD80EFAX-xxxxxxx_xxxxxxxx             ONLINE       0     0     0
            ata-WDC_WD80EFAX-xxxxxxx_xxxxxxxx             ONLINE       0     0     0
          raidz1-2                                        ONLINE       0     0     0
            ata-WDC_WD160EDGZ-xxxxxxx_xxxxxxxx            ONLINE       0     0     0
            ata-WDC_WD160EDGZ-xxxxxxx_xxxxxxxx            ONLINE       0     0     0
            ata-WDC_WD160EDGZ-xxxxxxx_xxxxxxxx            ONLINE       0     0     0
            ata-WDC_WD160EDGZ-xxxxxxx_xxxxxxxx            ONLINE       0     0     0
            ata-WDC_WD160EDGZ-xxxxxxx_xxxxxxxx            ONLINE       0     0     0
        special
          mirror-3                                        ONLINE       0     0     0
            nvme-Samsung_SSD_990_PRO_2TB_xxxxxxxxxxxxx4Y  ONLINE       0     0     0
            nvme-Samsung_SSD_990_PRO_2TB_xxxxxxxxxxxxx0R  ONLINE       0     0     0
          mirror-4                                        ONLINE       0     0     0
            nvme-Samsung_SSD_990_PRO_2TB_xxxxxxxxxxxxx1B  ONLINE       0     0     0
            nvme-Samsung_SSD_990_PRO_2TB_xxxxxxxxxxxxx0K  ONLINE       0     0     0

errors: No known data errors

You can’t remove it: top-level vdev removal is not supported on pools that contain RAIDZ vdevs, and all three of your data vdevs are raidz1.

Unless you created a checkpoint before adding the special vdev, you’re stuck with it.
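To answer the "single byte written" question: a sketch of how you could check, assuming the pool name `mediastore` from the status output (exact column layout and error wording vary between OpenZFS versions):

```shell
# Per-vdev capacity and allocation. The ALLOC column next to the
# special mirrors (mirror-3 / mirror-4) shows how much has been
# written to them. Note that ZFS starts directing new metadata to a
# special vdev as soon as it is added, so it is rarely truly empty.
zpool list -v mediastore

# On a pool whose data vdevs are raidz, device removal is refused:
zpool remove mediastore mirror-3
# (fails with an "invalid config" style error; exact wording varies)
```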

Please vote for this feature.


Dang, that feature would have been nice. Vote added!

I haven’t even created/copied any new files on the TrueNAS since creating the special vdev. Maybe I can cut it down to just 2 x 2TB so I can at least use the other 2 NVMe drives somewhere else. Is it possible to resize it somehow?

No.

ZFS is not the most flexible file system, volume manager & RAID scheme.

This is currently one of its limitations. You can’t shrink a vdev without a full backup, destroy, re-create with smaller disks, and restore.


Thanks for your vote, but mind that reverting to a checkpoint would have lost everything done after the checkpoint was created. It really is a short-term safety net, not something that can linger for a month.
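For reference, the checkpoint workflow looks roughly like this; these are real OpenZFS commands, but treat it as a sketch, and note that rewinding discards everything written after the checkpoint:

```shell
# Take a checkpoint BEFORE a risky operation such as adding a vdev:
zpool checkpoint mediastore

# If all went well, discard it; a lingering checkpoint pins old
# blocks and prevents some operations (e.g. device removal):
zpool checkpoint -d mediastore

# To roll back instead, export and re-import with a rewind.
# WARNING: this throws away every write made after the checkpoint.
zpool export mediastore
zpool import --rewind-to-checkpoint mediastore
```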

If you go for backup-destroy-restore, think about switching the layout to raidz2 for extra safety with these big HDDs.


Working on the backup-destroy-restore now. I don’t think I am going to bother with a metadata vdev anymore. I think I have enough RAM to not really see the benefit of one. I’m thinking of just manually putting small files on an NVMe dataset instead.

The special vdev for small files was reportedly designed with dRAID in mind. With “regular” raidz, the main benefit of a special vdev is to put metadata on SSD, which is of limited use if you have enough RAM, or could be partially achieved (for reads only) with a metadata-tuned L2ARC.


I missed that in their first post. I’ll edit it to say “one day” so that I don’t look as foolish.


What you CAN do instead is have a metadata-only, persistent L2ARC vdev. This CAN be added or removed at any time, and is not critical to your pool. Thus, ZFS does not even allow mirroring (or RAID-Zing) L2ARC device(s). Any failure of an L2ARC device simply means that any read request that would have gone to the L2ARC now goes to the (likely slower) normal pool devices.
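As a sketch (the device name here is a placeholder, not one of the OP’s actual disks):

```shell
# Add a single NVMe device as a cache (L2ARC) vdev; no redundancy
# is needed, since losing it only costs cached reads:
zpool add mediastore cache nvme-Samsung_SSD_990_PRO_2TB_xxxxxxxx

# It can be removed again at any time without risking the pool:
zpool remove mediastore nvme-Samsung_SSD_990_PRO_2TB_xxxxxxxx
```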

ZFS controls usage of the L2ARC per dataset using this property:

    secondarycache=all|none|metadata
      Controls what is cached in the secondary cache (L2ARC). If this property is
      set to all, then  both user data and metadata is cached. If this property is
      set to none, then neither user data nor metadata is cached.  If this property
      is set to metadata, then only metadata is cached.  The default value is all.
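Setting it on the pool’s root dataset lets the children inherit it; for example (a sketch, reusing the pool name from above):

```shell
# Cache only metadata (not file data) in L2ARC, pool-wide:
zfs set secondarycache=metadata mediastore

# Verify what each dataset inherited:
zfs get -r secondarycache mediastore
```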

It is helpful for you to use the ZFS terminology. The above was probably meant to be:

I’m thinking of just manually putting small files on a dedicated NVMe pool instead.

It is not possible to have an NVMe dataset inside an HDD pool.


Ah yes, this makes a lot of sense. I think I am just not going to bother with L2ARC for now. I have enough RAM to just keep everything in RAM. Is there an easy way to monitor when my RAM isn’t enough and a dedicated L2ARC would be better? Is it just when TrueNAS starts writing the page file to disk? I am still not very good with how everything works.

And for special stuff, if anything, I might add a ZIL later at some point if I end up using ESXi over NFS a lot; not sure yet. All this extra stuff is making things more complicated for future me, so I think having just a plain setup might be better in the long run.

And yes lol, I did mean a dedicated pool, not dataset, of mirrored NVMe vdevs to hold all the small app config files, and maybe even the database files. :thinking:

Use the NAS for a while and then run arc_summary.
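For example (output format differs between arc_summary versions, so treat the grep as a sketch):

```shell
# Full ARC report: look at ARC size and the hit ratios to judge
# whether RAM is keeping up with the working set.
arc_summary

# Quick look at just the hit-ratio lines:
arc_summary | grep -i "hit ratio"

# Live per-interval ARC hit/miss counters, refreshed every second:
arcstat 1
```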
