Electric Eel RC.2 Upgrade: Exported boot-pool

I just upgraded from 24.04.2.3 to 24.10-RC.2 and now have my boot pool showing up as an exported pool in my storage dashboard:

(screenshot: Storage dashboard showing boot-pool listed as an exported pool)

I assume the boot pool shouldn’t be exported, but I’m not sure why that happened during the upgrade or how best to resolve it. I would expect there to be a way to “un-export” it, but I can’t find one.

Is this an issue with my upgrade or is this something I should expect that I was unaware of?

In case it’s useful, my system information is below:

  • TrueNAS Version: 24.10-RC.2
  • CPU: Ryzen 5 5500GT
  • RAM: 64 GB DDR4, 3600 MHz
  • Boot drive: WD_BLACK SN750 SE 500 GB

Output of zpool status:

admin@truenas[~]$ zpool status -v
zsh: command not found: zpool
admin@truenas[~]$ sudo zpool status -v
[sudo] password for admin: 
  pool: Vault
 state: ONLINE
status: Some supported and requested features are not enabled on the pool.
        The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(7) for details.
config:

        NAME                                      STATE     READ WRITE CKSUM
        Vault                                     ONLINE       0     0     0
          raidz2-0                                ONLINE       0     0     0
            f8c8b4cd-4e10-4585-89d2-3a05b47d6759  ONLINE       0     0     0
            a1fd689f-aa31-45bc-9400-73e8707e5dc6  ONLINE       0     0     0
            bb17f32f-e28e-4580-878e-d338f2b37352  ONLINE       0     0     0
            97812c54-3e6f-4e20-b90b-1208b8585733  ONLINE       0     0     0

errors: No known data errors

  pool: boot-pool
 state: ONLINE
config:

        NAME         STATE     READ WRITE CKSUM
        boot-pool    ONLINE       0     0     0
          nvme0n1p3  ONLINE       0     0     0

errors: No known data errors

Output of zpool list:

admin@truenas[~]$ sudo zpool list
NAME        SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
Vault      21.8T   467G  21.4T        -         -     0%     2%  1.00x    ONLINE  /mnt
boot-pool   448G  7.08G   441G        -         -     0%     1%  1.00x    ONLINE  -

After looking into this more, my current thought is that this is just the previous boot environment(s) showing up in the GUI, since my actual boot pool is mounted fine.

I won’t know for sure until the full Electric Eel release, but hopefully removing the old boot environments will clear this from the UI. If there’s another way to verify it, I haven’t found it in the documentation so far.
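
For anyone who wants to sanity-check the same thing from a shell, the standard ZFS commands below should show whether the real boot pool is healthy, whether ZFS itself can actually see anything exported (zpool import with no arguments only scans and lists candidates; it doesn’t import anything), and which boot environments are sitting on the boot pool:

sudo zpool status boot-pool      # the real boot pool should show ONLINE here
sudo zpool import                # scans for importable pools; "no pools available" means ZFS sees nothing exported
sudo zfs list -r boot-pool/ROOT  # boot environments live as datasets under boot-pool/ROOT

Nothing here is TrueNAS-specific, so treat it as a read-only check rather than a fix.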

What do you know about the previous use of this drive? Was it purchased new or used? Was it previously used for CORE or any other ZFS based software?

When we saw something similar occur in internal testing, it turned out that 24.10 was picking up leftover partitions from previous deployments buried on the boot drive and reporting them as an exported boot pool.
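
If you want to check for that on your side, something like the following should reveal any stray partitions or stale ZFS labels (the device names are just taken from the zpool output above; adjust them for your system):

lsblk -o NAME,SIZE,FSTYPE,LABEL,PARTTYPE /dev/nvme0n1   # list every partition and any filesystem signature on the boot drive
sudo zdb -l /dev/nvme0n1p2                              # dump ZFS labels on a partition; repeat for each partition
sudo zdb -l /dev/nvme0n1p3

The partition backing the live boot-pool will show valid labels; any other partition that also shows ZFS labels is a likely leftover from an earlier deployment.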

Adding Observations to Amplify the Original Post:

  • I have a similar setup with a pair of new mirrored NVMe boot drives that have only ever been used for the TrueNAS boot OS. Like the OP, I also encountered the issue after upgrading to 24.10-RC2, where the GUI reports two disks as exported and needing import, though zpool import shows no pools available.

My Scenario:

  • server0: Freshly installed with TrueNAS SCALE 24.04.2.3, then upgraded to 24.10-RC2. The ghost “import disks” notification appeared after this upgrade.
  • I attempted to install 24.10-RC2 directly, but ended up at an initramfs screen. I then installed 24.04.2.3, and from there, upgraded to 24.10-RC2, resulting in the ghost disk import issue.

server1 Comparison:

  • server1 (mirrored NVMe drives, similar hardware) has been running for years, upgraded from TrueNAS SCALE 24.04.2.2, and does not show this issue after upgrading to 24.10-RC2.

Key Difference:

  • server0 was installed fresh with 24.04.2.3 before upgrading, while server1 was upgraded from 24.04.2.2. This difference in base versions may be what triggers the issue.

What do you know about the previous use of this drive? Was it purchased new or used? Was it previously used for CORE or any other ZFS based software?

The drive is new and was only used to install TrueNAS. I installed 24.04.2.2, upgraded to 24.04.2.3 and then upgraded to 24.10-RC.2.

As far as I’m aware there are no previous deployments. I did re-do the initial install a few times while I troubleshot an unrelated hardware failure, but those installs should have wiped the entire partition table, so I wouldn’t count them.

I don’t have a good way to test skipping the minor upgrade, but my experience does align with @khalsajing’s. However, I did not run into the initramfs screen he describes; my upgrade went smoothly as far as I can tell.

The primary difference between my configuration and the OP’s is that mine is a mirrored NVMe pair instead of a single NVMe drive. This could explain why the OP’s fresh 24.10-RC.2 installation succeeded while mine encountered issues. Additionally, I noticed that the 24.10-RC.2 installer no longer prompts for a swap partition, unlike previous versions.

Upgrading from 24.04.2.3 to 24.10-RC.2 might be causing this issue because I successfully updated another server (server1) directly from 24.04.2.2 to 24.10-RC.2, skipping the interim 24.04.2.3 upgrade, without the GUI anomaly.

We’ve seen similar behavior on rare occasions, but not in situations like the one you describe. I’d suggest creating a bug report and then attaching a debug via the private file upload service (there will be an automatic comment with the link after creating the issue). That way our devs can take a look at what’s going on.


https://ixsystems.atlassian.net/browse/NAS-131890


In case anyone finds it useful: upgrading from 24.10-RC.2 to the 24.10 release does not fix or change the exported boot pool. I’m still debating the best course of action for dealing with it, since it doesn’t really impact the functionality of the NAS.

Hello, exactly the same issue here with a mirror… just a question: what if I simply do the disk replace process (System -> Boot -> Boot Pool Status -> … Replace) on the exported disk with a new one? Might that fix the problem and avoid a reinstall, which seems quite risky and annoying?

From the bug report linked earlier in this thread, the recommended path forward is to back up your configuration, do a fresh install of Electric Eel, and restore from your configuration backup.

I haven’t attempted this yet as I don’t see any negative impacts other than the confusing GUI state. I plan to at some point in the near future.

Hi Everyone,

I wanted to share my experience with this issue to help others and encourage further discussion. Like others in this thread, I encountered problems installing TrueNAS SCALE (Electric Eel) on mirrored NVMe drives previously used with TrueNAS SCALE.

Issue Overview

The installation appeared to complete successfully, but the system dropped into the initramfs shell upon reboot. The initramfs shell suggested a resolution (likely involving a ZFS import). While this temporarily allowed the system to boot, the issue persisted, and subsequent reboots returned to the initramfs shell.
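
I didn’t capture the exact prompt, but the manual recovery it points to is the usual one for ZFS-on-root systems in general (nothing TrueNAS-specific), roughly:

zpool import -f -N boot-pool   # force-import the root pool without mounting its datasets
exit                           # leave the initramfs shell and let the normal boot continue

As noted above, that only gets you through the current boot; the next reboot drops back into initramfs until the leftover metadata is dealt with.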

Workaround

After further troubleshooting, I discovered that the issue was caused by residual ZFS metadata that the installer (which possibly uses a tool like wipefs -a) failed to fully clear. The solution was to manually run the following commands from the installer’s Shell option:

blkdiscard /dev/nvme0n1
blkdiscard /dev/nvme1n1

Once I ran these commands to fully clear the drives, the installation completed without further issues, and the system booted correctly.
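
Before rebooting into the installer again, a quick sanity check (not something the installer asks for, just my own habit) is to confirm that nothing is left behind:

wipefs /dev/nvme0n1   # with no options this only lists remaining filesystem/RAID signatures; it doesn’t erase anything
zdb -l /dev/nvme0n1   # "failed to unpack label" on all four labels means no ZFS label survived

If blkdiscard isn’t available or the drive doesn’t support discard, wipefs -a followed by sgdisk --zap-all on each device should achieve a similar result, though I only tested the blkdiscard route.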

Additional Notes

  • The related Jira tickets (NAS-131900, NAS-132081, and NAS-133006) have been closed, but the issue may still occur in specific cases.

I hope this helps others facing similar issues. Thanks to everyone in the community for sharing your experiences, and to the TrueNAS team for your continued efforts in improving SCALE.

Hello khalsajing + Everyone else !

I continue to safely & happily run my TrueNAS Electric Eel 24.10.2.1 that fell victim to the “ghost” exported boot-pool during the upgrade from 24.04.xxx.

However I would now like to proceed with the steps of clearing the GUI anomaly, but I am hoping to confirm the details again before taking any steps.

Is the “fix” simply …

  • export the 24.10.2.1 configuration file
  • destroy all data on the boot-pool disk
  • re-install 24.10.2.1 on the wiped boot-pool
  • import the previously saved 24.10.2.1 configuration file?

To state the obvious, we should NOT be exporting ANY other disk pools or VDEVs (Apps, Storage, etc.). Just complete the steps as noted above?

My primary App on 24.10.2.1 is PLEX.
I do not need to back up/save the PLEX config since the Apps VDEV will not be modified during the ghost GUI fix?

Sorry for all of the Q&A.
I want to fix the GUI issue, but I do not want to lose any data on any other drive in the system.
🙂

I finally went through the official fix (a re-install), because after upgrading to 25.04 the exported boot-pool appeared as a boot “option” with the same name as the real boot-pool, meaning I could only boot by manually selecting the correct one to mount, which I didn’t want to have to do more than once.

Is the “fix” simply …

Your list seems correct, @PackRat2025, but I don’t believe you need to manually destroy data on the boot-pool; a fresh install should (and in my case did) wipe all data on your boot drives. I only:

  • Exported my 25.04 configuration (since I had already updated), including the password secret seed
  • Re-installed 25.04
  • Logged into the web GUI to re-upload my configuration.

That seemed to work perfectly for me; I no longer have any exported pools visible, and my apps, pools, and shares are all present. As far as I’m aware (though I’m still relatively a novice here), everything outside the core OS is stored within your pools, which aren’t even touched by this operation, so it should be completely safe.

I did not export pools or VDEVs before or after exporting my configuration. When first booting into the new installation of TrueNAS, none of them appeared (as they were not configured) but all were restored after re-uploading the configuration.

I don’t personally run Plex as an app, but I would assume it’s configured similarly for you and its data is stored in pools.

Well … I put the brass ones on, and then gave myself an anxiety attack. The 24.10.2.1 installer still does not properly clean a drive before installing. Figured that out the hard way. So I had to …

From the installer, choose Shell.
Run lsblk to identify the drives,
then blkdiscard -f /dev/sdX (or /dev/vdX …)

… for my boot-pool drive, THEN the new install was successful.

So all is back on track and the GUI anomaly is gone, BUT, now I have to remember how the heck I switched TrueNAS from LEGACY boot to UEFI boot.
Sigh.
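
(In case it helps anyone in the same spot: whether the freshly installed system actually came up via UEFI can be checked from a shell with

[ -d /sys/firmware/efi ] && echo "booted via UEFI" || echo "booted via legacy BIOS"

since /sys/firmware/efi only exists when the kernel was started through UEFI firmware.)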