I assume the boot pool shouldn’t be exported, but I’m not sure why that happened during the upgrade or how best to resolve it. I would expect there to be a way to “un-export” it, but I can’t find one.
Is this an issue with my upgrade or is this something I should expect that I was unaware of?
In case it’s useful, my system information is below:
TrueNAS Version: 24.10-RC.2
CPU: Ryzen 5 5500GT
Ram: 64 GB DDR4, 3600 MHz
Boot drive: WD_BLACK SN750 SE 500 GB
Output of zpool status:
admin@truenas[~]$ zpool status -v
zsh: command not found: zpool
admin@truenas[~]$ sudo zpool status -v
[sudo] password for admin:
pool: Vault
state: ONLINE
status: Some supported and requested features are not enabled on the pool.
The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
the pool may no longer be accessible by software that does not support
the features. See zpool-features(7) for details.
config:
NAME                                      STATE     READ WRITE CKSUM
Vault                                     ONLINE       0     0     0
  raidz2-0                                ONLINE       0     0     0
    f8c8b4cd-4e10-4585-89d2-3a05b47d6759  ONLINE       0     0     0
    a1fd689f-aa31-45bc-9400-73e8707e5dc6  ONLINE       0     0     0
    bb17f32f-e28e-4580-878e-d338f2b37352  ONLINE       0     0     0
    97812c54-3e6f-4e20-b90b-1208b8585733  ONLINE       0     0     0
errors: No known data errors
pool: boot-pool
state: ONLINE
config:
NAME         STATE     READ WRITE CKSUM
boot-pool    ONLINE       0     0     0
  nvme0n1p3  ONLINE       0     0     0
errors: No known data errors
Output of zpool list:
admin@truenas[~]$ sudo zpool list
NAME         SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
Vault       21.8T   467G  21.4T        -         -     0%     2%  1.00x    ONLINE  /mnt
boot-pool    448G  7.08G   441G        -         -     0%     1%  1.00x    ONLINE  -
After looking into this more, my current thought is that this is just the previous boot environment(s) showing up in the GUI, since my actual boot pool is mounted and working fine.
I won’t know for certain until the full Electric Eel release, but hopefully removing the old boot environments will clear this from the UI. Ideally there’s another way to verify it, but I haven’t found anything in the documentation so far.
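Presumably the boot environments can be listed from the shell as datasets under boot-pool/ROOT, so something like this (assuming sudo access) should show whether the old ones are still hanging around:
sudo zfs list -r -o name,used,mountpoint boot-pool/ROOT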
What do you know about the previous use of this drive? Was it purchased new or used? Was it previously used for CORE or any other ZFS based software?
When we saw something similar in internal testing, it turned out that 24.10 was picking up leftover partitions from previous deployments buried on the boot drive and reporting them as if there were an exported boot pool.
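If you want to check for that on your system, a rough way to look for stale partitions and leftover ZFS labels on the boot drive would be something like the following (the device and partition names here are only examples, adjust them to match your hardware):
# list partitions and any filesystem signatures on the boot drive
sudo lsblk -o NAME,SIZE,FSTYPE,PARTLABEL,PARTUUID /dev/nvme0n1
# dump any ZFS labels left on a suspect partition; a clean partition
# reports "failed to unpack label" for all four label slots
sudo zdb -l /dev/nvme0n1p2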
I have a similar setup with a pair of new mirrored NVMe boot drives that have only ever been used for the TrueNAS boot OS. Like the OP, I encountered the issue after upgrading to 24.10-RC2: the GUI reports two disks as exported and needing import, even though zpool import shows no pools available.
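To be specific, by “zpool import” I mean roughly the following; the directory-scan variants might also be worth running in case the default scan misses a device (these are the standard udev paths):
# default scan for importable pools
sudo zpool import
# explicitly scan the persistent device-name directories as well
sudo zpool import -d /dev/disk/by-id
sudo zpool import -d /dev/disk/by-partuuid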
My Scenario:
server0: Freshly installed with TrueNAS SCALE 24.04.2.3, then upgraded to 24.10-RC2. The ghost “import disks” notification appeared after this upgrade.
I attempted to install 24.10-RC2 directly, but ended up at an initramfs screen. I then installed 24.04.2.3, and from there, upgraded to 24.10-RC2, resulting in the ghost disk import issue.
server1 Comparison:
server1 (mirrored NVMe drives, similar hardware) has been running for years, upgraded from TrueNAS SCALE 24.04.2.2, and does not show this issue after upgrading to 24.10-RC2.
Key Difference:
server0 was installed fresh with 24.04.2.3 before upgrading, while server1 was upgraded from 24.04.2.2. This difference in base versions may be what triggers the issue.
What do you know about the previous use of this drive? Was it purchased new or used? Was it previously used for CORE or any other ZFS based software?
The drive is new and was only used to install TrueNAS. I installed 24.04.2.2, upgraded to 24.04.2.3 and then upgraded to 24.10-RC.2.
As far as I’m aware there are no previous deployments. I did redo the initial install a few times while troubleshooting an unrelated hardware failure, but those installs should have wiped the entire partition table, so I wouldn’t count them.
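One way to sanity-check whether anything survived those reinstalls would presumably be to list leftover signatures and the partition table read-only, along these lines (the device name is an example):
# wipefs without -a only lists signatures, it does not erase anything
sudo wipefs /dev/nvme0n1
# print the current partition table
sudo fdisk -l /dev/nvme0n1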
I don’t have a good way to test skipping the minor upgrade, but my experience does align with @khalsajing’s. However, I did not run into the initramfs screen he describes; my upgrade went smoothly as far as I can tell.
The primary difference between my configuration and the OP’s is that mine is a mirrored NVMe pair instead of a single NVMe drive. This could explain why the OP’s fresh 24.10-RC.2 installation succeeded while mine ran into issues. Additionally, I noticed that the 24.10-RC.2 installer no longer prompts for a swap partition, unlike previous versions.
Upgrading from 24.04.2.3 to 24.10-RC.2 might be what causes this issue: I successfully updated another server (server1) directly from 24.04.2.2 to 24.10-RC.2, skipping the interim 24.04.2.3 upgrade, and it does not show the GUI anomaly.
We’ve seen similar error behavior, rarely, but not in situations like the ones you describe. I’d suggest creating a bug report and then attaching a debug via the private file upload service (an automatic comment with the link will appear after you create the issue). That way our devs can take a look at what’s going on.
In case anyone finds it useful, upgrading from 24.10-RC.2 to 24.10 proper does not fix or change the exported boot pool. I’m still debating the best course of action for dealing with it, since it does not really impact the functionality of the NAS.
Hello, exactly the same issue here with the mirror… just a question: what if I just do the disk replace process (System -> Boot -> Boot Pool Status -> … -> Replace) on the exported disk with a new one? Might that simply fix the problem and avoid a reinstall, which could be quite risky and annoying?
According to the bug report linked earlier in this thread, the recommended path forward is to back up your configuration, do a fresh install of Electric Eel, and then restore from the configuration backup.
I haven’t attempted this yet as I don’t see any negative impacts other than the confusing GUI state. I plan to at some point in the near future.
I wanted to share my experience with this issue to help others and encourage further discussion. Like others in this thread, I encountered problems installing TrueNAS SCALE (Electric Eel) on mirrored NVMe drives previously used with TrueNAS SCALE.
Issue Overview
The installation appeared to complete successfully, but the system dropped into the initramfs shell on reboot. The initramfs shell suggested a resolution (likely involving a ZFS import). While that temporarily allowed the system to boot, the issue persisted, and subsequent reboots returned to the initramfs shell.
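For reference, the recovery that the initramfs prompt usually hints at looks roughly like this (a sketch from memory; the pool name boot-pool is an assumption):
# at the (initramfs) prompt
zpool import -f -N boot-pool
exit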
Workaround
After further troubleshooting, I discovered that the issue was caused by residual ZFS metadata that the installer (possibly using a tool like wipefs -a) failed to fully clear. The solution was to manually run the following commands from the installer’s Shell option:
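# NOTE: blkdiscard irreversibly discards everything on the target device;
# double-check the device names before running it.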
blkdiscard /dev/nvme0n1
blkdiscard /dev/nvme1n1
Once I ran these commands to fully clear the drives, the installation completed without further issues, and the system booted correctly.
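If anyone wants to double-check before rerunning the installer, the drives should come back with no readable signatures or labels after the blkdiscard, e.g. (assuming wipefs and zdb are available in the installer shell):
# should print nothing once all signatures are gone
wipefs /dev/nvme0n1
# should report "failed to unpack label" for every label slot
zdb -l /dev/nvme0n1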
Additional Notes
The related Jira tickets (NAS-131900, NAS-132081, and NAS-133006) have been closed, but the issue may still occur in specific cases.
I hope this helps others facing similar issues. Thanks to everyone in the community for sharing your experiences, and to the TrueNAS team for your continued efforts in improving SCALE.