Have to export/import pool each reboot

Hello, I’m new to TrueNAS, so forgive my ignorance if I don’t explain my problem well or if it’s supposed to work like this. My problem is that each time I reboot I need to export and then import my two pools; both are offline on boot. I also then need to unlock the encrypted datasets, but maybe that is a separate problem.

My questions are therefore:

  1. Is it normal that a pool in a mirrored setup needs to be exported and imported after each boot?
  2. Is there anything I can do about it?

My setup is:

OS Version: TrueNAS-SCALE-24.10.0 (upgraded from RC2)
Product: H170N-WIFI
Model: Intel(R) Core™ i5-6500T CPU @ 2.50GHz
Memory: 16 GiB

Running on bare metal. Two pools:
Data VDEVs 1 x MIRROR | 2 wide | 9.1 TiB
Data VDEVs 1 x DISK | 1 wide | 931.51 GiB

No failed SMART tests on either.

Let me know if there are things I can try or share to help diagnose the issue.

On boot the pools are offline and I can’t import anything; the dropdown is empty.

Once I’ve exported the pool I can then import it and the dropdown is populated:

[Screenshot 2024-11-06 142000]

This next screenshot was taken after boot and before I added the extra drive and pool, but you can see the Main pool has a GUID (a problem I read about elsewhere):

If there is anything else that would help let me know and I’ll try it.
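For example, I could run these from the shell right after a boot and share the output if that helps; as far as I understand, both commands only report state and change nothing:

sudo zpool status    # lists pools that are currently imported
sudo zpool import    # lists pools ZFS can discover but has not imported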

I recommend you post a bug report on this.
Also, update to 24.10.0.2.


I had a similar problem: I created a pool and TrueNAS wouldn’t import it on the next boot, even though everything works fine when importing it manually.

Here is what I noticed, maybe you have the same issue:

There is an error message in /var/log/middlewared.log:

[2024/11/08 21:29:25] (ERROR) PoolService.import_on_boot_impl():304 - Failed to import 'app' with guid: '7554823053285154468' with error: "cannot import '7554823053285154468': no such pool available\n"

The pool GUID is correct, but ZFS reports “no such pool available”.

Indeed, in my case the pool was not listed when using “zpool import” on the command line; it was, however, available for import using the GUI.

Using zpool import -d /dev/sda1 correctly shows the pool, so the issue seems to be with discovery of the pool. According to the documentation (man zpool-import), discovery uses libblkid.

Indeed, running blkid did not list the partition containing the pool. On my other TrueNAS installation, which doesn’t have any problems, blkid displays the partition label, GUID, and so on. The symlinks in /dev/disk/by-partuuid were also completely missing.
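For reference, here is a rough sketch of the checks I ran; /dev/sda1 is just the affected partition on my machine, so adjust the device name for your system:

sudo grep import_on_boot /var/log/middlewared.log   # the failed boot-time import shows up here
sudo zpool import                                    # plain discovery; the pool was missing from this list
sudo zpool import -d /dev/sda1                       # point discovery at the device directly; this did find it
sudo blkid /dev/sda1                                 # a healthy member should show TYPE="zfs_member"
ls -l /dev/disk/by-partuuid                          # the symlink for the affected partition was missing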

My suspicion was that the partition table (GPT) was wrong, perhaps missing some attributes, which caused the partition not to be discovered properly.
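One way to sanity-check that suspicion, assuming sgdisk from gptfdisk is available on your install, is to print and verify the GPT (both commands are read-only):

sudo sgdisk -p /dev/sda   # print the partition table
sudo sgdisk -v /dev/sda   # verify the GPT and report any problems found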

I went with the nuclear option and recreated the pool. No more issues after that.

Unless someone with this (or these) issue(s) posts a bug report, the chances of getting this fixed are slim at best.


Thanks both. Another person with the same problem has logged this Jira ticket: TrueNAS - Issues - iXsystems TrueNAS Jira

I’ve uploaded a debug report so hopefully it can get fixed.

Thanks @tanc for linking that ticket on Jira.

I had the same issue, and rather than blowing it all away and starting again or spending a long time figuring out how to use the blkid and wipefs commands, I just detached the drives in my pool one at a time, fully wiped each drive to zeros, then reattached it. Thankfully they are all SSDs, so the resilvering didn’t take long. This has fixed the issue for me.

Hello, I experienced the exact same problem as described here. I managed to fix the issue relatively quickly, in a similar way to what user cgfrost described. Since that answer was pretty short, I wanted to give a guide for less experienced users. The main difference is that a quick wipe should be enough to solve the issue; a wipe to zeros certainly also works but takes much longer.

In my setup I am using a simple mirror with two HDDs. I imagine the process is very similar for RAID setups, but please only follow these instructions if you also have a mirror setup.

The problem:
############

It seems that the drives affected by this issue of the pool being exported after a reboot have multiple filesystem signatures on them, which confuses TrueNAS so that it does not automatically import the pool. The pool is still present, though, and can be manually exported and reimported.

For my drives, the command sudo blkid --probe /dev/sda1 gives the following output:

blkid: /dev/sda1: ambivalent result (probably more filesystems on the device, use wipefs(8) to see more details)
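The wipefs tool that message points to can also just list the signatures it finds without deleting anything, which is a safe way to see the extra filesystem remnants on the partition (again, /dev/sda1 is the device on my system):

sudo wipefs /dev/sda1   # with no options wipefs only lists detected signatures, it removes nothing

The guide below deliberately avoids removing signatures by hand and instead lets the wipe and resilver recreate a clean partition.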

The Solution:
#############

Before doing anything, make sure to back up your data, as a mistake in following the instructions will lead to data loss!

Repeat the following for all affected disks: (in my case first for disk sda and then for disk sdb)
########################

wipe the drive:
###############
go to “Storage” and click on “Disks”
select the affected disk (in my case, sda)
click “Wipe”
choose the “Quick” option

After the drive has been wiped, you need to resilver it to copy your data from the other drive in the mirror back onto the wiped one.

resilvering:
############
go to “Storage”
on your data pool, under “Topology”, click “Manage Devices”
select the drive we just wiped (you should see two drives: one named sdb and another that was earlier named sda) and click “Replace”
in the drop-down menu you can select the previously wiped drive, named sda
select it and go back to the “Storage” menu

You should see the resilvering process has started under the “ZFS Health” tab
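If you prefer the shell, you can also watch the resilver from there; Data-Pool is the name of my pool (it also appears in the blkid output further down), so use your own pool name:

sudo zpool status Data-Pool   # shows "resilver in progress" and an estimated completion time while it runs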

After the drive is resilvered, reload the page: everything on “Storage” should be green indicating that both disks in your mirror are working fine.

Confirm that the procedure has worked

You can confirm by going to “Storage” and clicking on “Disks” to check that all disks (for me, sda and sdb) show the correct data pool instead of something like “unassigned”.

Since the drive was completely wiped and only the partition holding your data pool was recreated during the resilvering process, the issue with the multiple filesystem signatures should now be fixed for that drive. You can check by running sudo blkid --probe /dev/sda1 again. For partition sdb1 this command still shows the same error mentioned above, but for the now-fixed partition sda1 I get:

/dev/sda1: VERSION="5000" LABEL="Data-Pool" UUID="7327691878511260953" UUID_SUB="1860526013766648308" BLOCK_SIZE="4096" TYPE="zfs_member" USAGE="filesystem" PART_ENTRY_SCHEME="gpt" PART_ENTRY_NAME="data" PART_ENTRY_UUID="da453a01-5237-4af1-802e-0e7ab8f52530" PART_ENTRY_TYPE="6a898cc3-1dd2-11b2-99a6-080020736631" PART_ENTRY_NUMBER="1" PART_ENTRY_OFFSET="2048" PART_ENTRY_SIZE="15628050432" PART_ENTRY_DISK="8:0"

Only once you are very sure that everything has worked fine, repeat the instructions above for each remaining affected disk.
########################

After the procedure, blkid should report only one filesystem signature on each of your drives.
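To check all mirror members in one go, a small shell loop over the partitions works; sda1 and sdb1 are the partitions in my mirror, so adjust to yours:

for part in /dev/sda1 /dev/sdb1; do sudo blkid --probe "$part"; done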

Solutions suggested by others either take longer or require you to back up your data, wipe all drives, and set up the pool again from scratch.

For anyone experiencing this issue, the discussion on this forum named “Pool Offline After Every Reboot” might also be interesting.
