After reboot stuck on pool.import_on_boot

I was getting close to having everything up and running, only to trip near the final hurdle, entirely brought on by myself. Any help getting back up and running would be much appreciated.

Specs if it helps:
Version 24.10
i5-14500
Asus W680-ACE IPMI
32 GB ECC RAM (on the QVL)
64 GB Optane - boot drive
1 TB NVMe - SN850X
3× 28 TB Exos - RAIDZ1

I restarted my system earlier after attempting to delete some files with mc (Midnight Commander); files would delete for a while and then seemingly get stuck. I noticed that my restart time was sitting around 30-45 minutes, which I thought was a bit excessive, so I decided to look for the cause.
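In hindsight, watching the pool's activity with something like the command below might have shown whether those deletes were still churning away in the background - zpool iostat just samples read/write activity at an interval, so this is a guess at a diagnostic rather than something I actually ran at the time:

zpool iostat -v The_Library 5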

I stopped all my running containers; the restart was still sitting around 30-45 minutes. Then I disabled/stopped my SMB share, and again the restart took quite a while to complete. After one of the restarts I noticed the "running jobs" icon; when I click to view it, it shows:

pool.import_on_boot, sitting at 0.00%. It sat for about 40 minutes before I hoped another restart would fix this (I know, I know); it has now been sitting with pool.import_on_boot as a running job for just over an hour.
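If it helps anyone else, I believe the same job list can be pulled from a shell with the middleware client - this is just my understanding of midclt on SCALE, so treat it as a sketch:

midclt call core.get_jobs '[["state", "=", "RUNNING"]]'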

If I go to my Storage dashboard I can see my 1 TB NVMe pool; all seems fine with it, and I can click into the individual datasets within it.

On the 3× 28 TB RAIDZ1 pool, it shows the topology and usage - it reports 23.61 TiB used, 31.02 TiB free - but if I click Manage Datasets and try to click on any of the sub-datasets I get an error:

“CallError” - [EFAULT] Failed retrieving GROUP quotas for MyDataset/SubDataset, or

“CallError” - [ENOENT] Path /mnt/MyDataset/SubDataset not found - depending on which sub-dataset I click on.
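From what I've read, the ENOENT error would fit the datasets existing but not actually being mounted; if I understand the zfs tooling right, something like this should show the mounted state of everything under the pool:

zfs get -r mounted,mountpoint The_Library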

I had a Google and ran zpool list -v:

NAME                                       SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
Docker Containers                          928G  97.9G   830G        -         -     2%    10%  1.00x    ONLINE  /mnt
  44a84041-4c32-4c3a-bd0b-649cd0dbff0e     932G  97.9G   830G        -         -     2%  10.5%      -    ONLINE
The_Library                               76.4T  31.1T  45.3T        -         -     1%    40%  1.00x    ONLINE  /mnt
  raidz1-0                                76.4T  31.1T  45.3T        -         -     1%  40.7%      -    ONLINE
    9afcb73e-682d-42e6-a636-3d3e3b65a11d  25.5T      -      -        -         -      -      -      -    ONLINE
    128addd9-c515-4104-8348-852da83b0325  25.5T      -      -        -         -      -      -      -    ONLINE
    0900f966-326c-42e5-994a-746f41d98cfd  25.5T      -      -        -         -      -      -      -    ONLINE
boot-pool                                   54G  5.20G  48.8G        -         -     3%     9%  1.00x    ONLINE  -
  nvme1n1p3

So my HDDs are seen to some degree here.

zpool status

  pool: Docker Containers
 state: ONLINE
  scan: scrub repaired 0B in 00:00:33 with 0 errors on Sun Jan 19 00:00:34 2025
config:

        NAME                                    STATE     READ WRITE CKSUM
        Docker Containers                       ONLINE       0     0     0
          44a84041-4c32-4c3a-bd0b-649cd0dbff0e  ONLINE       0     0     0

errors: No known data errors

  pool: The_Library
 state: ONLINE
config:

        NAME                                      STATE     READ WRITE CKSUM
        The_Library                               ONLINE       0     0     0
          raidz1-0                                ONLINE       0     0     0
            9afcb73e-682d-42e6-a636-3d3e3b65a11d  ONLINE       0     0     0
            128addd9-c515-4104-8348-852da83b0325  ONLINE       0     0     0
            0900f966-326c-42e5-994a-746f41d98cfd  ONLINE       0     0     0

errors: No known data errors

  pool: boot-pool
 state: ONLINE
  scan: scrub repaired 0B in 00:00:03 with 0 errors on Sun Feb 16 03:45:04 2025
config:

        NAME         STATE     READ WRITE CKSUM
        boot-pool    ONLINE       0     0     0
          nvme1n1p3  ONLINE       0     0     0

I tried zpool import The_Library, but it comes back that a pool with that name already exists. If I go back to the Storage dashboard and try to export the pool (to try and import it again), I get an error that it's busy and can't unmount.
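In case it's relevant, I read that you can check what's holding a mountpoint busy with something like the below (assuming fuser is present on SCALE; lsof should do the same job):

fuser -vm /mnt/The_Library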

Not really sure what I've broken or how to fix it. If you need any other info, be sure to say.

Many Thanks in advance.

Well, for starters, your pool does seem to be imported properly - which is good. The altroot is also set correctly.

Wondering what's causing the job to hang for so long. We'd probably need a debug to tell, so it may be worth raising a Jira ticket.

Do you see the datasets when you run zfs list -r The_Library?
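Something like this, with a couple of extra columns, would be even more telling - these are all standard zfs list options:

zfs list -r -o name,used,avail,mountpoint,mounted The_Library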

So I went to check what the output of zfs list -r The_Library would be, but noticed the jobs icon was no longer present; it said something to the effect that the job had been stopped by myself (I don't know what I did). But as pool.import_on_boot was no longer running, I was able to export the pool manually and add it back in, and everything came with it.
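For anyone following along, my understanding is the shell equivalent is roughly the below, though on TrueNAS the UI export/import is the safer route since the middleware keeps its own record of managed pools (the -R /mnt matches the ALTROOT shown in zpool list above):

zpool export The_Library
zpool import -R /mnt The_Library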

I did take a risk and restarted the system; thankfully the pool stayed connected. The restart still took some half an hour, I think, but I may just leave everything as is.

Thanks for taking the time to reply.