Pool.import_pool stuck at 0.00

Hello all!

This is the first time I’ve run into an issue with TrueNAS/ZFS, so I’m trying to keep calm, but I’m a bit of a novice at troubleshooting pool issues. All scrub tasks (I run them every 2 weeks) and SMART tests (on the opposite weeks of the scrubs) have always come back clean. I have a single pool of 2 vdevs, each a raidz2 of 6x 16TB drives. About half of my pool is full. Important photos are backed up offsite, but losing this data would still be a huge, huge bummer. Currently running TrueNAS-SCALE-24.10.2.3 virtualized in Proxmox, with an HBA connected to my JBOD and passed through to the VM.

I was running into an issue with snapshot exclusions not working and read that it might be because I had a space in my pool name. To try and fix this, I took the following steps (a rough shell recap follows the list):

  1. In the GUI I Exported/Disconnected the pool.
  2. In the CLI I ran zpool import 'Main Databank main-databank'
  3. zpool status main-databank (at this point there were no issues and everything was online).
  4. zpool export main-databank
  5. In the GUI I did Storage > Import Pool > main-databank.
  6. The import job (pool.import_pool) has been stuck at 0.00 for over an hour now, and I’m starting to panic just a little bit.
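
For clarity, the shell part of that sequence boiled down to roughly this (quoting of the old pool name shown as per the correction at the end of this post):

  zpool import 'Main Databank' main-databank   # import the old pool under its new name
  zpool status main-databank                   # everything showed ONLINE at this point
  zpool export main-databank                   # exported again before the GUI import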

I ran zpool status main-databank again after 45 minutes or so just to see, and the new output looks like this. I’m not sure whether it looks this way because I exported the pool in the meantime?

  pool: main-databank
 state: SUSPENDED
status: One or more devices are faulted in response to IO failures.
action: Make sure the affected devices are connected, then run 'zpool clear'.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-JQ
  scan: scrub repaired 0B in 1 days 02:57:35 with 0 errors on Sat Aug 16 02:57:38 2025
config:

        NAME                                      STATE     READ WRITE CKSUM
        main-databank                             ONLINE       0     0     0
          raidz2-0                                ONLINE       0    18     0
            2b7f7225-da86-4dbd-a711-19692d4fc559  ONLINE       3     8     0
            f987d3d3-c7d2-445e-ad29-3fcc3db374a4  ONLINE       3    10     0
            9a4228fc-e858-48fd-9cf3-0934eb0a07fc  ONLINE       3    10     0
            c22a5efc-0a36-4293-8dc2-f6e644c1f884  ONLINE       3    10     0
            ba8dc6b1-54f6-4d50-92f8-5a3e9ffc47fd  ONLINE       3     8     0
            a535a835-be87-4355-88b5-6289ddb45ae5  ONLINE       3     8     0
          raidz2-1                                ONLINE       0    18     0
            5abb8bd0-6331-426b-9abf-cbc11a26ea16  ONLINE       3     8     0
            67d688ca-c8c3-4561-a36b-bf8b48c242a6  ONLINE       3     8     0
            b0bebcc5-078c-4645-a100-7717035478dd  ONLINE       3    10     0
            07cb04bb-f5a5-42cf-8ffd-5949efe5da21  ONLINE       3    10     0
            9100c8b1-addb-49ca-bb33-53f3230f7f11  ONLINE       3    10     0
            f61c4573-901a-49a9-855a-2a25c1b48c41  ONLINE       3     8     0
        cache
          97912edb-fd39-462b-ae76-ca5e14032d04    ONLINE       0     0     0

errors: 9 data errors, use '-v' for a list

Praying that this little fix attempt didn’t cost me my data or pool integrity. I’d appreciate any guidance you all can give.

Thank you!

Correction: In the CLI I ran zpool import 'Main Databank' main-databank

To me, that smattering of errors suggests some form of shared underlying hardware issue.

Could you describe your server in more detail? A full list please: brands and models. You also mention an HBA and a JBOD; please elaborate on the models and how they are connected.

What does the cooling look like, and did you check your system for memory and CPU stability when you set it up?

You also mention passing something through; was that the HBA? Did you also blacklist the HBA’s driver on the Proxmox host to prevent Proxmox from grabbing the ZFS disks for itself before the VM can?
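
On a typical Proxmox host that looks something like the snippet below. The driver name and the vendor:device ID are assumptions on my part; check lspci -nnk to see what your 9207-8e actually reports and which driver claims it.

  # On the Proxmox host (sketch, not verified against your system)
  echo "blacklist mpt3sas" >> /etc/modprobe.d/blacklist-hba.conf
  # or, more targeted: bind the card to vfio-pci by its vendor:device ID
  echo "options vfio-pci ids=1000:0087" >> /etc/modprobe.d/vfio-hba.conf
  update-initramfs -u -k all   # then reboot the host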

Seeing the smartctl -a /dev/sdX output of your drives may be useful.
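
If the drives show up inside the TrueNAS VM behind the passed-through HBA, a quick loop like this collects the output for all of them (a sketch; adjust the device glob to match your system):

  for d in /dev/sd[a-z]; do
    echo "=== $d ==="
    smartctl -a "$d"
  done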


I agree. It seems like a passthrough issue, or something along those lines, caused the pool to go offline. Yes, the HBA is passed through as a PCIe device to TrueNAS.

CPUs: 2x Xeon E5-2680 v4
Motherboard: Supermicro X10DRi
Memory: 192GB Samsung ECC DDR4-2133 (mostly 16GB pairs with a couple of 32GB pairs)
HBA: LSI 9207-8e
JBOD: EMC KTN-STL3
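
For reference, the passthrough entry in the VM config on the Proxmox side looks something like this (a sketch; the PCI address here is just illustrative, not my actual one):

  # /etc/pve/qemu-server/<vmid>.conf (excerpt)
  machine: q35
  hostpci0: 0000:03:00.0,pcie=1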

I rebooted the entire server and attempted to import the pool again. It did mount, but it mounted to /mnt/mnt/main-databank, which is strange. I tried zfs set mountpoint=/mnt/main-databank main-databank, but running zfs get mountpoint main-databank right after still shows /mnt/mnt/main-databank, so now I’m stuck on that.
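
In case it helps with diagnosis, these read-only checks show how the pool is currently mounted (my assumption being that the double /mnt comes from an altroot being prepended to the mountpoint):

  zpool get altroot main-databank     # the GUI normally imports with altroot=/mnt
  zfs get mountpoint main-databank    # the displayed value includes the altroot, if one is set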

Setting mountpoint=/mnt/main-databank was a poor assumption on my end. Following instructions from the old forum, in a post titled Mount point /mnt/mnt/pool1 instead of /mnt/pool1 (I don’t seem to be able to post links), I reran it as zfs set mountpoint=/main-databank main-databank. It seems to be mounted as expected now.

These mount issues are caused by manually running zpool import in the shell without the -R switch to specify the proper altroot (zpool import -R /mnt poolname).

By changing the overall ZFS mountpoint like you describe, you risk interfering with imports done through the GUI as well, and future GUI imports may mount in the wrong location.
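
If the pool ever has to be imported from the shell again, a sequence along these lines keeps the paths in line with what the GUI/middleware expects (a sketch, assuming the pool-level mountpoint stays at /main-databank as you set it):

  zpool export main-databank
  zpool import -R /mnt main-databank
  zfs get mountpoint main-databank    # should now read /mnt/main-databank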

I see you marked your last post as the solution; I take it you have concluded that the errors are now gone and won’t reappear?