Hello all!
This is the first time I’ve run into an issue with TrueNAS/ZFS, so I’m trying to keep calm, but I’m a bit of a novice at troubleshooting pool issues. All scrub tasks (I run them every two weeks) and SMART tests (on the opposite weeks from the scrubs) have always come back clean. I have a single pool of two vdevs, each a RAIDZ2 of six 16 TB drives. The pool is about half full. Important photos are backed up offsite, but losing this data would still be a huge, huge bummer. I’m currently running TrueNAS-SCALE-24.10.2.3, virtualized in Proxmox, with an HBA connected to my JBOD and passed through to the VM.
I was running into an issue with snapshot exclusions not working and read it might be because I had a space in my pool name. To try to fix this, I followed these steps (the full sequence is consolidated below the list):
- In the GUI, I exported/disconnected the pool.
- In the CLI, I ran zpool import 'Main Databank' main-databank to rename the pool on import ('Main Databank' was the old name with the space).
- zpool status main-databank (at this point there were zero issues and everything was online).
- zpool export main-databank
- In the GUI, I went to Storage > Import Pool > main-databank.
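For anyone who wants it in one place, here’s the CLI portion as I believe I ran it (reconstructing from memory, so treat the exact quoting as approximate):

# old name had a space, hence the quotes; the rename happens on import
zpool import 'Main Databank' main-databank
# at this point everything showed ONLINE with zero errors
zpool status main-databank
# exported again so the GUI could re-import it under the new name
zpool export main-databank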
The import has been stuck on the pool.import_pool job at 0.00 for over an hour now, and I’m starting to panic just a little bit.
After 45 minutes or so I ran zpool status main-databank again just to see, and the new output looks like this. I’m not sure whether it reads this way simply because I had exported the pool in the meantime:
  pool: main-databank
 state: SUSPENDED
status: One or more devices are faulted in response to IO failures.
action: Make sure the affected devices are connected, then run 'zpool clear'.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-JQ
  scan: scrub repaired 0B in 1 days 02:57:35 with 0 errors on Sat Aug 16 02:57:38 2025
config:

        NAME                                      STATE     READ WRITE CKSUM
        main-databank                             ONLINE       0     0     0
          raidz2-0                                ONLINE       0    18     0
            2b7f7225-da86-4dbd-a711-19692d4fc559  ONLINE       3     8     0
            f987d3d3-c7d2-445e-ad29-3fcc3db374a4  ONLINE       3    10     0
            9a4228fc-e858-48fd-9cf3-0934eb0a07fc  ONLINE       3    10     0
            c22a5efc-0a36-4293-8dc2-f6e644c1f884  ONLINE       3    10     0
            ba8dc6b1-54f6-4d50-92f8-5a3e9ffc47fd  ONLINE       3     8     0
            a535a835-be87-4355-88b5-6289ddb45ae5  ONLINE       3     8     0
          raidz2-1                                ONLINE       0    18     0
            5abb8bd0-6331-426b-9abf-cbc11a26ea16  ONLINE       3     8     0
            67d688ca-c8c3-4561-a36b-bf8b48c242a6  ONLINE       3     8     0
            b0bebcc5-078c-4645-a100-7717035478dd  ONLINE       3    10     0
            07cb04bb-f5a5-42cf-8ffd-5949efe5da21  ONLINE       3    10     0
            9100c8b1-addb-49ca-bb33-53f3230f7f11  ONLINE       3    10     0
            f61c4573-901a-49a9-855a-2a25c1b48c41  ONLINE       3     8     0
        cache
          97912edb-fd39-462b-ae76-ca5e14032d04    ONLINE       0     0     0

errors: 9 data errors, use '-v' for a list
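For what it’s worth, these are the commands I’m tempted to run next, taken straight from the action/errors lines above, but I’m holding off until someone confirms it’s safe given the SUSPENDED state:

zpool status -v main-databank   # list the 9 files/datasets with data errors
dmesg | tail -n 50              # check the VM's kernel log for HBA/disk resets
zpool clear main-databank       # what the action line suggests, once the devices are reachable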
Praying that this little fix attempt didn’t cost me my data or pool integrity. Appreciative of any guidance that you all could give.
Thank you!