Reboot loop: unable to mount the pool as rw

Sorry for just creating an account and jumping in like this — I’ve been a long-time lurker, first-time poster.
OS: TrueNAS SCALE 25.04
Host: Proxmox Virtual Environment 8.4.1
After a large write operation, the system became unresponsive. Upon reboot, it entered an infinite reboot loop. The serial console showed the following error:
[* ] Job ix-zfs.service/start running (1min 46s / 15min 27s)
[ 112.258075] BUG: unable to handle page fault for address: ffffac1fb504a82e
[ 112.258609] #PF: supervisor read access in kernel mode
[ 112.259005] #PF: error_code(0x0000) - not-present page
[ 112.259385] PGD 110000067 P4D 110000067 PUD 1101f7067 PMD 19a570067 PTE 0
[ 112.259896] Oops: Oops: 0000 [#1] PREEMPT SMP NOPTI
[ 112.260408] CPU: 0 UID: 0 PID: 2947 Comm: txg_sync Tainted: P OE 6.12.15-production+truenas #1
[ 112.261433] Tainted: [P]=PROPRIETARY_MODULE, [O]=OOT_MODULE, [E]=UNSIGNED_MODULE
[ 112.262187] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS rel-1.16.3-0-ga6ed6b701f0a-prebuilt.qemu.org 04/01/2014
[ 112.263079] RIP: 0010:zap_leaf_chunk_alloc+0x31/0x50 [zfs]
[ 112.263904] Code: 48 89 fb e8 d1 ff ff ff 8b b3 d8 00 00 00 ba 01 00 00 00 48 89 df 0f b7 68 22 8d 4e fb d3 e2 0f b7 cd 48 8d 0c 49 48 8d 14 8a <0f> b7 54 50 46 66 89 50 22 e8 a1 ff ff ff 66 83 68 1c 01 89 e8 5b
[ 112.265209] RSP: 0018:ffffac1f8f4b3990 EFLAGS: 00010206
[ 112.265572] RAX: ffffac1fb4eca000 RBX: ffff9769839a6900 RCX: 000000000002fffd
[ 112.266197] RDX: 00000000000c03f4 RSI: 000000000000000f RDI: ffff9769839a6900
[ 112.266722] RBP: 000000000000ffff R08: 0000000000000000 R09: 0000000000000000
[ 112.267231] R10: ffffac1fb4ed3030 R11: ffff976920e9b330 R12: 0000000000000280
[ 112.267937] R13: ffff9769839a6400 R14: ffff9769839a6900 R15: 0000000000000001
[ 112.268775] FS: 0000000000000000(0000) GS:ffff97711ba00000(0000) knlGS:0000000000000000
[ 112.269510] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 112.270077] CR2: ffffac1fb504a82e CR3: 000000011ce42005 CR4: 0000000000372ef0
[ 112.270751] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 112.271395] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[ 112.272058] Call Trace:
[ 112.272294] <TASK>
[ 112.272814] ? __die+0x23/0x70
[ 112.273370] ? page_fault_oops+0x173/0x5b0
[ 112.273959] ? search_module_extables+0x19/0x60
[ 112.274572] ? search_bpf_extables+0x5f/0x80
[ 112.275155] ? exc_page_fault+0xed/0x190
[ 112.275743] ? asm_exc_page_fault+0x26/0x30
[ 112.276356] ? zap_leaf_chunk_alloc+0x31/0x50 [zfs]
[ 112.277256] zap_leaf_transfer_array+0x54/0x130 [zfs]
[ 112.278138] zap_leaf_transfer_entry+0xc6/0x100 [zfs]
[ 112.279068] zap_leaf_split+0x110/0x1a0 [zfs]
[ 112.279821] zap_expand_leaf+0x219/0x2a0 [zfs]
[ 112.280552] fzap_update+0x101/0x1b0 [zfs]
[ 112.281228] zap_update_uint64_impl+0x41/0xb0 [zfs]
[ 112.281983] ddt_zap_update+0x7d/0xb0 [zfs]
[ 112.282678] ddt_sync_flush_entry+0x130/0x2d0 [zfs]
[ 112.283402] ddt_sync_table_flush+0xec/0x190 [zfs]
[ 112.284110] ddt_sync+0x7d/0xc0 [zfs]
[ 112.284753] spa_sync_iterate_to_convergence+0x11c/0x200 [zfs]
[ 112.285564] spa_sync+0x30a/0x600 [zfs]
[ 112.286246] txg_sync_thread+0x1ec/0x270 [zfs]
[ 112.286962] ? __pfx_txg_sync_thread+0x10/0x10 [zfs]
[ 112.287708] ? __pfx_thread_generic_wrapper+0x10/0x10 [spl]
[ 112.288337] thread_generic_wrapper+0x5a/0x70 [spl]
[ 112.288906] kthread+0xcf/0x100
[ 112.289344] ? __pfx_kthread+0x10/0x10
[ 112.289817] ret_from_fork+0x31/0x50
[ 112.290286] ? __pfx_kthread+0x10/0x10
[ 112.290753] ret_from_fork_asm+0x1a/0x30
[ 112.291259] </TASK>
[ 112.291619] Modules linked in: ntb_netdev(E) ntb_transport(E) ntb_split(E) ntb(E) ioatdma(E) dca(E) ib_core(E) intel_rapl_msr(E) intel_rapl_common(E) intel_uncore_frequency_common(E) intel_pmc_core(E) intel_vsec(E) pmt_telemetry(E) pmt_class(E) kvm_intel(E) kvm(E) crct10dif_pclmul(E) ghash_clmulni_intel(E) sha512_ssse3(E) sha256_ssse3(E) sha1_ssse3(E) aesni_intel(E) gf128mul(E) crypto_simd(E) cryptd(E) rapl(E) iTCO_wdt(E) intel_pmc_bxt(E) iTCO_vendor_support(E) snd_hda_intel(E) pcspkr(E) snd_intel_dspcfg(E) watchdog(E) snd_hda_codec(E) snd_hda_core(E) snd_hwdep(E) virtio_balloon(E) bochs(E) snd_pcm(E) drm_vram_helper(E) drm_ttm_helper(E) ttm(E) snd_timer(E) snd(E) soundcore(E) drm_kms_helper(E) button(E) joydev(E) evdev(E) sg(E) serio_raw(E) nfsd(E) auth_rpcgss(E) nfs_acl(E) lockd(E) grace(E) loop(E) drm(E) efi_pstore(E) configfs(E) sunrpc(E) qemu_fw_cfg(E) ip_tables(E) x_tables(E) autofs4(E) zfs(POE) spl(OE) efivarfs(E) ses(E) enclosure(E) hid_generic(E) scsi_transport_sas(E) usbhid(E) hid(E) ahci(E) sd_mod(E)
[ 112.291665] ahciem(E) libahci(E) ehci_pci(E) virtio_net(E) uhci_hcd(E) ehci_hcd(E) libata(E) net_failover(E) virtio_scsi(E) failover(E) psmouse(E) crc32_pclmul(E) crc32c_intel(E) i2c_i801(E) scsi_mod(E) i2c_smbus(E) scsi_common(E) lpc_ich(E) usbcore(E) usb_common(E)
Booting `TrueNAS Scale GNU/Linux 25.04.0'

Loading Linux 6.12.15-production+truenas ...
Loading initial ramdisk ...
[ 0.000000] Linux version 6.12.15-production+truenas (root@tnsbuilds01.tn.ixsystems.net) (gcc (Debian 12.2.0-14) 12.2.0, GNU ld (GNU Binutils for Debian) 2.40) #1 SMP PREEMPT_DYNAMIC Tue Apr 15 20:07:13 UTC 2025
[ 0.016994] Kernel command line: BOOT_IMAGE=/ROOT/25.04.0@/boot/vmlinuz-6.12.15-production+truenas root=ZFS=boot-pool/ROOT/25.04.0 ro console=tty1 console=ttyS0,9600 libata.allow_tpm=1 amd_iommu=on iommu=pt kvm_amd.npt=1 kvm_amd.avic=1 intel_iommu=on zfsforce=1 nvme_core.multipath=N
[ 0.017055] AMD-Vi: Unknown option - 'on'

Output from attempting to import the pool on a freshly installed TrueNAS SCALE 24.10 system:

root@truenas[/]# zpool import

   pool: zpool
     id: 15517656703174955644
  state: ONLINE
 status: The pool was last accessed by another system.
 action: The pool can be imported using its name or numeric identifier and
        the '-f' flag.
    see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-EY
 config:

    zpool                                     ONLINE
      raidz1-0                                ONLINE
        b20df79d-f945-4544-abef-668712cad742  ONLINE
        937f0ab5-108e-4a6f-9e9a-b16d78d9cf7d  ONLINE
        7957c30a-e061-41dc-a407-7b0490477ea4  ONLINE
        d30d094a-f6c1-4219-a7fd-d27745c4a50c  ONLINE
        33050762-9d68-455d-8b33-192db6b23cc9  ONLINE
        93c356de-5838-407d-afb6-07679633a7ae  ONLINE
    cache
      2e7a6c8d-b5fb-4fa8-a353-0cc113aa8fd2

I’m able to import the pool read-only, but importing it normally (read-write) causes the system to enter an infinite reboot cycle.

My goal is to recover the pool so that it can be used normally again.

You need to describe your Proxmox settings for your TrueNAS VM: in particular, how you have passed the disk controller and disks through via PCIe, and whether you have blacklisted the disk controller on the Proxmox host.
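For reference, something like the following on the Proxmox host should show the relevant settings (the VM ID and the vfio PCI ID below are placeholders; substitute your own):

  qm config 105                  # hostpci* lines show PCIe passthrough entries
  lspci -nnk                     # shows which kernel driver is bound to the SATA controller
  cat /etc/modprobe.d/vfio.conf  # if present, e.g. "options vfio-pci ids=8086:a282" pins it to vfio-pci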

My advice is NOT to try to resolve the disk import issue until we have discovered what is causing the pool to become exported.

Please run the following commands in TrueNAS and post the results for each in a separate </> box:

  • lsblk -bo NAME,LABEL,MAJ:MIN,TRAN,ROTA,ZONED,VENDOR,MODEL,SERIAL,PARTUUID,START,SIZE,PARTTYPENAME
  • sudo zpool status -vsc upath,media,lsblk,serial,smartx,smart
  • lspci
  • sudo sas2flash -list
  • sudo sas3flash -list
  • sudo storcli show all

I have no idea whether the above commands will work in Proxmox, but if they do please run them there also and post the output.

Sorry for the late reply. I wasn’t near the device, and I had shut it down remotely, so I only just got the chance to operate it now.

root@truenas[/]# zpool import zpool -o readonly=on
cannot mount '/zpool': failed to create mountpoint: Read-only file system
Import was successful, but unable to mount some datasets
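For context: this mountpoint failure is expected when importing without an altroot, because the TrueNAS SCALE root filesystem is read-only, so ZFS cannot create /zpool on it. The import itself still succeeded; importing under an altroot, as suggested later in this thread, avoids the error:

  sudo zpool import -R /mnt -o readonly=on zpool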
root@truenas[/]# lsblk -bo NAME,LABEL,MAJ:MIN,TRAN,ROTA,ZONED,VENDOR,MODEL,SERIAL,PARTUUID,START,SIZE,PARTTYPENAME

NAME LABEL MAJ:MIN TRAN   ROTA ZONED VENDOR MODEL SERIAL PARTUUID                               START           SIZE PARTTYPENAME
sda          8:0             1 none  QEMU   QEMU  drive-                                                 34359738368 
├─sda1
│            8:1             1 none                      0d42a0a3-9c98-4670-8014-df2f94af8451    4096        1048576 BIOS boot
├─sda2
│    EFI     8:2             1 none                      d5fe7df8-ade1-4c34-958c-641198cf81fb    6144      536870912 EFI System
└─sda3
     boot-pool
             8:3             1 none                      d7d96ca4-c968-4723-83ca-0d2a0ec5647e 1054720    33819704832 Solaris /usr & Apple ZFS
sdb          8:16  sata      1 none  ATA    HUH72 1SG077                                              10000831348736 
└─sdb1
     zpool   8:17            1 none                      b20df79d-f945-4544-abef-668712cad742    4096 10000828203520 Solaris /usr & Apple ZFS
sdc          8:32  sata      1 none  ATA    HGST  2TH4AS                                              10000831348736 
└─sdc1
     zpool   8:33            1 none                      7957c30a-e061-41dc-a407-7b0490477ea4    4096 10000828203520 Solaris /usr & Apple ZFS
sdd          8:48  sata      1 none  ATA    HGST  2THXP0                                              10000831348736 
└─sdd1
     zpool   8:49            1 none                      937f0ab5-108e-4a6f-9e9a-b16d78d9cf7d    4096 10000828203520 Solaris /usr & Apple ZFS
sde          8:64  sata      1 none  ATA    HGST  7PJP6G                                              10000831348736 
└─sde1
     zpool   8:65            1 none                      d30d094a-f6c1-4219-a7fd-d27745c4a50c    4096 10000828203520 Solaris /usr & Apple ZFS
sdf          8:80  sata      1 none  ATA    HUH72 2TJA4B                                              10000831348736 
└─sdf1
     zpool   8:81            1 none                      33050762-9d68-455d-8b33-192db6b23cc9    4096 10000828203520 Solaris /usr & Apple ZFS
sdg          8:96  sata      1 none  ATA    HGST  2TH230                                              10000831348736 
└─sdg1
     zpool   8:97            1 none                      93c356de-5838-407d-afb6-07679633a7ae    4096 10000828203520 Solaris /usr & Apple ZFS
nvme0n1
           259:0   nvme      0 none         WDC P 21064G                                                512110190592 
└─nvme0n1p1
     linshi
           259:1   nvme      0 none                      8474e51a-8d4b-45f9-80a9-cd78710b70e5    2048   509962354688 Solaris /usr & Apple ZFS
root@truenas[/]# ZPOOL_SCRIPTS_AS_ROOT=1 zpool status -vsc upath,media,lsblk,serial,smartx,smart

  pool: boot-pool
 state: ONLINE
config:

        NAME        STATE     READ WRITE CKSUM  SLOW         upath  media    size  vendor                           model        serial  hours_on  pwr_cyc  temp  health  ata_err  realloc  rep_ucor  cmd_to  pend_sec  off_ucor  nvme_err
        boot-pool   ONLINE       0     0     0     -
          sda3      ONLINE       0     0     0     0      /dev/sda    hdd     32G    QEMU                   QEMU HARDDISK             -         -        -     -       -        -        -         -       -         -         -         -

errors: No known data errors

  pool: linshi
 state: ONLINE
config:

        NAME                                    STATE     READ WRITE CKSUM  SLOW         upath  media    size  vendor                           model        serial  hours_on  pwr_cyc  temp  health  ata_err  realloc  rep_ucor  cmd_to  pend_sec  off_ucor  nvme_err
        linshi                                  ONLINE       0     0     0     -
          8474e51a-8d4b-45f9-80a9-cd78710b70e5  ONLINE       0     0     0     0  /dev/nvme0n1    ssd  476.9G       -  WDC PC SN730 SDBPNTY-512G-1101  21064G805982     17093    1,158    50  PASSED        -        -         -       -         -         -         0

errors: No known data errors

  pool: zpool
 state: ONLINE
  scan: scrub repaired 0B in 11:47:19 with 0 errors on Tue Jun  3 11:25:17 2025
config:

        NAME                                      STATE     READ WRITE CKSUM  SLOW         upath  media    size  vendor                           model        serial  hours_on  pwr_cyc  temp  health  ata_err  realloc  rep_ucor  cmd_to  pend_sec  off_ucor  nvme_err
        zpool                                     ONLINE       0     0     0     -
          raidz1-0                                ONLINE       0     0     0     -
            b20df79d-f945-4544-abef-668712cad742  ONLINE       0     0     0     0      /dev/sdb    hdd    9.1T     ATA                 HUH721010ALE601      1SG077UZ     51237       64    40  PASSED       17        0         -       -         0         0         -
            937f0ab5-108e-4a6f-9e9a-b16d78d9cf7d  ONLINE       0     0     0     0      /dev/sdd    hdd    9.1T     ATA            HGST HUH721010ALE600      2THXP0LD     16594      187    41  PASSED      108        0         -       -         0         0         -
            7957c30a-e061-41dc-a407-7b0490477ea4  ONLINE       0     0     0     0      /dev/sdc    hdd    9.1T     ATA            HGST HUH721010ALE600      2TH4ASGD     16654      202    42  PASSED      102        0         -       -         0         0         -
            d30d094a-f6c1-4219-a7fd-d27745c4a50c  ONLINE       0     0     0     0      /dev/sde    hdd    9.1T     ATA            HGST HUH721010ALE600      7PJP6G8C     17882     2072    40  PASSED        7        0         -       -         0         0         -
            33050762-9d68-455d-8b33-192db6b23cc9  ONLINE       0     0     0     0      /dev/sdf    hdd    9.1T     ATA                 HUH721010ALE601      2TJA4BMD     48068      115    41  PASSED        3        0         -       -         0         0         -
            93c356de-5838-407d-afb6-07679633a7ae  ONLINE       0     0     0     0      /dev/sdg    hdd    9.1T     ATA            HGST HUH721010ALE600      2TH2303D     16593      191    41  PASSED      101        0         -       -         0         0         -

errors: No known data errors
root@truenas[/]# lspci

00:00.0 Host bridge: Intel Corporation 82G33/G31/P35/P31 Express DRAM Controller
00:01.0 VGA compatible controller: Device 1234:1111 (rev 02)
00:1a.0 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #4 (rev 03)
00:1a.1 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #5 (rev 03)
00:1a.2 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #6 (rev 03)
00:1a.7 USB controller: Intel Corporation 82801I (ICH9 Family) USB2 EHCI Controller #2 (rev 03)
00:1b.0 Audio device: Intel Corporation 82801I (ICH9 Family) HD Audio Controller (rev 03)
00:1c.0 PCI bridge: Red Hat, Inc. QEMU PCIe Root port
00:1c.1 PCI bridge: Red Hat, Inc. QEMU PCIe Root port
00:1c.2 PCI bridge: Red Hat, Inc. QEMU PCIe Root port
00:1c.3 PCI bridge: Red Hat, Inc. QEMU PCIe Root port
00:1d.0 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #1 (rev 03)
00:1d.1 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #2 (rev 03)
00:1d.2 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #3 (rev 03)
00:1d.7 USB controller: Intel Corporation 82801I (ICH9 Family) USB2 EHCI Controller #1 (rev 03)
00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev 92)
00:1f.0 ISA bridge: Intel Corporation 82801IB (ICH9) LPC Interface Controller (rev 02)
00:1f.2 SATA controller: Intel Corporation 82801IR/IO/IH (ICH9R/DO/DH) 6 port SATA Controller [AHCI mode] (rev 02)
00:1f.3 SMBus: Intel Corporation 82801I (ICH9 Family) SMBus Controller (rev 02)
05:01.0 PCI bridge: Red Hat, Inc. QEMU PCI-PCI bridge
05:02.0 PCI bridge: Red Hat, Inc. QEMU PCI-PCI bridge
05:03.0 PCI bridge: Red Hat, Inc. QEMU PCI-PCI bridge
05:04.0 PCI bridge: Red Hat, Inc. QEMU PCI-PCI bridge
06:03.0 Unclassified device [00ff]: Red Hat, Inc. Virtio memory balloon
06:10.0 SATA controller: Intel Corporation 200 Series PCH SATA controller [AHCI mode]
06:11.0 Non-Volatile memory controller: Sandisk Corp WD Black SN750 / PC SN730 NVMe SSD
06:12.0 Ethernet controller: Red Hat, Inc. Virtio network device
09:01.0 SCSI storage controller: Red Hat, Inc. Virtio SCSI
root@truenas[/]# sudo sas2flash -list

LSI Corporation SAS2 Flash Utility
Version 20.00.00.00 (2014.09.18) 
Copyright (c) 2008-2014 LSI Corporation. All rights reserved 

        No LSI SAS adapters found! Limited Command Set Available!
        ERROR: Command Not allowed without an adapter!
        ERROR: Couldn't Create Command -list
        Exiting Program.
root@truenas[/]# sudo sas3flash -list

Avago Technologies SAS3 Flash Utility
Version 16.00.00.00 (2017.05.02) 
Copyright 2008-2017 Avago Technologies. All rights reserved.

        No Avago SAS adapters found! Limited Command Set Available!
        ERROR: Command Not allowed without an adapter!
        ERROR: Couldn't Create Command -list
        Exiting Program.
root@truenas[/]# 
root@truenas[/]# sudo storcli show all

CLI Version = 007.2807.0000.0000 Dec 22, 2023
Operating system = Linux 6.12.15-production+truenas
Status Code = 0
Status = Success
Description = None

Number of Controllers = 0
Host Name = truenas
Operating System  = Linux 6.12.15-production+truenas


root@truenas[/]# 

The Proxmox VE configuration file for the TrueNAS VM is as follows:

boot: order=scsi0
cores: 6
cpu: host
hostpci0: 0000:00:17
hostpci1: 0000:0e:00
machine: q35
memory: 36864
meta: creation-qemu=8.1.5,ctime=1718853842
name: truenas
net0: virtio=BC:24:11:02:64:6C,bridge=vmbr0
numa: 0
ostype: l26
scsi0: local:105/vm-105-disk-0.qcow2,iothread=1,size=32G
scsihw: virtio-scsi-single
serial0: socket
smbios1: uuid=ad6930ca-b28b-45b8-af00-fd1d95329ffd
sockets: 1
vmgenid: 739e3a48-3047-44a1-8054-590834c5a482

Ok - I can see your boot disk in this Proxmox configuration, but I can’t see anything relating to passing through disk controllers or disks, or blacklisting the controller to stop TrueNAS ZFS pools being imported by Proxmox.

I just spotted the "Import was successful, but unable to mount some datasets" message. I think this is good news because it suggests the pool should be fixable: fixing dataset mount issues is easier than fixing corruption inside the pool, and the mount issue is perhaps what is causing the boot loop.

I would suggest that you import it read-only again so that you can run diagnostic commands against it, and then run the following commands to see where ZFS is trying to mount the pool:

  • /sbin/zpool list -vL zpool
  • /sbin/zpool get altroot zpool
  • /sbin/zfs list -r zpool

(As an aside, calling a pool “zpool” is not a great idea IMO - I don’t think it should cause any technical issues, but since it is also a ZFS command it could create confusion.)

root@truenas[/]# /sbin/zpool list -vL zpool

NAME         SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
zpool       54.6T      0  54.6T        -         -     0%     0%  1.00x    ONLINE  -
  raidz1-0  54.6T      0  54.6T        -         -     0%  0.00%      -    ONLINE
    sdb1    9.10T      -      -        -         -      -      -      -    ONLINE
    sdd1    9.10T      -      -        -         -      -      -      -    ONLINE
    sdc1    9.10T      -      -        -         -      -      -      -    ONLINE
    sde1    9.10T      -      -        -         -      -      -      -    ONLINE
    sdf1    9.10T      -      -        -         -      -      -      -    ONLINE
    sdg1    9.10T      -      -        -         -      -      -      -    ONLINE
root@truenas[/]# 
root@truenas[/]# /sbin/zpool get altroot zpool

NAME   PROPERTY  VALUE    SOURCE
zpool  altroot   -        default
root@truenas[/]# /sbin/zfs list -r zpool

NAME                                          USED  AVAIL  REFER  MOUNTPOINT
zpool                                        32.0T  11.6T   192K  /zpool
zpool/.ix-virt                               1.80M  11.6T   153K  legacy
zpool/.ix-virt/buckets                        153K  11.6T   153K  legacy
zpool/.ix-virt/containers                     153K  11.6T   153K  legacy
zpool/.ix-virt/custom                         153K  11.6T   153K  legacy
zpool/.ix-virt/deleted                        920K  11.6T   153K  legacy
zpool/.ix-virt/deleted/buckets                153K  11.6T   153K  legacy
zpool/.ix-virt/deleted/containers             153K  11.6T   153K  legacy
zpool/.ix-virt/deleted/custom                 153K  11.6T   153K  legacy
zpool/.ix-virt/deleted/images                 153K  11.6T   153K  legacy
zpool/.ix-virt/deleted/virtual-machines       153K  11.6T   153K  legacy
zpool/.ix-virt/images                         153K  11.6T   153K  legacy
zpool/.ix-virt/virtual-machines               153K  11.6T   153K  legacy
zpool/docker                                 10.0G  11.6T   217K  /zpool/docker
zpool/docker/bgmi                            13.4M  11.6T  13.4M  /zpool/docker/bgmi
zpool/docker/bgmiqb                          16.8M  11.6T  16.8M  /zpool/docker/bgmiqb
zpool/docker/chinesesubfinder                47.7M  11.6T  47.7M  /zpool/docker/chinesesubfinder
zpool/docker/iyuu                             389M  11.6T   389M  /zpool/docker/iyuu
zpool/docker/nas-tools                        339M  11.6T   339M  /zpool/docker/nas-tools
zpool/docker/portainer                        217K  11.6T   217K  /zpool/docker/portainer
zpool/docker/stash                           9.22G  11.6T   217K  /zpool/docker/stash
zpool/docker/stash/blobs                      153K  11.6T   153K  /zpool/docker/stash/blobs
zpool/docker/stash/cache                      153K  11.6T   153K  /zpool/docker/stash/cache
zpool/docker/stash/config                     564M  11.6T   564M  /zpool/docker/stash/config
zpool/docker/stash/data                       153K  11.6T   153K  /zpool/docker/stash/data
zpool/docker/stash/generated                 8.67G  11.6T  8.67G  /zpool/docker/stash/generated
zpool/docker/stash/metadata                   153K  11.6T   153K  /zpool/docker/stash/metadata
zpool/immich                                  379G  11.6T   204K  /zpool/immich
zpool/immich/backups                         3.72G  11.6T  1.28G  /zpool/immich/backups
zpool/immich/library                          296G  11.6T   285G  /zpool/immich/library
zpool/immich/pgData                          1.49G  11.6T   874M  /zpool/immich/pgData
zpool/immich/profile                         1.83M  11.6T  1.20M  /zpool/immich/profile
zpool/immich/thumbs                          10.4G  11.6T  8.58G  /zpool/immich/thumbs
zpool/immich/upload                          40.9G  11.6T  16.9G  /zpool/immich/upload
zpool/immich/video                           26.1G  11.6T  20.6G  /zpool/immich/video
zpool/ix-apps                                76.4G  11.6T   243K  /.ix-apps
zpool/ix-apps/app_configs                    19.8M  11.6T  19.3M  /.ix-apps/app_configs
zpool/ix-apps/app_mounts                     17.9G  11.6T   179K  /.ix-apps/app_mounts
zpool/ix-apps/app_mounts/immich               486M  11.6T   217K  /.ix-apps/app_mounts/immich
zpool/ix-apps/app_mounts/immich/backups       153K  11.6T   153K  /.ix-apps/app_mounts/immich/backups
zpool/ix-apps/app_mounts/immich/library       153K  11.6T   153K  /.ix-apps/app_mounts/immich/library
zpool/ix-apps/app_mounts/immich/pgBackup      153K  11.6T   153K  /.ix-apps/app_mounts/immich/pgBackup
zpool/ix-apps/app_mounts/immich/pgData        485M  11.6T   485M  /.ix-apps/app_mounts/immich/pgData
zpool/ix-apps/app_mounts/immich/profile       153K  11.6T   153K  /.ix-apps/app_mounts/immich/profile
zpool/ix-apps/app_mounts/immich/thumbs        153K  11.6T   153K  /.ix-apps/app_mounts/immich/thumbs
zpool/ix-apps/app_mounts/immich/uploads       153K  11.6T   153K  /.ix-apps/app_mounts/immich/uploads
zpool/ix-apps/app_mounts/immich/video         153K  11.6T   153K  /.ix-apps/app_mounts/immich/video
zpool/ix-apps/app_mounts/jellyfin            17.5G  11.6T   166K  /.ix-apps/app_mounts/jellyfin
zpool/ix-apps/app_mounts/jellyfin/cache      14.3M  11.6T  1003K  /.ix-apps/app_mounts/jellyfin/cache
zpool/ix-apps/app_mounts/jellyfin/config     17.4G  11.6T  17.0G  /.ix-apps/app_mounts/jellyfin/config
zpool/ix-apps/app_mounts/tftpd-hpa            307K  11.6T   153K  /.ix-apps/app_mounts/tftpd-hpa
zpool/ix-apps/app_mounts/tftpd-hpa/tftpboot   153K  11.6T   153K  /.ix-apps/app_mounts/tftpd-hpa/tftpboot
zpool/ix-apps/docker                         58.2G  11.6T  58.2G  /.ix-apps/docker
zpool/ix-apps/truenas_catalog                 288M  11.6T   288M  /.ix-apps/truenas_catalog
zpool/jay                                     309G  11.6T   309G  /zpool/jay
zpool/photo                                  2.87T  11.6T  2.87T  /zpool/photo
zpool/pve                                    2.36T  11.6T  1.06T  /zpool/pve
zpool/share                                  1.24T  11.6T  1.24T  /zpool/share
zpool/video                                  24.8T  11.6T  24.7T  /zpool/video

I passed through the entire SATA controller via PCIe passthrough. However, the controller has not been blacklisted in PVE yet.

Check your Proxmox host using zpool status to make sure that it did not attempt to import your pool (or succeed in importing it). If it did, you may be in a pool-rollback scenario.
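A minimal set of read-only checks on the Proxmox host, assuming root shell access, might look like this (none of these import anything):

  zpool status                          # pools the host currently has imported
  zpool import                          # with no arguments this only scans for importable pools
  journalctl -b | grep -iE 'zfs|zpool'  # any sign the host touched the pool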

Reboot and do the import again only this time:

sudo zpool import -R /mnt -o readonly=on zpool

and then:

/sbin/zfs list -r zpool

Proxmox did not perform the import. The "last accessed by another system" warning appeared because the original TrueNAS system fell into the infinite reboot loop and I reinstalled the boot drive, so the fresh install counts as another system.

root@truenas:/mnt/zpool/share# /sbin/zfs list -r zpool
NAME                                          USED  AVAIL  REFER  MOUNTPOINT
zpool                                        32.0T  11.6T   192K  /mnt/zpool
zpool/.ix-virt                               1.80M  11.6T   153K  legacy
zpool/.ix-virt/buckets                        153K  11.6T   153K  legacy
zpool/.ix-virt/containers                     153K  11.6T   153K  legacy
zpool/.ix-virt/custom                         153K  11.6T   153K  legacy
zpool/.ix-virt/deleted                        920K  11.6T   153K  legacy
zpool/.ix-virt/deleted/buckets                153K  11.6T   153K  legacy
zpool/.ix-virt/deleted/containers             153K  11.6T   153K  legacy
zpool/.ix-virt/deleted/custom                 153K  11.6T   153K  legacy
zpool/.ix-virt/deleted/images                 153K  11.6T   153K  legacy
zpool/.ix-virt/deleted/virtual-machines       153K  11.6T   153K  legacy
zpool/.ix-virt/images                         153K  11.6T   153K  legacy
zpool/.ix-virt/virtual-machines               153K  11.6T   153K  legacy
zpool/docker                                 10.0G  11.6T   217K  /mnt/zpool/docker
zpool/docker/bgmi                            13.4M  11.6T  13.4M  /mnt/zpool/docker/bgmi
zpool/docker/bgmiqb                          16.8M  11.6T  16.8M  /mnt/zpool/docker/bgmiqb
zpool/docker/chinesesubfinder                47.7M  11.6T  47.7M  /mnt/zpool/docker/chinesesubfinder
zpool/docker/iyuu                             389M  11.6T   389M  /mnt/zpool/docker/iyuu
zpool/docker/nas-tools                        339M  11.6T   339M  /mnt/zpool/docker/nas-tools
zpool/docker/portainer                        217K  11.6T   217K  /mnt/zpool/docker/portainer
zpool/docker/stash                           9.22G  11.6T   217K  /mnt/zpool/docker/stash
zpool/docker/stash/blobs                      153K  11.6T   153K  /mnt/zpool/docker/stash/blobs
zpool/docker/stash/cache                      153K  11.6T   153K  /mnt/zpool/docker/stash/cache
zpool/docker/stash/config                     564M  11.6T   564M  /mnt/zpool/docker/stash/config
zpool/docker/stash/data                       153K  11.6T   153K  /mnt/zpool/docker/stash/data
zpool/docker/stash/generated                 8.67G  11.6T  8.67G  /mnt/zpool/docker/stash/generated
zpool/docker/stash/metadata                   153K  11.6T   153K  /mnt/zpool/docker/stash/metadata
zpool/immich                                  379G  11.6T   204K  /mnt/zpool/immich
zpool/immich/backups                         3.72G  11.6T  1.28G  /mnt/zpool/immich/backups
zpool/immich/library                          296G  11.6T   285G  /mnt/zpool/immich/library
zpool/immich/pgData                          1.49G  11.6T   874M  /mnt/zpool/immich/pgData
zpool/immich/profile                         1.83M  11.6T  1.20M  /mnt/zpool/immich/profile
zpool/immich/thumbs                          10.4G  11.6T  8.58G  /mnt/zpool/immich/thumbs
zpool/immich/upload                          40.9G  11.6T  16.9G  /mnt/zpool/immich/upload
zpool/immich/video                           26.1G  11.6T  20.6G  /mnt/zpool/immich/video
zpool/ix-apps                                76.4G  11.6T   243K  /mnt/.ix-apps
zpool/ix-apps/app_configs                    19.8M  11.6T  19.3M  /mnt/.ix-apps/app_configs
zpool/ix-apps/app_mounts                     17.9G  11.6T   179K  /mnt/.ix-apps/app_mounts
zpool/ix-apps/app_mounts/immich               486M  11.6T   217K  /mnt/.ix-apps/app_mounts/immich
zpool/ix-apps/app_mounts/immich/backups       153K  11.6T   153K  /mnt/.ix-apps/app_mounts/immich/backups
zpool/ix-apps/app_mounts/immich/library       153K  11.6T   153K  /mnt/.ix-apps/app_mounts/immich/library
zpool/ix-apps/app_mounts/immich/pgBackup      153K  11.6T   153K  /mnt/.ix-apps/app_mounts/immich/pgBackup
zpool/ix-apps/app_mounts/immich/pgData        485M  11.6T   485M  /mnt/.ix-apps/app_mounts/immich/pgData
zpool/ix-apps/app_mounts/immich/profile       153K  11.6T   153K  /mnt/.ix-apps/app_mounts/immich/profile
zpool/ix-apps/app_mounts/immich/thumbs        153K  11.6T   153K  /mnt/.ix-apps/app_mounts/immich/thumbs
zpool/ix-apps/app_mounts/immich/uploads       153K  11.6T   153K  /mnt/.ix-apps/app_mounts/immich/uploads
zpool/ix-apps/app_mounts/immich/video         153K  11.6T   153K  /mnt/.ix-apps/app_mounts/immich/video
zpool/ix-apps/app_mounts/jellyfin            17.5G  11.6T   166K  /mnt/.ix-apps/app_mounts/jellyfin
zpool/ix-apps/app_mounts/jellyfin/cache      14.3M  11.6T  1003K  /mnt/.ix-apps/app_mounts/jellyfin/cache
zpool/ix-apps/app_mounts/jellyfin/config     17.4G  11.6T  17.0G  /mnt/.ix-apps/app_mounts/jellyfin/config
zpool/ix-apps/app_mounts/tftpd-hpa            307K  11.6T   153K  /mnt/.ix-apps/app_mounts/tftpd-hpa
zpool/ix-apps/app_mounts/tftpd-hpa/tftpboot   153K  11.6T   153K  /mnt/.ix-apps/app_mounts/tftpd-hpa/tftpboot
zpool/ix-apps/docker                         58.2G  11.6T  58.2G  /mnt/.ix-apps/docker
zpool/ix-apps/truenas_catalog                 288M  11.6T   288M  /mnt/.ix-apps/truenas_catalog
zpool/jay                                     309G  11.6T   309G  /mnt/zpool/jay
zpool/photo                                  2.87T  11.6T  2.87T  /mnt/zpool/photo
zpool/pve                                    2.36T  11.6T  1.06T  /mnt/zpool/pve
zpool/share                                  1.24T  11.6T  1.24T  /mnt/zpool/share
zpool/video                                  24.8T  11.6T  24.7T  /mnt/zpool/video

That looks correct.

With all due respect, how do you know?
Proxmox is capable of importing ZFS pools with no user input.
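One way to check from the pool itself, assuming the history object is readable on a read-only import, is the long-format pool history, which tags each command with the user and hostname that ran it:

  zpool history -l zpool | tail -n 20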

Checking the status @HoneyBadger mentioned is a good idea.

But I still can’t import the zpool through the web UI.

Alright, but after I passed the PCIe device through once, the disks were no longer visible in Proxmox. Since I didn't reboot Proxmox during the period when the bug occurred, I believe it's unlikely that the pool was imported by Proxmox.

They want you to check the status in Proxmox using the CLI. If it turns out that Proxmox did 'touch' it, @HoneyBadger will need to walk you through the rollback attempts for TrueNAS.
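For reference only, and not something to run without guidance: a "rollback" here normally means a ZFS rewind import using the -F recovery flag; adding -n makes it a dry run that only reports whether the rewind would succeed, without changing the pool:

  sudo zpool import -F -n zpool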

I hope you can provide specific commands to help me check my Proxmox system.

If I attempt to import with read-write access, the TrueNAS system will reboot within a few minutes.