After spending a few days troubleshooting via Reddit, and learning more about ZFS than I ever thought I would, I seem to have found a way to import the pool with BOTH of its mirror vdevs online, and all the data contained within seems to be intact! Big thanks to @Protopia for patiently fielding my questions and providing invaluable help.
==============================
In case it helps anyone reading this thread in the future, I’ll share the steps I took to retrieve the data here:
Although lsblk and blkid did not seem to be able to read the PARTUUID values, gdisk indicated that the values and the partition type were definitely there. There was a lot of dancing around trying to work out why exactly this might be happening; the troubleshooting steps included running partprobe to try and re-read the partition tables, and noticing that zdb -l /dev/sdb1 showed only LABEL 0, whereas zdb -l /dev/sda1 showed both LABEL 0 and LABEL 1 (I've sketched the commands I was using for these checks after the zdb output below):
zdb -l /dev/sdb1
LABEL 0
    version: 5000
    name: 'HDDs'
    state: 0
    txg: 290165
    pool_guid: 4963705989811537592
    errata: 0
    hostid: 1637503756
    hostname: 'Vault'
    top_guid: 3323707249957188009
    guid: 16516640297011063002
    vdev_children: 2
    vdev_tree:
        type: 'mirror'
        id: 1
        guid: 3323707249957188009
        metaslab_array: 335
        metaslab_shift: 34
        ashift: 12
        asize: 16000893845504
        is_log: 0
        create_txg: 290163
        children[0]:
            type: 'disk'
            id: 0
            guid: 11454120585917499236
            path: '/dev/disk/by-partuuid/6be30b9d-db27-409c-895e-9990ab79e974'
            whole_disk: 0
            create_txg: 290163
        children[1]:
            type: 'disk'
            id: 1
            guid: 16516640297011063002
            path: '/dev/disk/by-partuuid/31c323a0-f0fa-42f7-a92f-69f97c646ea2'
            whole_disk: 0
            create_txg: 290163
    features_for_read:
        com.delphix:hole_birth
        com.delphix:embedded_data
        com.klarasystems:vdev_zaps_v2
    labels = 0 1 2 3
zdb -l /dev/sda1
LABEL 0
    version: 5000
    name: 'HDDs'
    state: 0
    txg: 379469
    pool_guid: 4963705989811537592
    errata: 0
    hostid: 1637503756
    hostname: 'Vault'
    top_guid: 12557942224269859001
    guid: 17992915048327931901
    vdev_children: 2
    vdev_tree:
        type: 'mirror'
        id: 0
        guid: 12557942224269859001
        metaslab_array: 128
        metaslab_shift: 34
        ashift: 12
        asize: 16000893845504
        is_log: 0
        create_txg: 4
        children[0]:
            type: 'disk'
            id: 0
            guid: 13392533498850484953
            path: '/dev/disk/by-partuuid/496fbd23-654e-487a-b481-17b50a0d7c3d'
            whole_disk: 0
            DTL: 110916
            create_txg: 4
        children[1]:
            type: 'disk'
            id: 1
            guid: 17992915048327931901
            path: '/dev/disk/by-partuuid/232c74aa-5079-420d-aacf-199f9c8183f7'
            whole_disk: 0
            DTL: 110915
            create_txg: 4
    features_for_read:
        com.delphix:hole_birth
        com.delphix:embedded_data
        com.klarasystems:vdev_zaps_v2
    labels = 0 2
LABEL 1
    version: 5000
    name: 'HDDs'
    state: 0
    txg: 290164
    pool_guid: 4963705989811537592
    errata: 0
    hostid: 1637503756
    hostname: 'Vault'
    top_guid: 12557942224269859001
    guid: 17992915048327931901
    vdev_children: 2
    vdev_tree:
        type: 'mirror'
        id: 0
        guid: 12557942224269859001
        metaslab_array: 128
        metaslab_shift: 34
        ashift: 12
        asize: 16000893845504
        is_log: 0
        create_txg: 4
        children[0]:
            type: 'disk'
            id: 0
            guid: 13392533498850484953
            path: '/dev/disk/by-partuuid/496fbd23-654e-487a-b481-17b50a0d7c3d'
            whole_disk: 0
            DTL: 110916
            create_txg: 4
        children[1]:
            type: 'disk'
            id: 1
            guid: 17992915048327931901
            path: '/dev/disk/by-partuuid/232c74aa-5079-420d-aacf-199f9c8183f7'
            whole_disk: 0
            DTL: 110915
            create_txg: 4
    features_for_read:
        com.delphix:hole_birth
        com.delphix:embedded_data
        com.klarasystems:vdev_zaps_v2
    labels = 1 3
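For anyone retracing these steps, the checks described above amounted to roughly the following commands (device names are from my system, so adjust as needed):

# list partitions and the PARTUUIDs the OS can currently see
lsblk -o NAME,SIZE,TYPE,PARTUUID
blkid /dev/sdb1

# inspect the GPT directly, and ask the kernel to re-read it
gdisk -l /dev/sdb
partprobe /dev/sdb

# dump the ZFS labels on each data partition
zdb -l /dev/sda1
zdb -l /dev/sdb1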
I noticed that while ls -l /dev/disk/by-partuuid/ didn't show the missing partitions, ls -l /dev/disk/by-id/ did:
ls -l /dev/disk/by-partuuid/
total 0
lrwxrwxrwx 1 root root 10 Jan 27 11:55 232c74aa-5079-420d-aacf-199f9c8183f7 -> ../../sda1
lrwxrwxrwx 1 root root 15 Jan 27 11:55 3f331e64-bc2d-4f15-928c-f081425c49eb -> ../../nvme1n1p3
lrwxrwxrwx 1 root root 10 Jan 27 11:55 496fbd23-654e-487a-b481-17b50a0d7c3d -> ../../sdd1
lrwxrwxrwx 1 root root 15 Jan 27 11:55 8c298b13-b7ff-4a8f-aa73-11ccc3edeb47 -> ../../nvme1n1p2
lrwxrwxrwx 1 root root 15 Jan 27 11:55 bf8b306a-17d3-474a-91c1-d2a7de30e971 -> ../../nvme1n1p1
ls -l /dev/disk/by-id/
total 0
lrwxrwxrwx 1 root root 9 Jan 27 20:39 ata-ST16000NE000-2RW103_ZL2Q1VVR -> ../../sdc
lrwxrwxrwx 1 root root 10 Jan 27 20:39 ata-ST16000NE000-2RW103_ZL2Q1VVR-part1 -> ../../sdc1
lrwxrwxrwx 1 root root 9 Jan 27 11:55 ata-ST16000NM000J-2TW103_ZR521CDT -> ../../sdd
lrwxrwxrwx 1 root root 10 Jan 27 11:55 ata-ST16000NM000J-2TW103_ZR521CDT-part1 -> ../../sdd1
lrwxrwxrwx 1 root root 9 Jan 27 20:29 ata-ST16000NM001G-2KK103_ZL21HSD1 -> ../../sdb
lrwxrwxrwx 1 root root 10 Jan 27 20:29 ata-ST16000NM001G-2KK103_ZL21HSD1-part1 -> ../../sdb1
lrwxrwxrwx 1 root root 9 Jan 27 11:55 ata-WUH721816ALE6L4_2CKNL63J -> ../../sda
lrwxrwxrwx 1 root root 10 Jan 27 11:55 ata-WUH721816ALE6L4_2CKNL63J-part1 -> ../../sda1
lrwxrwxrwx 1 root root 13 Jan 27 11:55 nvme-SPCC_M.2_PCIe_SSD_210402295190177 -> ../../nvme1n1
lrwxrwxrwx 1 root root 15 Jan 27 11:55 nvme-SPCC_M.2_PCIe_SSD_210402295190177-part1 -> ../../nvme1n1p1
lrwxrwxrwx 1 root root 15 Jan 27 11:55 nvme-SPCC_M.2_PCIe_SSD_210402295190177-part2 -> ../../nvme1n1p2
lrwxrwxrwx 1 root root 15 Jan 27 11:55 nvme-SPCC_M.2_PCIe_SSD_210402295190177-part3 -> ../../nvme1n1p3
lrwxrwxrwx 1 root root 13 Jan 27 11:55 nvme-SPCC_M.2_PCIe_SSD_210402295190177_1 -> ../../nvme1n1
lrwxrwxrwx 1 root root 15 Jan 27 11:55 nvme-SPCC_M.2_PCIe_SSD_210402295190177_1-part1 -> ../../nvme1n1p1
lrwxrwxrwx 1 root root 15 Jan 27 11:55 nvme-SPCC_M.2_PCIe_SSD_210402295190177_1-part2 -> ../../nvme1n1p2
lrwxrwxrwx 1 root root 15 Jan 27 11:55 nvme-SPCC_M.2_PCIe_SSD_210402295190177_1-part3 -> ../../nvme1n1p3
lrwxrwxrwx 1 root root 13 Jan 27 11:55 nvme-SPCC_M.2_PCIe_SSD_240460155221007 -> ../../nvme0n1
lrwxrwxrwx 1 root root 13 Jan 27 11:55 nvme-SPCC_M.2_PCIe_SSD_240460155221007_1 -> ../../nvme0n1
lrwxrwxrwx 1 root root 13 Jan 27 11:55 nvme-eui.32343034010000004ce0001835323231 -> ../../nvme0n1
lrwxrwxrwx 1 root root 13 Jan 27 11:55 nvme-nvme.10ec-323130343032323935313930313737-53504343204d2e32205043496520535344-00000001 -> ../../nvme1n1
lrwxrwxrwx 1 root root 15 Jan 27 11:55 nvme-nvme.10ec-323130343032323935313930313737-53504343204d2e32205043496520535344-00000001-part1 -> ../../nvme1n1p1
lrwxrwxrwx 1 root root 15 Jan 27 11:55 nvme-nvme.10ec-323130343032323935313930313737-53504343204d2e32205043496520535344-00000001-part2 -> ../../nvme1n1p2
lrwxrwxrwx 1 root root 15 Jan 27 11:55 nvme-nvme.10ec-323130343032323935313930313737-53504343204d2e32205043496520535344-00000001-part3 -> ../../nvme1n1p3
lrwxrwxrwx 1 root root 9 Jan 27 20:29 wwn-0x5000c500c3802153 -> ../../sdb
lrwxrwxrwx 1 root root 10 Jan 27 20:29 wwn-0x5000c500c3802153-part1 -> ../../sdb1
lrwxrwxrwx 1 root root 9 Jan 27 11:55 wwn-0x5000c500db60e9eb -> ../../sdd
lrwxrwxrwx 1 root root 10 Jan 27 11:55 wwn-0x5000c500db60e9eb-part1 -> ../../sdd1
lrwxrwxrwx 1 root root 9 Jan 27 20:39 wwn-0x5000c500e5a14487 -> ../../sdc
lrwxrwxrwx 1 root root 10 Jan 27 20:39 wwn-0x5000c500e5a14487-part1 -> ../../sdc1
lrwxrwxrwx 1 root root 9 Jan 27 11:55 wwn-0x5000cca2a1f3a23e -> ../../sda
lrwxrwxrwx 1 root root 10 Jan 27 11:55 wwn-0x5000cca2a1f3a23e-part1 -> ../../sda1
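As an aside for anyone who hits the same symptom (by-id links present but by-partuuid links missing): the by-partuuid symlinks are generated by udev, so it may be worth checking whether udev itself has picked up the partition UUID, and asking it to re-process the block devices. Something along these lines; I can't say for certain this would have fixed it in my case:

# show the properties udev has recorded for the partition (look for ID_PART_ENTRY_UUID)
udevadm info --query=property --name=/dev/sdb1 | grep -i part

# ask udev to re-process block devices, then wait for it to finish
udevadm trigger --subsystem-match=block
udevadm settle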
So I figured: if by-id was recognising the partitions, could I import the pool using those device links instead of the PARTUUID values?
Using zpool import -d /dev/disk/by-id HDDs:
cannot mount '/HDDs': failed to create mountpoint: Read-only file system
Import was successful, but unable to mount some datasets
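In hindsight, the mount failure is just the root filesystem being read-only, so zpool can't create the /HDDs mountpoint. Importing with an altroot under /mnt, or importing without mounting anything at all, would likely have avoided that error entirely; something like the following, though I haven't gone back to verify it on my setup:

# import with every mountpoint re-rooted under /mnt
zpool import -d /dev/disk/by-id -R /mnt HDDs

# or import without mounting any datasets
zpool import -d /dev/disk/by-id -N HDDs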
zpool status now showed me the magic output I had been waiting for:
  pool: HDDs
 state: ONLINE
  scan: scrub repaired 0B in 21:53:45 with 0 errors on Thu Jan 9 21:53:47 2025
config:

        NAME                              STATE     READ WRITE CKSUM
        HDDs                              ONLINE       0     0     0
          mirror-0                        ONLINE       0     0     0
            wwn-0x5000c500db60e9eb-part1  ONLINE       0     0     0
            wwn-0x5000cca2a1f3a23e-part1  ONLINE       0     0     0
          mirror-1                        ONLINE       0     0     0
            wwn-0x5000c500e5a14487-part1  ONLINE       0     0     0
            wwn-0x5000c500c3802153-part1  ONLINE       0     0     0

errors: No known data errors

  pool: boot-pool
 state: ONLINE
  scan: scrub repaired 0B in 00:00:26 with 0 errors on Wed Jan 22 03:45:27 2025
config:

        NAME         STATE     READ WRITE CKSUM
        boot-pool    ONLINE       0     0     0
          nvme1n1p3  ONLINE       0     0     0

errors: No known data errors
After manually correcting some mountpoints (zfs set mountpoint=/mnt/HDDs HDDs and zfs set mountpoint=/mnt/.ix-apps HDDs/ix-apps), my data and apps are all now visible and completely accessible.
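If you want to double-check what actually mounted after fixing the mountpoints, something like this should show it, and zfs mount -a will attempt to mount anything still left unmounted:

# show every dataset in the pool, its mountpoint, and whether it is mounted
zfs list -r -o name,mountpoint,mounted HDDs

# mount any datasets that aren't mounted yet
zfs mount -a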
==============================
In the process of troubleshooting, I determined that the SATA controller in my Aoostar WTR Pro is none other than the ASMedia ASM1064, which I have now learned is woefully inadequate for the job.
Sadly I’m going to have to ditch my little WTR Pro, and instead build something much more reliable myself.
Thanks to all who read my earlier post or replied with troubleshooting assistance!