I appreciate the help and warnings here, thank you.
You have a 6x mirror i.e. the size of a single drive? Or do you mean RAIDZ1 5x the size of a single drive? Or something else?
A single pool made of three mirror vdevs: 2x4TB, 2x6TB, and 2x12TB. This server is re-using old drives, and I’m in the process of dedicating it as a backup machine.
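For anyone skimming: that layout corresponds to something like the following (pool name and device paths are hypothetical, just to illustrate the topology). Each mirror pair is its own vdev and the pool stripes across all three, which is also why losing both disks of any one pair loses the whole pool.

```
# Illustrative only - pool name and device paths are made up.
# Three two-way mirror vdevs (2x4TB, 2x6TB, 2x12TB) striped into one pool:
zpool create tank \
  mirror /dev/sda /dev/sdb \
  mirror /dev/sdc /dev/sdd \
  mirror /dev/sde /dev/sdf
```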
recreate the pool from scratch and restore from backups.
Can do. I can live with that, as long as I can avoid this in the future and there’s genuinely nothing that can still be done here.
Put simply, if you failed to blacklist your PCIe card in Proxmox, your luck has lasted several years longer than it might have expected to last.
Just to recap my understanding here - during a restart, Proxmox somehow took control of the HBA rather than passing it through to the VM, and something then messed with the partition layout? I’m not even sure what that something could have been - the pool has never been imported anywhere but this VM.
I understand that ZFS needs exclusive access to the disks, but I’m not aware of anything in Proxmox that would actively touch a device the host isn’t using for anything else.
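In case it helps the next person debugging this: a quick way to check from the Proxmox shell is to see which kernel driver currently owns the HBA. The PCI address below is hypothetical, and the driver name assumes a common LSI card.

```
# On the Proxmox host: locate the HBA and see which driver claimed it.
lspci -nnk | grep -iA3 'SAS\|HBA\|RAID'
# Then inspect the specific slot (address is an example):
lspci -nnk -s 01:00.0
# "Kernel driver in use: vfio-pci" -> reserved for passthrough, host hands off
# "Kernel driver in use: mpt3sas"  -> the host itself has taken the controller
```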
This is why the community recommends bare-metal TrueNAS installs
I’m aware of that, and of the “appliance” mantra, but that sometimes simply isn’t realistic. It changes the required hardware for a setup like this from “I can run a small homelab on hardware I have and make TrueNAS a part of it” to “I need dedicated machines, space, and probably a new HVAC run”. Following other best practices like mirrors instead of RAIDZ1, having a backup instance, and having cold storage already makes this a very expensive hobby.
As I’ve outlined earlier, I’m making this particular server a dedicated backup box, and once I’ve got my new server set up, I can run TrueNAS directly on it (this time without a mixed-capacity vdev). However, my new primary NAS server is going to be virtualized again, so I will make sure to blocklist the disks and/or the controller from the host when I get there.
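For completeness, and with the caveat that this is the generic recipe rather than anything specific to my box: on the Proxmox host you’d typically blacklist the HBA’s storage driver and pin the card to vfio-pci by its vendor:device ID (the ID below is an example - read yours from lspci -nn), then rebuild the initramfs.

```
# /etc/modprobe.d/blacklist-hba.conf
# Keep the host from loading the HBA's own driver (mpt3sas on most LSI cards):
blacklist mpt3sas

# /etc/modprobe.d/vfio.conf
# Bind the card to vfio-pci by vendor:device ID (example ID - check lspci -nn),
# and make sure vfio-pci wins the race if the driver gets loaded anyway:
options vfio-pci ids=1000:0087
softdep mpt3sas pre: vfio-pci

# Apply and reboot:
update-initramfs -u -k all
```

After the reboot, lspci -nnk should report vfio-pci as the driver in use before the VM is ever started.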