SOLVED: Why multiple VMs went from booting to failing to boot

My mistake was cheating by experimenting with volmode=geom. Don’t do this!! Only access your ZVol via the VM, not via the command line.

In one ZVol, the FAT partition's flags were ALL erased, the GPT partitioning was wiped out (gdisk reported MBR, not GPT), the partition looked 100% full, and yet it contained no files.

The other one “looked” fine, but its partition type had been changed to MBR.

gdisk told the story. gparted did not.

This was an expensive (in time spent) lesson not to flip the settings you aren’t supposed to flip.

How did you verify that it was the cause?
The man page states that geom is just an alias for full, and that the default is full.

Well, I was notified by the TrueNAS bug triage team that setting it to geom does indeed set it to full, and that full mode is NOT supported.

When I changed it to “default”, there were no more blue spinners on the ZVol chooser in VM setup.
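For anyone wanting to check or revert this on their own zvols, a minimal sketch using the standard zfs get/set/inherit commands (“tank/vm-disk0” is a hypothetical zvol name; substitute your own):

```shell
# Show the current volmode and where it comes from (local vs. inherited/default)
zfs get volmode tank/vm-disk0

# Revert to "default" by clearing any locally set value...
zfs inherit volmode tank/vm-disk0

# ...or set it explicitly
zfs set volmode=default tank/vm-disk0
```

Note these commands need an existing pool and zvol, and a zvol may need to be renamed (or the module reloaded) before a volmode change takes effect, per the OpenZFS docs.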

So I’d bet good money that default isn’t the same as full.

Does anyone know?

The OpenZFS man page has this bit about volmode:
https://openzfs.github.io/openzfs-docs/man/master/7/zfsprops.7.html#volmode

On my 23.10.2 SCALE system, the mentioned zvol_volmode parameter is set to:

# cat /sys/module/zfs/parameters/zvol_volmode
2

So, the answer would be: dev
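To make the numeric value readable, here is a small hypothetical helper (`volmode_name` is my own name, not a real command) that translates zvol_volmode numbers to the property names used in the OpenZFS docs:

```shell
# Translate a zvol_volmode number to its volmode property name,
# per the OpenZFS module-parameter documentation.
volmode_name() {
  case "$1" in
    1) echo "full" ;;    # fully partitionable block device (geom is an alias)
    2) echo "dev"  ;;    # raw disk device only, no partitions exposed
    3) echo "none" ;;    # volume not exposed outside ZFS at all
    *) echo "unknown" ;;
  esac
}

# On a live system (path exists only when the zfs module is loaded):
[ -r /sys/module/zfs/parameters/zvol_volmode ] &&
  volmode_name "$(cat /sys/module/zfs/parameters/zvol_volmode)" || true
```

So a reading of 2 translates to dev, matching the conclusion above.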


mystery solved!