SOLVED: How do you reliably get /dev/zvol partitions to show up after setting volmode=geom

The answer is: DO NOT DO THIS.
Leave volmode at the default (volmode=default) or things will break badly, as I found out. I had to set it back and reverse my change.

To undo the damage, use zfs inherit

For example:

zfs inherit volmode main/appdata/vm/Debian main/vm

Then verify that nothing still sets volmode locally:

zfs get volmode | grep local

If the grep returns nothing, you have undone the damage. Reboot, and do not try to clone any snapshots taken while volmode was set wrong, or you won't be able to add zvols to any of your VMs.
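The "grep returns nothing" check works because `zfs get` prints a SOURCE column, and a property set directly on a dataset shows SOURCE=local. A minimal sketch of that filter, run here against sample output captured as a variable (the dataset names are the ones from this thread; substitute your own):

```shell
# `zfs get volmode` prints: NAME PROPERTY VALUE SOURCE.
# Anything whose SOURCE is "local" was set directly with `zfs set`
# and still needs `zfs inherit volmode <dataset>` to clean up.
# Sample output embedded for illustration:
zfs_out='NAME                       PROPERTY  VALUE    SOURCE
main/appdata/vm/Debian     volmode   geom     local
main/appdata/vm/Windows    volmode   default  default'

# Print only the datasets whose volmode is still locally set.
printf '%s\n' "$zfs_out" | awk '$2 == "volmode" && $4 == "local" {print $1}'
```

An empty result means every dataset inherits volmode again and the change has been fully reversed.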

Original message:
I find it baffling that just one zvol shows its partitions when they all have partitions. gdisk verified the partitions on each of them, yet only one zvol shows its partitions under /dev/zvol. I cannot figure this out.

root@truenas[/dev/zvol/main/appdata/vm]# zfs set volmode=geom main/appdata/vm
root@truenas[/dev/zvol/main/appdata/vm]# ls -l
total 0
lrwxrwxrwx 1 root root 15 May 16 19:01 Debian -> ../../../../zd0
lrwxrwxrwx 1 root root 16 May 16 19:02 OpenSUSE -> ../../../../zd48
lrwxrwxrwx 1 root root 16 May 16 15:58 Windows -> ../../../../zd32
lrwxrwxrwx 1 root root 16 May 16 18:56 stk-laptop -> ../../../../zd16
lrwxrwxrwx 1 root root 18 May 16 18:56 stk-laptop-part1 -> ../../../../zd16p1
lrwxrwxrwx 1 root root 18 May 16 18:56 stk-laptop-part2 -> ../../../../zd16p2
lrwxrwxrwx 1 root root 18 May 16 18:56 stk-laptop-part3 -> ../../../../zd16p3
lrwxrwxrwx 1 root root 18 May 16 18:56 stk-laptop-part4 -> ../../../../zd16p4
root@truenas[/dev/zvol/main/appdata/vm]# 

What’s the “proper” way to force the partitions to show up after setting the geom property?

GEOM is FreeBSD’s block device subsystem. Does that setting have any effect on Linux at all?

If for some reason you’re deliberately testing ZFS features we do not expose in our backend and webui and break some aspect of our product (and choose to file a bug report about it), please clearly note the exact settings you were using in your ticket.

Exact settings for what? The entire bug can be replicated by anyone with that one zfs set line. Have you tried it?

I’m not trying to test anything. Mounting zvols in TrueNAS lets me use the CLI to easily move files between ZFS pools and a VM.

When I didn’t have the bridge working, this was the ONLY option. Now I can mount the pools (via NFS or SMB) from within the VMs after setting up the bridged network.

Still, it’s a nice fallback, and this is SUPPOSED to work, right?

It does seem like a bug, right?

zfs set is not directly exposed by the TrueNAS CLI. See MOTD when you ssh into TrueNAS:

Warning: the supported mechanisms for making configuration changes
are the TrueNAS WebUI, CLI, and API exclusively. ALL OTHERS ARE
NOT SUPPORTED AND WILL RESULT IN UNDEFINED BEHAVIOR AND MAY
RESULT IN SYSTEM FAILURE.

Configuring via the Unix shell may result in undefined behavior. Yes, things may happen when you replicate a dataset from another OS, and we should be aware of the issue, but if you are deliberately playing around with settings via the shell, please let the developers know up-front when filing bug tickets so that it’s easier to triage what’s going on.


Ah. What happened is that volmode=geom is not supported.

See: TrueNAS - Issues - iXsystems TrueNAS Jira

So if you want to view your zvol partitions, do that in a VM.

Until I got the Ethernet bridge working, I had no way to copy data between pools and zvols. Now, with the bridge, I’m good and no longer need this method.

Now all I have to do is figure out how to undo my mistake, but that’s easy to figure out.