Sharing an existing VM's zvol with iSCSI and booting off of it from another KVM hypervisor

I’m trying to transfer VMs running on TrueNAS SCALE to another machine running Proxmox.

The idea is to keep the storage on TrueNAS SCALE and have the VMs run on a dedicated hypervisor, using TrueNAS as a SAN via iSCSI.

Is there any chance I could share a Replicated Snapshot, restored or not (I’m not sure I understand what the restore option is for)?
A Replicated Snapshot shows up as an Extent candidate in the iSCSI dialog, hence the idea.

It can be shared via iSCSI (read-only, though) and made available as an associated target, but a VM won’t boot off of it.

Can this work at all?

How could a VM boot off of it if it’s read-only?
Those are there, e.g., for cloning from, as long as they are read-only.

Just make some space, create a zvol, and share it via iSCSI. Move the VMs one by one, increasing/reducing the size as needed, if you don’t have anywhere else they could temporarily live, like a spinner in Proxmox.
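On the TrueNAS shell that intermediate zvol would roughly be something like this (pool and name are placeholders; the iSCSI target and extent themselves are easiest to set up in the GUI):

# create a sparse 200 GiB zvol to temporarily hold the migrated disks
sudo zfs create -s -V 200G tank/proxmox-lun
# grow it later if needed while shuffling the VMs around
sudo zfs set volsize=300G tank/proxmox-lun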

PS: And you know that you could just make a clone, which isn’t read-only, and use it? As long as you don’t write data to it like a madman, the overhead will be very low.

Correct, I don’t expect the VM to “work” with read-only storage.
I just want to validate that, conceptually, if everything falls into place, a VM’s zvol snapshot clone could be shared as an iSCSI LUN and booted from another hypervisor.

At the very least the boot record should be found and the VM should boot, then complain about the read-only filesystem.

So far I’ve had no luck; the VM gets stuck at “Booting from Hard Disk…”. At least it doesn’t fall back to iPXE, so I suppose it does detect a bootable hard drive.

Just make some space, create a zvol, and share it via iSCSI. Move the VMs one by one, increasing/reducing the size as needed, if you don’t have anywhere else they could temporarily live, like a spinner in Proxmox.

I don’t understand what you mean here, @crpb, by “Move the VMs one by one”. To me, the VM is the zvol, and I don’t see how I could “move it” better than by directly sharing the zvol via iSCSI and making the LUN the hard disk of a new VM in Proxmox.

I actually never tried Clone on a VM in TrueNAS before: it does clone everything, including each zvol, very nice.
But these zvol clones don’t show up as a selectable device for a new Extent of type Device.

Maybe I need to understand what counts as a selectable device for an Extent of type Device. I assumed any zvol did, but there seems to be more to it.

Hey,

I misunderstood a few things as well, so with that “use the iSCSI LUN at the same time” I read “I don’t have enough resources otherwise” or something.
And you must be careful about thinking that Proxmox’s KVM would be the same as others. They tinker quite a lot with their own stuff, and it’s usually not really compatible with what everyone else ships.
So are you using those Perl patches to make use of ZFS over iSCSI, or did you just create a zvol, add it as an iSCSI share, and mount it on your Proxmox host?
If the latter, wouldn’t you end up with images on those iSCSI LUNs, or did you take the even harder way of creating each disk (zvol + target + integrate…) for each guest?

I would probably go ahead and do an export to qcow2 files on Proxmox, which you would most likely be able to import again on the TrueNAS side.
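Something along these lines should do it, assuming the disk is reachable as a raw block device or zvol on the machine doing the conversion (paths are placeholders):

# dump a raw disk/zvol to a qcow2 file
qemu-img convert -p -f raw -O qcow2 /dev/zvol/tank/vm-disk /mnt/scratch/vm-disk.qcow2
# and write it back onto a freshly created zvol of at least the same size
qemu-img convert -p -f qcow2 -O raw /mnt/scratch/vm-disk.qcow2 /dev/zvol/tank/vm-disk-restored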

I must confess I have no idea about SCALE’s “virtualization”, as I’m used to running vSphere with TrueNAS.

Maybe show what type of disk or whatever you actually have before going any further here with more speculation than anything else.
:tada:

And you must be careful about thinking that Proxmox’s KVM would be the same as others. They tinker quite a lot with their own stuff, and it’s usually not really compatible with what everyone else ships.
OK, I didn’t anticipate that. But let’s see, I still want to believe they use standard KVM and that KVM keeps backward compatibility… that will be the next step if I run into those kinds of issues.

So are you using those Perl patches to make use of ZFS over iSCSI, or did you just create a zvol, add it as an iSCSI share, and mount it on your Proxmox host?
If the latter, wouldn’t you end up with images on those iSCSI LUNs, or did you take the even harder way of creating each disk (zvol + target + integrate…) for each guest?

I want to use the Perl-script ZFS over iSCSI plugin, as it would be ideal to initiate backups from Proxmox and have TrueNAS do the snapshots (hopefully that’s what the plugin brings).
But for now, I’m just tinkering manually. I just want to test migrating one VM (I don’t have that many anyway). So I have a VM in TrueNAS, with one main zvol for the system.

To me, that zvol contains everything and should be usable from Proxmox. Hence the idea of sharing that zvol via iSCSI and starting a Proxmox VM with it as the main hard drive.

I could probably find some way to export the zvol as a qcow2 image, but then it would be written back to TrueNAS on an iSCSI volume anyway.

Wouldn’t it be great to keep the zvol and run the VM from any hypervisor?
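For reference, as far as I understand it, the ZFS over iSCSI plugin ends up as an entry in /etc/pve/storage.cfg on the Proxmox side, roughly like this (pool, portal and target are placeholders, and the TrueNAS plugin supposedly brings its own iscsiprovider with extra options):

zfs: truenas-zfs
    pool tank
    portal 192.168.1.10
    target iqn.2005-10.org.freenas.ctl:proxmox
    iscsiprovider LIO
    blocksize 8k
    sparse 1
    content images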

I finally got this working.
In essence there’s nothing special to do.

For a boot test of your TrueNAS VM on another platform, just snapshot the VM’s main volume, clone the snapshot, then share the clone via iSCSI.
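From the shell, the snapshot/clone part boils down to something like this (dataset and snapshot names here are just illustrative; I set up the iSCSI extent and target for the clone in the GUI):

# snapshot the VM's system zvol, then make a writable clone of that snapshot
sudo zfs snapshot tank/ubuntudockerhost-0vhbns@boottest
sudo zfs clone tank/ubuntudockerhost-0vhbns@boottest tank/ubuntudockerhost-0vhbns-clone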

You can double-check the content is there with fdisk on the zvol, e.g.:

freenas% sudo fdisk -l /dev/zvol/tank/ubuntudockerhost-0vhbns-clone
[sudo] password for jean:
Disk /dev/zvol/tank/ubuntudockerhost-0vhbns-clone: 80 GiB, 85899345920 bytes, 167772160 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 16384 bytes
I/O size (minimum/optimal): 16384 bytes / 33554432 bytes
Disklabel type: gpt
Disk identifier: F02887A7-E662-44BC-83A5-6905B15FF0EE

Device                                                Start       End   Sectors  Size Type
/dev/zvol/tank/ubuntudockerhost-0vhbns-clone1    2048   1050623   1048576  512M EFI System
/dev/zvol/tank/ubuntudockerhost-0vhbns-clone2 1050624   3147775   2097152    1G Linux filesystem
/dev/zvol/tank/ubuntudockerhost-0vhbns-clone3 3147776 167772126 164624351 78.5G Linux filesystem

On Proxmox, for instance (but it would work the same on XCP-ng), you can attach the iSCSI share (use the “Use LUN directly” option on PVE, since the volume is dedicated to a single VM), identify the block device that has been created, and double-check that you have the same partitions with fdisk again, e.g.:

root@pve:~# fdisk -l /dev/sde
Disk /dev/sde: 80 GiB, 85899345920 bytes, 167772160 sectors
Disk model: iSCSI Disk
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 16384 bytes
I/O size (minimum/optimal): 16384 bytes / 8388608 bytes
Disklabel type: gpt
Disk identifier: F02887A7-E662-44BC-83A5-6905B15FF0EE

Device       Start       End   Sectors  Size Type
/dev/sde1     2048   1050623   1048576  512M EFI System
/dev/sde2  1050624   3147775   2097152    1G Linux filesystem
/dev/sde3  3147776 167772126 164624351 78.5G Linux filesystem
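
The target itself can be added from the Proxmox GUI (Datacenter → Storage → Add → iSCSI, where the “Use LUN directly” checkbox lives) or with pvesm, roughly like this, with a placeholder portal and IQN:

# make the TrueNAS target visible to PVE (portal and IQN are placeholders)
pvesm add iscsi truenas-lun --portal 192.168.1.10 --target iqn.2005-10.org.freenas.ctl:ubuntudockerhost
# the LUN then shows up as a plain block device (here /dev/sde)
lsblk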

Here we have a UEFI-boot VM (it would be simpler with a classic BIOS, I guess), so you need to make sure the hypervisor that’s going to host the VM can find the EFI partition.

On PVE, make sure you leave Add EFI Disk unchecked, since everything is handled from the main volume.
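The CLI equivalent would be roughly the following (VMID, specs and the by-path device name are placeholders; passing the raw /dev path is just one way of attaching the LUN):

# create an empty UEFI (OVMF) VM without adding an EFI vars disk (placeholder VMID/specs)
qm create 200 --name ubuntudockerhost-test --memory 4096 --cores 2 --bios ovmf --scsihw virtio-scsi-pci --net0 virtio,bridge=vmbr0
# attach the iSCSI LUN as the main disk and boot from it
qm set 200 --scsi0 /dev/disk/by-path/ip-192.168.1.10:3260-iscsi-iqn.2005-10.org.freenas.ctl:ubuntudockerhost-lun-0
qm set 200 --boot order=scsi0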

And that’s enough for a successful first boot.
And then you can start dealing with all the subsequent issues :slight_smile: (network device name change, other potential mounted storage, hardware differences, etc.)