I upgraded from 24.10 to 25.04 and my virtual machine is gone

@Foxtrot314, you are not alone. I am in the same boat, with a Win11 VM running ASM (ARK Server Manager) and RAT4 (7 Days to Die server manager), supporting several clustered ASE servers and one 7D2D server. I also run various apps to which I was able to assign IP addresses from a reserved range on pfSense via a TC app (MetalLB, I believe). Having those separate IP addresses allowed me to easily route traffic through the appropriate VPNs for specific applications.

From what I understand, the method moving forward involves assigning aliases (presumably from the same reserved batch I used previously) to one of the two NICs on the TrueNAS box (not entirely sure which NIC that may be yet), and then setting an IP from that batch for each app in its configuration via the edit function for that app. It appears I will have to wait until v25.10 or later to achieve both of those things, as I am currently on v24.10.2.2.


My VM did work, but now it says no boot disk.

It shows a lack of effort on the developers' part that they didn't provide a migration method for VMs.

Yep. But don’t worry, they’re throwing out this entire new virtualization system and going back to the old one.

So I moved my VM to Incus, and now with 25.04.2 you’ve re-introduced VMs.

  • How can I take my Incus KVM back to the old model?
  • Will this re-introduced model stay around, or are there plans to drop it again?

Frankly, there is too much flip-flopping with no clear direction, and it seems like knee-jerk changes, so some guidance would be extremely helpful.

Well, we have a 300-message-and-counting discussion about your last sentence…
So back to your questions:

  • Do you have a backup of your Electric Eel configuration? This might import straight into 25.04.2. Might…
  • Your guess is as good as mine :roll_eyes:

Hypothetically, not having done it yet:

Clone and promote the zvol out to somewhere else, re-enable the cache settings and set the volmode to default, then recreate the VM “classically” and point it at the zvol.

You should then be able to delete the old Incus VM in the UI.
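A minimal sketch of that approach, assuming a pool named tank and an Incus-imported zvol at tank/.ix-virt/custom/default_myvm (both names are placeholders):

# snapshot the Incus-managed zvol so it can be cloned
sudo zfs snapshot tank/.ix-virt/custom/default_myvm@split

# clone it out to a normal location, then promote the clone so it no longer depends on the original
sudo zfs clone tank/.ix-virt/custom/default_myvm@split tank/vm/myvm
sudo zfs promote tank/vm/myvm

# put the cache and volmode properties back to their defaults
sudo zfs inherit primarycache tank/vm/myvm
sudo zfs inherit secondarycache tank/vm/myvm
sudo zfs set volmode=default tank/vm/myvm

After the promote, the original Incus zvol becomes the dependent clone, so deleting the old VM should leave tank/vm/myvm standing on its own (the @split snapshot can be destroyed afterwards).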

For reference, it’s similar to this, but in reverse:

And I document the exact properties here:


Here’s what I did.

In the TrueNAS UI, I went to Containers and stopped the VM I wanted to move back to Virtual Machines.

Then, in the TrueNAS shell, I listed the volumes I had imported into Incus with zfs list -r -t vol <pool>/.ix-virt

I had originally imported my Zvol into Incus using the move option, and I did not replace my root disk with the Zvol as in Stux’s guide. Here’s how my volumes looked and the steps I took, using this as an example.

sudo zfs list -r -t vol tank/.ix-virt

NAME                                        USED  AVAIL  REFER  MOUNTPOINT
tank/.ix-virt/custom/default_haos          32.5G   119G  1.89G  -
tank/.ix-virt/virtual-machines/HAOS.block    56K  87.9G    56K  -

I can see that tank/.ix-virt/custom/default_haos is the Zvol I want to move, so I took a snapshot of it.

sudo zfs snapshot -r tank/.ix-virt/custom/default_haos@relocate

Then I used zfs send/receive to copy that to the desired location in my pool.

sudo zfs send tank/.ix-virt/custom/default_haos@relocate | sudo zfs receive -v tank/vm/haos

I checked the cache settings and volmode as Stux mentioned, and they were already set correctly. However, I had to set refreservation=auto so the Zvol used thick provisioning (the TrueNAS default when creating a Zvol in the UI).

sudo zfs set refreservation=auto tank/vm/haos

Then I recreated the VM on the Virtual Machines page, using the existing Zvol. Once I confirmed it was functioning correctly, I deleted the VM from the Containers page and deleted the hidden Zvol from the Configuration > Manage Volumes menu. I also deleted the @relocate snapshot from the Zvol I moved.
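Since zfs receive also creates the @relocate snapshot on the destination, deleting it from the shell would look roughly like this, using the same tank/vm/haos example as above:

# confirm the received snapshot exists on the new Zvol, then remove it
sudo zfs list -t snap -r tank/vm/haos
sudo zfs destroy tank/vm/haos@relocate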


I also plan on doing something similar with my homelab virtual machines. I have about 11 to do, so I’m planning it now for this coming weekend.

Your general flow should work fine. I would caution, however, that there are ramifications to this particular workflow from a space allocation standpoint.
Using zfs send/receive in this way will result in you temporarily using 2x the amount of space for each volume you are replicating.

refreservation=auto may not always be the answer; I prefer riding on the dark side with thin provisioning :stuck_out_tongue:
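If you do want to stay thin, a minimal sketch with the same tank/vm/haos example from above would be to check what the Zvol is reserving and drop the reservation:

# see how much space the Zvol reserves versus what it actually uses
sudo zfs get volsize,used,refreservation tank/vm/haos

# thin provisioning: no reservation, space is only consumed as blocks are written
sudo zfs set refreservation=none tank/vm/haos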