Instance volume lifecycle management?

Under the “Import Zvol” menu, it says “Importing a zvol as Instances volume allows its lifecycle to be managed, including backups, restores, and snapshots. This allows portability between systems using standard tools.”

However, I cannot figure out how to actually perform backups, restores, and snapshots for Instance volumes through the GUI - I think it would be possible from the command line, but I generally like to stick to the GUI for such tasks. Is this functionality actually shipped with 25.04, or is it planned for a later release?

Bumping because I’m actually getting a bit nervous here - I still have not figured this out, and I’m not super keen on my VMs being unprotected until, what, Goldeye?

Agreed, especially since the release forces us into this volume management system and won’t let us just attach a zvol as before.

It seems to be backed up if you do a pool-level backup. I migrated all my stuff yesterday, and my regular nightly backup picked it all up.
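
For anyone wanting to verify: a pool-level backup here means a recursive snapshot (or replication) from the pool root, which picks up the Instance volumes along with everything else. A rough shell sketch, with MYPOOL as a placeholder (in the UI this would be a Periodic Snapshot Task with Recursive checked):

# Recursive snapshot of the whole pool, Instance volumes included
sudo zfs snapshot -r MYPOOL@nightly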

Interesting! Okay, I might try this. Obviously not a very convenient approach, but if it works then it’s better than nothing I guess.

Beware of doing VM snapshots manually in the shell - it can be risky if the middleware doesn’t like it (there’s a Jira ticket about this).

But if someone can offer a working workaround, that would be great 🙂

Backing up the whole pool is not great for me. I have one VM doing rolling packet captures, and I’m going to burn up a lot of my storage space on these snapshots now. 😦

Yeah, exactly, it’s really not ideal. Gonna have to do some re-configuring myself to make this viable.

I found the directory where the zvols are stored; do with this information what you will. 😝
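
If anyone else is hunting for it: assuming the default layout, the Instance volumes sit under the pool’s .ix-virt dataset, so something like this should list them (MYPOOL is a placeholder):

# List the zvols Incus manages for Instances
zfs list -r -t volume MYPOOL/.ix-virt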

I set up a replication task in the UI from MYPOOL/.ix-virt/custom to BACKUPPOOL/custom with Recursive checked, and it seems to be doing the thing. Now I just need to tweak and fiddle with replication jobs until I get back to where I was before.
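
For reference, this is roughly the CLI equivalent of that task, using my dataset names; it’s a sketch of what the task does, not a recommendation to run it by hand (see the snapshot warning above):

# Recursive snapshot, then a full replication stream to the backup pool
sudo zfs snapshot -r MYPOOL/.ix-virt/custom@manual-backup
sudo zfs send -R MYPOOL/.ix-virt/custom@manual-backup | sudo zfs recv -F BACKUPPOOL/custom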

Importing allows Incus to manage the volumes, giving access to its native backup tools. There is currently no UI support for the backup features, but it is being worked on.

You may be able to use Incus commands to manage this via the command line, but I don’t have any experience with it.
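
For reference, on a stock Incus install the native tooling looks roughly like this; untested under the TrueNAS middleware, and myvm is a hypothetical instance name:

# Create and restore an instance snapshot
sudo incus snapshot create myvm snap0
sudo incus snapshot restore myvm snap0
# Export a portable backup archive
sudo incus export myvm /root/myvm-backup.tar.gz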

Weird, I tried to do this but it didn’t work at all. I tried going through the Replication Task Wizard and it just stalled out after I hit “Save”. 😦

I used the Advanced setup; I’ve not had good luck with the wizard.

Thank you for the response. This is deeply unfortunate.

While I agree that using the Incus storage pool should be the main, default way to manage VM volumes, there is also the “Path on the host” method, which can mount any zvol from the host. That would quickly let people use Incus with snapshots managed by TrueNAS, the same way as in ElectricEel, while the Incus implementation matures.

Implementing it also seems simple.

Path on the host

You can share a path on your host (either a file system or a block device) to your instance by adding it as a disk device with the host path as the source:

incus config device add <instance_name> <device_name> disk source=<path_on_host> [path=<path_in_instance>]

The path is required for file systems, but not for block devices.
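
A quick illustration of the two forms from the docs, with hypothetical names (myvm, tank):

# Block device: no in-instance path needed
incus config device add myvm datadisk disk source=/dev/zvol/tank/vm-data
# File system: the in-instance path is required
incus config device add myvm shared disk source=/mnt/tank/shared path=/mnt/shared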

These are the settings that worked for me in the Advanced setup:

[screenshot of the replication task settings]

Holy crap, this actually worked perfectly!

Just ran this after creating the instance (and the zvol):

sudo incus config device add <instance_name> zvol disk source=/dev/zvol/<pool_name>/<zvol_name>

It then showed up in the Instances UI, and I was able to edit its boot priority and I/O Bus. Data Protection tasks work as intended now.
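
You can also confirm the attachment from the shell:

sudo incus config device show <instance_name>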

This is definitely the best solution, thank you!

That’s basically reverting to 25.04-RC1 functionality, cool.

I’m going to move forward with bending the volume management to my will. There were some ARC caching changes they made that apply only to that dataset, and I’d like to take advantage of them.

Lol yes, that is true. The only improvement is the ability to change boot priority and I/O Bus; I had issues with boot priority on the RC (it wasn’t an option, and I struggled to find a workaround). So I can honestly say that now I’m in a relatively good place.

My workaround was to just wait for the PXE boot to time out. 🤣

Is there any guidance iX can provide about what the gotchas or anticipated-safe practices would be here? The Incus CLI is great, powerful, easy, etc., but there’s the “don’t use the CLI” warning that the GUI flashes up…

I know an official position might be hard to come by, but as long as we use Incus tooling to remove anything we create (i.e. don’t go behind its back and delete something with ZFS tooling that we created with Incus tooling), would this be anticipated to be generally safe? At least within the range of the user not doing something stupid?
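
For example, I’d assume the symmetric removal for the disk device added earlier in the thread would be something like this, followed by deleting the zvol from the TrueNAS side:

sudo incus config device remove <instance_name> zvol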