incus-storage-pools → stable/fangtooth (opened 10 Mar 2025, 08:07 PM UTC)
This commit adds the ability for the administrator to configure multiple storage pools for incus on TrueNAS. This gives administrators flexibility over where instances are stored and the ability to shift them to different storage pools.
Due to the complexity of potential incus instance configurations, failure to initialize any incus storage pool is treated by middleware as a global failure to properly set up the virt plugin.
Since it touches all areas of the virt plugin APIs, it is a prerequisite for developing backup / restore and snapshot endpoints.
The middleware virt plugin has been changed in the following ways:
* virt.global.config / virt.global.update -- these methods now have a `storage_pools` key that provides a list of ZFS pools configured for use with incus. There is a 1-1 correspondence between incus storage pool names and ZFS pool names.
As a result of this change, the `pool` key in `virt.global.config` indicates the default pool in which new instances and volumes will be created if the optional `storage_pool` key is omitted from the relevant API payload. This maintains backwards compatibility with earlier API behavior and keeps the UI in a functional state until the new storage behavior can be implemented.
Storage pools are removed from the running configuration and deleted from the incus config by removing them from the `storage_pools` list. Attempting to remove a pool that is in use by incus instances or volumes will raise a ValidationError until those instances or volumes are deleted.
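The in-use check described above can be sketched as follows. The function and data shapes here are illustrative, not the middleware's actual internals; only the behavior (refuse to drop a pool that instances or volumes still reference) comes from the post.

```python
class ValidationError(Exception):
    """Stand-in for middleware's validation error type."""


def validate_pool_removal(old_pools, new_pools, instances, volumes):
    """Return the set of pools being removed, or raise if any is still in use.

    `instances` and `volumes` are lists of dicts carrying the `storage_pool`
    key that the updated virt APIs now report on each entry.
    """
    removed = set(old_pools) - set(new_pools)
    for pool in sorted(removed):
        in_use = [i["name"] for i in instances if i["storage_pool"] == pool]
        in_use += [v["name"] for v in volumes if v["storage_pool"] == pool]
        if in_use:
            raise ValidationError(
                f"storage pool {pool!r} is in use by: {', '.join(in_use)}"
            )
    return removed
```

Removing an unused pool succeeds; removing one that still backs an instance fails until that instance is deleted.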
* virt.instance -- entries now contain a `storage_pool` key indicating the incus / ZFS pool in which the instance's root filesystem is located. Changing the location of an instance's root filesystem therefore changes the `storage_pool` in which that instance appears.
* virt.instance.create -- this method can now take an additional key in the creation payload: `storage_pool`. If specified, the instance's root filesystem will be created in the specified pool. If omitted, the instance will be created in the pool designated as the default `pool` in `virt.global.config` output.
* virt.volume -- entries now contain a `storage_pool` key indicating the incus / ZFS pool in which the volume is located.
* virt.volume.create -- this method can now take an additional key in the creation payload: `storage_pool`. If specified, the volume will be created in the specified pool. If omitted, the volume will be created in the pool designated as the default `pool` in `virt.global.config` output.
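The `storage_pool` fallback shared by virt.instance.create and virt.volume.create can be sketched like this (the function name and dict shapes are assumptions for illustration): an explicit `storage_pool` in the payload wins, otherwise the default `pool` from `virt.global.config` is used.

```python
def resolve_storage_pool(payload: dict, global_config: dict) -> str:
    """Return the pool a new instance or volume should be created in.

    `global_config` mirrors virt.global.config output: a default `pool`
    plus the list of configured `storage_pools`.
    """
    pool = payload.get("storage_pool") or global_config["pool"]
    if pool not in global_config["storage_pools"]:
        raise ValueError(f"{pool!r} is not a configured storage pool")
    return pool
```

With `{"pool": "tank", "storage_pools": ["tank", "fast"]}`, a payload without `storage_pool` lands in `tank`, while `{"storage_pool": "fast"}` overrides the default.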
* virt.instance.device_* -- Disk type devices now have a `storage_pool` property indicating the storage pool in which the disk exists.
The middleware API changes correspond with the following changes to the incus configuration:
* The `default` profile no longer includes a hard-coded root filesystem disk. The root filesystem for a new instance is defined by middleware when the instance is created through our APIs.
* There is no longer a hard-coded `default` incus storage pool. Storage pools are named based on their underlying ZFS pool.
Backend API changes merged. UI support forthcoming. Basically, you can define an array of zpools to use for incus storage under the `storage_pools` key.
When you create an incus volume, instance, etc., the API now has an optional `storage_pool` key that allows specifying the pool in which to create it.
This allows for configurations where, for instance, VMs are on a different pool than containers. Locating containers in the pool where external data lives (for instance, a media dataset) is interesting from a block-cloning / efficiency perspective.
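As a rough sketch of that split, the creation payloads might look like the following. The pool names and the extra payload keys beyond `storage_pool` are hypothetical examples, not taken from the actual API schema.

```python
# Hypothetical virt.instance.create payloads splitting instance types
# across pools. Only `storage_pool` is documented in this post; the
# other keys and the pool names are illustrative.

vm_payload = {
    "name": "win11",
    "storage_pool": "nvme",   # explicit: VMs go to a fast pool
}

container_payload = {
    "name": "jellyfin",
    "storage_pool": "media",  # same ZFS pool as the media dataset, so
                              # block cloning can avoid copying data
}
```

Omitting `storage_pool` from either payload would instead place it in the default `pool` reported by `virt.global.config`.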