Incus: How & why are instance rootfs using emulated NVMe?

RE: Fangtooth RC1

This is probably a naive question from someone who has only used Incus briefly. How and why are instance rootfs set to io.bus=nvme?

I’m curious as to the how, after finding logging in Incus to be pretty opaque (QMP logs, etc.). Is this a SCALE thing? AFAIK using Incus natively on Debian defaults to virtio-scsi for the rootfs.

It’s not in the profile, and it’s not via raw.qemu. Is it via “incus config device set” prior to launch? How does the instance rootfs end up on an NVMe device?

Why is this setting being made? Is there quantitative evidence for better performance, or are there other reasons?

It’s set in the Incus config.

sudo incus config show <instance>
  root:
    io.bus: nvme

I don’t think NVMe is any better than virtio-scsi. The only benefit is that it doesn’t need the virtio drivers.
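If you want numbers rather than opinions, you can compare the two buses yourself from inside the guest. A minimal sketch using fio (assuming fio is installed in the instance; run the same job once with the root disk on io.bus=nvme and once on virtio-scsi):

```shell
# Hypothetical inside-guest benchmark; /tmp/fio.test is a scratch file, not
# anything TrueNAS or Incus creates. Repeat under each io.bus setting.
fio --name=randread --filename=/tmp/fio.test --size=1G \
    --rw=randread --bs=4k --iodepth=32 --ioengine=libaio \
    --direct=1 --runtime=30 --time_based --group_reporting
```

Compare the reported IOPS and latency between runs; differences on a ZFS-backed zvol are often within noise for typical workloads.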

Obviously it’s in the instance config, but how/when is it set? How do you debug what incus commands the TN UI generates in the backend?

We’re adding that in anticipation of making Windows 11 install without needing to roll a customized Windows install ISO with the virtio drivers slipstreamed in, which is a pain in the rear. We’ve got it working internally, and that is one piece of the puzzle.

Of course you can use virtio if you wish, but it’s about removing barriers to entry.
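For anyone who does want to switch back: a sketch of the CLI route, assuming a hypothetical instance name "win11" and that the root device is defined on the instance itself (if it is inherited from a profile, you’d use `incus config device override` instead):

```shell
# Stop the instance so the bus change applies cleanly on next boot
incus stop win11

# Point the root disk back at virtio-scsi
incus config device set win11 root io.bus=virtio-scsi

incus start win11

# Confirm the setting took effect
incus config show win11 | grep -A 2 'root:'
```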

Oh, that’s what you mean.
Well, TrueNAS uses the Incus API. To know exactly what it does and how, you would need to read the source code. GitHub - truenas/middleware: TrueNAS CORE/Enterprise/SCALE Middleware Git Repository
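Short of reading the middleware source, you can also watch what hits the Incus daemon live. A sketch using `incus monitor` (run it in one terminal while clicking through the TrueNAS UI in another):

```shell
# Lifecycle events: instance created/started/stopped, devices changed, etc.
incus monitor --type=lifecycle --pretty

# Much noisier: raw daemon logging at debug level, which includes
# API request handling
incus monitor --type=logging --loglevel=debug --pretty
```

This won’t show you the exact middleware code path, but it does show every API-driven operation, including the device config the UI applies at instance creation.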