Cannot boot a non-UEFI ISO image in a VM?

Running TrueNAS 25.04.1.

Trying to boot a Seagate SeaTools ISO in a VM. It fails with:

BdsDxe: failed to load Boot0002 "UEFI QEMU NVMe Ctrl incus_root 1" from PciRoot(0x0)/Pci(0x1,0x5)/Pci(0x0,0x0)/NVMe(0x1,00-00-00-00-00-00-00-00): Not Found
BdsDxe: failed to load Boot0001 "UEFI QEMU QEMU CD-ROM " from PciRoot(0x0)/Pci(0x1,0x1)/Pci(0x0,0x0)/Scsi(0x0,0x1): Not Found

The VM was freshly created to boot from the ISO.

How do I arrange for non-UEFI boot?

I have tried:

root@truenas:/mnt/dozer# incus config set seatools security.csm=true
Error: Couldn't find one of the required UEFI firmware files: []

Supposedly, security.csm defaults to false, and setting it to true allows booting with non-UEFI firmware.
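The "Couldn't find one of the required UEFI firmware files" error suggests the host is missing the legacy firmware image Incus wants for CSM mode. A sketch of what might fix it, assuming a Debian-based host and that the SeaBIOS package provides the firmware Incus looks for (the package name and the need to disable secure boot are assumptions, not confirmed for this release):

```shell
# Assumption: security.csm=true needs a SeaBIOS firmware image on the host
apt install seabios

# CSM is incompatible with secure boot, so turn that off first
incus config set seatools security.secureboot=false
incus config set seatools security.csm=true
incus start seatools
```

If the firmware files it wants are elsewhere, the error message's empty list `[]` suggests Incus found no candidate paths at all, which points at a missing package rather than a wrong config key.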

Also tried

# export EDITOR=vi; incus config edit seatools

to add the raw.qemu.conf entry, after managing to understand how it was supposed to be used:

config:
  boot.autostart: "false"
  limits.cpu: "2"
  limits.memory: 4096MiB
  volatile.apply_nvram: "true"
  raw.qemu.conf: |-
    [drive][0]
    file = "/usr/share/seabios/bios-256k.bin"
...
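As an alternative to overriding the generated QEMU config file with raw.qemu.conf, Incus also has a raw.qemu key that appends raw arguments to the QEMU command line. A minimal sketch, assuming the standard SeaBIOS path on the host (path not verified for this system):

```shell
# Sketch: point QEMU at SeaBIOS directly via extra command-line
# arguments instead of rewriting the [drive] section
incus config set seatools raw.qemu "-bios /usr/share/seabios/bios-256k.bin"
incus start seatools
```

This sidesteps having to match the exact section/index syntax that raw.qemu.conf expects.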

No joy. The qemu process is running, but as far as I can tell from perf top it is stuck in a loop, trying and failing to boot the machine.

Note that Instances (Linux Containers and VMs) are Experimental in 25.04 and features will be changing before they move out of Experimental status.

Having said that, I just tried running a BIOS only OS image in a VM and it failed.

That said, I think the lack of BIOS boot support will be the least of your issues with a small toolset like SeaTools: the lack of VirtIO drivers for either SCSI or block devices will prevent the included OS from accessing its disks. IIRC, SeaTools uses FreeDOS as the OS layer under the diagnostic tools.

Question: are you trying to spin up SeaTools because you have a failing/failed Seagate drive and they want output from it before you can RMA the drive?

If that’s the case, you can probably use Rufus to write that ISO to a USB drive, boot the server from it, capture the output they need, then boot back into TrueNAS as usual.
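If you'd rather skip Rufus and write the stick from a Linux box, a standard dd sketch works for most bootable ISOs (the ISO filename and /dev/sdX are placeholders; double-check the device node, since this overwrites it completely):

```shell
# Write the SeaTools ISO to a USB stick (destructive to /dev/sdX!)
# seatools.iso and /dev/sdX are placeholders -- substitute your own.
dd if=seatools.iso of=/dev/sdX bs=4M status=progress conv=fsync
```

Note some DOS-era tool ISOs are not hybrid images and won't boot from USB via a raw dd; Rufus handles those cases by rebuilding the boot layout.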

I'm trying to run one of the “fix” menu items, which, based on past experience, takes almost a week to complete for this drive type / failure mode.

That is what I did before I gave away a spare X10DRi-based system, which I used to mess with the first one of these drives to fail, but back then I was running a Linux version of SeaTools booted off a Kali Linux USB drive.

I guess I should just do that again in a VM, now that I know I can PCIe-forward the sSATA controller, but I wanted to try the old (DOS-based?) version to see what it was like.

BTW: I purchased both of these defective drives from B&H Photo as a group of 4, but I am not in the US, so RMA is not possible. A painful lesson to learn.

I have two SATA controllers in this platform; one I had never used before hooking disk slot zero up to it for this purpose.

This has allowed me to successfully PCI-forward it to a VM I created using the GUI. I'm not entirely sure it will work until I forward it to a bootable VM; the UEFI firmware in the non-bootable VM complains that it cannot reset the device.

I’ll know in 20 minutes or so.


Success!
smartctl -a /dev/sda works in the Debian VM I spun up.

Once I locate the Linux version of SeaTools, I can get to work.


The test I wanted to run is the “fix all long”, which takes almost a week to complete.

Running in an Incus VM with the extra SATA device PCI-forwarded. I had to forward the entire device so that the PCI reset worked as expected:

00:11.0 Unassigned class [ff00]: Intel Corporation C620 Series Chipset Family MROM 0 (rev 09)
00:11.5 RAID bus controller: Intel Corporation C610/X99 series chipset sSATA Controller [RAID mode] (rev 09)
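For anyone doing the same from the CLI rather than the GUI, the equivalent is a `pci` device entry on the instance. A sketch, assuming the sSATA controller's address from the lspci output above and a VM named seatools (adjust both for your system):

```shell
# Forward the sSATA controller function to the VM by PCI address.
# 0000:00:11.5 comes from the lspci listing above; the related
# 00:11.0 MROM function may also need forwarding for reset to work.
incus config device add seatools ssata pci address=0000:00:11.5
incus start seatools
```

The host driver must release the controller for passthrough, so make sure no host pool is using disks behind it first.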

This is cool. Loving TrueNAS!
