I’m new here so please forgive me if this is a dumb question or subject.
I have TrueNAS Scale running as a guest on Proxmox. It took me nearly 3 days to get it working, and during my searches for answers I came across several posts warning that PCI passthrough is really only experimental with Proxmox.
This makes me nervous. So I ask the community: what (free) hypervisor does TrueNAS Scale run well on? It seems that VMware isn't offering a free version anymore. MS Hyper-V? XCP-ng? Any thoughts would be helpful. Thanks
True, and that’s the only known-solid option AFAIK. Hyper-V is only free if you don’t have to pay for a Windows Server license.
How recent are they? Because of course Proxmox has been developing over time, as has xcp-ng. And I think those really are your two options.
I suspect PCI passthrough will forever be treated as a “you break it, you fix it” feature, not because it’s unstable or anything, but because the configuration is not trivial, options vary quite a bit, and the caveats abound. And people’s expectations are often rather impossible to meet.
Hell, I have some experience and got tripped up the other day by a system that should have supported SR-IOV, but didn’t because the system firmware didn’t enable the magic configuration bits needed to support it.
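If you want to sanity-check that sort of thing up front, lspci and the kernel log give it away. Rough example below; the PCI address is just a placeholder:

    lspci -vvv -s 03:00.0 | grep -i sr-iov    # the capability line only appears if the device advertises SR-IOV
    dmesg | grep -e DMAR -e IOMMU             # confirms the firmware actually enabled VT-d / the IOMMU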
Proxmox’s PCI passthrough warning should be at least a decade old, IIRC.
XCP-ng is also a great alternative, with plenty of tutorials around as well… but if you are using SCALE, why would you need a hypervisor? Isn’t that one of its many jobs?
Yo dawg, we heard you like hyperconverged systems, so we put a hyperconverged OS that provides KVM inside your hyperconverged OS that provides KVM, so that you can virtualize while you virtualize.
…because SCALE’s UI for the hypervisor features is light-years behind that of Proxmox and xcp-ng.
Edit: I really like SCALE for apps, particularly the TrueCharts apps. They’re everything plugins ever wanted to be and more–they reliably install, it’s a point-and-click experience, they have easy access to data storage on the NAS, Ingress is trivial, they’re kept very up-to-date, etc. Apps are a definite win.
It’s the VM piece that falls behind. I don’t doubt that the VMs themselves are just as performant and stable as they are under Proxmox–they’re both running KVM, after all. But the UI piece just isn’t there yet.
Thank you all for your thoughts
I didn’t see it mentioned, but there are actually quite a few people here running Proxmox with TrueNAS on top of that. If my ESXi ever gives me trouble, I’ll just move to Proxmox, given the things I’ve seen posted.
Meanwhile, I devirtualized TrueNAS and now use it as my hypervisor. Yes, the VM configuration is a bit primitive, but the benefit is native/local access to the system’s resources and file systems.
Believe it or not, I cannot run a VM in my TrueNAS while running it on ESXi. If there is a trick to it, I just don’t know it, and that does not bother me. One day I might run TrueNAS on bare metal again and then I can play with the VMs there.
Isn’t there a checkbox in VMware to pass vmm extensions or something like that for nested hypervisors?
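If I remember right, it’s the “Expose hardware assisted virtualization to the guest OS” checkbox under the VM’s CPU settings, which lands in the .vmx as the line below (from memory, so double-check):

    vhv.enable = "TRUE"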
TrueNAS Scale runs well on my Proxmox hypervisor with dedicated storage.
And the hypervisor side of things is much better with Proxmox.
Not sure if there is a ‘vmm’ checkbox; maybe I will take a look into it. It would be cool to run a VM in TrueNAS so I can at least relate to other people’s plights.
I have an LSI 9300 and had to edit the TrueNAS VM’s config file (rombar=off) to stop it from trying to “read” the BIOS settings of the LSI on boot which would literally take 10 minutes. That was hard to troubleshoot because I kept thinking that the VM had hung. The config tweak worked, but seemed kind of janky to me. Should I be worried or should I just accept that I am now living in a world of details and tweaks?
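For reference, the tweak ended up as a single line in /etc/pve/qemu-server/<vmid>.conf, roughly like this (the PCI address is just an example from my box, and the flag is a boolean, so it reads rombar=0):

    hostpci0: 0000:01:00.0,rombar=0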
That sounds fairly normal
LSI HBAs can have a BIOS OpROM which allows booting off a connected disk.
This would seem to be a bad idea when virtualized, since you’d want to boot off the boot image that you’re hosting in Proxmox.
Disabling the ROM BAR disables the OpROM, I think. The Reddit discussion points out a GUI option to do it as well.
@Stux Thanks for the information. I tried “Expose hardware assisted virtualization” and it was a no-go. The error message said it failed because I am using PCIe passthrough (for my drives, of course). It was not meant to be. I will need to read up on it tomorrow to see if I can get through this hurdle.
PCI passthrough is tricky, and very dependent on hardware.
My previous home server hardware had:
i7-3770, 32GB RAM, and an ASRock Extreme6 motherboard.
It had six Intel SATA ports plus two more through an ASMedia controller.
As I recall, I had two SATA SSDs in a mirror for boot/virtual machine storage.
They were connected to the ASMedia SATA ports.
I passed through the Intel SATA controller to the TrueNAS SCALE VM with no issues.
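If anyone wants to replicate that old setup, it boiled down to checking the IOMMU groups and handing the controller to the VM, roughly like this (the VM ID and PCI address are just examples and will differ per board):

    find /sys/kernel/iommu_groups/ -type l    # check that the SATA controller sits in its own IOMMU group
    qm set 100 -hostpci0 00:1f.2              # pass the onboard Intel SATA controller to VM 100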
It lacked memory and CPU power though, so I upgraded.
I typically use older gaming hardware as a home server.
So now my main home server is:
3600X CPU, MSI B450 Tomahawk, 64GB memory.
It is much more powerful, but PCI passthrough is apparently broken.
More accurately, I do not wish to spend time fixing it, if it can be fixed at all.
So I used this:
https://pve.proxmox.com/wiki/Passthrough_Physical_Disk_to_Virtual_Machine_(VM)
It shows how to attach drives without PCI passthrough.
It is not ideal, of course; SMART monitoring of the drives needs to be done through Proxmox instead.
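The gist of that page is one qm command per disk, plus reading SMART from the Proxmox shell afterwards; something like this, where the VM ID and disk ID are placeholders:

    qm set 101 -scsi2 /dev/disk/by-id/ata-ST4000DM004-XXXXXXXX_XXXXXXXX    # attach the raw disk to the TrueNAS VM
    smartctl -a /dev/disk/by-id/ata-ST4000DM004-XXXXXXXX_XXXXXXXX          # SMART has to be read on the host with this approach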
Oh, and I also pass through the TrueNAS boot drive the same way, so the VM boots from a physical disk.
That means if there is a big problem with Proxmox, I could boot the machine from that drive directly and still access the data.
TrueNAS SCALE lacks a lot as a hypervisor as I see it, so I use Proxmox.
Ideally this would be multiple machines, one as a NAS and one as a hypervisor, but that is not always practical.
You could just nuke the BIOS?
Example for an older card; it should be fairly similar nowadays (sas3flash, of course):
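Very rough sketch of the usual procedure, with the firmware file name as a placeholder; check the erase-region numbers against Broadcom’s sas3flash documentation before running anything:

    sas3flash -list                          # note the controller, firmware version, BIOS version and SAS address first
    sas3flash -o -e 6                        # erase the flash including the OpROMs; only with the IT firmware image already in hand
    sas3flash -o -f SAS9300_8i_IT.bin        # re-flash the IT firmware, deliberately leaving out the -b BIOS/UEFI ROM images
    sas3flash -o -sasadd 500605bXXXXXXXXX    # restore the SAS address printed on the card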
Yes, nuking the BIOS and UEFI OpROMs would likely resolve that specific issue.