My only gripe with HAOS is the lack of a proper installer ISO. The “dd this image to your boot device” approach is a bit of a hassle: it’s not a scenario most hypervisors are designed to handle naturally, nor is it particularly convenient for installing on physical hardware (other than on a Pi, where that’s how you do everything, and it’s a built-in option in the Raspberry Pi Imager). But once that’s done, and it certainly isn’t that hard, it really is the way to go.
Yep, I know, it’s a one-time hassle. Once running and configured, it runs itself, forever. I’ve been using HA for 4 years now and have never had to troubleshoot anything.
But yes, agreed.
Adding an extra worker to my kubernetes cluster for “free” is always nice.
Beyond that, I wouldn’t touch the VM system with a long pole.
Even more so since iX went “let’s screw the users over with no migration path” again by just moving to another backend with even more obfuscation on top of KVM/QEMU.
It really does not.
Only for discovery really.
I’d almost forgotten about this one: I’m running another VM under SCALE (that I’ll need to manually migrate when I upgrade to Fangtooth, oh joy) for Proxmox Backup Server. There’s no app for it, Proxmox themselves don’t publish a Docker image for it, and they recommend against running it in Docker. And running the backup server under the hypervisor it’s backing up doesn’t seem like the best course of action.
I’m playing games on a Windows VM on TrueNAS.
I run FreeBSD, Linux (including TrueNAS) and Windows 10/11 VMs on TrueNAS.
pfSense is FreeBSD-based.
I don’t run VMs in TrueNAS, only Proxmox. In fact, my TrueNAS is virtualized on Proxmox, with an HBA passed through. I also don’t use the TrueNAS applications; I instead use my own docker compose files to run docker apps on my TrueNAS instance, and I bind mount my datasets in TrueNAS. My entire server, including Proxmox, TrueNAS, 10 gig networking, 8 drives, 6 VMs, and 24 containers only consumes 40 watts at the wall plug.
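For anyone curious what that setup looks like, here’s a minimal sketch of a compose stack that bind mounts a TrueNAS dataset. The service, port, and the `/mnt/tank/apps/uptime-kuma` dataset path are made-up examples, not the poster’s actual setup:

```shell
# Minimal sketch: a Docker Compose stack whose data lives directly on a
# TrueNAS dataset via a bind mount (paths and service are hypothetical).
STACK_DIR=/tmp/stacks/uptime-kuma     # wherever you keep your compose files
mkdir -p "$STACK_DIR"
cat > "$STACK_DIR/docker-compose.yml" <<'EOF'
services:
  uptime-kuma:
    image: louislam/uptime-kuma:1
    restart: unless-stopped
    ports:
      - "3001:3001"
    volumes:
      # bind mount straight onto the ZFS dataset's mountpoint, so the app's
      # data is snapshotted and replicated along with the rest of the pool
      - /mnt/tank/apps/uptime-kuma:/app/data
EOF
echo "wrote $STACK_DIR/docker-compose.yml"
# bring it up with: docker compose -f "$STACK_DIR/docker-compose.yml" up -d
```

The nice part of bind mounting rather than using Docker volumes is that the app data is just files in the pool, covered by the same snapshot and replication tasks as everything else.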
I use VMs so I don’t have to buy a personal computer. I work remotely and only have a MacBook Pro from my job. I run a VM for all my personal computer use, rather than using my work OS.
TrueNAS virtualised under TrueNAS? With ZFS on ZFS? Inquiring minds want to know…
Probably just a test VM with nightlies/betas of newer versions for testing purposes…
Just for some additional food for thought:
Looking at the general statistics:
VMs are pretty widely deployed: just slightly behind NFS and SSH in terms of the percentage of systems running them. For comparison, nearly 3x as many systems run VMs as run iSCSI.
Personally the only VM I run is HAOS, for many of the same reasons mentioned above. My HA setup has grown quite complex, and managing it manually or via containers would be pretty tedious at this point. (I use too many add-ons)
Like others, I run HAOS, for the reasons well outlined above.
I used to run an Arch VM for Nextcloud and UniFi, but with the move away from K3s to Dockge, I now run those as docker compose stacks.
As such, the only native app I run is Dockge.
For Home Assistant, I’m using a VM for HAOS, simply because there is no LXC support and I can’t sustain the effort of maintaining that myself…
My current reason, besides easier customization, is simple: a dedicated IP address per service makes for good availability.
For example, my Pi-hole normally runs in a VM with its own dedicated IP address served by the router’s DHCP.
I also have a backup Pi-hole on a physical Raspberry Pi, restored from a Teleporter backup and lying around powered off in my “server room” in the basement, with the same IP as my VM.
Need to do some updates or shaky reconfigurations like network settings on TrueNAS? Just power down the VM and fire up the Raspberry Pi.
Wife surfing the web won’t even notice the things going on in the background.
Now that will probably change once future releases allow apps to have their own dedicated IPs.
I do some IT stuff for my small church. I’m trying to train another person to do the stuff too, so I’m limiting the number of systems I use so there is less to teach him. (This is why I am not running a separate hypervisor-focused OS.) The VMs themselves all run Turnkey Linux deployments so that they have auto security updates and such, and thus don’t really require any maintenance. One runs a basic web server and the other runs Uptime Kuma to monitor some of my personal stuff. I prefer having them in VMs from a security perspective, as there is increased isolation compared to an app. (They are also isolated on the network, with their own VLANs that can’t get to anything else.) It’s two separate VMs because the latter is monitoring my personal stuff and I wanted to make sure it could easily be removed if something were to come up where they no longer wanted me doing that. (My homelab monitors their public services; so far everyone has been cool with the mutual trade approach.)
Edit: To clarify the reason security was a concern. These VMs are running publicly exposed services.
I don’t presently run a VM on SCALE, but I plan to. HAOS in a VM makes a lot more sense than the present Docker setup mess. I may also run a second VM to run Blue Iris in a barebones Windows environment, since I keep failing to get Frigate set up.
It seems like the vast majority of VM users do ‘real work’ in their use cases, so I apologize in advance as mine is primarily for maintaining technical skills in retirement. In short, I run VMs on TrueNAS (Core) because I can. Prior to discovering FreeNAS and its ability to host VMs, I had old physical computers cluttering my home office, running open source software for fun and experimentation. I suppose I could have taken up golf, but instead my profession morphed into my hobby.
Thank you to all who contribute to making TrueNAS available!
Although I moved to a dedicated device because of pass-through issues with my Z-Wave controller, my only use case for a TrueNAS VM has been HAOS.
I couldn’t agree more. From a TrueNAS perspective, I find the three commands to download, extract, and write the image to a zvol rather trivial, but you wouldn’t believe the number of people who have trouble with this. (I don’t use dd in my guide.)
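For the curious, that workflow looks roughly like the sketch below. The HAOS release number and the zvol path are placeholders (check the release page and your own pool layout), and this is the general shape of the procedure, not a quote from anyone’s guide. By default it only prints the commands:

```shell
# Rough sketch of the download/extract/write steps for installing HAOS
# onto a zvol. VERSION and ZVOL are placeholders; the zvol must already
# exist and be at least as large as the uncompressed image.
# DRY_RUN=1 only prints the commands; set DRY_RUN=0 to actually run them.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

VERSION="12.1"                                 # hypothetical HAOS release
IMG="haos_ova-${VERSION}.qcow2"
URL="https://github.com/home-assistant/operating-system/releases/download/${VERSION}/${IMG}.xz"
ZVOL="/dev/zvol/tank/vms/haos"                 # hypothetical zvol device node

run wget "$URL"                                # 1) download the image
run unxz "${IMG}.xz"                           # 2) extract it
run qemu-img convert -O raw "$IMG" "$ZVOL"     # 3) write it to the zvol
```

Using qemu-img to convert the qcow2 straight onto the zvol’s raw device node avoids the classic dd pitfalls (wrong target device, forgetting the block size), which is presumably why guides steer people away from dd.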
Ok, so as unfriendly as Frigate is, I have succeeded in getting the Coral TPU recognized and the various cameras into the feed, and the main status page is working.
Now it’s time to start working on detection/recording and similar efforts. Happily, the configuration file is no longer obsessed with making me fail continuously, so there is that.
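For anyone following along, the detect/record side of a Frigate config ends up looking something like this minimal sketch. The camera name, RTSP URL, resolutions, and retention here are made up, and the schema shifts between Frigate releases, so treat it as a starting point and check the docs for your version:

```yaml
# Minimal detect/record sketch (placeholder camera and stream URL)
mqtt:
  enabled: false
detectors:
  coral:
    type: edgetpu       # Coral TPU doing the object detection
    device: usb
cameras:
  front_door:
    ffmpeg:
      inputs:
        - path: rtsp://user:pass@192.168.1.50:554/stream
          roles:
            - detect
            - record
    detect:
      width: 1280
      height: 720
record:
  enabled: true
  retain:
    days: 7             # keep a week of recordings
```

The `roles` list is what ties a given stream to detection versus recording; many people feed a low-resolution substream to `detect` and the full-resolution stream to `record` to keep CPU/TPU load down.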
However, there are aspects of Frigate, like drawing zones, that are mind-bendingly awful. The good news is that much of that can be fixed later by editing the config file, and it’s a one-time process.
While the Blue Iris menu system is quirky too, BI is light years ahead of Frigate on features, such as being able to offload event detection to the cameras instead of having the CPU do it all. Ditto DeepStack AI.
So I may stick to building a VM for Blue Iris after all.