I am running a Virtualmin Web Server, that handles a bunch of PHP based websites (Joomla, WordPress, etc.) as well as some static HTML pages and Proxy Sites to apps on my TrueNAS. When I started out on CORE, this was the most effective way to both host my own sites, and proxy to jails (now apps) on the host device.
I did originally have one of those fat power-strip-looking APC Back-UPS units. It died in a lightning storm, and the switch was killed anyway.
Never heard of a UPS dying from that, maybe their house has bad grounds?
I use dockers whenever possible, and currently have 40-50 running.
I use two instances for scripts and tools I want to run on the host system when I’ll be installing/removing packages and don’t want to constantly modify and re-deploy a docker.
I use only two VMs:
- Home Assistant OS. From their documentation, they recommend deploying it as a VM. I could probably live without it being a VM, but it seemed like the easiest way.
- Ubuntu (or other distros). My primary desktop is Windows (and never really used), and my laptop is Mac. Occasionally I want to use a Linux ‘desktop’ or try out a new distro. I’ll spin it up as a VM.
If I may offer some unsolicited advice: protect your UPS with a Brickwall surge protector. Mum's home keeps getting hit, and ever since I put a Brickwall upstream of the UPS, it has been happy instead of getting blown up.
They can be stressed in many ways. Mum's house sees a lot of brownouts due to its rural location. That never helps, especially as large loads come online and the resulting voltage drop causes motors, etc. real anguish. Similarly, spikes and inductive-kickback issues are also common on remote circuits.
Lightning itself is a tricky business, incidence rates vary widely depending on where you are, and not all grounds are alike. Few to no houses are built with Ufer foundations and/or proper lightning protection.
Combine the spikes associated with lightning and inadequate circuit capacity common in rural areas and it’s only a matter of time before the metal oxide varistors (MOVs) in the UPS or power strip give up the ghost.
It’s the same reason that some replacement AC fan motor brands advertise that their motors are sold with external MOV packages instead of the usual potted internal MOVs that are nearly impossible to replace (see one example here).
I use Brickwall exclusively to prevent meaningful spikes from propagating to the UPS. Critical stuff is also only connected via fiber. Induced surges during lightning hits are not something to ignore either.
Yet somehow I ended up with an HA architecture where I have an Alpine LXC container (and before that a BSD jail) sharing a Z-Wave dongle over TCP/IP to a Home Assistant VM… because I didn't want to pass through a full USB controller when HA quit working as a jail.
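For anyone curious about that setup: sharing a serial dongle over TCP from a container is commonly done with ser2net. A minimal sketch of the v4-style config (the device path and port here are examples, not anything from the post above):

```yaml
# /etc/ser2net.yaml (ser2net v4 syntax) — example only
# Expose a Z-Wave stick on TCP port 3333; adjust the device path to match.
connection: &zwave
  accepter: tcp,3333
  connector: serialdev,/dev/ttyUSB0,115200n81,local
  options:
    kickolduser: true   # drop a stale client when a new one connects
```

On the Home Assistant side, Z-Wave JS can then be pointed at `tcp://<container-ip>:3333` in place of a local serial device path.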
And (on top of HA clearly favoring their preferred development targets and at best tolerating everything else) that's why my HA is in a VM in the first place: it dates to running on BSD TrueNAS, where there was no choice that did not involve a VM.
Well, in addition to HA, I have an OPNsense VM, for example, which would be nonsensical as anything except a VM. The same goes for any other software distributed as an OS image. And in general, the harder isolation can make sense for security or hardware-isolation purposes, or just to have a self-contained, easily movable thing.
LXC containers, which are still not quite polished in the GUI, should be the better choice for a lot of things people were doing in VMs (i.e. stuff that's not well documented or is inconvenient to dockerize, stuff that you can send between hosts with a single command, etc.).
Incus coming into release is something I've been greatly anticipating, because it takes over a lot of what were "jail use cases" in CORE that weren't well addressed in SCALE.
I did not vote, because I no longer run VMs under TN.
I used to have a Linux VM for a Crafty server, and I experimented with a Kemp Load Balancer VM.
Now I have moved TN from bare metal to running under Proxmox.
Proxmox is better as a hypervisor.
TN is perfect as a NAS appliance.
Incomparably so.
I run Proxmox Backup Server and Veeam inside virtual machines to have an all-in-one backup machine.
Having the possibility to run virtual machines directly on the NAS is a major technical advantage of TrueNAS: other big storage vendors cannot do that.
Two VMs constantly running: one for HAOS, and one for an Ubuntu server running some outdated stuff that I do not want to migrate and will eventually scrap.
Two VMs switched off, with Windows and Ubuntu desktop, available for the odd job that needs better hardware than my laptop.
Didn’t vote because I don’t run a VM on TrueNAS yet. I plan to run a Proxmox Backup Server.
Regarding the TrueNAS apps, the number of topics claiming that some app broke after an update leaves me with the impression that they are subpar and not very robust. So, if I were really into running apps/services on TrueNAS, I would perhaps run them inside VM(s). Or maybe I would run them as Portainer stacks.
OTOH, I probably would use apps for storage-related services, like MinIO or WebDAV.
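As an illustration of the Portainer-stack route, here's a minimal Compose sketch for MinIO (the image tag, ports, credentials, and host path are placeholders, not anything from this thread):

```yaml
# docker-compose.yml — deployable as a Portainer "stack"; example values only
services:
  minio:
    image: minio/minio
    command: server /data --console-address ":9001"
    ports:
      - "9000:9000"   # S3 API
      - "9001:9001"   # web console
    environment:
      MINIO_ROOT_USER: admin          # example credentials — change these
      MINIO_ROOT_PASSWORD: change-me
    volumes:
      - /mnt/pool/minio:/data         # example host path for the data pool
    restart: unless-stopped
```

Portainer can deploy this directly from its Stacks page, which keeps the service definition versionable outside the TrueNAS apps catalog.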
I even wanted to put that behind a spoiler, but then decided my post would have enough negativity even without it.
I might be an overly anxious type, but given how cheap a dedicated bare-metal router box is, I wouldn't want the additional network complexity of running my main router (OPNsense) virtualised. I do appreciate that many of you do this successfully, though!
On topic, I run HA (haos) in a VM, but that’s about it. I used to run Arch Linux for a webserver and UniFi controller, but have long since dockerised those.
When Apple finally stops support for Intel-based Macs (possibly as soon as macOS 16, coming later this year), I'll probably spin up a macOS VM too.
I am currently in the beginning stages of trying to improve the energy efficiency of my home network. I just consolidated a switch by upgrading to an RB5009 w/PoE; putting the UBNT controller into a Docker container is next on the menu. Only then will I start with HAOS in a VM.
Our friends at Parallels seem to have gotten the message that the lack of Intel macOS support on Apple Silicon is a nonstarter for many folks (including me). Apparently Parallels 20 will start offering Intel emulation, though they promise it will be slow as molasses.
I just got my old MacBook Pro (i7 or whatever) set up to run Mojave natively, with the option of running older programs from the PPC era using OS X Snow Leopard in Parallels 17.
My biggest use case for Mojave is CadSoft EAGLE 7.x, as I'm too lazy ATM to learn KiCad, and Autodesk is simply ludicrous re: the TOS on later versions of EAGLE. I suppose I could transfer the whole kit and caboodle to Win 11, since I also have that installer…
VirtualBox had a beta release that supported cross-architecture emulation on Apple Silicon a few years ago. It would run certain x86 OSes, very slowly. I don't know what ever became of the functionality, or whether it ever went past beta.
Yup, ditto QEMU with similar caveats- slow as molasses but then again one never breaks speed records with Eagle.
For me, it's likely easier to have an Intel machine handy to run stuff on. With Thunderbolt 3, etc., that MacBook Pro is quite usable via docks and external devices, even if the built-in keyboard is garbage.
So that's the plan for now. I'll likely put the EAGLE files on my server, and then I can access them from either machine, since Mojave's SMB support is adequate.
I'm fascinated by the number who voted "I need to run Windows-only software on my server".
I wonder if there's a common piece of software that dictates this?
I understand a Windows VM on someone's Linux or macOS desktop. I'm curious when people need it on a server.