We don’t know the distribution of small vs. medium vs. large iXsystems customers. The large ones will almost certainly dedicate machines even for branch offices: hardware is cheap, and training and supporting staff on one set of infrastructure is a lot easier than on two or more.
So I’ll presume that small and medium businesses that still care about latency will be the primary target group for this feature, even if it attracts some attention from enterprises. Given the relentless drumbeat to just “cloud everything” from CIOs desperate to reduce their required level of technical proficiency, I wonder whether that too is a shrinking market.
There is no doubt that some businesses will value low latency and the ability to run VMs at the same time. But the turmoil in VM land and Apps suggests that both areas still have rough edges that need time and resources to iron out, as iXsystems splits its attention between nice-to-haves and projects that enterprise customers are willing to pay for.
So far, neither VMs nor Apps have emerged as a killer feature in NAS land; for the most part, they seem relegated to achieving at least peer equivalence with competing platforms for prosumers, such as QNAP, ReadyNAS, and Synology (Apps) vs. Proxmox and the like (VMs).
This thread and the comment from Kris are very reassuring. I have a 24.04 system that I avoided upgrading since I was out of the country, and I use several VMs. Could they be Docker or LXC containers? Probably, in some cases. I like to keep my VMs segmented from TrueNAS, even though they are also running Docker and k3s with many containers. The shift to incus was interesting to me but unwanted.
I think there is a balancing act between transparency and avoiding confusion. I do not envy iX’s position here. At my workplace, we intentionally avoid telling customers about our tech stack under the hood because, time and again, some customers completely misunderstand it and do the wrong thing. I do like transparency, and I always look forward to reading release notes.
Oh yeah, people should read the release notes. Don’t upgrade blind, this is not your iPhone.
Almost certainly in the vast majority of cases. The two things that I know work best in VMs are Home Assistant (for the plugins &c) and anything Windows.
If a dev environment is desired (something that runs Linux and can be driven from any laptop/PC), that sounds like a job for LXC. Or a VM, if the work to be done is testing Debian upgrades via Ansible, the sort of thing that can’t be done in an LXC.
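For concreteness, a rough sketch with incus (the instance names dev-box and dev-vm are made up):

```sh
# System container: shares the host kernel, starts in seconds
incus launch images:debian/12 dev-box
incus exec dev-box -- apt update

# For kernel-level work (e.g. rehearsing a dist-upgrade end to end),
# a full VM is the safer vehicle:
incus launch images:debian/12 dev-vm --vm
```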
Everything else can probably just be a container, composed, if desired, via an include yml plus a .env file; that’s a clever pattern.
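If that’s the pattern meant, here’s a minimal sketch (file and service names are hypothetical; the top-level include: element needs Docker Compose v2.20 or newer):

```sh
# .env sits next to compose.yaml; Compose interpolates it at "up" time
cat > .env <<'EOF'
WHOAMI_PORT=8080
EOF

cat > compose.yaml <<'EOF'
include:
  - nextcloud/compose.yaml   # each app keeps its own stack file
services:
  whoami:
    image: traefik/whoami
    ports:
      - "${WHOAMI_PORT}:80"  # value comes from .env
EOF

docker compose up -d
```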
Personally I find running a VM to run containers to be a little too old school - but if it works, it works.
Me too, which is why I was surprised they are reverting to libvirt vs. incus, and I assume (big assumption) building their own HA orchestration… or maybe using someone else’s, like Red Hat’s; I just didn’t think that was open source. Unless HA isn’t coming to the Community Edition, in which case they could license something…
The whole SCALE direction seemed like something iXsystems envisioned as a killer app to get enterprise customers to make the switch from CORE. But that switch didn’t happen, and iXsystems no longer seems to prioritize that development, at least in the CE.
I doubt CE will have HA features going forward because the pool of eligible testers capable of giving feedback to iXsystems is small (making it easier to contact them on an individual basis instead).
HA is a primary capability of the TrueNAS Enterprise product line. This ranges from 10TB systems in 2U to 30PB systems in 2 racks.
It’s a combination of software with automated failover and dual controller platforms with reliable interconnects. Unlike a cluster, it doesn’t require the storage to be replicated.
Until now VMs were not supported on the HA platforms. With Goldeye that changes.
Agreed; the time not to use LXCs is when you need a specific kernel for the userland components.
I am experimenting with an incus LXC for a Proxmox Backup Server; I just upgraded it at the weekend. It was quite amusing how many kernel warnings the apt dist-upgrade gave me (I was not surprised), but it still worked.
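Those warnings are harmless in a container, since the kernel isn’t the container’s to manage; that’s easy to verify (the instance name pbs is hypothetical):

```sh
# Userland inside the container is Debian's, but the kernel is the host's:
incus exec pbs -- uname -r                 # prints the HOST kernel version
incus exec pbs -- cat /etc/debian_version  # prints the container's userland release
```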
I wish it were possible to add kernel modules in Linux containers.
That way you could, for example, add gasket-dkms for a Coral Edge TPU while still being more efficient than a VM.
But I wonder if this is fundamentally incompatible with Linux’s monolithic kernel design.
Yeah, that won’t work. A kernel module is available to everything on the host and can potentially destabilize the entire host. You can’t load a kernel module selectively in a container so that only that container sees it: the kernel is shared between all processes on a host, including containers.
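For what it’s worth, the usual workaround on a general-purpose Linux host (TrueNAS’s appliance model likely won’t let you install dkms packages on the host itself) is to load the module host-side and pass only the resulting device node into the container. A hedged sketch, assuming gasket-dkms is already installed on the host and the incus instance is named frigate:

```sh
# On the host: load the Coral modules (visible to the whole machine, as noted above)
modprobe gasket
modprobe apex
ls -l /dev/apex_0   # the Coral Edge TPU device node should now exist

# Hand just that device node to one container:
incus config device add frigate coral unix-char path=/dev/apex_0
```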