The grievances from the TrueNAS community are not strictly due to things being “retired”.
The headlines should read like this if we’re making the same comparison:
“Apple discontinues OpenDoc to replace it with something else, then brings it back 6 months later because it decides OpenDoc is a framework its users should remain on after all.”
Many of the users following this thread might be eager to upgrade to 25.04.2 as soon as possible, as it was officially released yesterday.
But I have to issue a warning: several users have reported that all their VMs stopped working after upgrading from 24.10.2.2/24.10.2.3. Apparently these were libvirt-based VMs.
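If you are affected, the stock libvirt tooling can at least tell you what state things are in before you decide whether to roll back. This is a generic libvirt sketch, not TrueNAS-specific advice, and the VM name is a placeholder:

```bash
# Is the libvirt daemon itself running?
systemctl status libvirtd

# Which domains does libvirt still know about, and in what state?
virsh list --all

# A VM that refuses to start usually prints an explicit error:
virsh start my-vm

# Back up the domain definition before attempting repair/rollback:
virsh dumpxml my-vm > my-vm.xml
```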
I recommend continuing to discuss and track upgrade failures in this thread:
TrueNAS still has an opportunity to right this wrong by keeping Incus after all and restoring the ability to create and edit Incus VMs in a 25.04.3 release. Then we would have both systems running in parallel, neither aware of nor interfering with the other; each could mature until a decision can be made on the best one, with a nice migration path.
But that’s not what was decided.
So, yes, a wasted development cycle, and probably more work ahead to build on libvirt the functionality that integrating Incus would have provided.
“Buy cheap, buy twice”, and all that…
Meanwhile, jailmaker, the best contribution to SCALE/CE from the community, keeps running. Strangely, iX/TrueNAS did not remove nspawn when introducing LXC to run system containers…
(But, shhh! we won’t be discussing backends any more. Security through obscurity is the way to go.)
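For anyone who hasn’t looked under the hood: jailmaker’s jails are plain systemd-nspawn containers, so the stock systemd tooling can inspect them regardless of what the UI exposes. A minimal sketch, with “myjail” as a placeholder name:

```bash
machinectl list           # running nspawn containers ("machines")
machinectl status myjail  # state, IP and cgroup tree of one jail
machinectl shell myjail   # drop into a shell inside the jail
```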
If 100% of enterprise users avoid TrueNAS’s VM feature, doesn’t that speak volumes about how lacking it is?
You can just run a poll or Google it: a ton of people are running TrueNAS on top of Proxmox VE, not the other way around. Why? Because the gap in VM capabilities is that big.
That’s exactly why many users are totally fine with the performance hit of mounting ZFS storage from TrueNAS back into Proxmox via NFS, just to avoid using TrueNAS’s own VM feature. That alone tells you how bad it is.
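For reference, that workaround is essentially a one-liner on the Proxmox side. A sketch, with hypothetical IP, export path, and storage ID:

```bash
# On the Proxmox VE host: register an NFS export from the
# virtualised TrueNAS as a storage backend.
pvesm add nfs truenas-nfs \
    --server 192.168.1.50 \
    --export /mnt/tank/vmstore \
    --content images,backup

# Verify the mount and its free space:
pvesm status --storage truenas-nfs
```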
I’m not sure what iXsystems has on their long-term roadmap, but isn’t a distributed VM cluster built on top of ZFS actually a solid direction for the future?
IMO (as someone with a lot of Enterprise IT Management experience), no, it doesn’t say anything about TrueNAS’ virtualisation capabilities.
Enterprise Datacentre IT Operations are VERY focused on availability. They do not like putting even two eggs in one basket where they could go down together, so they are prepared to buy more hardware boxes, and they will usually pay more (and often a LOT more) for redundant hardware. TrueNAS virtualisation doesn’t meet these requirements compared with specialised virtualisation platforms.
Enterprise Branch Operations are more variable - but IME they don’t tend to have virtual machines in branches, because those can almost always be centralised into a data centre more cost-effectively, where managing them is easier.
Yes - there might be about the same number running TrueNAS under Proxmox alongside other VMs as running VMs under TrueNAS. But they won’t be Enterprise users.
And yes - Proxmox, being a specialised virtualisation platform with a sole focus, has better capabilities, but it also takes significantly more hardware to run TrueNAS virtualised than to run a small VM under TrueNAS. So large Home Labbers might run VMs under Proxmox, but small Home Labbers will run VMs under TrueNAS.
No - it tells you that a specialised hypervisor has better capabilities than a NAS with some VM capabilities added.
BUT in some cases, it may also be a sign that experienced users don’t trust TrueNAS to deliver long term stable virtualisation capabilities without all the angst created by frequent changes of technologies.
If you want a distributed VM cluster (by which I think you mean resilient to single points of failure), then you need resilient storage clusters as a prerequisite - and that is pretty darned difficult (though TN have achieved it if you buy their own specialised hardware). But if you are investing in that for high availability, then you would almost certainly reserve those boxes for storage and do virtualisation on other boxes.
… and that’s assuming the company even has data centers, etc. Some company CIOs get prizes for outsourcing literally everything into the cloud. Perhaps those prizes are sponsored by cloud giants? Who knows.
For end users with large datasets/files this approach is super annoying because of the latency added to every file operation: OneDrive et al. have to be updated too, which means the file has to be sent into the cloud, over whatever internet bandwidth you have, every time you save it.
But it looks good on paper in terms of how said CIO was able to minimize his organization’s cost center by eliminating just about every human in it. Also a great way to abdicate all responsibility if something goes sideways during provisioning, upgrades, etc. Argh.
IT management (like all other professions) has people with varying degrees of competency, and the vast majority are NOT technical enough to judge for themselves, with foresight, whether technology decisions are a good idea; many are also not project-oriented enough to manage risks by e.g. getting good forward-looking technology advice from others.
And/or with Ceph as the VM storage backend, which is pretty darn resilient and scalable. Or, as is sometimes forgotten in these discussions, public cloud (AWS, GCP, Azure, …), which is obviously the mother of all virtualisation environments and where more and more enterprise is going.
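For anyone curious, wiring Ceph in as the VM disk backend on Proxmox takes about two commands. A sketch assuming Ceph is already installed on the cluster; pool and storage names are made up:

```bash
# Create a replicated RADOS pool for VM disks (3 copies):
pveceph pool create vm-pool --size 3

# Expose it to the cluster as VM/container disk storage:
pvesm add rbd ceph-vm --pool vm-pool --content images,rootdir
```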
I also speak from personal experience when I say it can go both ways. It obviously needs foresight and the right choices made along the way, but the elasticity/scalability inherent in these massive cloud environments can provide great performance benefits too, even locally, which would be extremely expensive to replicate on your own hardware. But it needs thoughtful deployment, and the software and enterprise architecture must have been designed to be able to benefit from it.
Ceph is a storage platform - not a virtualisation platform (though it can support virtualisation platforms with virtual disks). And it isn’t ZFS-based either.
So whilst Ceph is interesting to learn about, and whilst there might be an interest in discussing the merits of Ceph vs. TrueNAS storage and where each is applicable (i.e. whether an Enterprise would be better off with TrueNAS or Ceph for their distributed storage needs), I am unclear how this is relevant to a discussion of TrueNAS virtualisation.
Absolutely — I agree this is mostly true for home environments. In my own HomeLab, I have both setups: TrueNAS providing centralized storage and container applications, as well as a bunch of test machines running Proxmox VE VMs/LXC with Ceph-based distributed storage.
That said, most of my stable, long-running services are still hosted on TrueNAS. I simply want to avoid the overhead of network-mounted storage (even with Proxmox now mounting via NFS-over-RDMA), simplify the architecture, and save power; electricity is expensive, and my rack already idles at over 600W.
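In case it helps anyone reproducing this setup, an NFS-over-RDMA mount from a generic Linux client looks roughly like the sketch below. It assumes RDMA-capable NICs on both ends and NFS-over-RDMA enabled server-side; the server address and paths are hypothetical:

```bash
# 20049 is the conventional NFS-over-RDMA port.
mount -t nfs -o vers=4.2,proto=rdma,port=20049 \
    192.168.1.50:/mnt/tank/vmstore /mnt/vmstore

# Confirm RDMA was actually negotiated:
mount | grep vmstore
```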
For my personal use case, it would be ideal if TrueNAS could provide a complete and flexible VM/LXC solution, so I could consolidate all core services there. I don’t need enterprise-grade high availability in a HomeLab, but the current VM implementation in TrueNAS doesn’t meet even basic VM requirements, so I still have to depend heavily on Proxmox for virtualization.
True, but in practice Proxmox VE with Ceph works well even without dedicated external storage nodes, especially for mid-scale deployments where cost is a major factor. I’ve seen this kind of architecture used effectively in production environments.
My concern isn’t really about high availability; it’s more about questioning where TrueNAS is heading. Once you’ve delivered centralized ZFS-based storage and added enterprise-level features like NVMe-oF, RDMA, etc., what’s the next step for a so-called “enterprise storage” platform?
Well, Ceph is as much or as little relevant to virtualisation as ZFS is. It is, as you say, an underlying storage technology. The difference is that Ceph is designed from the start to scale horizontally across clusters, and is therefore incidentally very well suited to “hyperconverged” compute+storage, which is what SCALE set out to achieve but fell short of by betting on the wrong technologies.
I am by no means advocating a move from ZFS to Ceph - both have their uses and unique areas of strength, and both are awesome and leading technologies in their respective fields. But I also think equating “TrueNAS storage” with “ZFS” is significantly narrowing the opportunities ahead - assuming iX really means business when they claim or aspire to be a storage and compute platform rather than a GUI for ZFS and some ancillary services for remote access to the data.
Clearly this is why Proxmox VE supports both (ZFS and Ceph).
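Indeed; on Proxmox both backends end up as ordinary entries in the same storage configuration. An illustrative excerpt (names made up):

```bash
cat /etc/pve/storage.cfg
# zfspool: local-zfs
#         pool rpool/data
#         content images,rootdir
#
# rbd: ceph-vm
#         pool vm-pool
#         content images,rootdir
```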
I realize, of course, that this is a large step for a company which identifies so strongly with ZFS.
Having followed this thread for a few days, this passionate discussion reminds me of the upset that happened when iX abandoned TrueCharts and changed to Docker apps. That change was a win-win for TrueNAS CE in terms of adoption rates and a growing user base. Expectations were high for Incus, but apparently it hasn’t worked out, for whatever technical and business reasons (home users will never know why).

Sure, there is now some loss of confidence in TrueNAS CE, which may be reflected in a slowdown in adoption of new releases; I will certainly delay upgrading my install. But those who are expecting mea culpas from iX, and a major change in how aggressively they roll out changes to what they consider non-enterprise features, will be disappointed. Such things rarely happen in business, especially when the adoption rate is on a growth trajectory, and open source, at the end of the day, is a business.
I just wonder: why should home users never know? Is it a secret for some reason?
Incus itself could use those technical reasons as feedback to improve.