The issue isn’t that there were problems; you’re glossing over the real issue entirely. Problems were expected from an experimental feature. We expect things to break or not be fully functional.
The issue is that they said “this is where we’re going and this is how we’re going to do things,” built community excitement, and then just dumped it after one release cycle.
They got community buy-in and support, and I was excited for this change. It really is the perfect solution for this type of setup. Yet it got yanked anyway.
They now have a long track record of this, and no amount of pretending otherwise will cover it up. I’ve been a supporter and have tried to help out in as many ways as I can, but I put a ton of work into migrating, helping others do the same, testing, and so on, and it all just got dumped.
I’m a very positive person and always will be, but my attitude started changing when I would submit valid bug reports and they just got torpedoed.
At this point, IX can apologize and say whatever they want, actions speak louder than words.
If they think a feature may not make it, it should be marketed as just that: “Hey, we’re testing this out, it may not make it, but let’s see how it goes.” That sets an entirely different expectation. Instead it was the complete opposite, marketed as “this is the new hotness, get in and start migrating, help us test, and let’s make the future of Incus in TNCE awesome!” That’s what I did. That’s the issue: it wasn’t marketed as “this may go away,” but as “this is the future…”
Given the ongoing 24.10-onwards kerfuffle, I didn’t even try to implement VMs, because it’s outside my wheelhouse, I know it’s going to take me hours, and I would like to benefit from said work for a while once it’s done. For whatever reason, that is the case with my other hardware, such as my Raspberry Pis, which do their jobs as Pi-holes, time servers, etc. without issue.
The problem with the latest twist and turn re: SCALE is that management omitted a transitional upgrade path, i.e., it required a potentially massive amount of VM transition work the minute the base OS / middleware / whatever was updated.
I understand that allowing Incus and libvirt to run side by side is a potential risk (messing up datasets, etc.), and that coding interlocks to prevent that is technically challenging.
At the same time, such transitional releases give users the luxury of time and pacing — i.e., if I have four VMs to update, I can do one every weekend for a month as opposed to having to work my way through all of them at once, with the inevitable issues that we are seeing crop up in the help section.
Orderly transitions go a long way. I see the same at work, where industries agree to more onerous regulations as long as they get longer implementation periods because it allows for more planning, smaller CAPEX every year, more R&D, and so on.
Ironically, the only reason I would “upgrade” from Core to SCALE is to inherit the upstream updates of OpenZFS and the software I run in my jails. If there was a TrueNAS train that was “no new features, just updates to the underlying FreeBSD and OpenZFS”, I would be using that train instead of SCALE/CE.
This is why I am considering zVault as an alternative after May 2026.
If I upgrade to SCALE/CE, I would be losing out on some features, which were gutted from Core, so it would be a downgrade for me.
EDIT: If SCALE/CE did not remove certain features and capabilities, and provided first-class jail management that would remain reliable across releases, then I would have already switched by now. I don’t care if it’s Linux or FreeBSD under the hood.
EDIT 2: Don’t believe the rumors. I actually want TrueNAS SCALE/CE to succeed and be the best NAS ever. I would rather jump off a steady ship, which I can jump back into if the other ship sinks.
Another benefit of running libvirt and Incus side by side is that if Incus has some growing pains, edge cases, etc. that need to be resolved, the engineering team at iXsystems can address those issues in an orderly manner. Once the replacement feature has been proven to work, a wholesale transition can be effected.
Or, they can realize that Incus is a dead end after all, despite the initial promise. That’s ok too, as long as people have an opportunity to move their work / adapt / whatever in an orderly, not all-or-nothing, manner. For example, transition scripts / guides / etc. go a long way vs. “destroy your previous work and start over”.
Edit: Looking at how other companies have handled this (and even iXsystems itself), note how much handholding MS has offered to move people away from SMB1 / NTLMv1. MS deprecated them for years, yet allowed their use. MS even offered to write an SMB2 stack for Sonos.
Similarly, iXsystems made you check a box and modify SMB aux commands or whatever. But now, years later, it’s gone altogether. People were given time. Allowing for transitions is harder, but it can buy a lot of goodwill.
Good. Looking for an “apology” is a Karen wanting to talk to the manager power-trip move. It’s a totally unnecessary humiliation ritual that serves only those used to managing through fear.
Not everything works out. Sometimes you have it; sometimes it has you. That’s how development goes. That’s how life goes. Keeping score is for Karens. Just go forward and give your best.
Focus on Learning, Not Punishment
Celebrate Effort and Progress
Embrace Failure as a Prerequisite for Innovation
Encourage Risk-Taking (Calculated Risks)
Celebrate Experiments (Regardless of Outcome)
Most of the “problems” mentioned here are just people failing to regulate their emotional responses.
Incus in TrueNAS seemed cool – and it had potential to graft in ready-made API, stateful snapshotting, host-to-host migration, maybe even clustering advances. If that can work out, great. If it can’t work out, then I’ll just build with what I get – and be grateful for it. I’m going to focus on what I can have, not the one thing I can’t have.
If I had to choose between Incus and Docker Compose, I’d take the Docker Compose, especially with built-in Nvidia container toolkit support and easy bind mount volumes to ZFS datasets that I can easily manage, snapshot, replicate, etc. – like I have in TrueNAS.
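For anyone curious what that setup looks like, here is a rough sketch of a Compose file along those lines. The service name, image, and dataset paths are my own placeholders; the GPU reservation uses the Compose spec’s device-reservation syntax for the NVIDIA container toolkit:

```yaml
services:
  media-server:                       # hypothetical service name
    image: jellyfin/jellyfin:latest   # example image, swap for your own
    restart: unless-stopped
    volumes:
      # Bind mounts straight into ZFS datasets on the pool, so
      # snapshots and replication are handled by TrueNAS as usual.
      - /mnt/tank/apps/media-server/config:/config
      - /mnt/tank/media:/media:ro
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
```

Because the app state lives in ordinary datasets rather than inside container-managed volumes, the usual ZFS workflow (snapshot, rollback, `zfs send`/`receive`) applies unchanged.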
Guys, I’ve been trying to avoid the continual time-waste that seems to be happening here on this thread, but enough is enough. Seriously, if 5% of the effort spent complaining, griping and otherwise arguing back and forth was spent on constructive open source code development or similar meaningful contribution, imagine where we could be right now.
I will say this once, politely but firmly. If you are here just to complain, gripe, moan or otherwise argue incessantly without contributing anything new or meaningful to the conversation, then there are better places where you can do that. I suggest you find those places.
If you are here to contribute, collaborate and otherwise be part of the “solution” and not the problem, then of course we welcome you. This forum is free much like our software, in that it costs us (iX/TrueNAS) time, money and resources to run and maintain. Its purpose is for constructive dialog and community helping community through their issues. I’d suggest folks re-read the FAQ on what is considered being a good forum citizen.
I would implore you to see this as more than griping (though there is some of that!). I feel confident that the vast majority of contributors on this thread want TrueNAS to succeed, but have been wrong-footed by the approach taken in 25.04 to introducing instances. (I won’t go back through it again!).
It’s just a shame that you appear to be dismissing all this strength of feeling as being time-wasting, rather than a group who really care about the product being as good as it can be. Perhaps that attitude is why we are where we are.
Not at all. The feedback was heard loud and clear. The first 10 times
But at some point some of us need to actually do work. And our community needs to understand that while we all want VMs/containers to be awesome, they have never been a part of our Enterprise offerings (you know, the thing that pays for all of this). A big part of all this churn is trying to get those functionalities into good enough shape to call them Enterprise worthy, which is why we’re going through these struggles. So folks will need to be patient. Understand we’ve heard you, but beating the same dead horse over and over isn’t going to add a single line of new code to the product to fix the issues at hand.
Thank you, Kris, I think this is a really helpful post to close out the discussion, and it will set most at rest knowing the core point of the feedback is heard.
(And I see from your perspective, as you’ve pointed out, that after several rounds of feedback, going over the same ground ends up diverting from the work to move things forward, which is indeed what we all want.)
Since this thread was unlisted, I’m going to start a new thread titled “Continuing the discussion of the continued discussion on TrueNAS Virtualization Plans for 25.04.2.”
Just wanna say I totally agree with you.
Incus was a nice try, and if TrueNAS stumbled upon some blocking issues and chose to go back to libvirt, that’s absolutely understandable.
For me, I just didn’t clearly understand that this is not temporary but permanent, as you likely saw in the forum chat. But that’s clear to me now.
That’s a good point — networking issues could be one of the reasons.
I myself would just enjoy knowing what those technical reasons were, because I find that interesting, and I think we can learn about the limitations of these tools from it.
But of course I don’t want to irritate Kris and keep beating a dead horse.
For me personally, the ideal end to this would be a simple post listing a few of the reasons and issues why Incus didn’t work out; then we can just accept that and look forward to the future with libvirt.
But I also understand we aren’t owed any explanation. And TrueNAS likely doesn’t want to confuse people with technical posts about the backend. So don’t consider this some Karen-like demand.
This is a hell of an introduction post (in a positive sense from where I stand).
We are in violent agreement on the fact that iXsystems has a severe leadership issue. Not that they are alone in this …
Also, as someone else (don’t remember who, sorry) pointed out: The reason why people are engaging so strongly is the fact that they still(!) care. Many organizations dislike this.
But it is actually a good sign! Because it shows that folks haven’t given up yet. They are close to, and desperately trying to change things. But there is still a chance to win them back. Yet, it requires more than “warm words”.
The real issue is when people stop complaining. Because then they don’t care anymore and have moved on.
We are in agreement on this. I’m glad we have so many passionate users. And when we make a mistake in direction, we try to own up to it. I think somebody even went to the trouble of making a video podcast where that was explicitly said at one point. I don’t mind eating crow when necessary, keeps us all humble.
A distributed cluster comes with an enormous amount of complexity. It starts with “split brain” decisions, so you need at least 3 instances of everything. Then comes latency which beyond a metro cluster is a real issue.
For more details you can Google e.g. “Oracle RAC vs. GoldenGate” or “IBM z/OS sysplex”.
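To make the “split brain” point above concrete: a partitioned cluster may only keep acting if it holds a strict majority of members, which is exactly why two nodes are not enough and three is the minimum. A minimal illustration of the majority rule (my own sketch, not any product’s actual algorithm):

```python
def has_quorum(reachable_members: int, cluster_size: int) -> bool:
    """A partition may act only if it holds a strict majority of nodes."""
    return reachable_members > cluster_size // 2

# With 2 nodes, a network split leaves each side seeing 1 of 2: neither
# side has a majority, so neither may safely continue (if both did, you
# would get split brain). With 3 nodes, one side keeps 2 of 3 and wins.
print(has_quorum(1, 2))  # False: no majority on either side
print(has_quorum(2, 3))  # True: the 2-node side keeps running
```

The latency point then follows: every write must be acknowledged by a majority before it counts, so the round-trip time to the slowest quorum member sets a floor on write latency — fine in a metro cluster, painful across regions.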
There have been some inconvenient decisions, but the situation about apps being not fully fixed was pretty clear, which is why I have stuck to Core for a jail I need, and not updated to Scale and used an app. And the release notes for Fangtooth made it clear that those who didn’t want to start again with experimental VMs should stay on EE, pending advice on how to upgrade.
Given this I am astonished how many people are offended by completely predictable problems they have caused themselves.
If I do upgrade to Scale to use an LXC “app”, I shall be expecting to redo it in Goldeye, which, as it is only for Unifi, will not be too much of a burden.
All in all I think most of these problems are self-inflicted, and largely by people who would be quite able to avoid them if they wanted to.