I understand, but then it should never have been offered as a supported solution in the first place. I guarantee you the IX devs are divided into two camps over this decision, since some of them share the same logic I detailed in several posts here.
In our “Enterprise” case, that is exactly what they do. Apart from major hyper-scale enterprises, we hardly interact with any folks who are doing K8s at scale. Our particular enterprise customer segment is pretty happy with light-weight containers for deploying something like Minio for S3 services. TrueNAS isn’t a generic hosting server for building a cloud-scale public hosting offering. It is storage. Any admin worth their salt at that enterprise scale would end up running K3s/K8s as a stand-alone deployment and building the bespoke setup that best suits their very specific use-cases. TrueNAS plays well in that type of setup today, in a storage role.
I would call BS on that. Our engineers have been very enthusiastic about the shift to Docker, especially since they have been the ones responsible for digging into every K3s bug that has come across over the past couple of years. The overwhelming consensus here has been that K3s is simply overkill for the vast majority of our use-cases.
I know it was overkill for all of my use cases, which is why, after I gave k3s, k8s and microk8s an evaluation and learnt MetalLB, Traefik, OpenEBS, Helm charts, etc., I said “screw this” and went back to compose.
Heh.
Ha, you aren’t the first we’ve heard that from.
Came to the conclusion that k8s is cool, but running a cluster is a full time job.
I would call BS on that.
Unneeded words, from an SVP.
Our engineers have been very enthusiastic about the shift to Docker, especially since they have been the ones responsible for digging into every K3s bug that has come across over the past couple of years.
I think IX is taking the easy way out. If that is the case, why did you offer K3s to begin with? Either way, I personally have a solution to the problem you candidly introduce, but other end-users might not. I’m going to desist, as I see this discussion derailing from its original scope. From your response, IX definitely does not digest constructive discussion.
why did you offer K3s to begin with
This has already been answered: because SCALE was supposed to be able to cluster. And apparently that plan was 100% dependent on Gluster, with no acceptable alternative, so with the death of Gluster died “clustered TrueNAS SCALE.” And since clustering was apparently the only reason they were using k3s in the first place, so died the rationale for k3s.
You mention Longhorn as an alternative–can it provide general storage, with ZFS on top? Or is it only for k3s storage? If the latter, then it really doesn’t address Gluster’s (intended) application in SCALE.
Edit: iX have said that it wouldn’t be technically feasible to have both Docker and K3s at the same time without some kind of sandbox or VM for at least one of them. I don’t have the knowledge to evaluate that claim. The TrueCharts folks appear to disagree with it. If iX are correct in this regard, then I think that goes a long way in justifying the removal of k3s. I continue to think it was a bad decision, but that doesn’t mean there are no points in its favor.
You mention Longhorn as an alternative–can it provide general storage, with ZFS on top?
Off the top of my head, as an example. I’m sure the devs could implement this easily.
# zfs create -V 500G zdata/longhorn-ext4
# mkfs.ext4 /dev/zvol/zdata/longhorn-ext4
# mkdir -p /var/lib/longhorn
# mount -o noatime,discard /dev/zvol/zdata/longhorn-ext4 /var/lib/longhorn
Work it out and push a change/feature/option. I’m sure they’d be happy to review.
Edit: iX have said that it wouldn’t be technically feasible to have both Docker and K3s at the same time without some kind of sandbox or VM for at least one of them. I don’t have the knowledge to evaluate that claim. The TrueCharts folks appear to disagree with it. If iX are correct in this regard, then I think that goes a long way in justifying the removal of k3s. I continue to think it was a bad decision, but that doesn’t mean there are no points in its favor.
To clarify, I’ve said that with some nuance on Reddit and elsewhere. It is technically possible to have K3s + Docker, or K3s + Docker + Podman + Nomad + $OTHER. The question is not whether you can do something, but whether you should. People hack crazy things all the time.
At the end of the day it’s about managing complexity and supportability. Judging by the low numbers of TC users, and of users who leverage K3s-specific functionality, it would make near zero sense for iX to shoulder all the additional complexity of trying to maintain two container ecosystems in parallel. We’d be more than doubling the support burden, since we would now have to maintain and troubleshoot each of them, plus the additional issues that arise where they intersect. That would make the overall quality of the product worse, not better, for the vast majority of users.
It is the customer that runs the show, not the developers.
PAYING customers run the show. From the few we’ve heard from on the forum, they want pure storage from iX and are not interested in any form of apps.
We, the home lab freeloaders, enjoy a remarkable degree of interaction with iXsystems, including top managers, here. But we are NOT the ones whose voices define the product. And I think it is fair to say that the overwhelming majority do not care what’s running their containers[1] and just want it to be simple, yet flexible.
You’re only the third forumer I’ve seen here complaining about the removal of k3s because he’s actually using custom Helm charts. ↩︎
For Enterprise users, we will support K3s in a Sandbox if that is required.
For our community, we have validated and provided a how-to guide on Kubernetes in a Sandbox. We will do more in the future.
Some members of the TrueNAS community have expressed interest in running their own custom Kubernetes instance, as opposed to the iX-provided installation. Sometimes this is to integrate with their own management system, use a different runtime, or even experiment with a “cluster-in-a-box” style configuration; but whatever the reason, this small tutorial will provide a brief overview of the process of installing Kubernetes within a TrueNAS SCALE “Sandbox” using the popular Jailmaker script. Prer…
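For context, the rough shape of that approach looks something like the sketch below. The command names follow the Jailmaker project; the exact steps, flags, and prerequisites are in the linked guide, so treat this as an illustration rather than the guide itself.

# create and start a systemd-nspawn sandbox with Jailmaker (the jail name is arbitrary)
./jlmkr.py create kube
./jlmkr.py start kube

# inside the sandbox, install a single-node K3s with the upstream installer
./jlmkr.py shell kube
curl -sfL https://get.k3s.io | sh -
k3s kubectl get nodes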
Really, what is being removed in 2025 is TrueNAS support for Helm charts. They are powerful, but too rich and complex. One set of charts doesn’t play nicely with another set of charts and the users get a bad experience. The charts are difficult to maintain, port and integrate.
The charts are difficult to maintain, port and integrate.
While I understand IX devs have several tasks to fulfil related to SCALE, stating that Helm charts are difficult to maintain is unrealistic. They are probably the most robust way to deploy applications, especially when combined with products like ArgoCD.
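To illustrate what I mean by robust, here is a minimal sketch; the release and chart names are just placeholders:

# install or upgrade a release atomically; a failed upgrade rolls itself back
helm upgrade --install myrelease ./mychart --atomic --timeout 5m

# inspect the release history and roll back by hand if needed
helm history myrelease
helm rollback myrelease 1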
You’re only the third forumer I’ve seen here complaining about the removal of k3s because he’s actually using custom Helm charts
So I’m reiterating what I said several times: if there are only a few people on the planet complaining about the K3s removal, why was this feature not evaluated properly by IX prior to the initial release? I personally believe people who are not interested in K3s do not have the understanding and knowledge to use this incredibly powerful and elegant product. Frankly, my vision was that IX would evolve the use of Kubernetes in their products to cover a lot more than apps, which has been the trend out there for many years already. I agree with you that there is no point debating this, but the end result is quite unsatisfactory.
It is technically possible to have K3s + Docker, or K3s + Docker + Podman + Nomad + $OTHER.
Team $OTHER, represent!
…
stating that Helm charts are difficult to maintain is unrealistic. They are probably the most robust way to deploy applications, especially when combined with products like ArgoCD.
I know so little about Docker, but I did look into Helm charts briefly (just now). The problem I see with helm charts (and I assume docker will suffer from this as well) is the extra layer of abstraction; there is no “viable” way for an end-user to find out what the difference between this chart and that chart is. All they’re seeing is the “Bob” vs. “Jane” version.
The other aspect, on the developer side, is the need for testing, validation, and assertions during the development of the helm charts. Not to mention updates: I would bet helm chart generation is automated (I would automate it if I had to do this). -i.e. THIS GUIDE is crap. There is no way you could build and maintain a reasonable repository of charts manually like that.
I say roll your own repo. But I guarantee, if you do not include an XYZ chart in your repository and an end user finds XYZ in another repo, they will favor that repository over yours. -i.e. your chart will be nothing more than disposable or a stepping stone along the path of finding a one-stop-shop-repository for that user’s needs. It’s a nonstop chase for the developer and the end-user.
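For what it’s worth, the closest an end-user gets to comparing “Bob” v “Jane” is rendering both charts and diffing the raw manifests, which rather proves the point; the repo and chart names below are made up:

# render each chart to plain Kubernetes manifests and compare the output
helm template bob repo-a/someapp > bob.yaml
helm template jane repo-b/someapp > jane.yaml
diff -u bob.yaml jane.yaml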
…
why was this feature not evaluated properly by IX prior to the initial release?
Money. As I said above, containers are achievable in BSD, yet the decision to use/switch to Linux and the Kubernetes brand was a chance to get a slice of the pie. -i.e. Articles like this.
Chocolate? Apple?
Then that becomes Chocolate-apple-banana and Apple-sauce, and Chocolate and Apple are tossed in the bin-bucket along with Orange and Tofu. Count the number of neoVim plugins as an exercise. Then count the similar ones.
The major selling point behind pkgs is that there is only one repo. There is no competition, so the problems above become smaller (-i.e. only “testing/validation” and “updates”, and most of that can be automated).
The problem I see with helm charts (and I assume docker will suffer from this as well) is the extra layer of abstraction; there is no “viable” way for an end-user to find out what the difference between this chart and that chart is.
Helm has a built-in mechanism to create a chart; it literally takes 5 minutes to build one, with all the required templating. From there, the end-user can customize the values.yaml to their liking. All major open-source projects use Helm charts; it is a standard. In contrast, Docker is used only to perform build tests and generate container images, which are published to a public repository and referenced from Helm charts. This is where Docker’s role stops in today’s DevOps environments. Let’s put it this way: if a vital project like Cilium did not have a Helm chart, it would be a nightmare to deploy it into a cluster.
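A minimal sketch of that 5-minute workflow; the chart, image, and release names are just placeholders:

# scaffold a chart with the default templates (Deployment, Service, values.yaml, etc.)
helm create myapp

# install it, overriding the image defined in values.yaml from the command line
helm install myapp ./myapp \
  --set image.repository=ghcr.io/example/myapp \
  --set image.tag=1.2.3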
I say roll your own repo. But I guarantee, if you do not include an XYZ chart in your repository and an end user finds XYZ in another repo, they will favor that repository over yours.
This is where people don’t grasp the role Helm plays in Kubernetes. It has nothing to do with who has a shinier repository with fancier charts. It’s all about the freedom of taking a Docker image and creating your own Helm chart in minutes, followed by fully automated CI/CD deployments with tools like ArgoCD and release version control with tools like Renovate. You can break an entire Kubernetes cluster and have it redeployed in minutes with all apps fully restored; it’s that simple. That’s what I do now with my open-source cluster project. Technically, the machines work for you, not the other way around. There is no way in the world I’m going to chase the latest release vulnerabilities or similar issues when we have automation that can do all this for us, so I can push a PR to a Kubernetes test environment with the press of a button in GitHub.
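As a rough sketch of the ArgoCD side of that workflow; the repo URL, paths, and names are placeholders for whatever lives in your own Git repository:

# a minimal Argo CD Application that keeps a chart from your Git repo deployed and in sync
kubectl apply -f - <<'EOF'
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: myapp
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/home-cluster.git
    targetRevision: main
    path: charts/myapp
  destination:
    server: https://kubernetes.default.svc
    namespace: myapp
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
EOF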
That’s what I would expect from IX, not Docker features. Their entire GUI should run as a microservice on Kubernetes, with proper backups like K10 or Velero, certificate management handled by cert-manager, and ExternalDNS linked to Cloudflare, for example. It’s a reality that IX needs to grasp; they cannot stay in the past. If they stick with NAS-only features, they will be eaten by a bigger fish, since they refuse to move forward.
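To make the cert-manager piece of that concrete, a hedged sketch; the issuer name, email, and secret names are placeholders, and it assumes a Cloudflare API token already stored as a Kubernetes Secret:

# a ClusterIssuer that gets Let's Encrypt certificates via a Cloudflare DNS-01 challenge
kubectl apply -f - <<'EOF'
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com
    privateKeySecretRef:
      name: letsencrypt-prod-account-key
    solvers:
      - dns01:
          cloudflare:
            apiTokenSecretRef:
              name: cloudflare-api-token
              key: api-token
EOF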
I say all this respectfully, Blockbuster vs. Netflix.
Our take on projects like this is that the average user is going to have a hard time maintaining their own kubernetes cluster, even through ansible.
Unless custom tooling is made to make it more accessible for them.
It’s already more work to install/maintain helm-charts yourself (as that requires yaml know-how); using all sorts of custom ansible scripts is not going to help that adoption.
If we had to recommend any such script, it would be the template made by onedr0p, who has been actively working on bringing kubernetes to the homelab significantly longer than youtubers like “techno-tim” have tried to milk it.