Add nodes to TrueNAS SCALE Kubernetes cluster, and another question

I have some k8s clusters at my job, and I would like to take advantage of already having one for the Apps in the same box as TrueNAS.

Is it possible to add nodes to the TrueNAS SCALE Kubernetes cluster?

Independently of the k8s cluster, is it possible to add redundancy to TrueNAS SCALE in terms of dataset replication (failover/replication/HA)?

Thanks

K8s in TrueNAS cannot be clustered; it is a back-end implementation detail of our Apps framework and is not expected to be leveraged as a full-blown K8s system. It is subject to change between releases and updates.

That said, if you do want to run K8s/Podman/LXC or $OTHER, you can do that by leveraging sandboxes/jails.

For data replication you can add redundancy in a few ways. iX offers proper HA systems, so you have complete redundancy against hardware failures. You can also set up ZFS replication to back up data off-system, or leverage other tools like rsync/syncthing for keeping file contents in sync between hosts.
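
For anyone curious what the ZFS replication option looks like under the hood, here is a minimal Python sketch of the snapshot-and-send flow. The tank/data and backuppool/data datasets and the backup-nas host are placeholders, and in practice you would configure this through the Replication Tasks UI rather than scripting it yourself:

```python
#!/usr/bin/env python3
"""Minimal sketch of push-style ZFS replication to a second host.

Assumptions: tank/data is the source dataset, backup-nas is a reachable
remote ZFS host, and backuppool/data is the target dataset. TrueNAS
SCALE's built-in Replication Tasks handle all of this (including
incrementals); this only illustrates the snapshot + send/receive flow.
"""
import subprocess
from datetime import datetime, timezone

SRC_DATASET = "tank/data"         # dataset to protect (placeholder)
DST_HOST = "backup-nas"           # remote host reachable over SSH (placeholder)
DST_DATASET = "backuppool/data"   # dataset on the remote pool (placeholder)


def replicate() -> None:
    # 1. Take a point-in-time snapshot of the source dataset.
    snap = f"{SRC_DATASET}@repl-{datetime.now(timezone.utc):%Y%m%d%H%M%S}"
    subprocess.run(["zfs", "snapshot", snap], check=True)

    # 2. Stream the snapshot to the remote host and receive it there.
    #    A real setup would send incrementals (zfs send -i) after the
    #    first full send instead of a full stream every time.
    send = subprocess.Popen(["zfs", "send", snap], stdout=subprocess.PIPE)
    subprocess.run(
        ["ssh", DST_HOST, "zfs", "receive", "-F", DST_DATASET],
        stdin=send.stdout,
        check=True,
    )
    send.stdout.close()
    send.wait()


if __name__ == "__main__":
    replicate()
```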

Wow! That’s a pretty strong statement about the trust and/or hopes that iXsystems places in jailmaker.

Jailmaker is a nice tool. We figure it’s only a matter of time before we do a proper UI-based “sandboxes/jails” system in SCALE, and when we do that we’d expect it to carry forward any current jailmaker-managed environments fairly easily.

Works for me.

I replaced my docker compose VM with a Jailmaker sandbox running Debian in a systemd-nspawn container. Far more efficient.

In theory, it should work for k8s too.

I used to run k8s VMs too, but I decided that compose and host mounts were all I needed.
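
As a rough illustration of the host-mount idea, here is a minimal Python sketch using the Docker SDK (docker-py). The nginx image, port 8080, and the /mnt/tank/appdata/web dataset path are placeholders; a compose file can declare the same bind mount.

```python
# Sketch: run a container with a host path (e.g. a ZFS dataset mounted at
# /mnt/tank/appdata/web) bind-mounted into it. All names are placeholders.
import docker  # the Docker SDK for Python (pip install docker)

client = docker.from_env()

container = client.containers.run(
    "nginx:alpine",                 # placeholder image
    detach=True,
    name="web",
    ports={"80/tcp": 8080},         # host port 8080 -> container port 80
    volumes={
        "/mnt/tank/appdata/web": {  # host dataset path (assumption)
            "bind": "/usr/share/nginx/html",
            "mode": "ro",
        }
    },
)
print(container.name, container.status)
```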

@kris given the fact that iX advertises “Kubernetes” multiple times on the product page for SCALE, this statement makes me tangibly angry. It is not reasonable to advertise Kubernetes (seemingly as a feature) and then be surprised when people expect to use Kubernetes directly rather than through your non-k8s abstraction. If it is indeed just an implementation detail, and is not planned to be properly supported as a use case for SCALE, I highly recommend removing it from the product page altogether, as it’s not actually relevant; you may as well be advertising the language your UI is written in.

I migrated from an actual Kubernetes cluster to SCALE soon after its first stable release, because I wanted a more “managed” k8s platform, and my understanding was that it was a supported feature, as the product material seemed to indicate. Will it ever be a natively supported feature, or will I have to look elsewhere for the “managed” k8s platform I’m looking for?

My apologies for likely coming off as particularly abrasive and/or aggressive, but I’m currently frustrated trying to figure out what I’m going to do now that the CSI backing the PVCs for all my deployments on my SCALE install has been deprecated in Dragonfish, and I’m left on Cobia with no clear, supported upgrade path.

Edit: I really want to love this platform, I’ve been using it on and off since way back in the FreeNAS days and even ran a box as a FC SAN, but every time I come back I seem to get burned.

I suppose there are nuances between “Kubernetes”, “k8s” and/or “k3s”, but that’s going too far down some rabbit hole.

That is a fair criticism. We need to get better about communicating what features are supported and how. K3s is no different from ZFS/BSD/Linux/Samba in the sense that there are random features here and there which are not exposed or supported in TrueNAS at all, often for very good reasons. If you use those behind the UI’s back, you are in uncharted waters and likely to hit breakage down the road. I’ll talk internally with the team about how we communicate that going forward.

And this is how I did it…

I’ve been converting all my workloads to Jailmaker, and recently I spent the time to fully test its networking scenarios.

This is my story :wink:

“K3s is no different from ZFS/BSD/Linux/Samba in the sense that there are random features here and there which are not exposed or supported in TrueNAS at all, often for very good reasons.”

Sorry to further derail, but I feel like I must clarify: my issue isn’t/wasn’t necessarily that k3s was chosen or used in a way that does not support clustering (though I do still feel the lack of k8s clustering is important to call out); it is a compliant distro that meets the requirements of being called Kubernetes. My issue is/was that the messaging around the TrueNAS product suggests that it is supported to use the cluster directly, whereas your comment directly contradicts that by saying it is merely an implementation detail of “iX Apps” and not a feature unto itself.

I personally don’t particularly care which Kubernetes implementation is chosen as long as it is supported to be used directly, because I (and I suspect many others) don’t want to be married to or deploy “iX Apps”; the uniformity and portability of k8s as a platform is what is desired.

Reading through the openEBS removal notice:

“…use the one which is already provided in SCALE.”

I’ve spent entirely too much time in docs and both forums searching for “what is the supported CSI in TrueNAS scale” and tangentially “why was a functioning CSI deprecated”.

ZFS storage is the feature; being able to access the storage is the reason.

Removing functioning storageclasses and going back to manually provisioning each dataset seems like a step backward. Please correct me if I am missing some explanation somewhere. Thank you.