Can't Deploy Custom Apps

As per this thread, I’ve been trying to deploy a Custom App on TrueNAS Scale 23.10.2, as the catalog versions of Tautulli have been deprecated by the upstream maintainer and are no longer updated.

However, when attempting to deploy, the k3s daemon seems to get stuck in a loop of creating pods and then deleting them. Each time a new pod is created, the “Related Kubernetes Events” panel in the web UI complains that it cannot find a healthy GPU device. The system has no GPUs, and I have not tried to assign one to the custom app.
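In case it helps anyone debugging something similar, here is a hedged sketch of how the pod churn could be inspected from the TrueNAS shell. `k3s kubectl` is the kubectl bundled with SCALE; the `ix-tautulli` namespace name is my assumption, based on how catalog apps are namespaced:

```shell
# Assumption: custom apps get an ix-<app-name> namespace like catalog apps do.
NS="ix-tautulli"

if command -v k3s >/dev/null 2>&1; then
    # Events, newest last; a failed GPU scheduling constraint usually shows up here:
    k3s kubectl get events -n "$NS" --sort-by=.lastTimestamp
    # Per-pod detail, including why each pod was killed or never scheduled:
    k3s kubectl describe pods -n "$NS"
else
    echo "k3s not found; run this on the TrueNAS host itself"
fi
```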

I tried deploying different docker images to check whether the issue was specific to the Tautulli one, and I see the same behaviour.

I am not specifying anything in the Network Settings section of the custom app configuration, on the assumption that this puts the container on the same bridge network as all the catalog apps I have installed and working.

NB. the system has three NICs total:

  • enp2s0f0 with a static address and no additional routes/gateways. This is a management network with no other access
  • enp3s0f0 and enp3s0f1 as bond0 with a public IP address and default gateway to the internet (firewalled)

In Kubernetes Settings the Node IP is forced to use the IP address and gateway for the bond0 interface.

In each case the custom app is configured to use a dataset configured with a share type of Apps for persistent storage, but nothing is ever written to it.

Any ideas what is going wrong here? As I alluded to: catalog apps from TrueNAS and Truecharts work fine. It’s just custom apps I can’t get to work.

It would help if you posted screenshots of all the custom app config options, so we can see what might be going on.

Can you check if Apps->Settings->Advanced Settings shows that “Enable GPU support” is enabled?
If so, try disabling that.


Thanks @sfatula for responding

@neofusion - first of all I tried disabling “Enable GPU support” in Advanced Settings and restarting the already created custom app. This produced the same results: endlessly spawning and deleting pods.

Then I deleted the custom app and recreated it with GPU support disabled in Advanced Settings. This time it worked.

NB. the reason I had “Enable GPU support” enabled with no GPU in the system is that I intend to add a GPU in the near future for Plex transcoding. Hopefully, re-enabling it once I add the hardware will still allow custom apps to run, since no GPU requirement is specified in the custom app settings.

Thanks for the help both of you!


Good to hear. Yeah, it shouldn’t cause an issue once you do have one; then again, I’d say it shouldn’t cause one when you don’t have one either, but oh well.

I converted all my apps to custom apps. And given Eel is going to be docker based, this is great as all my custom apps will be easily importable. I find no need for the app system myself.

I’m using Emby instead of Plex, but my way of saving power and money was to use only clients that don’t require any transcoding. With Infuse on Apple TV and iOS devices I don’t need any transcoding, so I don’t need a GPU.


Likewise, I’ve come to TrueNAS Scale from a Synology/docker background so prefer to do everything in docker if I can, and was pleased to see iX announce they’re moving to docker. I think I’ll work on moving all my apps to docker now.

A couple of additional questions, if I may, while I have your attention:

  • how do I update docker images when they are deployed in this way?
  • does TrueNAS Scale assign internal hostnames to containers in the same way it does catalog apps? i.e. my plex app is internally resolvable by other k3s apps as plex.ix-plex.svc.cluster.local. I tried tautulli-docker.ix-tautulli-docker.svc.cluster.local (my custom app is called tautulli-docker) but that doesn’t resolve

I hear ya, but I share my library with a few friends so can’t control which clients are being used. A low-end GPU will help with some of the transcoding where needed, as the CPUs in this box are old enough that they don’t do Quick Sync.

Internal kubernetes names are retained with custom apps, though they may differ from catalog app names; the same idea applies. That is, unless you run a specific app on a static IP, as I do with Emby; in that case no name is needed, as you can just use the static IP. Here’s an example for redis: redis-ix-chart.ix-redis.svc.cluster.local
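If the pattern in that redis example holds, the internal name can be derived mechanically from the app name. A sketch; the `-ix-chart` suffix is inferred from the example above, not something I’ve seen documented:

```shell
# Build the in-cluster DNS name for a custom ("ix-chart") app.
# Pattern inferred from the redis example:
#   <app>-ix-chart.ix-<app>.svc.cluster.local
app="tautulli-docker"
svc_dns="${app}-ix-chart.ix-${app}.svc.cluster.local"
echo "$svc_dns"
# -> tautulli-docker-ix-chart.ix-tautulli-docker.svc.cluster.local
```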

As far as updates, I’m not sure actually, as all mine come from my own repo. I build my own containers via docker buildx and pull images from my own registry. The reason I do this is that I don’t like automatic updates of anything. I like to choose if and when each app updates, which in many cases I do once a year for things like mariadb. I am conservative about updates, and I monitor bug reports for new major versions especially. Unless I have a specific issue, there’s not really a reason to update. Heck, I am still on Cobia, as I am not satisfied with updating that yet; I’m waiting for the .2 version. Emby I will generally build again when a worthwhile change comes out, but in general I don’t chase updates.
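For what it’s worth, a minimal sketch of that kind of build-and-push cycle; the registry host, image name, and tag below are placeholders, not anyone’s actual setup:

```shell
# Placeholders: swap in your own registry, image name, and pinned tag.
REGISTRY="registry.example.lan:5000"
IMAGE="$REGISTRY/tautulli:v2.14.3"   # pin explicit tags; avoid :latest

# The actual build-and-push step (shown here via echo, not executed):
echo docker buildx build --platform linux/amd64 -t "$IMAGE" --push .

# In the custom app's config, point the image repository/tag at $IMAGE and
# bump the tag whenever you decide an update is worthwhile.
```

The point of pinning a tag like this is that nothing updates behind your back; the image only changes when you rebuild and change the tag yourself.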

If you are sharing Plex with friends, then yes, you may well want a GPU. That’s awfully nice of you, if you are providing it for free! I don’t even have Quick Sync, as I have a server Xeon processor.

Actually, I do have one where I track the latest tag and the standard Docker Hub images: Kopia, the backup system. It appears to be on the latest stable version; I don’t think I had to do anything except click update in the GUI, but honestly I can’t be 100% sure that’s what happened.


Thanks again, the namespace works :+1: