Why Do Some Apps Have Multiple Pods?

As the subject says, why do some of my apps have multiple pods? I’m new to TrueNAS Scale and Kubernetes, so apologies if this is obvious. I tried to search but couldn’t find anything relevant.

If I try to open a shell or view logs for a given app, only one of the pods lists a container (or more, if the pod legitimately has multiple containers).

Are these left behind from upgrades, or part of the roll-back functionality? Do they need to be cleared out or left?

I notice the same issue on boot - for me it seems the apps try to deploy before my bridge is up and kinda bug out; once the bridge comes up everything works fine, but there are leftover pods. I’ve tried to make a script to clear them after boot, but I’m having issues getting ‘sleep’ to run the way it should.

TLDR - use this in shell to clear pods that aren’t active:

k3s kubectl get pod --all-namespaces | awk '{if ($4 != "Running") system ("sudo k3s kubectl -n " $1 " delete pods " $2 " --grace-period=0 " " --force ")}'
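For reference, with --all-namespaces the columns kubectl prints are NAMESPACE, NAME, READY, STATUS, RESTARTS and AGE, so in the awk program $1 is the namespace, $2 the pod name and $4 the status. Here’s the same one-liner with comments in case that helps anyone reading it later - it should behave identically, but treat it as a sketch:

# $1 = NAMESPACE, $2 = NAME, $4 = STATUS when using --all-namespaces
# any pod whose status isn't "Running" gets force-deleted in its own namespace
k3s kubectl get pod --all-namespaces | awk '{if ($4 != "Running") system("sudo k3s kubectl -n " $1 " delete pods " $2 " --grace-period=0 --force")}'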
After a boot I’ve now started just stopping all apps, unsetting the apps pool, then setting it again, and starting all apps. Seems to be the cleanest way - luckily a reboot isn’t required afterwards, and realistically everything still fires up ‘fine’ in case I ever forget or an unexpected reboot happens.

I don’t like it, but ehhh, good enough?


Put the following as a post-init command in Advanced Settings.

sleep 300 && sudo k3s kubectl get pod --all-namespaces | awk '{if ($4 != "Running") system ("sudo k3s kubectl -n " $1 " delete pods " $2 " --grace-period=0 " " --force ")}'

And after setting the command, before you reboot, open a shell and run:

sudo k3s kubectl get pod --all-namespaces | awk '{if ($4 != "Running") system ("sudo k3s kubectl -n " $1 " delete pods " $2 " --grace-period=0 " " --force ")}'

This cleans up the pods, and after every reboot you will find only the running one.

Works for me, but I wish they would fix it - this goes way back…

Regards
Dinos


I’m guessing I would need to change the timeout value, too? The default stops the script after 10 seconds, but there’s a ‘sleep 300’ at the start to make it wait 5 minutes.

Also, fixed that up a bit, as the command throws an error for the header line output by kubectl get pod:

sudo k3s kubectl get pod --all-namespaces | awk '{if ($4 != "Running" && $4 != "STATUS") system ("sudo k3s kubectl -n " $1 " delete pods " $2 " --grace-period=0 " " --force ")}'
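If you want to skip the header handling entirely, kubectl’s get also takes --no-headers, and a field selector can do the filtering server-side. Something like the following should do the same job (a sketch, not tested on every release - and note that status.phase isn’t quite the same as the STATUS column, e.g. a crash-looping pod still has phase Running, so the awk check above catches more):

# --no-headers drops the header line; the field selector keeps only pods whose phase isn't Running
sudo k3s kubectl get pod --all-namespaces --no-headers --field-selector=status.phase!=Running | awk '{system("sudo k3s kubectl -n " $1 " delete pods " $2 " --grace-period=0 --force")}'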

On the Advanced Settings Edit Init/Shutdown Script dialog you have to choose Command where it says Type, not Script!


Doh! Sorry, not paying attention :roll_eyes:

@Fleshmauler That’s why I have gone to a shutdown script and a startup script. The shutdown script stops all VMs and apps. No VMs are set to start at boot. That way, when booting, no apps or vms start. So then, my startup script starts the apps in the order I need with delays as needed. Once those are done, it starts the VMs and all is well, every time. This works out well for my use at least.
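In case it helps anyone, here’s roughly what a startup script along those lines could look like. This is only a sketch - the namespaces, app names and VM IDs below are made up, and you’d want to confirm the real ones with sudo k3s kubectl get deploy --all-namespaces and sudo midclt call vm.query on your own box:

#!/bin/bash
# Bring apps up in a fixed order with delays, then start the VMs.
# Namespaces below are placeholders (TrueNAS SCALE puts each app in an ix-<app> namespace).

sleep 120   # give networking / the bridge time to settle

for ns in ix-postgres ix-nextcloud ix-plex; do
  # scale every deployment in the app's namespace back up
  sudo k3s kubectl -n "$ns" scale deploy --all --replicas=1
  sleep 30   # stagger startups so dependencies are ready first
done

# start the VMs last (IDs from: sudo midclt call vm.query)
sudo midclt call vm.start 10
sudo midclt call vm.start 11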
