As the subject says, why do some of my apps have multiple pods? I’m new to TrueNAS Scale and Kubernetes, so apologies if this is obvious. I tried to search but couldn’t find anything helpful.
If I try to open a shell or view logs for a given app, only one of the pods lists a container (or more, if the pod legitimately has multiple containers).
Are these left behind from upgrades, or part of the roll-back functionality? Do they need to be cleared out or left?
I notice the same issue on boot - for me the Apps seem to try to deploy before my bridge is up and kind of bug out; once the bridge comes up everything works fine, but there are leftover pods. I’ve tried to make a script to clear them after boot, but I’m having issues getting ‘sleep’ to run the way it should.
TLDR - use this in shell to clear pods that aren’t active:
k3s kubectl get pod --all-namespaces | awk 'NR>1 && $4 != "Running" {system("sudo k3s kubectl -n " $1 " delete pods " $2 " --grace-period=0 --force")}'
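The one-liner above can also be wrapped into a post-init cleanup script. A minimal sketch, assuming k3s is on PATH and that a 5-minute wait is enough for the bridge to settle (the function name and the wait are my own choices, not anything official):

```shell
#!/bin/bash
# Sketch of a post-init cleanup: force-delete any pod that is not Running.
# Assumes k3s is on PATH; adjust the sleep to your boot timing.

clear_stale_pods() {
  # --no-headers drops the NAMESPACE/NAME/STATUS header line up front
  k3s kubectl get pod --all-namespaces --no-headers \
    | awk '$4 != "Running" {print $1, $2}' \
    | while read -r ns pod; do
        k3s kubectl -n "$ns" delete pod "$pod" --grace-period=0 --force
      done
}

# In the actual post-init script you would run something like:
#   sleep 300        # wait for the bridge to come up
#   clear_stale_pods
```

Same logic as the awk one-liner, just split out so the namespace/pod pairs are easier to eyeball before deleting.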
After a boot I've now started just stopping all apps, unsetting the pool, mounting it again, and then starting all apps. Seems to be the cleanest way - luckily a reboot isn't required afterwards, and realistically everything still fires up 'fine' in case I ever forget or an unexpected reboot happens.
I’m guessing I would need to change the timeout value too, as the default stops the script after 10 seconds, but there’s a ‘sleep 300’ at the start to make it wait 5 minutes.
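An alternative to raising the timeout is to have the init hook return immediately and run the slow part detached in the background - a sketch, where the script path and log path are just example placeholders:

```shell
#!/bin/bash
# The post-init hook itself exits right away, so the 10-second timeout
# never bites; the cleanup script (with its 'sleep 300') keeps running
# detached. /root/scripts/clear-stale-pods.sh is a hypothetical path.
nohup /root/scripts/clear-stale-pods.sh >/var/log/clear-stale-pods.log 2>&1 &
```

`nohup` plus the explicit redirects keeps the child alive and logging after the parent hook exits.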
Also, fixed that up a bit, as the command throws an error for the header line output by kubectl get pod:
@Fleshmauler That’s why I have gone to a shutdown script and a startup script. The shutdown script stops all VMs and apps, and no VMs are set to start at boot, so nothing starts automatically when booting. My startup script then starts the apps in the order I need, with delays as needed. Once those are done, it starts the VMs and all is well, every time. This works out well for my use at least.