I had a similar issue recently while trying to set up a new server with multiple NICs. Even with the Kubernetes bridge bound to 0.0.0.0, apps only ever seemed to respond when accessed via the NIC that has the default gateway set on it.
I’m a Scale and Kubernetes newbie but have plenty of experience with Docker and networking in general. The best I could figure out is that Kubernetes doesn’t work well on multi-homed systems. You could maybe try an nginx reverse proxy (or Traefik) to redirect traffic to the NIC your Kubernetes bridge is running on?
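Something along these lines is what I had in mind (an untested sketch only; the NIC addresses, the port, and the config path are placeholders, so adjust them for your own network):

```
# Forward traffic arriving on the second NIC to the NIC the Kubernetes
# bridge actually answers on. All addresses/ports below are placeholders.
cat > /etc/nginx/conf.d/k8s-app-forward.conf <<'EOF'
server {
    listen 192.168.2.10:9002;                  # NIC without the default gateway

    location / {
        proxy_pass http://192.168.1.10:9002;   # NIC the apps respond on
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
EOF
nginx -t && nginx -s reload
```

Traefik could do the same job; nginx is just the one I know.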
I have the same issue and started digging around.
The weird thing for me is that the Plex app (web UI at port 32400) is accessible via wg0 but minio (web UI at port 9002) is not.
Also, when I run netstat -tulpen I can see port 9002 bound, but not port 32400.
So I think there is some way it could work, but I haven’t yet found the specific setting/config that differs between the two Helm charts.
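For reference, this is the comparison I’m making (the port numbers are the two apps above; swap in your own):

```
# Check which of the two web UI ports is actually bound on the host.
# 9002 = minio, 32400 = Plex (in my setup).
netstat -tulpen | grep -E '9002|32400'

# Same idea with ss, if you prefer:
ss -tlnp | grep -E '9002|32400'
```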
Yeah, this matches what I’ve experienced with Tailscale (which is built on WireGuard, I believe).
Using Tailscale from another machine, with Tailscale running as a Kubernetes app on TrueNAS Scale, I can get “out” from the Tailscale container to the TrueNAS Scale web UI, but I can’t loop back in again to reach other Kubernetes apps/containers.
This is, I assume, because all the Kubernetes apps/containers are on the same bridge network and can’t reach each other using the IP addresses of the TrueNAS Scale host. It’s a routing problem. They can, of course, talk to each other inside the bridge network using the 172.16.0.0/16 addresses and the .ix-.svc.cluster.local names.
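As a rough illustration of the paths that do work (the namespace, deployment, and service names here are made up, and it assumes curl exists in the container, so treat it as a sketch):

```
# List app services and their cluster-internal IPs; SCALE runs k3s,
# so kubectl is available as "k3s kubectl".
k3s kubectl get svc -A

# From inside one app's pod, another app is reachable over the bridge
# network by cluster IP or service DNS name, but not via the host's own
# IP addresses. Names and the port below are placeholders.
k3s kubectl exec -n ix-tailscale deploy/tailscale -- \
  curl -s http://minio.ix-minio.svc.cluster.local:9002
```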