Need assistance setting up an Invoke AI Docker container on SCALE - not using Portainer

With the upgrade to newer hardware for my NAS and the addition of a second GPU to the mix, I want to play around with some local AI image generation.
I was turned onto Invoke AI by Craft Computing a few months back, and I liked that the interface seemed intuitive enough for a non-coder like me to play with. I'd also like to give remote access to a couple of my more creative friends for their own personal projects.

I’ve tried deploying Invoke AI both as a custom app and via a YAML file. I can actually get it to start (and it stays running with the YAML deployment), but no logs show up in the container, and I can’t access the web UI regardless of what port I map it to. 9090 is the default, and I’ve also tried mapping it to 15090 in case there was a conflict. Using the custom app wizard, it will start, run for about 3-5 minutes, then stop, and still nothing shows in the container logs.
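
For reference, something along these lines is roughly what I'd expect a working compose file to look like. The image name/tag, the host volume path, and the INVOKEAI_ROOT value are my assumptions from the upstream docs, not something I've confirmed working on SCALE:

```yaml
# Rough sketch of a compose file for Invoke AI with NVIDIA GPU access.
# Image tag, host path, and INVOKEAI_ROOT are assumptions -- adjust for your setup.
services:
  invokeai:
    image: ghcr.io/invoke-ai/invokeai:latest   # assumed image; pick the tag you actually want
    restart: unless-stopped
    ports:
      - "9090:9090"                            # 9090 is the default web UI port
    environment:
      - INVOKEAI_ROOT=/invokeai                # where the app keeps models/config inside the container
    volumes:
      - /mnt/tank/apps/invokeai:/invokeai      # hypothetical host dataset for persistent data
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia                   # requires the NVIDIA container toolkit on the host
              count: 1
              capabilities: [gpu]
```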

Has anyone had success getting this running without deploying to a VM or going through Portainer/Dockge? I’m not 100% opposed to Portainer, but I wanted to avoid another layer. I’m running Portainer in a couple of VMs/LXCs on other hosts in my lab, but I’d prefer to have this directly on the host so other containers can access the GPU when it’s not being used for AI stuff…
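
In case it helps with troubleshooting, these are the kinds of checks I've been running from the host shell. They're standard Docker CLI commands, nothing specific to Invoke AI or my setup:

```sh
# Basic checks from the TrueNAS shell
docker ps -a                                         # is the container running, and how long did it last?
docker logs -f <container-name-or-id>                # follow whatever output it produces on startup
docker exec -it <container-name-or-id> nvidia-smi    # can the container actually see the GPU?
```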

Did you find a solution that worked for you? I am also looking to run Stable Diffusion on TrueNAS in Docker. I would prefer a native app, but Portainer/Dockge would also work fine.