Frigate app: Fails to restart after config save

Running the Frigate app from the TrueNAS SCALE catalog. Each time I edit config.yml in the UI and hit Save & Restart, the app moves to Deploying… and never finishes. Logs show the go2rtc container cycling messages like go2rtc: stopping, then nothing — Frigate never comes back online until I manually Stop and Start the app.

Anyone else seeing go2rtc fail to restart automatically after a config change, or know of a fix so I don’t need the extra stop/start step? Thanks!


For further details, the logs show the following:

2025-07-06 11:44:35.409768126  [INFO] Service Frigate exited with code 0 (by signal 0)
s6-rc: info: service frigate successfully stopped
s6-rc: info: service go2rtc: stopping
s6-rc: info: service frigate-log: stopping
2025-07-06 11:44:35.411959834  exit with signal: terminated
2025-07-06 11:44:35.414664490  [INFO] The go2rtc service exited with code 0 (by signal 0)
s6-rc: info: service go2rtc successfully stopped
s6-rc: info: service go2rtc-log: stopping

Nothing happens after that until I manually stop and start the container.

You have rightly identified this problem, which has persisted for ages, but there is no fix yet.

My approach is to edit the config, save it and then go into the TN app menu and stop/start from there.

Out of interest, I confirmed that the app simply runs a docker-compose YAML. I installed the ix-dockge app, ran that same compose file in it, and found that it ran perfectly well. I mention it in case you want to experiment.

(I didn’t test whether the config reload from within the Frigate UI worked or not.)

/mnt/.ix-apps/app_configs/<app>/<version>/templates/rendered/docker-compose.yaml

Hey @E_B, thanks for your response. I’d be open to switching from the community package to a custom app (similar to what I did with lldap), but before making the change, I’d like to confirm whether it would actually resolve the issue. If you have a moment, could you test a Save & Restart from the Config Editor in your YAML-based Frigate deployment and let me know the outcome?

I’m not using ix-dockge at the moment; if I reinstall it I shall extract the ix-frigate docker compose and give it a try.

In the meantime, this might be useful to keep an eye on:

Yes, that was my post you linked to. I also posted some details on the container setup I extracted to this related issue:

Hopefully you can share feedback on whether you are able to restart Frigate when using a docker-compose YAML file.

I tried it in ix-dockge for you: the reload from the yaml editor window doesn’t work.

I see this:

[screenshot]

but when I left it to complete the countdown I get

[screenshot]

and in ix-dockge I see

[screenshot]

which is how it remains.

I stopped it manually, shut down ix-dockge, and I’m now back on ix-frigate (the official app running in TrueNAS).

Would you be able to share your YAML file? I’d like to pass it along to the original package authors to help with their diagnosis.

You can simply extract your own from the path I showed.

I guess, more to the point: the official Frigate documentation has a docker compose configuration they recommend. I’m curious whether you’ve tried using those exact settings on your TrueNAS and still experience the same restart issue.
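For reference, here is roughly the shape of that recommended compose, sketched from memory of the Frigate docs; the image tag, shm size, ports, and host paths below are assumptions you would adapt to your own datasets (plus any Coral/GPU device mappings you use):

```yaml
services:
  frigate:
    container_name: frigate
    image: ghcr.io/blakeblackshear/frigate:stable
    restart: unless-stopped
    shm_size: "512mb"  # size this for your camera count/resolution
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /mnt/tank/apps/frigate/config:/config        # host path is a placeholder
      - /mnt/tank/apps/frigate/media:/media/frigate  # host path is a placeholder
      - type: tmpfs          # optional RAM cache for recording segments
        target: /tmp/cache
        tmpfs:
          size: 1000000000
    ports:
      - "8971:8971"        # authenticated web UI
      - "8554:8554"        # RTSP restreams via go2rtc
      - "8555:8555/tcp"    # WebRTC
      - "8555:8555/udp"    # WebRTC
```

If a stack like that restarts cleanly from dockge, it would point at how the catalog app drives restarts rather than at Frigate itself.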

I’m not able to try at the moment; could you install dockge and give it a go?

When I try these experiments, I have to stop and unmount my Frigate instance, and I’m always worried I’ll make a mistake or it won’t start again.

It would certainly be good to know the answer to this restarting problem.

I have also encountered the same issue with the Frigate app in TrueNAS SCALE.
Wondering if this will be fixed in a future app release?

Is anyone from the app team looking into this? It is still happening in the latest version of the app.

I don’t currently have the Frigate app installed, as I got tired of trying to make Frigate work; it seems to be extremely picky, easily broken, and generally overly difficult.

When I did have the app installed, I could not get it to restart from its web interface either. You can just Save, then stop and start the app from the TrueNAS GUI Apps page, and unless there is an error in the config file (YAML is very picky about formatting), it will deploy the saved changes. That is the only way I was able to get the app to restart; a pain, I know, and not how it should work.

I think I saw in one of their recent videos that the app team is stretched thin with very few contributors, so if anyone has the experience and would like to help or assist in some way, contact them.

Hi @PhilD13, thanks for your note. As a workaround, I’ve been saving the config and then restarting the container directly in TrueNAS. Personally, I love Frigate and have found it to be a very capable app. The only issue is this restart one, and it only seems to happen on TrueNAS; I’ve run the container on another platform and it has no difficulty restarting there. As such, I will continue to advocate for the TrueNAS devs to fix it, since I really like this app.

I also gave up on Frigate, as a working config would suddenly become a non-working one. BlueIris has its issues, but it is far better resourced across a wide array of hardware and software settings than Frigate, and I’ll happily pay some $$$ for the Windows/BlueIris licenses rather than lose more time trying to keep Frigate operational. Life is too short.

I like the concept of Frigate and once had it working in a simple fashion in a VM running TrueNAS Dragonfish. It was flaky back then too, to the point where you had to save, then stop/start the app, so the issue has been there a long time.

I don’t want to take this thread off topic, but I’m just curious: are you running that on TrueNAS or on a different machine? I trialed BlueIris on my laptop a year or so ago and overall liked it.

I tried to run BlueIris in a VM on CORE way back when. The old versions of BlueIris were quite resource intensive. As a result, I had the same issues with BlueIris virtualization as with ZoneMinder: lots of missed events, like people walking across the frame without triggering anything.

Part of the issue was that BlueIris would regularly max out the VM’s CPU cores. It also didn’t benefit from present-day innovations like Coral TPUs. Since then, BlueIris has dramatically reduced its CPU usage, even without a Coral to help, so today it could conceivably work better.

But then the question is how important it is for your BlueIris setup to work even if the lights go out. Servers are pretty power hungry creatures. If your aim is to have a working security system even a day after power fails, you likely need to think about different hardware.

There are alternatives like NUCs that sip power and can run off a UPS for hours, or even days. So I am unlikely to go back to trying to run BlueIris in a VM. Instead, I will likely run UniFi, Pi-hole, and Plex as apps, since they are not mission critical.

I may attempt an HAOS VM implementation, though the probability seems low. Given the ongoing changes to SCALE VMs, it likely makes more sense to run HAOS on a Raspberry Pi. That also avoids the challenge of passing external hardware through to the VM. There are a ton of how-to guides in the Raspberry Pi universe due to the stability of that environment.

Thanks for the information on CPU usage, your experience in a VM, and the suggestions. Not being on utility power is not an issue, as my power backup system works well; in fact, since the storm earlier, the house is on backup power right now. But I might just look into alternatives like a NUC.

I’ve updated to Frigate 0.16 on my TrueNAS CE system and the restart from the UI still doesn’t work.