Nginx Proxy Manager suddenly not working

Hi all!
I've been using Nginx Proxy Manager for about a year, I think, and about a week ago, when I decided to upgrade all of the apps on my TrueNAS system, it went crazy.
By that I mean every one of my apps is up and running (checked via hostip:port, they actually work), but NONE of the proxies defined in NPM work.

I get instant HTTP 521 errors as a reply when I try to load the pages from a browser.

Once again: first I bulk-updated all my apps on my TrueNAS server (at that time I was on 24.x, I think 24.04, but I'm not sure).
The app updates were successful, but then I realized I was unable to reach my sites. My first reaction was to update my OS to the latest as well.
I did it in 2 steps: first I updated to 24.10, then to 25.04.0, and that is the version I'm on at this moment.

I checked again: everything is working and running with no errors, but when I try to open my sites over the internet (via my cname.domain.hu), I get an instant HTTP 521.

I also noticed that my Home Assistant VM was gone. As it turned out, there was a big change in the OS regarding virtualization, which I didn't know about. Luckily I had my VM on a separate zvol, so that is not a problem now.

My question is: why am I suddenly getting HTTP 521 errors? Was there any network-related change in the system?

I've spent my last 4 nights trying to find a solution, with no success.
I tried setting up port forwards on my router dozens of times and testing them (for example with a simple Python server, see the sketch below).
I set up something like 3 new instances of NPM.
My last attempt was installing Traefik.
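For reference, the Python server test I mean looks roughly like this (the port and public IP are placeholders, not my real ones):

    # on the TrueNAS host: serve a throwaway page on a test port
    python3 -m http.server 8080 --bind 0.0.0.0

    # from outside the network (e.g. a phone on mobile data),
    # after forwarding the port on the router:
    curl -v http://YOUR.PUBLIC.IP:8080/

If that curl gets a response, the forward itself works.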

I think I configured everything correctly, yet I get 521 errors.
I almost forgot to mention that my domain is managed by Cloudflare.
My DNS records (A, CNAMEs) are set to proxied, and the SSL setting is Full (strict), just as it was for the previous year; nobody touched it.
It may also be useful to know that in NPM I can renew the SSL certificate for one of my proxies, so the communication and the API key are also correct (or should be).

Any hints on where I should look?
Thank you in advance!

When you updated all your apps, did they change which Docker network they were running on? I know nginx needs the apps to be on the same network, so IPs 172.16.1.1 and 172.16.1.2 would work fine, but 172.16.1.1 and 172.16.2.4 would not. I had a similar issue when first getting started with nginx; worth a go.
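If it helps, this is roughly how I check and fix it from the shell (the container and network names are just examples):

    # list the Docker networks on the host
    docker network ls

    # see which network(s) a given container is attached to
    docker inspect -f '{{json .NetworkSettings.Networks}}' my-app

    # attach a container to a shared network without recreating it
    docker network connect my-shared-network my-app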

You are absolutely right, and I highly appreciate your comment.
To be honest, I checked this before, so I know that all of my apps are on DIFFERENT networks.
So you are correct.

But since in the NPM UI, in the proxy host edit section, I used the host IP (for all of the apps), I thought the internal Docker IP of each app was irrelevant.

But it is worth a try; I will add them to the same network.
What were the exact steps you took?

Is it possible that previously all of my apps were on the same network, and after the update each app got its own network for some reason?

No problem, happy to (try and) help. To be honest, until about October last year I'd never used containers; it has been a steep learning curve. I don't see why any IP would have changed when you updated, but then stranger things have happened.

I use Portainer for my container management, as I just prefer the layout. In it I can go into each container and manually select which network it is on (although these days I've found out you can specify the network in the YAML file, so I've been doing that with the more recent ones).

Hope you get it sorted, and that I've not led you on a wild goose chase.

So I tried, but unfortunately no success.
I deleted every one of my nginx installations and created a brand new one.
I added the new nginx and, as an example, radarr to the same network.
Then I added a new proxy host to NPM.

For the host IP I first tried my TrueNAS host IP (as it had been set up for about a year!)
- No success!
I tried the DNS name of the radarr container,
like “ix-radarrnew-radarr-1”.
- No success.

I tried the IP of the running radarr container.
- No success.

So I don't know what the hell is going on.
I was wondering whether some kind of firewall on TrueNAS kicks in and blocks my requests from outside, so they never even reach NPM and it cannot route to radarr; but from what I've read, in theory there is no such thing as a TrueNAS firewall.

I created a new container from a plain Debian image and installed curl in it. From that container I was able to curl my running radarr app (see the commands below):
- by its Docker DNS name,
- by my host ip:port.
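The two checks looked roughly like this (the host IP is a placeholder for my real one, and 7878 is just radarr's default port):

    # inside the test container, after apt-get install -y curl:
    curl -sI http://ix-radarrnew-radarr-1:7878/   # container DNS name
    curl -sI http://192.168.1.10:7878/            # host IP + published port

Both came back with a response from radarr.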

So there is connectivity between the containers!
If I get this working again, I will never ever run an update (which is not good either, because that is a risk in itself... :S)

Oh, just one more thing I thought of. Again, it probably isn't the issue, but when I was configuring nginx, I had to manually change the TrueNAS web UI ports away from 80/443 so that nginx could listen on those ports.

Any chance something similar happening here?

That is already done.
The TrueNAS base ports were changed to custom ones about 2 years ago.

The current instance of NPM uses a custom port, and I do port forwarding on my router.

But I also tried an NPM installation that listens on the normal 80/443 ports; no success.

In every case I set up the port forward; I did not forget it.
The port forwarding itself has also been tested, and it works.

Any luck? I foolishly just did the same thing today (mass-upgraded all apps) but did not upgrade my OS, and have had nothing but 521s from NGINX since.

Honestly, I did a thousand things to make it work. Yes, it's possible I switched to host networking, but I don't remember whether I did it on NPM or in one of my Traefik tests.

If I did, it was not working, for sure.
As an extra: I remember reading somewhere that using host networking with NPM is not recommended at all. I'm not an expert and I don't know why; it was just written somewhere.

But why, is it working for you? Was that the solution?

This didn’t work for me.

One thing I noticed when checking whether my ports are exposed: 80 shows as open while 443 shows as closed, even though the container says it is listening on 443. My router is forwarding correctly, so I'm not sure why NGINX is refusing connections on 443, but I'm pretty sure that once I solve that, things should work.
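In case anyone wants to run the same check, this is more or less what I used (the LAN IP is a placeholder):

    # on the TrueNAS host: is anything listening on 80/443 at all?
    sudo ss -tlnp | grep -E ':80|:443'

    # from another machine on the LAN: are the ports reachable?
    nc -zv 192.168.1.10 80
    nc -zv 192.168.1.10 443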

Any ideas?

Absolutely nothing. But please, if you solve it, share it with me :)
I'll do the same!

What I want to do now is install Docker Desktop on my other PC to host a new NPM installation in a completely different environment.
Of course its host IP will be different, so I'll have to create a new port forward too.
I'm curious whether it will work or not, but it should.
Either way, I can learn from the outcome of this test.

Since nobody has updated this topic, I suppose nobody has been able to solve it yet.

My theory is that something changed in the latest version (or the previous one) of Nginx Proxy Manager that breaks working behind Cloudflare's proxies.

But I was wondering: if that is the case, why doesn't a rollback solve the problem?
According to our best friend ChatGPT, this can happen if the latest version changed a config file that is not part of the Docker image itself but lives in persistent storage. So even after a rollback, the configs could theoretically stay the same.

My current plan is to clean-install a new instance of an old NPM version on a fresh, empty dataset, something like the sketch below.
Has anyone tried this?
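Roughly what I have in mind (the dataset path and the pinned tag are just my guesses at a sane starting point, not a recommendation):

    # brand-new NPM instance with empty volumes, pinned to an older release
    docker run -d --name npm-clean \
      -p 80:80 -p 81:81 -p 443:443 \
      -v /mnt/pool/npm-clean/data:/data \
      -v /mnt/pool/npm-clean/letsencrypt:/etc/letsencrypt \
      jc21/nginx-proxy-manager:2.10.4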

Maybe it would be a good idea to try a different provider than Cloudflare, to make sure whether the provider is the issue. You could try DuckDNS, since it is free and pretty easy to set up. If it works, then it is definitely the provider. If it doesn't, then maybe something else is the problem.

I found another forum post about this. Setting the Cloudflare SSL mode to "Flexible" and changing NGINX to not enforce HTTPS works for now, and since I got busy with work, it's my solution until I can look into it more over the weekend. Sadly, there was no response on whether they ever got it fixed.

From what I can tell, NGINX is "listening" on port 443 but no longer actually accepting any connections on it. I tried looking through all the config files and checking things that way, but no luck. Hopefully I can make more progress this weekend.

The fact that this isn't solved already seems to indicate we've hit some sort of edge case, so there isn't a lot to go on, sadly. A fresh install of NGINX on TrueNAS had the same issue, so… unless no one else is currently installing NGINX, it seems like it's something with our base setup. Maybe a bad image? No idea. I'll dig in more this weekend. Good luck.

I'm scared of that host network option too, and the GUI warns you. I used it once and it blew up my host; I couldn't get to anything until I hooked up a keyboard, mouse, and monitor so I could at least see it. Never again. That's why I'm on Core, patiently awaiting IP support for all apps next month before I try SCALE.

But we can take a 1000 foot view of how this stuff is supposed to work. Read on for boredom.

When trading information on the network, there's a Host header the client sends out. The server, or proxy, can read this and use it to direct the request to another site. That's how Apache did virtual hosting back in the old days: read the incoming header, then direct to the right virtual host and path on the server.
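You can watch that mechanism by hand with curl (the hostname and the proxy's IP are placeholders):

    # talk straight to the proxy's IP, but ask for a specific site;
    # the proxy picks the backend based on the Host header
    curl -sI -H "Host: app.example.com" http://192.168.1.50/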

That's still what's happening here, and Cloudflare is part of the proxy chain. The way it works now is like this: the client looks up DNS; Cloudflare says "yeah, it's over here" (your NPM proxy replies to Cloudflare); Cloudflare then jumps in front, intercepts the request in the background, and handles all the communication between the server behind NPM and the public internet. That's how it slaps its own SSL cert on the HTTP sites behind NPM inside your network.

For this whole Rube Goldberg machine to work, DNS must point your hostnames at Cloudflare, Cloudflare maps that DNS to your NPM proxy, and Cloudflare also generates the SSL certs it uses to talk to public clients. But when it talks to your NPM server, it has to see port 80 and/or 443 open so it can map requests to what the client is asking for and stand in as a proxy, changing headers in real time.
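A quick way to sanity-check the DNS leg of that chain (the domain is a placeholder):

    # with the Cloudflare proxy (orange cloud) enabled, this should return
    # Cloudflare edge IPs, not your own public IP
    dig +short app.example.com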

So we have established that 3 things are critical here: DNS must be active and mapped properly; Cloudflare must be aware of your NPM host; and Cloudflare must be able to use those ports to push and pull data to and from your servers. If those are all working or unchanged, then turn your attention to the inside of your network. The key probably has something to do with Docker under TrueNAS, and that's a black hole for me. Right now I only use NPM internally, but I did host things externally for years using Cloudflare; back then I was on Core with per-plugin (per-app now) dedicated IPs inside. As far as the internal network goes, as long as my DNS points named hosts at NPM and NPM has host entries, it's golden. I still use real certs on my LAN via Let's Encrypt, because I use Cloudflare's API key for them.

Keep digging; I'm certain it's the internal and/or Docker networking stuff that smashed it. GL!

I have good news — I solved the problem, and I really hope it helps you too!
It turned out to be something simple all along.

The issue was the port! It was blocked by the router.

After I almost gave up, I decided to monitor incoming requests to my TrueNAS instance from another PC on the network. Specifically, I sent a direct request to app.mydomain.com.
I expected to see the usual HTTP 521 error, but I wanted to confirm whether anything at all was reaching my server.

To do that, I used the following command — you can try it too:

sudo tcpdump -i any port 443 or port 80

Paste it into the TrueNAS shell and hit enter. This will put the shell into “listening” mode.
If any requests hit the server on either of those ports, you’ll see them logged in real-time.

Since I saw no output, it clearly meant that nothing was reaching the server from the router.

That brought me back to the router — I triple-checked all settings, but everything looked correct.
Then I wondered: how can I confirm whether traffic is even getting through my router at all?

The answer: online port checkers. Just Google them — for example, you can use dnschecker.org.

It showed that port 443 was blocked.
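If you have shell access to any machine outside your network (a VPS, or a phone with a terminal app on mobile data), the same check can be done with nc (the public IP is a placeholder):

    # from OUTSIDE the network: is 443 reachable through the router?
    nc -zv -w 5 203.0.113.10 443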

So the fix was to figure out how to unblock it. In my case, I couldn’t find any explicit option to unblock or reset ports on the router UI.
I ended up doing a factory reset on the router, then reconfigured the port forwards — and boom, it worked!


But here’s the interesting part:

You might be thinking, “Okay, but how can a server app upgrade or OS update cause a router to suddenly block ports that have worked fine for 2–3 years?”

I have a theory.

It may be due to one or both of these router features:

  • DMZ
  • UPnP

If these were enabled (as they were in my case), it’s possible that some network activity or update triggered the router to automatically alter port behavior.
This could explain why things stopped working without you changing anything manually.


That was a long and frustrating journey, but I learned a lot!
I hope this helps someone else solve the same issue.

Good luck!

Fixed it for me too!

I just had to remove the port forwarding rule and re-add it. So odd that it was triggered by an update.

Thanks again! You are awesome!
