Right, which is the whole point of running NPM in the first place.
Hello everybody!
So after a bit of research I found a way to configure my stack so that all of these conditions are met:
- TrueNAS instance is not exposed to the internet, no public domain, local network only.
- TrueNAS GUI is running on default ports, so there are no problems with sync etc.
- Nginx proxy is running with default configuration, no host networking or any other overrides.
- I can access all my services via the proxy on the default HTTP(S) ports, like unifi.truenas.lan, plex.truenas.lan, etc.
The method I used basically achieves "port forwarding" inside the local network by combining the loopback interface with a couple of NAT rules on my router; nothing really complicated.
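To give a rough idea of the NAT half of the setup, here is a sketch of what the rules could look like on an iptables-based router. All addresses, interfaces, and ports below are placeholders, not the exact values from my guide:

# Sketch of the general idea only -- see the guide for the real rules.
# 192.168.1.10 is the TrueNAS box (GUI stays on 80/443 there),
# 192.168.1.20 is a spare "virtual" address that the *.truenas.lan names resolve to,
# and 30021/30022 stand in for the ports the NPM app actually listens on.
iptables -t nat -A PREROUTING -p tcp -d 192.168.1.20 --dport 80 -j DNAT --to-destination 192.168.1.10:30021
iptables -t nat -A PREROUTING -p tcp -d 192.168.1.20 --dport 443 -j DNAT --to-destination 192.168.1.10:30022
# Masquerade so replies flow back through the router for clients on the same LAN.
iptables -t nat -A POSTROUTING -p tcp -d 192.168.1.10 -m multiport --dports 30021,30022 -j MASQUERADE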
Even though it is quite simple in principle, I wanted to write an easy-to-follow guide that might be useful to others, so the resulting text is a bit longer - you can find it here on my GitHub.
If there is anybody with the appropriate permissions, feel free to reuse my text and/or include it in the official documentation for TrueNAS apps so novice users can find it easily.
(I'm pretty new to TrueNAS as well; I've only been playing with it for a couple of weeks and am still exploring the ecosystem.)
Sorry to jump in on this thread, but I'm having issues trying to get NPM working myself. I've tried to keep things as simple as I possibly can…
- Changed the default ports for TrueNAS to 8080 & 8443
- Deployed NPM using ports 80 & 443
- Created a Local DNS entry in my Pi-hole for "truenas.local > ip.of.truenas.server"
- Created a Proxy Host in NPM of "truenas.local" pointing to "ip.of.truenas.server" on port 8080
NPM shows the Proxy Host as online but when I click the link to test it out I just land back on the NPM login page.
I've been searching for a solution but all I can find is people saying "edit the proxy_pass value", which I can't see anywhere in the NPM UI.
Any suggestions would be greatly appreciated
OK, so I've solved this now (and realised it was a bit of a silly/simple mistake to fix)…
I'd bound ports 80 & 443 to the NPM container with port 80 mapped to the web UI. What I should have done was bind the web UI to port 81. Changed this and it's working flawlessly now!
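For anyone hitting the same thing, the working port layout looks roughly like this when written out as a plain docker run command (I actually deploy through the TrueNAS app form; the image tag and volume paths are just placeholders):

# Rough equivalent of the working setup, not the exact TrueNAS app config.
#   80  -> HTTP traffic that NPM proxies to the backends
#   443 -> HTTPS traffic that NPM proxies to the backends
#   81  -> the NPM admin web UI (this is what I had mistakenly bound to 80)
docker run -d --name npm \
  -p 80:80 -p 443:443 -p 81:81 \
  -v /mnt/pool/npm/data:/data \
  -v /mnt/pool/npm/letsencrypt:/etc/letsencrypt \
  jc21/nginx-proxy-manager:latest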
Has this changed? I'm on EE and it's just stuck deploying forever:
2025-03-11 16:33:28.363569+00:00 ❯ Configuring npm user ...
2025-03-11 16:33:30.181472+00:00 useradd warning: npm's uid 568 outside of the UID_MIN 1000 and UID_MAX 60000 range.
2025-03-11 16:33:32.058685+00:00 ❯ Configuring npm group ...
2025-03-11 16:33:32.336202+00:00 ❯ Checking paths ...
2025-03-11 16:33:32.778756+00:00 ❯ Setting ownership ...
With user/group 0 it works.
Is anyone else having issues with DNS lookups from the Nginx container?
Everything is working except DNS lookups for Let's Encrypt: it times out when creating a new certificate and when starting the container.
WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.HTTPSConnection object at 0x7f488efff210>: Failed to establish a new connection: [Errno -3] Temporary failure in name resolution')'
When checking /etc/resolv.conf in the nginx container, I can only see 127.0.0.11 as a nameserver, but if I add a line with 8.8.8.8 the DNS lookups work and I can get certs from Let's Encrypt.
But when I restart the container, the changes I made to resolv.conf get overwritten.
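For reference, this is roughly how I checked and temporarily patched the resolver inside the container (assuming the container is called nginx; adjust the name to whatever yours is):

# See what the container is actually using for DNS
docker exec -it nginx cat /etc/resolv.conf
# Temporary workaround: append a public resolver (lost again on the next restart)
docker exec nginx sh -c 'echo "nameserver 8.8.8.8" >> /etc/resolv.conf'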
Anyone seen this problem before?
[edit]
Sorry, this was caused by me setting my TrueNAS DNS settings to point to my Pi-hole (hosted on the TrueNAS itself). Problem solved…
Now, with the latest Nginx update, this behaviour is back. It won't deploy even with UID/GID 0.
Luckily there is snapshot rollback.
Yes, I'm affected by this as well. This appears to be a bug in Nginx Proxy Manager itself, not a deployment issue with TrueNAS. The only solution is to roll back to the previous version.
I was running into the same issue with the following log output:
/etc/s6-overlay/s6-rc.d/prepare/30-ownership.sh: line 41: syntax error: unexpected end of file
s6-rc: warning: unable to start service prepare: command exited 2
/run/s6/basedir/scripts/rc.init: warning: s6-rc failed to properly bring all the services up! Check your logs (in /run/uncaught-logs/current if you have in-container logging) for more information.
I had the following environment variable set:
S6_STAGE2_HOOK
sed -i '$d' /etc/s6-overlay/s6-rc.d/prepare/30-ownership.sh
I changed it to the following (source):
S6_STAGE2_HOOK
sed -i 34,36d /etc/s6-overlay/s6-rc.d/prepare/30-ownership.sh
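To confirm the hook actually ran, something like this should show the patched part of the script inside the running container (swap nginx for your container name):

# Print lines 30-40 of the ownership script to verify the offending lines are gone
docker exec -it nginx sed -n '30,40p' /etc/s6-overlay/s6-rc.d/prepare/30-ownership.sh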
I then began getting a lot of Cloudflare-related error messages in my NPM log and came across this solution (swap "nginx" for your Docker container name).
docker exec -it nginx sh -c "sed -i 's/cloudflare==4.0.\*/cloudflare/' /app/global/certbot-dns-plugins.json" && docker restart nginx
Unfortunately, this command needs to be executed every time the container is restarted.
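If it helps, the same command can be dropped into a small helper script so it only takes one call whenever the container has been restarted (purely a convenience wrapper, with the same nginx container-name assumption as above):

#!/bin/sh
# Re-apply the Cloudflare plugin fix, then restart NPM so it picks the change up.
docker exec nginx sh -c "sed -i 's/cloudflare==4.0.\*/cloudflare/' /app/global/certbot-dns-plugins.json"
docker restart nginx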