How to get TrueNAS apps their own IP

Hello,

I finally made the move from CORE to SCALE with a fresh install and am trying to figure out the new apps on 24.10.1.

The fact that it doesn’t use a dedicated IP for each app is very upsetting for ease of use in my setup. From password autofill to ports not being open on the main IP of the host, it means I can’t adopt anything.

How do I force TrueNAS apps to use a dedicated IP like jails had in CORE?

You can either handle this manually with custom apps, using macvlan to create IP aliases, or you can wait for Fangtooth 25.04, expected in late April, which will implement GUI support for this.
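For the manual route, here is a rough sketch of what the macvlan approach looks like at the Docker CLI level (the interface name, subnet, addresses, and container below are placeholders for illustration):

#Create a macvlan network whose parent is the host’s LAN interface,
#so containers attached to it get their own addresses on that LAN
docker network create -d macvlan \
  --subnet=192.168.1.0/24 \
  --gateway=192.168.1.1 \
  -o parent=enp1s0 \
  lan_macvlan

#Attach a container and pin it to a specific LAN IP
docker run -d --name demo --network lan_macvlan --ip 192.168.1.50 nginx:alpine

One macvlan caveat: by default the host itself cannot reach the container’s macvlan address, so traffic between TrueNAS and the app needs extra work (e.g. a macvlan shim interface on the host).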

…but the real answer is to change the question. You don’t (with rare exceptions) need distinct IPs for each app; you need to put them behind a reverse proxy and access them by name. And as a bonus, you don’t need to mess with (or remember) weird ports either.
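As a made-up illustration of what that looks like, Caddy can publish an app under a name with a single command; the hostname here is invented and 32400 is just Plex’s usual port:

#Serve https://plex.home.example and proxy it to the app’s port on the host
#(--internal-certs uses Caddy’s own CA, since a LAN-only name can’t get a public cert)
caddy reverse-proxy --from plex.home.example --to 127.0.0.1:32400 --internal-certs

A full setup would normally be a Caddyfile (or Nginx Proxy Manager) with one site per app, all on 80/443, plus local DNS entries pointing those names at the TrueNAS host.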

2 Likes

I’ve used this setup (Caddy) for quite a while, but I will say there is a slight caveat to doing it this way: if the reverse proxy is down (e.g. for maintenance or some other catastrophe), you are forced to remember the IP/port combination to access your apps.

1 Like

@neofusion thanks for proposing a solution. Could you point me to a useful resource to get familiar with how to implement macvlan for my Docker containers? I can’t seem to find anything close enough to my use case.

My TrueNAS server is on the same subnet (default VLAN 1) as my PC, and the Docker apps should also be on the same subnet and VLAN as TrueNAS. I’d like to keep the same IPs as before with jails, so I won’t have to redo all the port forwarding and shortcuts I had before.

@dan I don’t see remembering a port as a bonus: autofill is good enough with every IP, and I actually just use bookmarks, so I don’t mind the port. On the other hand, having no dedicated IP makes it more difficult to make apps interact with the outside: the UniFi controller is not adopting either local or remote APs, and Plex seems to be double-NATed and can’t turn on remote access. And I’m just starting; those are the first two apps and 0/2 are working properly, so it’s a bad start. I’m in the mindset of making things simpler rather than more complex. I’d rather set proper IPs from the start than mess around wondering why a new app isn’t functioning and trying to figure out whether it’s that new layer of complexity that breaks things.

Respectfully, I think you may be falling into the XY Problem. As @dan said, the usage ends up being far simpler if each service is just open on one port of the host IP. You can get around the password manager issue by using a reverse proxy.

Your proposed setup sounds far more complex than the standard setup, where your server has a single host IP, with specific ports for each application, and you can have a reverse proxy forward requests to those ports. You may have to re-do your port forwards as you said, but the end setup is much simpler this way, although it may not be the exact way you are used to.

Here is an example showing my setup (this is in Nginx Proxy Manager, but any other reverse proxy would work).

1 Like

…or even by telling your password manager to pay attention to the port.

I can’t say why this is, but it isn’t anything to do with Plex not having its own internal IP address.

…but what you’re requesting makes things more complex.

Again, it’s not that this is necessarily a bad thing, and it looks like it’s Coming Soon™. But in the large majority of cases where it’s being requested, it just isn’t needed; there’s a better way to handle it.

1 Like

Are you running this with the native app or custom YAML app?

Native for now, but I might run the YAML version if it’s simpler in the long run. I haven’t tackled that problem yet, as I’ve had unexpected stuff needing my attention in the last week.

I’m interested in running it native to TrueNAS; while I have used Portainer and Dockge on a pure Ubuntu install, I’d like to keep my TrueNAS reliable :).

Thanks. I’ll look into it, or, because I might want to use TSDProxy, I might have to roll a custom app. I haven’t decided yet.

The only way I got Plex to work outside of the local network on the native app was to check the Host Network checkbox in the app’s setup, if that helps you.
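For anyone doing this as a custom app instead, the Docker-level equivalent of that checkbox is host networking; a bare-bones sketch using Plex’s official image (volumes and claim token omitted):

#Host networking: the container shares the host’s network stack,
#so Plex sees the real LAN instead of a NATed bridge
docker run -d --name plex --network host plexinc/pms-docker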

A couple of questions about your nginx proxy setup: are you directing native port 80 on your TrueNAS to the proxy and then directing from there to the other ports? And are you only advertising your TrueNAS UI on 443?

I am starting to set up nginx and was thinking that, since it will want a high port, I can just redirect the UI 80 -> nginx port and go from there.
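For what it’s worth, a minimal way to try that (the high ports and dataset paths below are arbitrary examples) is to run Nginx Proxy Manager with its HTTP/HTTPS listeners published on high host ports and its admin UI on 81, leaving the TrueNAS UI where it is:

#8080/4443 are arbitrary high ports for HTTP/HTTPS, 81 is the admin UI;
#/data and /etc/letsencrypt are the container’s standard volumes
docker run -d --name npm \
  -p 8080:80 -p 4443:443 -p 81:81 \
  -v /mnt/tank/apps/npm/data:/data \
  -v /mnt/tank/apps/npm/letsencrypt:/etc/letsencrypt \
  jc21/nginx-proxy-manager:latest

Your router (or the proxy hosts you define in its UI) then sends traffic to 8080/4443, and the TrueNAS UI keeps its default ports.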

Hi, I am also wondering how I can make the native Docker apps use a specific IP. I have a public DNS/DHCP record set up for the service, and my server has two network ports: one is on my organization’s private network, the other is on the public network. I want the Docker app to use the public network (ideally, to specify which IP address to use) and not bind to the default (private) network/interface. It does not seem to help if I set up the corresponding public address pool in the Configuration -> Settings screen. Do I have to use a custom app or something like Portainer to handle this task? I am confused :thinking:. Thank you.

I just tested this… did I get it wrong, or is this only for mapping INCOMING traffic to container ports? It is not meant to be used for outgoing traffic… that is, to assign each container a specific IP even when it generates outgoing traffic…

Looking at most Docker containers, the specified port carries both incoming and outgoing traffic for that connection. For instance, with Portainer, the WebUI port is the port where you reach the web interface, and that is interactive, so both incoming and outgoing?
You may find this posting interesting, because it discusses different routing for incoming and outgoing traffic: Apps and Custom Routing 25.04

I meant “container-initiated” outgoing traffic, which, as far as I can tell, does not use the new Host IP feature (released for all apps on June 1st).
I am resorting to using Portainer + macvlan for now…

Here is the script I use for my Docker traffic routing. I have it configured as a post-init script and it works for the most part. The requirement is that a container needs to be running for its Docker bridge to be initialized and available for the script to manipulate with iptables. I also have all of my containers configured through YAML, and in that YAML I specify the specific bridge network each one runs on.

If you plan to use this script, you will need to update the bridge interface and your network interfaces at the very least.

Also, my knowledge comes from Googling and I don’t fully know what I am doing, so use with caution.

#!/bin/bash

#Delay the script on startup to allow containers to start and create their bridges
sleep 10m

#Create a custom routing table
grep -qxF '200 docker_traffic' /etc/iproute2/rt_tables || echo '200 docker_traffic' | sudo tee -a /etc/iproute2/rt_tables
grep -qxF '201 docker_homeassistant' /etc/iproute2/rt_tables || echo '201 docker_homeassistant' | sudo tee -a /etc/iproute2/rt_tables


#Set up routing rule and default route for Docker network
#Set up routing rules only if they don't already exist
ip rule show | grep -q 'from 192.168.237.0/24 lookup docker_traffic' || ip rule add from 192.168.237.0/24 lookup docker_traffic
ip rule show | grep -q 'from 192.168.238.0/24 lookup docker_traffic' || ip rule add from 192.168.238.0/24 lookup docker_traffic
ip rule show | grep -q 'from 192.168.239.0/24 lookup docker_homeassistant' || ip rule add from 192.168.239.0/24 lookup docker_homeassistant

# Routes for docker_traffic (table 200)
ip route show table 200 | grep -q 'default via 10.0.7.1 dev vlan7' || ip route add default via 10.0.7.1 dev vlan7 src 10.0.7.50 table 200
ip route show table 200 | grep -q '10.0.7.0/24 dev vlan7' || ip route add 10.0.7.0/24 dev vlan7 src 10.0.7.50 table 200

# Routes for docker_homeassistant (table 201)
ip route show table 201 | grep -q 'default via 10.0.7.1 dev vlan7' || ip route add default via 10.0.7.1 dev vlan7 src 10.0.7.51 table 201
ip route show table 201 | grep -q '10.0.7.0/24 dev vlan7' || ip route add 10.0.7.0/24 dev vlan7 src 10.0.7.51 table 201

# Enable NAT for Docker bridge traffic
#"-o br-..." is the docker bridge interface
#Always remove and re-insert SNAT rule for bridge to ensure it is first. 
#When docker container starts and initializes docker bridge, a masquerade rule is created and set as first rule in IPTABLES
iptables -t nat -D POSTROUTING -s 192.168.237.0/24 ! -o docker0 -j SNAT --to-source 10.0.7.50 2>/dev/null
iptables -t nat -I POSTROUTING 1 -s 192.168.237.0/24 ! -o docker0 -j SNAT --to-source 10.0.7.50
iptables -t nat -D POSTROUTING -s 192.168.239.0/24 ! -o br-33bf59121eaf -j SNAT --to-source 10.0.7.51 2>/dev/null
iptables -t nat -I POSTROUTING 1 -s 192.168.239.0/24 ! -o br-33bf59121eaf -j SNAT --to-source 10.0.7.51
iptables -t nat -D POSTROUTING -s 192.168.238.0/24 ! -o br-c5bbe5b233b7 -j SNAT --to-source 10.0.7.50 2>/dev/null
iptables -t nat -I POSTROUTING 1 -s 192.168.238.0/24 ! -o br-c5bbe5b233b7 -j SNAT --to-source 10.0.7.50

I had to manually create the docker bridge networks with the below commands.

#bridge creation
sudo docker network create --driver=bridge --subnet=192.168.238.0/24 --ip-range=192.168.238.128/25 --gateway=192.168.238.1 --opt com.docker.network.bridge.host_binding_ipv4=10.0.7.50 docker_vlan7_bridge
sudo docker network create --driver=bridge --subnet=192.168.239.0/24 --ip-range=192.168.239.0/24 --gateway=192.168.239.1 --opt com.docker.network.bridge.host_binding_ipv4=10.0.7.51 docker_homeassistant_bridge
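To sanity-check after boot that the custom tables and SNAT rules ended up where the script expects, a few read-only commands are enough (the table numbers match the script above):

#Show the policy-routing rules and the contents of the custom tables
ip rule show
ip route show table 200
ip route show table 201

#Confirm the SNAT rules sit at the top of the NAT POSTROUTING chain,
#ahead of Docker’s own MASQUERADE rules
iptables -t nat -L POSTROUTING -n -v --line-numbers | head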

It would probably be better to handle this with macvlan or some other method; I just haven’t looked into anything else yet, as I have been busy with other things and my setup is mostly built around a single entry point (Nginx Proxy Manager) proxying traffic inward to my containers.

Yes. It’s IP binding. It allows you to bind the exposed ports to a specific IP alias.

Which is different from allowing each app to “have its own IP”, which is technically macvlan support for apps.

If you’d like that feature, I’d suggest voting for it.
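To make the distinction concrete at the plain-Docker level (addresses and images below are placeholders): IP binding publishes a port on one specific host address, while macvlan gives the container an address of its own.

#IP binding: the app is still NATed, but its published port only answers on 192.168.1.10
docker run -d --name webapp -p 192.168.1.10:8080:80 nginx:alpine

#macvlan: the container itself owns a LAN address (on a macvlan network created
#beforehand with “docker network create -d macvlan …”); no port publishing involved
docker run -d --name webapp2 --network lan_macvlan --ip 192.168.1.60 nginx:alpine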

1 Like

I believe your conclusion is correct.
As far as I can tell you can compare it to what a router does when you port forward incoming WAN traffic to an internal server. That is basically what is happening here, but instead it’s your normal LAN traffic that is being forwarded to the specific Docker container on the internal Docker network.

If you have an app that looks at the network it’s connected to in order to decide what is on the same subnet and what isn’t, the new alias functionality may not be what you’re looking for.

@Stux Are you familiar with mDNS? Is macvlan support the feature needed for mDNS device discovery to work for installed applications? I was hoping the app IP update would allow it, but it still does not seem to be working.

1 Like