Apps and Custom Routing 25.04

I tried doing a quick search on this topic, but couldn’t find anything, so sorry if it is a duplicate.

I know 25.04 implemented the ability to bind an app to a host IP for incoming requests. Is there any functionality to handle the routing for return traffic or outbound traffic?

I have a test 25.04 instance that I spun up to do some basic testing before I upgrade my 24.10 instance. In my test the system has two IPs, 192.168.1.20/24 and 172.16.0.20/24, but it only allows a single gateway to be configured. So if the management network and gateway are set up on 192.168.1.1 and I specify an app to listen for traffic on 172.16.0.20, response traffic will be on the 192.168.1.20 IP whether the request originated from the 192.168.1.0/24 network or from a separate network. Outgoing traffic originating from the container will also use the 192.168.1.20 IP and its gateway.

I currently solve this problem on 24.10 with some custom routing table rules through IPTABLES.

I have certain containers that I want isolated and subject to traffic inspection by a firewall. With these custom rules, the return traffic and any outgoing traffic has to route up to the firewall before it can reach a different network.
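Roughly, the idea is policy routing: anything sourced from the isolated network gets its own routing table whose default route points at the firewall. A minimal sketch of that (the 172.16.0.1 gateway and the enp2s0 interface name are placeholders for illustration, not my actual values):

#Register a second routing table (name and number are arbitrary)
grep -qxF '100 isolated' /etc/iproute2/rt_tables || echo '100 isolated' >> /etc/iproute2/rt_tables

#Give that table its own default route via the firewall on the 172.16.0.0/24 network
ip route add default via 172.16.0.1 dev enp2s0 src 172.16.0.20 table isolated
ip route add 172.16.0.0/24 dev enp2s0 src 172.16.0.20 table isolated

#Send anything sourced from 172.16.0.20 through that table
ip rule add from 172.16.0.20 lookup isolated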

A better search should return a number of posts by @pmh explaining that this is standard IP behaviour.
If you want some apps to have their own networking, separate from the rest of the NAS, run these in VMs with their own NIC passed through: separate network stack and routing policy.

1 Like

I will just continue with my solution of controlling it with iptables. It’s capable of managing the traffic with the current stack. Still probably not a supported solution, but it works for me and keeps me from having to maintain more VMs.

1 Like

Hi @clifford64, I was under the impression that the new functionality would work for outgoing traffic, not for incoming (single) ports.
I am now pondering whether to use macvlan (with Portainer) or something else.
You mentioned using custom iptables SNAT rules… could you share your solution for this? Where do you hook into the iptables configuration in TrueNAS?
thx!!

This is standard IP behaviour, but I was arguing this point concerning file sharing services and the UI, because these share a single IP stack.

In CORE you could easily map both jails and VMs to a different layer 2 interface, and since both come with their own isolated IP stack, you could run them as required and expected by the OP.

I still do that on my CORE system. All externally reachable jail-based services are in VLAN 2 and all VMs in VLAN 3, all separated by my OPNsense. No asymmetric routing anywhere. Much like VMware and its port groups.

Only file sharing and the UI share the single “trusted” VLAN. Not possible to really separate these.

Now I just tried the new IP address per app feature and it seems like no real separation is taking place as the OP wrote.

If this is the intended behaviour, the IP address per app feature is mostly useless, IMHO. I expected something like I do with jails.

@clifford64 Could you do a short write up about what exactly you do with IPtables? I am not a Linux person :wink:

2 Likes

So… I had a 24.04 system that I had to upgrade, which had an apps setup that made extensive use of container IP addresses (on the old k3s system this would basically add an interface to the container and allow you to define the route inside the container: simple but powerful). This is hugely important because controlling outbound traffic is necessary for a lot of use cases; for example, I have services that need to make web requests over multiple different WAN connections.

And so, given that container IP addressing was supposed to come June 1st, I held out until last week, expecting that once I upgraded from Dragonfish my apps would just not work for a few days… not pleasant, but I dealt with it.

Only today I found out that I need to create aliases on the bridge interface, which provides binding and not addressing… I’m sorry, but this isn’t the same thing? And it especially doesn’t help my use case unless I want to redo all my apps again outside of the TrueNAS apps UI… fantastic.

UPDATE: I was able to make a Docker ipvlan network on the CLI, convert the apps to YAML, and attach them to that network without too much fuss.
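In case it helps anyone, the command looks roughly like this (subnet, gateway and parent interface are placeholders, not my actual values):

#Rough example of a Docker ipvlan network attached to a host interface
docker network create -d ipvlan \
  --subnet=192.168.50.0/24 \
  --gateway=192.168.50.1 \
  -o parent=eth0 \
  -o ipvlan_mode=l2 \
  apps_ipvlan

#Then in each app's YAML, reference it as an external network:
#networks:
#  apps_ipvlan:
#    external: true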

1 Like

Here is the script I use for my Docker traffic routing. I have it configured as a post-init script and it works for the most part. The requirement is that a container needs to be running so the Docker bridge is initialized and available for the script to manipulate with iptables. I also have all of my containers configured through YAML, and I specify the bridge network each one runs on in that YAML configuration.

If you plan to use this script, you will need to update the bridge interface and your network interfaces at the very least.

Also, my knowledge comes from Googling and I don’t fully know what I am doing, so use with caution.

#delay script on startup to allow for containers to startup
sleep 10m

#Create a custom routing table
grep -qxF '200 docker_traffic' /etc/iproute2/rt_tables || echo '200 docker_traffic' | sudo tee -a /etc/iproute2/rt_tables
grep -qxF '201 docker_homeassistant' /etc/iproute2/rt_tables || echo '201 docker_homeassistant' | sudo tee -a /etc/iproute2/rt_tables


#Set up routing rule and default route for Docker network
#Set up routing rules only if they don't already exist
ip rule show | grep -q 'from 192.168.237.0/24 lookup docker_traffic' || ip rule add from 192.168.237.0/24 lookup docker_traffic
ip rule show | grep -q 'from 192.168.238.0/24 lookup docker_traffic' || ip rule add from 192.168.238.0/24 lookup docker_traffic
ip rule show | grep -q 'from 192.168.239.0/24 lookup docker_homeassistant' || ip rule add from 192.168.239.0/24 lookup docker_homeassistant

# Routes for docker_traffic (table 200)
ip route show table 200 | grep -q 'default via 10.0.7.1 dev vlan7' || ip route add default via 10.0.7.1 dev vlan7 src 10.0.7.50 table 200
ip route show table 200 | grep -q '10.0.7.0/24 dev vlan7' || ip route add 10.0.7.0/24 dev vlan7 src 10.0.7.50 table 200

# Routes for docker_homeassistant (table 201)
ip route show table 201 | grep -q 'default via 10.0.7.1 dev vlan7' || ip route add default via 10.0.7.1 dev vlan7 src 10.0.7.51 table 201
ip route show table 201 | grep -q '10.0.7.0/24 dev vlan7' || ip route add 10.0.7.0/24 dev vlan7 src 10.0.7.51 table 201

# Enable NAT for Docker bridge traffic
#"-o br-..." is the docker bridge interface
#Always remove and re-insert SNAT rule for bridge to ensure it is first. 
#When docker container starts and initializes docker bridge, a masquerade rule is created and set as first rule in IPTABLES
iptables -t nat -D POSTROUTING -s 192.168.237.0/24 ! -o docker0 -j SNAT --to-source 10.0.7.50 2>/dev/null
iptables -t nat -I POSTROUTING 1 -s 192.168.237.0/24 ! -o docker0 -j SNAT --to-source 10.0.7.50
iptables -t nat -D POSTROUTING -s 192.168.239.0/24 ! -o br-33bf59121eaf -j SNAT --to-source 10.0.7.51 2>/dev/null
iptables -t nat -I POSTROUTING 1 -s 192.168.239.0/24 ! -o br-33bf59121eaf -j SNAT --to-source 10.0.7.51
iptables -t nat -D POSTROUTING -s 192.168.238.0/24 ! -o br-c5bbe5b233b7 -j SNAT --to-source 10.0.7.50 2>/dev/null
iptables -t nat -I POSTROUTING 1 -s 192.168.238.0/24 ! -o br-c5bbe5b233b7 -j SNAT --to-source 10.0.7.50
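To check that the rules actually took effect after boot, a few standard commands can be run (these are not part of the script above):

#Sanity checks
ip rule show                    #should list the "from 192.168.23x.0/24 lookup ..." rules
ip route show table 200         #default via 10.0.7.1 with src 10.0.7.50
ip route show table 201         #default via 10.0.7.1 with src 10.0.7.51
iptables -t nat -L POSTROUTING -n -v --line-numbers    #SNAT rules should sit above Docker's MASQUERADE rules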

I had to manually create the docker bridge networks with the below commands.

#bridge creation
sudo docker network create --driver=bridge --subnet=192.168.238.0/24 --ip-range=192.168.238.128/25 --gateway=192.168.238.1 --opt com.docker.network.bridge.host_binding_ipv4=10.0.7.50 docker_vlan7_bridge
sudo docker network create --driver=bridge --subnet=192.168.239.0/24 --ip-range=192.168.239.0/24 --gateway=192.168.239.1 --opt com.docker.network.bridge.host_binding_ipv4=10.0.7.51 docker_homeassistant_bridge
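The br-… interface names referenced in the SNAT rules are just “br-” followed by the first 12 characters of the Docker network ID, which can be looked up with, for example:

#List networks; the kernel interface is "br-" + the 12-character NETWORK ID
docker network ls

#Or for a single network (name here matches my setup, adjust as needed):
echo "br-$(docker network inspect --format '{{.Id}}' docker_homeassistant_bridge | cut -c1-12)"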

It would probably be better to handle this with macvlan or some other method; I just haven’t looked into anything else yet because I have been busy with other things, and my setup is mostly built around a single entry point (Nginx Proxy Manager) proxying traffic inward to my containers.

1 Like

Thank you for sharing your scripts!
I did some tests on my TrueNAS testing VM.
I was able to get containers to talk to the Internet with a different IP using this approach:

  • Assign some(*) IP aliases to the TrueNAS interface that has access to the Internet.

  • Create a custom bridge with masquerading turned OFF (this prevents any traffic leakage to the Internet until I add a SNAT rule):

docker network create \
  --driver=bridge \
  --subnet=192.168.250.0/24 \
  --opt com.docker.network.bridge.enable_ip_masquerade=false \
  --opt com.docker.network.bridge.host_binding_ipv4=192.168.0.204 \
  custom_bridge
  • And then, in a post-init command (with a long timeout), wait for docker to start and then insert a SNAT rule:
until systemctl --quiet is-active docker; do sleep 1; done ; /sbin/iptables -t nat -I POSTROUTING -s 192.168.250.0/24 ! -o docker0 -j SNAT --to-source 192.168.0.204

(*) Why “some” IP aliases: I noticed that when assigning only one or two additional IP addresses, sometimes one of those is chosen as the “main” one for outgoing traffic, which would result in other containers (and the NAS itself) going to the Internet with the wrong IP.
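A quick way to see which source address the host will actually pick for new outbound connections is ip route get (the destination here is just an example):

#Show the route and source IP the kernel selects for a given destination
ip route get 1.1.1.1
#the "src ..." field in the output is the address outgoing traffic will use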

I tested this with an Alpine Docker container that starts on boot and checked whether there was any leakage on reboot (by capturing on the router), and there was none.
This is the Docker Compose file I used for testing:

version: '3.8'
services:
  ping_container:
    image: alpine
    container_name: always_on_ping
    command: ["sh", "-c", "while true; do ping -i 0.2 -c 5 1.1.1.1; sleep 0.1; done"]
    restart: unless-stopped
    networks:
      - custom_bridge

networks:
  custom_bridge:
    external: true

I also tested macvlan and it works fine, but I like the idea of assigning just one IP to a series of containers, so I’m sticking with the SNAT approach for now.
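For reference, the macvlan test network was created with something like this (subnet, gateway and parent interface are placeholders, not my exact values):

#Example macvlan network; each container gets its own address on the parent LAN
docker network create -d macvlan \
  --subnet=192.168.0.0/24 \
  --gateway=192.168.0.1 \
  -o parent=eth0 \
  custom_macvlan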

It is unfortunate that this is not covered by the TrueNAS GUI :frowning:

cheers.

I wasn’t aware of the ability to disable masquerading when creating a bridge. That sounds like a much better approach.

LXC is the equivalent of jails and should/does support this, AFAIK.
The Docker networking stack is just… not optimal and not well documented in some places. That’s not really an iX fault; even for many other Docker-container-based backends, forcing egress through specific adapters for specific pods is troublesome at best.

1 Like