Bridge Network Doesn't Default to Home Network

Hey all, I just upgraded to Fangtooth from EE, rebuilt the VMs that host my Ubiquiti UniFi controller, and noticed that the default network is set to a random 10. IP instead of the server’s 192. IP. Normally I’d just change the setting over to the correct setup, but when I do, the Apps catalogue no longer updates and throws one of these errors when I try to install or update an app:

"[EFAULT] Failed to clone '' repository at '/mnt/.ix-apps/truenas_catalog' destination: [EFAULT] Failed to clone '' repository at '"
or
[ENOENT] Unable to locate at '"
Am I just missing something? Even when I change the default bridge information, the VMs still don’t get a network connection. TYIA


(I have no clue why but the new forum did NOT like any links I had in my post)

I’m going to guess those are the default GitHub links; are you able to ping them? Are you able to ping them via IPv4 specifically? Does the NAS have access to the internet? If yes to those questions, is this resolved after unsetting & resetting the app pool in the GUI?

Any changes if you go to ‘Discover Apps’ and ‘Refresh Catalog’?

If the answer is ‘no’ to pinging GitHub, you’re going to have to troubleshoot networking & likely provide some additional details about the setup for more help.
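
Something like this from a shell covers those checks (standard commands, nothing TrueNAS-specific; just what I’d expect to run, not output from your box):

ping -c 3 github.com        # name resolution + general reachability
ping -4 -c 3 github.com     # force IPv4, in case only the v6 path is broken
ping -c 3 8.8.8.8           # raw internet access with DNS taken out of the picture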

I’m also guessing that it didn’t allow the links due to the new account; I think when you create your account you get an auto DM with a basic tutorial that you have to complete to get access to post links/pictures.


You’re right, I didn’t realize there was a tutorial in the TrueNAS Bot chat.

I tried unsetting and resetting the pool, but no dice. My NAS does have internet access, but when I try to ping 8.8.8.8 while the subnet is set to 192.168.x.x, it doesn’t get through. When I change it back, it’s fine. As a side note, I can still access the web GUI during this, so it appears to be breaking internet traffic specifically, not the local network for my server.

This is going to sound really stupid, but it has randomly worked for me in the past: make an arbitrary change to your network config using the GUI, apply it, & then revert instead of saving it. Not sure why this has ‘fixed’ things for me in the past where bouncing interfaces hasn’t.

Otherwise I’m going to assume it’s a deeper config issue & will need details on your network config if you want more help.
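
While you’re at it, the routing table might show what’s stomping on the default route (standard iproute2 commands; output will obviously differ per setup):

ip route show               # full table; check where the ‘default’ entry points
ip route get 8.8.8.8        # which interface & gateway traffic to 8.8.8.8 would use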

Tried reverting through the GUI, but no change. I have just performed a full config reset, so if you’d like something in particular, I can give you whatever information you need. For specifics, my Instances Global Config bridge is set to automatic, and my actual network settings for the server are nothing special, just the default interface with DHCP enabled.

No clue as to what would cause it, so it’s hard to ask for specifics. I guess let’s get outputs of the following & maybe we’ll find something?

ifconfig
ip link

LAN IPs of your gateway & VMs would also be of use.

ifconfig:

br-74b6ae837ae9: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.16.1.1 netmask 255.255.255.0 broadcast 172.16.1.255
inet6 fe80::42:14ff:fed8:6179 prefixlen 64 scopeid 0x20
inet6 fdd0:0:0:1::1 prefixlen 64 scopeid 0x0
ether 02:42:14:d8:61:79 txqueuelen 0 (Ethernet)
RX packets 698 bytes 183464 (179.1 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 653 bytes 227539 (222.2 KiB)
TX errors 0 dropped 2 overruns 0 carrier 0 collisions 0

docker0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 172.16.0.1 netmask 255.255.255.0 broadcast 172.16.0.255
inet6 fdd0::1 prefixlen 64 scopeid 0x0
ether 02:42:8c:ef:42:59 txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 24 overruns 0 carrier 0 collisions 0

enp6s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.4.75 netmask 255.255.252.0 broadcast 192.168.7.255
inet6 fe80::dabb:c1ff:fe66:b491 prefixlen 64 scopeid 0x20
inet6 fd8a:aec5:80a3:1:dabb:c1ff:fe66:b491 prefixlen 64 scopeid 0x0
ether d8:bb:c1:66:b4:91 txqueuelen 1000 (Ethernet)
RX packets 5932 bytes 4956878 (4.7 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 3623 bytes 1806541 (1.7 MiB)
TX errors 0 dropped 1 overruns 0 carrier 0 collisions 0
device memory 0x82100000-821fffff

incusbr0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 10.125.143.1 netmask 255.255.255.0 broadcast 0.0.0.0
inet6 fe80::216:3eff:fee0:4594 prefixlen 64 scopeid 0x20
inet6 fd42:ca0e:dea:e3ab::1 prefixlen 64 scopeid 0x0
ether 00:16:3e:e0:45:94 txqueuelen 1000 (Ethernet)
RX packets 834 bytes 197915 (193.2 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 2179 bytes 2787889 (2.6 MiB)
TX errors 0 dropped 10 overruns 0 carrier 0 collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10
loop txqueuelen 1000 (Local Loopback)
RX packets 2960 bytes 2727113 (2.6 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 2960 bytes 2727113 (2.6 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

tapef52c2d1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
ether 96:20:a3:39:e1:7e txqueuelen 1000 (Ethernet)
RX packets 834 bytes 209591 (204.6 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 2178 bytes 2787659 (2.6 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

vethb981618: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet6 fe80::c448:8cff:feaf:d349 prefixlen 64 scopeid 0x20
ether c6:48:8c:af:d3:49 txqueuelen 0 (Ethernet)
RX packets 698 bytes 193236 (188.7 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 666 bytes 228705 (223.3 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

ip link

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: enp6s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
link/ether d8:bb:c1:66:b4:91 brd ff:ff:ff:ff:ff:ff
3: incusbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
link/ether 00:16:3e:e0:45:94 brd ff:ff:ff:ff:ff:ff
5: br-74b6ae837ae9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default
link/ether 02:42:14:d8:61:79 brd ff:ff:ff:ff:ff:ff
6: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default
link/ether 02:42:8c:ef:42:59 brd ff:ff:ff:ff:ff:ff
8: vethb981618@if7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-74b6ae837ae9 state UP mode DEFAULT group default
link/ether c6:48:8c:af:d3:49 brd ff:ff:ff:ff:ff:ff link-netnsid 0
9: tapef52c2d1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master incusbr0 state UP mode DEFAULT group default qlen 1000
link/ether 96:20:a3:39:e1:7e brd ff:ff:ff:ff:ff:ff

The LAN gateway is 192.168.4.1 and the VMs are currently set (by the “default” generation) to 10.125.143.1/24.

So, reverse engineering what I’m looking at, plus a quick summary of my understanding of the issue + solution; feel free to correct my assumptions:

enp6s0 ← main interface on NAS
tapef52c2d1 ← virtual interface for VM
incusbr0 ← bridge for your VMs to be able to talk to NAS/each other
(ignoring other interfaces to keep things simple for myself)
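
(Side note: ip -br addr show gives a compact one-line-per-interface summary that makes this kind of mapping easier; it’s standard iproute2, nothing extra needed.)

ip -br addr show            # interface, state, and addresses, one line each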

I’m guessing that with the update away from EE, things aren’t as simple as just setting your VM interface to attach to your main bridge, & that this is the reason you’ve now got 2 subnets to mess about with. It seems that the bridge exists to host the virtual interfaces of your VMs instead of being the main bridge for everything.

I guess the solution to get everything back to a single subnet would be to make the main interface a member of the bridge & move its IPs over to incusbr0 (remember to remove any other IPs from incusbr0 and also remove all IPs from enp6s0); then have the VM interfaces attach to incusbr0. Roughly the shape of it is sketched below.
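
Purely illustrative of the shape of it (TrueNAS manages interfaces through its own config database, so hand-run ip commands like these won’t persist across a reboot; the real change belongs in the GUI):

ip addr del 192.168.4.75/22 dev enp6s0     # strip the LAN IP off the physical NIC
ip link set enp6s0 master incusbr0         # enslave the NIC to the bridge
ip addr add 192.168.4.75/22 dev incusbr0   # move the LAN IP onto the bridge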

Mind you I’m still on EE, so I’m not going to be of much use with specific steps.

That’s the weird part: the only interface that I can see via the GUI is enp6s0. The virtual interface is only available in the backend, as is incusbr0 (though I can somewhat mess with that via the Instances config).

I’m guessing Incus operates similarly to how Apps works then - with its own happy little bridge on a different subnet for the VMs to play on…
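
If the incus client is available from a shell, it should show what got auto-generated (real incus subcommands, though I haven’t run them on Fangtooth myself):

incus network list            # should list incusbr0 as a managed network
incus network show incusbr0   # the NAT/subnet settings incus generated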

Setting the Incus bridge & your main interface to the same subnet is going to cause problems, so sadly you’re going to need someone more experienced than me (I’ve straight up never used it, so the bar to entry is low) to help you figure out whether you can configure things back to how they were on EE.

Hopefully I was at least helpful in figuring out what happened?

I’m far from an expert, but I have one Incus VM attached to the server’s network. My Incus Default Network is set to use an automatic bridge, and the network is also configured to a random 10. network. I didn’t change anything; these are the default Incus settings.

I also have a bridge br0 that I created in TrueNAS, following @Stux’s YouTube video. The only thing I did to put my Incus VM on the same 192 network as my TrueNAS was to uncheck “Use default network settings” and select my bridge under Bridged Adaptors when creating the VM, as shown in the screenshot below.


EDIT: I forgot to say, if you don’t want to have the bridge on TrueNAS, you can also select your NIC under Macvlan. However, please note that if you use macvlan, the VM will be able to communicate with other devices on the network, but the VM and TrueNAS will not be able to communicate with each other.
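
For the CLI-curious, I believe these two GUI choices map roughly onto Incus NIC devices like the following (illustrative only; ‘myvm’ and ‘eth0’ are placeholder names, and on TrueNAS the instance config is managed for you):

incus config device add myvm eth0 nic nictype=bridged parent=br0      # bridged: VM joins the LAN and can reach the host
incus config device add myvm eth0 nic nictype=macvlan parent=enp6s0   # macvlan: VM reaches the LAN but not the host

The host isolation with macvlan is inherent to how it works: traffic from a macvlan child interface can’t hairpin back through the parent NIC to the host’s own IP.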


This actually works for me; the VM I’m using has no need to communicate with the server. Thanks for the suggestion!