Migration from CORE to SCALE: issues with bridge/VM addressing

Background: Long-time CORE (13.3) user, migrating to SCALE (25.10.X). For a variety of reasons, I decided to stand up identical hardware for SCALE right next to my existing CORE setup. I was too chicken to run the upgrade-in-place, and rebuilding fresh with the old system as a reference, switching shares/services over one at a time, made a lot of sense. On CORE you did not need a bridge for VMs to access drive shares on the same machine, but SCALE requires one. It is the bridge setup/configuration that is causing me pain.

Self-imposed constraints: I have seven 1 Gbps NICs built into my mainboard. I would prefer the TrueNAS GUI and all my VMs use the same single NIC. I also want everything configured via DHCP. The addresses will all be dynamic, but the DHCP server will always hand out the same (effectively static) address based on MAC. I'd prefer the parent NIC to get an IP via DHCP, and the bridge to have no IP address at all. VMs would use the bridge for their NIC, send out a DHCP request, and each get their own address.

What I did that mostly worked: Following the instructions in the official docs as well as several online how-tos, I stopped all VMs and removed their virtual NICs. Then on my primary NIC I unchecked "use DHCP" (i.e., selected the STATIC radio button, clearing the DHCP one) and removed the IP address from it. I then created the bridge interface and gave it the IP/subnet that HAD been on the primary NIC. Finally, I re-added virtual NICs to all VMs and had them use br1.
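For reference, here is roughly what those GUI steps amount to at the iproute2 level. This is a hedged sketch only: the interface name enp1s0 and the address 192.168.1.10/24 are made-up examples, and on SCALE you should make these changes through the UI/middleware, not by hand, or they will not persist.

```shell
# Rough iproute2 equivalent of the GUI steps above (illustration only;
# SCALE's middleware owns network config, hand edits won't survive reboot).
# enp1s0, br1, and 192.168.1.10/24 are example names/addresses.
ip addr flush dev enp1s0             # clear the IP from the member NIC
ip link add name br1 type bridge     # create the bridge
ip link set enp1s0 master br1        # enslave the NIC to the bridge
ip link set br1 up
ip addr add 192.168.1.10/24 dev br1  # move the old host IP onto the bridge
```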

The above worked (VMs could reach the shares), but I could not access the TrueNAS GUI by DNS name; I had to use the static IP on the bridge instead. This is probably because my DHCP server associates the hostname (which it publishes to DNS) with the MAC address of the primary interface, not the bridge. Despite some articles saying the bridge uses the MAC of its first member interface, it does not: it gets its own MAC, which for some really bizarre reason isn't shown in the UI. You have to shell in and run 'ip a' to see it.
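To see the bridge's own MAC from the shell, 'ip -br link show br1' gives a one-line summary where the MAC is the third field. A small sketch of pulling it out (the sample line and the aa:bb:... address are invented for illustration):

```shell
# From the TrueNAS shell:  ip -br link show br1
# prints one line like the sample below; the MAC is field 3.
sample="br1              UP             aa:bb:cc:dd:ee:ff <BROADCAST,MULTICAST,UP,LOWER_UP>"
echo "$sample" | awk '{print $3}'    # prints aa:bb:cc:dd:ee:ff
```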

I got around the "test/save" process by temporarily setting a static IP on the primary interface and a different static IP on the bridge; once those saved, I could go back and change the primary to dynamic and remove the bridge's IP without failing to reconnect during the "save". On the plus side, the primary MAC now gets assigned the right IP and it's put in DNS. The bridge has no IP address, all VMs are reachable, and they can mount shares from the host. The downside: I cannot access the TrueNAS GUI at all. I'm out of town, but once I get back I can use the console to make changes.

I guess my first question is: is it definitely OK for the bridge to have no IP address at all? Is there something I'm missing that is preventing this from working? If it MUST have an IP, I assume I can assign the primary interface a different IP, either statically or via DHCP.

Perhaps what would help me most is if someone could explain the relationship and use of the MAC address and IP of the primary member interface versus the MAC address and IP of the bridge. With that understanding, I could probably fix my own issue. Thoughts?

Fortunately I was able to answer my own question through further study. For those who may hit this in the future: in a nutshell, a bridge does NOT need its own IP in certain circumstances (where no access to the underlying host is needed, for example when an interface is dedicated to just the VMs). But if you're trying to reach the host as well as all the VMs, you must put an IP on the bridge. I changed my setup accordingly (no IP on the member interface; the host IP goes on the bridge only), and it worked just fine. A more detailed explanation (including the MAC address interplay) is here:
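Sketched at the same iproute2 level as before, the working layout looks like the following. Names are examples, 'dhclient' stands in for whatever DHCP client the system actually runs (on SCALE the middleware normally handles this itself), and as before these steps belong in the UI, not the shell:

```shell
# Working layout (sketch): the member NIC carries no IP; the host
# address lives on the bridge. DHCP requests now go out with the
# bridge's own MAC, so the DHCP reservation / DNS entry must be keyed
# to br1's MAC, not the physical NIC's. enp1s0/br1 are example names;
# dhclient is an assumption about the DHCP client in use.
ip addr flush dev enp1s0          # member NIC: no IP of its own
ip link set enp1s0 master br1     # it is just a bridge port
ip link set br1 up
dhclient br1                      # host IP (and DNS name) follow br1
```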
StackExchange Answer