I’ve just set up a new TrueNAS SCALE server, and am having some difficulties achieving my desired network setup.
The server has two onboard 1Gb NICs (eno1 and eno2), and a PCIe dual-port 10Gb card (enp2s0 and enp2s0d1).
I’ve assigned a 172.16 address to enp2s0, and a 10.0 address to enp2s0d1. I’ve then set up a bridge (br0) with no IP, consisting of both of the onboard eno1 and eno2 adapters.
My expectation here is that the bridge will be used for VMs, attaching them to the network.
Then, any traffic bound for the TrueNAS 10.0 IP would go over the enp2s0d1 interface.
This doesn’t seem to happen, however: when sending or receiving data to the 10.0 address of TrueNAS, I am seeing the traffic go over one of the onboard interfaces instead.
I may be misunderstanding how bridges work in Linux, but this isn’t what I expected to happen.
Can anyone guide me on how to configure the interfaces so that I have a bridge of eno1 and eno2 that is used by VMs and VMs only, with the TrueNAS traffic going over the enp2s0d1 interface?
Can anyone offer me any guidance on configuring TrueNAS networking in this regard?
I’ve managed to achieve a similar setup in Proxmox using OVS bridges and IntPorts, but can’t get the same working in TrueNAS.
For example, I just attached a VM NIC to a bridge that contains one interface which is physically disconnected, yet I can still ping the VM, presumably because the bridge is interfacing with a second bridge I have set up for TrueNAS management traffic.
Even though the bridge of the two onboard NICs has no IP assigned, my UniFi network is telling me it sees duplicate IP addresses for the TrueNAS IP, on both the 10Gb port and the two onboard ports.
It seems the management IP is binding to, or traversing, both the 10Gb link and the bridge?
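From what I can tell, this may just be default Linux behaviour rather than anything TrueNAS-specific: Linux treats an IP as belonging to the host rather than to one interface, and out of the box it will answer ARP requests for any local address on any interface, so if the bridged onboard ports sit on the same L2 segment as the 10Gb link, the switch ends up seeing the management IP behind several ports. To see what is actually happening at the Linux level, something like the following from a shell on the SCALE box should show it (a generic iproute2/sysctl sketch, nothing TrueNAS-specific):

  # Which device actually carries 10.0.0.35?
  ip -br addr show

  # Which bridge, if any, is each port enslaved to?
  bridge link show

  # The defaults (0/0) mean Linux replies to ARP for any local IP on any
  # interface, which is what makes one IP show up on multiple switch ports
  sysctl net.ipv4.conf.all.arp_ignore net.ipv4.conf.all.arp_announce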
Thanks, but unless I’m missing something, I’m not sure it helps me?
Are you/Stux saying that the TrueNAS management IP will bind itself to every bridge that I create?
I’m coming from an ESXi world, where a VMkernel interface can only ever be attached to one vSwitch, so I was expecting the same behaviour here.
How can I get it so that I have a bridge with both my onboard NICs on it, but TrueNAS itself does not attach to it, and it is instead used exclusively for VM traffic?
Do I have to just bind it to a single physical interface instead? I can do that, but then I’d have no link redundancy?
Many thanks
Eds
EDIT: I’ve just had a look at adding a link aggregation interface. I hadn’t realised it has failover as an option, so I guess what I should be doing, instead of a bridge, is a failover link aggregation?
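If I understand it right, a failover LAG in SCALE is a standard Linux active-backup bond underneath, so once it is created in the UI it should be possible to verify from a shell with something like this (bond0 is an assumed name here; the UI may call it something else):

  # bond0 is assumed; check what the UI actually created
  cat /proc/net/bonding/bond0
  # Expect "Bonding Mode: fault-tolerance (active-backup)", the currently
  # active slave, and the link state of each member
  ip -br link show | grep -E 'eno1|eno2'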
Screenshot from Core, but the tooltip may give you an idea of what to look for.
Sorry, I didn’t understand the problem. I think you just need to change the 0.0.0.0 to the static address of the interface you wish to use.
I think this may solve my secondary issue (I hope), whereby TrueNAS sometimes registers its storage network address in DNS, but I don’t think it helps with my underlying issue.
My problem is that the IP I have set on my LAN network (10.0.0.35) is appearing across multiple interfaces (one being a bridge on my onboard interfaces, and one being a bridge on my 10Gb interfaces).
What I am trying to achieve is to have the LAN IP available on only one interface, and not on any of the others. I think my issue is that, by using a bridge on my onboard NICs, the LAN IP gets connected to that bridge as well. I think I instead need to create a link aggregation, in failover mode, consisting of both onboard NICs. As it’s not a bridge, it won’t communicate with the host?
I’ve been tinkering, but this is how it currently looks:
I created a bridge for my 10Gb LAN interface in the event that I create VMs I DO want to connect directly to the TrueNAS host.
I did have a second bridge consisting of both eno interfaces, but have since deleted that.
I think what I need to do is create a link aggregation with eno1 and eno2, then attach the VM to that?
I don’t need VLANs on that, as it will use the native VLAN of the physical switch it connects to.
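I’m not sure exactly how SCALE wires things up when a VM NIC is attached straight to a LAG, but the generic Linux layering for a redundant, VM-only uplink would be a bond with a bridge on top, no IP on either, and only the VM tap devices enslaved to the bridge. As a rough sketch in plain iproute2 terms (bond0 and br1 are placeholder names, and in practice this would need to be done through the SCALE network config so it persists):

  # Sketch only, not persistent TrueNAS config; bond0/br1 are placeholders
  ip link add bond0 type bond mode active-backup miimon 100
  ip link set eno1 down; ip link set eno1 master bond0
  ip link set eno2 down; ip link set eno2 master bond0
  ip link add br1 type bridge
  ip link set bond0 master br1
  ip link set eno1 up; ip link set eno2 up
  ip link set bond0 up; ip link set br1 up
  # No address is ever assigned to bond0 or br1, so the host has no reason
  # to ARP or source traffic on this segment; VM taps enslaved to br1 just
  # use the bond as an uplink

The point being that, with no address on the bond or the bridge, TrueNAS itself should never appear on that network.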
If you’re connecting both ports to the same switch, that would be the way to go. Don’t put an IP address on that LAG or either eno interface if you don’t want TrueNAS to use it. It should be in an entirely different subnet from what you have allocated to any other interface TrueNAS has access to, and you should handle all of your traffic decisions for this network upstream.
That keeps things a little easier. For me, I wanted my VMs to benefit from the 10Gb interfaces while keeping their traffic separated and filtered at the firewall.
I don’t know how you have your UniFi network set up, but if you’re not using VLANs, and you have all of those 1G and 10G ports connected to the same switch, that might be the reason for the behavior you’re seeing.
I apologize if I’m throwing advice out there that you already have a handle on. Just trying to hit on all the networking gremlins I discovered while attempting to separate my VMs.
So this is where my understanding of how Linux handles networking falls down:
I had my VM on the same bridge that TrueNAS was using. I could ping the VM from an external client.
I created a bond using both the eno1 and eno2 interfaces, but neither of them is physically connected.
I moved the VM’s NIC over to the bond and expected the pings to stop (as it was now set to use a bond, on NICs that have no physical connectivity).
The VM is still pingable…
The only interface that remains on this subnet is the TrueNAS management IP bridge. So for some reason, the VM is still sending and receiving traffic via this bridge, and NOT via the bond?
For the sake of testing, I have removed the TrueNAS management bridge, and just assigned the IP directly to the physical interface. The VM is now no longer pingable.
With one of the eno interfaces connected, the VM still does not ping.
Does the VM need to be stopped and started for the transition of network to take effect or something daft like that?
EDIT: Just stopped and started the VM, and yes, it does seem that is the case.
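In case it helps anyone else debugging the same thing, the way to check where a running VM’s NIC has actually landed is to look at the tap device Linux creates for it (a sketch; vnet0 is just an example name, SCALE picks its own):

  # List every bridge port and its master; the VM's tap/vnet device should
  # appear under the interface it is really using
  bridge link show

  # Or inspect a specific tap device (vnet0 is an example name) and look
  # at its "master"
  ip link show vnet0

If the tap still shows the old bridge as its master, that would explain why the pings kept working until the VM was restarted.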
Look into the possibility that your motherboard is doing this.
I remember having to change something in the BIOS or BMC regarding NIC bonding/bridging/virtual switching for IPMI so that I could get all of my onboard ports to act independently on my ASRock board. Your Supermicro(?) could be doing something similar. I would think it strange that your PCIe card would get wrapped up in that, but it may be a place to look.
I wish I could be more specific, but it’s been 2 years since I built my system.