Hey. I’m new to TrueNAS. We have a spare iXsystems box in the office and I have been learning and playing around with it. I have a Linux VM running on it, but this VM couldn’t access the NFS datastore. I discovered I needed a bridged network interface, so I set that up and the VM can now communicate with the NFS store, but I have lost access to the management console.
What are my options for restoring access to the management console?
There are 4 NICs in this system.
I could hook up a KVM locally if it would help.
The VM running on the system CAN ping the old management IP.
For restoring access, your best option is to wait about 60 seconds. If you apply a change but don’t save it within 60 seconds, the system should revert to its previous networking state.
As for your way forward, I bet I know what happened. You created a bridge, added your management interface to it, and things stopped working, right?
Bridging is a little funky with interfaces and IPs. Once you’re back in, you have a couple of options.
The 1 NIC solution. You need to move your management IP address from your physical interface to the bridge you’ve created. Roughly speaking, the order of operations is as follows.
Turn off your VM (I’m guessing you bound it to your NIC, and that created a bridge automatically; we need to get rid of it).
Have a static IP already set for your management interface (this is doable with DHCP, but it’s easier with static).
Remove the IP from your physical interface.
Create bridge.
Add physical interface to bridge.
Add static IP to bridge.
Apply these settings all in one shot. It’ll think for a couple of seconds, but should come back.
Save the settings before the 60 sec timeout.
Go to your VM and change the network interface to be attached to the bridge now.
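The steps above can also be sketched from the TrueNAS shell using the middleware client, which is handy if you only have console access. This is a rough sketch, not a recipe: the interface name (`eno1`), bridge name (`br0`), and IP address here are placeholders — substitute your own, and double-check the payloads against your TrueNAS version before running anything.

```shell
# Sketch only -- interface names and addresses are examples, not yours.
# 1. Strip the IP from the physical NIC.
midclt call interface.update eno1 '{"aliases": []}'

# 2. Create the bridge with the physical NIC as a member,
#    and put the old management IP on the bridge itself.
midclt call interface.create '{"name": "br0", "type": "BRIDGE",
  "bridge_members": ["eno1"],
  "aliases": [{"type": "INET", "address": "192.168.1.50", "netmask": 24}]}'

# 3. Apply the pending changes (this is the "thinks for a couple seconds" step).
midclt call interface.commit

# 4. Save before the ~60 second rollback timer fires.
midclt call interface.checkin
```

The web UI’s Network page does all of this for you; the point of the sketch is just to show that “remove IP, create bridge, re-add IP, apply, save” is one atomic sequence.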
OPTIONAL: I’d recommend changing your VM’s network interface to virtio, which will give you 10G networking internally for your NFS.
This could also be done with 2 NICs if you want to separate the physical management interface from the VM traffic. In that case you won’t need to assign an IP to the second NIC used for VM traffic, but your NFS traffic will have to traverse your physical switch.
Or, for an advanced 2 NIC setup, you could have your management IP on one physical NIC and assign an NFS IP in a different subnet to the bridge. If you understand what I’m suggesting here, then you’ve fully wrapped your brain around what’s going on with the networking and you can set it up any way you like. There’s only one rule with TrueNAS: you can only have one interface on a given subnet at a time. So if your management interface is on 192.168.1.0/24, you can’t have another interface on that subnet; it’ll have to be something else.
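To make the one-interface-per-subnet rule concrete, here’s a small Python sketch that checks an addressing plan the way TrueNAS’s validation (roughly) does. The interface names are made up for illustration:

```python
import ipaddress

def overlapping_subnets(interfaces):
    """Return pairs of interfaces whose configured networks overlap.

    `interfaces` maps an interface name to its CIDR address string.
    TrueNAS rejects configs where two interfaces share a subnet;
    this mimics that check for planning purposes.
    """
    nets = {name: ipaddress.ip_interface(cidr).network
            for name, cidr in interfaces.items()}
    names = list(nets)
    return [(a, b)
            for i, a in enumerate(names)
            for b in names[i + 1:]
            if nets[a].overlaps(nets[b])]

# Management on the physical NIC, NFS on the bridge in a different subnet: OK.
print(overlapping_subnets({"eno1": "192.168.1.10/24", "br0": "10.0.10.10/24"}))
# -> []

# Two interfaces in 192.168.1.0/24: TrueNAS would reject this.
print(overlapping_subnets({"eno1": "192.168.1.10/24", "br0": "192.168.1.11/24"}))
# -> [('eno1', 'br0')]
```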
I discovered the BMC interface on the Gigabyte motherboard, launched the virtual local console view, then was able to edit the NIC config. So, I now have access to the TrueNAS management console!
I think I did exactly as you suggested… I created a bridge which was associated with my physical NIC, which had the management IP address assigned. It is a bit confusing to say the least, but this is why I’m here and learning. I hope you don’t mind me asking some follow-up questions…?
In the local console, it shows a list of interfaces with columns for “aliases” and “state.aliases”. What is the difference? Google isn’t offering a good explanation.
Must a bridge interface be associated with a physical interface? Or can it purely be virtual?
I have 4 NICs (2x1GbE & 2x10GbE). Can I set up LACP on the 2x1GbE for management only, then set up LACP again on the 2x10GbE for VM traffic?
If I understand correctly, with an LACP bond for VM traffic, the VM traffic will need to traverse the physical switch, which isn’t ideal. LACP isn’t essential, so perhaps there is a better way to use these two fast ports independently?
Not familiar with state.aliases; are you in the shell running commands, or running through the CLI menus? Generally the web UI is where you want to be making changes if you can get in there. The console UI is more for recovering things so you can bring the web UI back up.
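If I had to guess, `aliases` is the IP configuration you’ve saved in the database, while `state.aliases` is what’s actually live on the interface right now; when they differ, you have unapplied (or rolled-back) changes. You could compare them yourself from the TrueNAS shell with something like the following sketch (assumes `jq` is available, as it is on recent SCALE builds):

```shell
# Sketch: compare configured vs. live addresses per interface.
# "aliases" = saved config; "state.aliases" = what's on the NIC right now.
midclt call interface.query | jq '.[] | {name, configured: .aliases, live: .state.aliases}'
```

Note that `state.aliases` will also include link-local and link-layer entries that you never configured, so some difference between the two columns is normal.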
It can be purely virtual if you desire it that way.
Sure, that’s totally valid.
Honestly not sure on this. Also, I seem to remember LACP and VLANs having a limitation with bridging. And LACP is a pretty dated solution; it’s not as simple as 10+10=20. There’s a whole hashing algorithm involved, and any single connection can only go as fast as a single link. It works great on a corporate campus switch that’s aggregating hundreds or thousands of connections, but not so awesome in a small VM home lab. If you just want some link redundancy, though, it’s definitely a proven solution.
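To see why a single connection can’t exceed one link’s speed, here’s a toy Python sketch of a layer-3+4 style hash like the ones bonding drivers use to pick a member link. This is not the actual kernel algorithm (real drivers have configurable hash policies), just an illustration of the principle:

```python
import zlib

def pick_link(src_ip, dst_ip, src_port, dst_port, n_links):
    """Toy flow hash: map a 4-tuple onto one bond member.

    Real LACP/bonding hashes differ in detail, but share the property
    shown here: the same flow always lands on the same link.
    """
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}".encode()
    return zlib.crc32(key) % n_links

# One NFS connection (fixed 4-tuple) over a 2-link bond:
flow = ("192.168.1.50", "192.168.1.60", 2049, 51234)
links_used = {pick_link(*flow, n_links=2) for _ in range(1000)}
print(links_used)  # always a single link -- the flow never spreads across both
```

Many different flows will spread across the members statistically, which is why aggregation pays off with lots of clients but not with one big NFS stream.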
With ESXi, the better solution would be two uplinks on a single virtual switch, but I’m not sure the TrueNAS bridge is that clever. I think it’s a pretty dumb switch by modern standards, and two links would just create an Ethernet loop and trigger spanning tree (assuming your physical switch isn’t too dumb to support STP/RSTP).