VLAN connectivity issues from VM to TrueNAS

I’m scratching my head here in confusion. It’s been that kind of day.

I kind of messed up my Nextcloud server today in an attempt to get VLANs working. Long story short, I gave up a while ago and I’m rebuilding.

The issue I’m running into now is trying to pass NFS to a VLAN.

On TrueNAS I’ve configured a management interface and two VLANs. One is my internal (office) VLAN; the other is public. Yes, I’m going to be hosting Nextcloud publicly behind a proxy.

The plan is (and tell me if it’s a dumb plan, I’m no network engineer):
Public traffic comes into the public VLAN, which is firewalled off from literally everywhere else.

SSH is allowed out of my management VLAN into the server, and NFS is served on the public VLAN for the file storage Nextcloud needs.

So in the NFS service config I’ve selected the public VLAN to bind to.
My virtual machine is given an address from the public VLAN.

And I have disabled and removed all firewall rules and allowed all traffic.

I can ping from the VM to my desktop on a different VLAN.
I can ping from my desktop to the VM and to TrueNAS (both the interface addresses and the NFS address).

I can ping nothing on TrueNAS from my VM. I can, however, ping my desktop (on a different VLAN).

My head is starting to spin at this point.

If there is an easier/better/more secure way of serving the data storage to this VM, I’m all ears. From my reading it sounded like this was better than allowing traffic in from the public VLAN to another VLAN to serve the data storage.

I think, assuming I am understanding the issue: you need a bridge for the VM to talk to TrueNAS. A VM attached directly to a physical interface generally can’t reach the host’s own IP on that interface; the usual fix is a bridge that holds the host IP, with the VM’s NIC as a member.

I JUST found another post on that same topic here, right as your post came in… Gonna go give that a try. I had assumed that I could simply connect the NFS share to the VLAN. Which I guess makes no sense…

Okay, so yeah… this just got messy again. Now I have no network access. Even with the bridge interface down in the VM, still no network.

Looking elsewhere I’ve come up with the following:

Created a bridge interface with no member ports
Gave it 1 IP address: 10.1.69.15
Tied that to the VM as a new NIC (ens4)

Tied that to the NFS service on TrueNAS

Restarted the VM and NFS service.

Upon reboot, I’ve lost all network connectivity on the primary adapter (ens3) in the VM.

Updating netplan, I’ve configured it as follows:

network:
  version: 2
  renderer: networkd
  ethernets:
    ens3:
      addresses:
        - 10.1.70.12/24
      routes:
        - to: default
          via: 10.1.70.1
      nameservers:
        addresses: [8.8.8.8]
    ens4:
      addresses:
        - 10.1.69.25/24

Tried my best to format this correctly, but I’m working through the SPICE remote console, as no SSH is available with the broken network connection.
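One quick sanity check for a two-NIC setup like this, using the addresses from this thread: the VM’s second NIC (10.1.69.25/24) and the TrueNAS bridge IP (10.1.69.15) have to land in the same subnet, or replies leave via the wrong interface. For plain /24 masks that just means the first three octets match — a crude sketch (anything other than /24 needs real mask math):

```shell
#!/bin/sh
# same_net: crude same-/24 check -- compares everything before the last octet.
# Only valid for /24 masks.
same_net() {
  [ "${1%.*}" = "${2%.*}" ]
}

same_net 10.1.69.25 10.1.69.15 && echo "ens4 and the TrueNAS bridge IP: same /24"
same_net 10.1.69.25 10.1.70.12 || echo "ens4 and ens3 are on different subnets, as intended"
```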

Okay… it works…

Note to self: check the order of adapters. Even specifying the boot order (ironically, adding the new adapter put it LOWER in the boot order than the initial adapter…) had them backwards in the VM. I ended up just swapping ens3 and ens4 around.
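A way to avoid the ens3/ens4 shuffle entirely: netplan can match each interface by MAC address and pin a name to it, so the addressing follows the hardware no matter what order the hypervisor enumerates the NICs. A sketch — the MAC addresses below are placeholders, substitute the real ones from `ip link` inside the VM:

```yaml
network:
  version: 2
  renderer: networkd
  ethernets:
    lan0:
      match:
        macaddress: "52:54:00:00:00:01"   # placeholder: MAC of the routed NIC
      set-name: lan0
      addresses:
        - 10.1.70.12/24
      routes:
        - to: default
          via: 10.1.70.1
      nameservers:
        addresses: [8.8.8.8]
    nfs0:
      match:
        macaddress: "52:54:00:00:00:02"   # placeholder: MAC of the NFS bridge NIC
      set-name: nfs0
      addresses:
        - 10.1.69.25/24
```

With this in place, even if the hypervisor reorders the adapters, each one still gets its intended address.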