VLAN passthrough to VM

I have a bunch of VLANs, everything working correctly, the configuration is like:

Physical interfaces x 2 → bond0
bond0 → vlan1 → br1
bond0 → vlan2 → br2
bond0 → vlan3 → br3
and so on…
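For reference, the layering above comes out roughly like this in iproute2 terms (TrueNAS Scale builds it from the UI; the interface names and VLAN IDs here are illustrative, not my exact config):

```shell
# Hypothetical sketch: two NICs bonded, one 802.1Q sub-interface per VLAN,
# each sub-interface enslaved to its own bridge.
ip link add bond0 type bond mode 802.3ad
ip link set eth0 master bond0
ip link set eth1 master bond0

# VLAN 10 → br1 (repeat per VLAN)
ip link add link bond0 name bond0.10 type vlan id 10
ip link add br1 type bridge
ip link set bond0.10 master br1
ip link set bond0.10 up
ip link set br1 up
```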

So I have an IP on one of the bridges for the TrueNAS Scale box itself (its own services, web UI, etc.); the rest of the bridges have no IP addresses and are allocated to VMs via virtual NICs, so each VM can interact with its desired VLAN.

So far so good.

Now I want to do something slightly odd (it's really only for failover, so I know it's weird and not a production config): I want to nest a hypervisor by installing Proxmox VE in a VM, then host some VMs inside it. Like I said, it'll really just be a (usually shut-down) DR facility, so I could restore some Proxmox backups onto it and fire the VMs up if the actual Proxmox server failed. It's just a home LAN, so this is a satisfactory DR solution for me.

Is it best to:

  • Pass all the bridges through via lots of vNICs, each carrying one VLAN, into the Proxmox VM. I.e. Proxmox sees untagged traffic as if it were on physically separate NICs.

  • Give the Proxmox VM a single vNIC on br0, a bridge attached directly to the bond0 LAG. I.e. Proxmox sees tagged traffic on what it thinks is a single NIC.
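For the second option, the nested Proxmox would treat its single vNIC as a trunk port; the usual way to do that is a VLAN-aware bridge in Proxmox's `/etc/network/interfaces`. A sketch, assuming the vNIC appears as `ens18` and management lives on VLAN 10 (names and addresses are illustrative):

```
auto vmbr0
iface vmbr0 inet manual
    bridge-ports ens18
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094

# Management IP on a tagged sub-interface of the bridge
auto vmbr0.10
iface vmbr0.10 inet static
    address 192.168.10.5/24
    gateway 192.168.10.1
```

Nested guests would then get their VLAN tag set per-vNIC in the Proxmox VM config, same as on a physical trunk.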

I prefer the second solution, but before I try it I'm wondering whether anyone has any input. Has anyone done it before? The main reason I'm reluctant is that I can't quite think through what could go wrong with it. It SEEMS sane at first glance: both the TrueNAS Scale box and the nested Proxmox simply tag their traffic and put it onto the physical interfaces. But is there something I haven't thought about that would break stuff?
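If I do try option 2, one quick sanity check I can run from inside the nested Proxmox is to watch whether 802.1Q-tagged frames actually arrive on the trunked vNIC (interface name is again just illustrative):

```shell
# Inside the nested Proxmox VM: -e prints link-level headers so VLAN tags
# are visible; the "vlan" filter matches only tagged frames.
tcpdump -i ens18 -e -nn vlan
```

If nothing shows up while tagged traffic is known to be flowing, the tags are presumably being stripped or dropped somewhere in the TrueNAS bridge/vNIC path before they reach the VM.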