Direct network connection between CORE and Proxmox?

I’ve seen versions of this question before, but I’m not great at either networking or virtualization, so forgive me if I’m repeating something that’s well-covered elsewhere.

I currently have a TrueNAS CORE machine running on bare metal, hosting a couple of VMs and some jails; this has a 10G networking card in it. I’m now finalizing a new build, running Proxmox, with TrueNAS Scale running in a VM. This box has two built-in 10G ports. Going forward, I will be moving the CORE VMs and jail apps to Proxmox (converting the jail apps to Docker, containers, or something); I’ll be using the Scale VM for storage only, over NFS. All of the VMs/containers/Docker will be stored on a Scale NVMe pool, not on the Proxmox boot drive. After the migration, I’ll be using the CORE machine as a backup box only.

I’d like to connect the CORE machine to the Proxmox machine directly, with an Ethernet cable to the 10G ports on each. What is the process for setting up networking such that these boxes will talk to each other using this connection, rather than going out over the LAN? I believe it’s only the TrueNAS deployments that will need to communicate this way; i.e. since anything Proxmox is running will in fact be stored on the Scale VM, there won’t be any need for Proxmox itself to communicate with the CORE machine (but I may be misunderstanding something here).

This is my first time using Proxmox, so my understanding of how all this is meant to work is limited.

The problem essentially breaks down into two parts: how you make the physical connection between the two 10Gb NICs on CORE and Proxmox, and then how you present the Proxmox-side NIC to the SCALE VM.

The physical NICs may or may not need a crossover cable. Ethernet cables are normally wired to connect a client to a switch; if you connect two clients directly, then in theory you need a crossover cable or adapter to swap some of the RJ45 pairs. However, modern NICs (and 10Gb NICs are obviously reasonably modern) usually support auto MDI-X and detect this themselves, so a normal cable will work - but it will depend on your NICs.

Then, because the two ports are directly connected with no DHCP server on the (minimal) two-node network the cable creates, you will need to use static IP addresses. You can use any private IP range you don’t currently use elsewhere for these.
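As a concrete sketch (the subnet below is just an example; pick any private range you aren’t already using, and leave the gateway blank on both sides so normal traffic still goes out via your existing LAN interfaces):

```
# Hypothetical addressing for the point-to-point link:
#   CORE 10G port           -> 10.10.10.1/24  (set in the CORE web UI under Network > Interfaces)
#   SCALE side (see below)  -> 10.10.10.2/24  (set inside SCALE, however its NIC ends up presented)
# Once both ends are up, a quick sanity check from a shell on the SCALE side:
ping -c 3 10.10.10.1
```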

Finally, you need to decide how to connect the physical 10Gb Proxmox port to your SCALE VM. Given that it is temporary until you decommission your CORE instance, I would probably dedicate it to SCALE through PCI passthrough (which Proxmox does support), because that minimises any overhead inside Proxmox and gives the fastest throughput.
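For example, a rough sketch of the two options (the interface name, PCI address and VMID below are placeholders; check yours with `ip link`, `lspci` and `qm list`, and note that IOMMU/VT-d has to be enabled on the host before passthrough will work):

```
# Option 1: PCI passthrough of the spare 10G port to the SCALE VM (least overhead).
lspci -nn | grep -i ethernet        # find the 10G port's PCI address, e.g. 03:00.1
qm list                             # find the SCALE VM's ID, e.g. 101
qm set 101 --hostpci0 0000:03:00.1  # hand that port to the SCALE VM

# Option 2: a dedicated bridge on the Proxmox host (simpler, small virtio overhead).
# Add something like this to /etc/network/interfaces, then attach a second
# virtio NIC on vmbr1 to the SCALE VM. The bridge itself needs no IP address,
# since Proxmox itself doesn't need to talk over this link.
#   auto vmbr1
#   iface vmbr1 inet manual
#       bridge-ports enp3s0f0
#       bridge-stp off
#       bridge-fd 0
```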


They won’t; crossover cables haven’t been a thing since the days of so-called “Fast Ethernet” (i.e., 100 Mbit). Anything gigabit, or 10G over copper, will be fine with standard Cat cables. For fiber, you’d need to make sure the right strand goes to the right place.
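If you want to confirm what the link actually negotiated once the cable is in, something like this works (interface names are just examples; check `ip link` on Proxmox and `ifconfig` on CORE for yours):

```
# On the Proxmox host: look for "Speed: 10000Mb/s".
ethtool enp3s0f0
# On CORE (FreeBSD): look for a "media: ... 10Gbase-T <full-duplex>" line.
ifconfig ix0
```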

Quoting this for emphasis: if OP wants to connect the machines this way, the IP addresses on those interfaces need to be on a different subnet than is otherwise being used.

But really, I’m wondering why this is desired in the first place, unless it’s just a lack of 10G ports on a switch.


Likely that, or physical constraints.

I don’t think it is that unusual to have a direct link to facilitate fast direct transfers.

Perhaps his LAN switch is 2.5Gb or 1Gb, and perhaps a 10Gb switch would be a waste of money for normal use because all his other links are (say) only 1Gb.

But even if he does have a 10Gb switch, it can still make sense to keep the bulk transfer on a separate NIC, to avoid it impacting the responsiveness and throughput of normal traffic.

This is the bit I question. I mean, sure, I get it in principle. But if you have a 10G connection, what are the chances you’re going to come anywhere close to saturating it? An all-flash NAS probably could, if it were configured properly. But how often would that be happening?

But with that said, it is of course a perfectly valid configuration.


Maybe he has a proper ISP 10G connection too, so he saturates the network when he downloads (or uploads) something. First world problems :smiley:

In this specific situation of transferring terabytes of data from one NAS to another, the chances are actually pretty high.


Hi, OP here, thanks for the many useful responses and discussion.

Several reasons. First, I acknowledge that I don’t really need superfast everything; I have few users, I don’t have a supertuned array of NVMe drives, etc. But I’m trying to do things as well as I can.

At the moment, I don’t have a 10G switch, so the only way I can get 10G speeds is directly connecting the two machines. But also, as @Protopia suggested, the only way that I will need to use this is transferring directly between these two machines, so it seems sensible to avoid sending the traffic across the rest of the network. (No, even when I have 10G switching, I’m not going to be otherwise saturating the network, but that’s not the point.)

Meanwhile: I’m having a problem with the networking on the CORE box, but I’ll need to post about this later; I don’t have the time right now to fully explore what’s going on.