The X540-T2 can be found incredibly cheap secondhand via places like eBay and is more than capable.
You’re going to have much more choice at that price point with SFP+, though (especially with regard to power efficiency). If you don’t have a specific reason not to be looking into it, I’d recommend that instead.
I just installed Scale on my previous pfSense server, which has an X550-T2 installed. No issues at all. I bought it refurbished for about $100, a bit more than you mentioned but totally worth it. I have two in my network, and not a single issue with FreeBSD or Linux.
Or use an SFP+/RJ45 adapter. Those bring their own issues to the table (especially if you’re using lots of them in a single switch), but they generally work pretty well. SFP+ remains the technically superior solution, but the cabling is definitely more expensive.
But as to the NICs, there are three solid recommendations: Intel, Chelsio, and Solarflare. Mellanox is considered to be a step down. Others will work, but those are the recommendations.
I have one of those adapters myself and it works well for what it is, but it was relatively pricey, and it runs so hot that the MikroTik switch I use it in specifically instructs you not to install two of them side by side.
In case it comes up: I would stay away from Aquantia. In my experience they have reliability issues.
Same here, and I’m still using one. Edit: and posting this gave me the metaphorical kick in the pants I needed to switch the bridge interface in my UGREEN NAS over to the Chelsio T420 rather than the onboard Aquantia NIC. We’ll see if that’s more stable.
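If anyone wants to do the same swap by hand, here’s a minimal iproute2 sketch. This assumes a Linux-based NAS with an existing bridge named br0, and the interface names are placeholders; a NAS UI may well manage this for you instead:

```sh
# Detach the onboard Aquantia port from the bridge (interface names are examples)
ip link set enp2s0 nomaster
# Attach the Chelsio port in its place and bring everything up
ip link set enp1s0 master br0
ip link set enp1s0 up
ip link set br0 up
```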
I had two interfaces in a TerraMaster F4-424 Max and never had a single issue with them. Combined in an 802.3ad bond, they easily did close to 20 Gbps throughput.
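For anyone curious, the bond itself is nothing exotic. Here’s a minimal iproute2 sketch of an 802.3ad (LACP) bond; the interface names and address are placeholders, and the switch needs a matching LACP port-channel configured on its side:

```sh
# Create the bond in 802.3ad (LACP) mode
ip link add bond0 type bond mode 802.3ad
# Member interfaces must be down before they can be enslaved
ip link set enp1s0 down
ip link set enp2s0 down
ip link set enp1s0 master bond0
ip link set enp2s0 master bond0
ip link set bond0 up
# Example address; adjust for your network
ip addr add 192.168.1.10/24 dev bond0
```

Keep in mind that LACP hashes per flow, so a single transfer still tops out at one link’s speed; the ~20 Gbps figure is aggregate across multiple streams.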
Throughput was never the issue with my Aquantia NIC; the problem was that it would fall over and drop connections under heavy load. Not always, of course, but pretty often.
Just went through this journey. The converters are expensive IMHO: FS.com’s (cheap) 10GBASE-T SFP+ to copper modules are CAD $90+ each. For my setup I would have needed 12 of them, so it was way cheaper for me to buy new HPE NICs ($15 to $25 each; I needed 3 LOMs and 3 NICs). Also, using copper allowed me to stick with a copper-based 10Gig switch, which is much cheaper than SFP+ in the Cisco world.
I may have gone a little overboard, but overkill is underrated.
I am lucky enough to have a 10Gig 3850x-24 UPOE and thought it would be a good idea to run dual connections for iSCSI and dual LACP connections for access on three DL380 servers. ’Cause, you know, why not?
Yes, as I said before, I’ve been running a TerraMaster F4-424 Max with 2x Marvell AQtion NICs in an 802.3ad bond without a single problem. I’ve run SMB, Plex, Jellyfin, and MinIO for 3-4 months, and the speed has always been there.
My client is Windows on an ASUS Pro WS TRX50-SAGE WIFI with the same Marvell AQtion card. Not a single issue.