I've done this, as have many others.
If you have such a specific need, and can accommodate the cards (free slots OK), I would ditch the switch. You don't need it. Get a 4-port 10G card second-hand off eBay for the server, or 2 x 2-port cards, and direct-connect each client to it. Chelsio T540 (SFP+) or Intel X710-T4 (10GBase-T; the lower-power variant if available, since 10GBase-T power draw is not trivial) are examples of what I'm thinking of.
That way you get full bandwidth on a 10G-only setup, rather than needing a 40G+ setup (quick sketch of the arithmetic below).
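A back-of-the-envelope in Python, purely illustrative (client count and link speed are assumptions), of why direct connect gets you there without 40G hardware:

```python
# Rough arithmetic: 4 clients direct-connected to the server vs. 4 clients
# sharing a single 10G uplink through a switch. Illustrative numbers only.

LINK_GBPS = 10           # per-port link speed (assumed)
CLIENTS = 4              # assumed number of clients

per_client_direct = LINK_GBPS            # each client has its own 10G path
per_client_shared = LINK_GBPS / CLIENTS  # all clients contend for one uplink

print(f"direct connect: {per_client_direct} Gbit/s per client")    # 10
print(f"shared uplink:  {per_client_shared} Gbit/s per client")    # 2.5
# Matching direct connect through a switch would need a 40G server uplink.
```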
As to details, those are the only 2 cards I'd really consider. You don't need anything esoteric; solid commodity cards are plenty good enough, and both product lines have plenty of used units on sale cheap that can almost always be trusted. Both have solid *nix and Windows drivers. Chelsio's Windows drivers used to be best replaced by the downloads from Chelsio's own website to get the fullest configurability; I don't know if that's still true.
Your next job is to decide copper RJ45 (10GBase-T), copper direct attach (DAC) or optical links (usually 10GBase-SR).
RJ45 used to cost more but that's come down a lot. It uses considerably more energy (hence heat when you put 2 or 4 ports on a card; rough power numbers in the sketch after the three options). But it's very common, hence cables are cheap, and crucially you can skip the cost of 8 SFP+ modules that way: the cables plug directly into the sockets as usual. In future you can also expand with a cheap used 10G RJ45 switch. It's exactly what you're used to, and will negotiate down to lower speeds if needed.
DAC is suggested by others; I don't have experience with it. I went for the third option back when 10GBase-T cost more; for my latest build I went straight to 10GBase-T because I had existing Cat6A cables.
Optical cabling was new to me but turned out easy. It needs SR SFP+ modules to plug into the SFP+ network cards, and OM2 (or better) multimode cables. As they're optical you can't cut them to length, so buy long and curl up the excess. Very low power, very reliable.
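For a rough feel of the power difference between the three media, a small sketch; the per-port wattages and electricity price are assumptions, so check the datasheets for your actual cards and modules:

```python
# Ballpark per-port power draw for the three media and what it adds up to
# across 8 ports running 24/7. All figures below are assumptions.

WATTS_PER_PORT = {
    "10GBase-T (RJ45)": 4.0,   # assumed; real cards vary roughly 2-5 W/port
    "SFP+ DAC":         0.5,   # assumed
    "SFP+ SR optical":  1.0,   # assumed
}
PORTS = 8                      # e.g. 4 on the server + 1 in each client
KWH_PRICE = 0.30               # assumed electricity price per kWh
HOURS_PER_YEAR = 24 * 365

for medium, watts in WATTS_PER_PORT.items():
    kwh_per_year = watts * PORTS * HOURS_PER_YEAR / 1000
    print(f"{medium:18s} ~{watts * PORTS:4.1f} W total, "
          f"~{kwh_per_year:5.1f} kWh/yr, ~{kwh_per_year * KWH_PRICE:5.2f}/yr")
```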
Gotchas and stuff? Honestly not many. The main one is, when considering card cost, factor in module and cable cost, not just the card; some cards need modules, some don't. Also, if you care about it, factor in the cost of the extra power for copper vs optical. Last and maybe most important, if you DIY your own network cables, be aware 10G is much more sensitive to interference: untwisted length at the plug, all cores close to the same length (nanoseconds count), quality of the connection, shielding at the plug, etc. Research what's needed for it (not difficult, like 1G but with a bit more care). If unsure, buy rather than make, but you should be able to make 10G cables if you can make 1G ones.
Also look at the card options and tuning. Key ones are queue lengths, jumbo frames, buffers, interrupt coalescing, and offloading. Ask if you need them explained. Basically they ensure the NIC does all the work it can, as 10G is demanding on the host, especially 4-port 10G (rough packet-rate arithmetic below).
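To see why those options matter, the raw packet rate at 10G is the thing; a minimal sketch (plain arithmetic, standard Ethernet framing overheads assumed):

```python
# Packet rate at 10G line rate for standard vs. jumbo frames.
# 38 bytes = Ethernet header + FCS + preamble + inter-frame gap.

LINE_RATE_BPS = 10e9                 # 10 Gbit/s, one port

for mtu in (1500, 9000):             # standard vs. jumbo frames
    frame_bits = (mtu + 38) * 8
    pps = LINE_RATE_BPS / frame_bits
    print(f"MTU {mtu}: ~{pps / 1e6:.2f} M packets/s per port")

# ~0.81 M pkt/s at MTU 1500 vs ~0.14 M pkt/s at MTU 9000, per port; a 4-port
# card flat out is ~3 M pkt/s. Without interrupt coalescing and offloads the
# host takes a hit per packet, which is exactly what you want to avoid.
```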
Last, for the server, look carefully at your system spec. Feeding a 40G pipeline will pull data fast off the pool, and there's a big gotcha there: the CPU has to checksum it all, pretty much at 40G rate, in real time. Some systems can, some hit CPU starvation. Your pool (SSDs for special vdevs I hope, and fast enough data vdevs) needs to be able to feed the pipe. And so on. (And if your pool has dedup enabled, say so; that's going to ramp the system demand up 10x.)
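To put a rough number on the checksumming load, a minimal sketch; the per-core checksum speed below is an assumption, not a measurement (OpenZFS benchmarks its fletcher4 implementations itself, if I recall correctly, so substitute your own system's figure):

```python
# What "checksum it all at 40G" means for the CPU. The checksum speed is an
# assumed figure; replace it with your system's measured rate.

PIPE_GBIT = 40
pipe_gb_per_s = PIPE_GBIT / 8            # 40 Gbit/s = 5 GB/s of data
ASSUMED_CHECKSUM_GBS = 10                # assumed fletcher4 speed per core, GB/s

cores_for_checksum = pipe_gb_per_s / ASSUMED_CHECKSUM_GBS
print(f"{PIPE_GBIT} Gbit/s = {pipe_gb_per_s:.0f} GB/s to verify in real time")
print(f"~{cores_for_checksum:.2f} cores of checksumming alone at the assumed rate")
# Compression, encryption, parity and (especially) dedup all add per-byte
# CPU cost on top of this.
```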
Allocate plenty of RAM to networking and ZFS, and maybe give it larger queues. 10G can empty a buffer in milliseconds if either end stalls.
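To make "milliseconds" concrete, a small sketch with assumed buffer sizes:

```python
# How long a buffer lasts at 10G line rate if the other side stalls.
# Buffer sizes are illustrative assumptions.

LINE_RATE_BYTES_PER_S = 10e9 / 8          # 10 Gbit/s in bytes per second

for buf_mib in (4, 16, 64):               # assumed buffer sizes
    drain_s = buf_mib * 1024**2 / LINE_RATE_BYTES_PER_S
    print(f"{buf_mib:3d} MiB buffer drains in ~{drain_s * 1000:.1f} ms at 10G")

# Roughly 3 ms, 13 ms and 54 ms respectively: if the disks or CPU stall for
# longer than that, the pipe runs dry and throughput collapses.
```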
And, critically, CPU starvation, or disks and checksumming that can't keep up, can starve normal network processing, causing the TCP window to plummet to zero and killing not just the data links but SSH and web UI connections too. If you see a pattern where data is fine for many minutes, then hits zero and stalls mid-session for "no obvious reason", and sessions get dropped, including SSH, web or iSCSI, that's what to suspect. If that happens, ask.