I’m in the process of upgrading my network from 10GbE to 100GbE.
I’m mainly gaming on my Windows 11 machine and would like to install my Steam/game libraries on the server and play straight off it, or maybe just use it as storage and copy games over to local NVMe when I want to play.
Mac-based equipment will be used for video editing and music production. Music production uses a lot of heavy VST libraries that I want to load quickly.
The plan is to direct-connect the Windows PC to the server via 100GbE.
Direct-connect the MacBook/Mini to the server via Thunderbolt 4.
Both systems will use a striped pool of the NVMe drives.
Server/NAS (Proxmox, or TrueNAS CORE or SCALE?)
CPU - Intel® Core™ i5-12500 Processor
Motherboard - Asus ProArt Z690-CREATOR WIFI
RAM - 64GB DDR4
12x 14TB SATA HDDs
4x 2TB PCIe 3.0 x4 NVMe SSDs
I’m missing 100GbE NICs and would like advice on purchasing them.
I’ve been looking at Mellanox ConnectX-4 EDR+100GbE cards, which I can buy locally here in Japan at two for USD 186, or Mellanox ConnectX-5 EDR+100GbE at two for USD 319.
Will either work fine with Windows 11?
Will either work fine with TrueNAS? Also, should I go for CORE or SCALE? I’ve read several posts saying CORE is faster, but SCALE can expand vdevs, which would be handy.
100GbE is fine for bandwidth, but latency will be poor.
No, that works. But I’m not aware of anything over 10 Gbit/s.
I would not recommend it. It’s way too complicated; the unnecessary layers make it more failure-prone and harder to troubleshoot.
For example, simply getting half-decent performance will be a challenge with Linux bridges. And it doesn’t even make sense hardware-wise IMHO, since the hardware each use case needs is quite different, and both will fight over the ARC.
Sure, you can work around these things, but you may hit a wall sooner than you think, for example bifurcation support on that consumer motherboard.
No one forces you to use a Linux bridge for communication at 100GbE; you can pass individual ports through to the VM with PCIe passthrough and connect them via a 100GbE switch.
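If you go the passthrough route on Proxmox, it looks roughly like this (VM ID and PCI address are placeholders, IOMMU has to be enabled first, and pcie=1 needs the q35 machine type):

# find the PCI address of the NIC port you want to hand to the VM
lspci -nn | grep -i mellanox
# pass that port through to VM 100
qm set 100 -hostpci0 03:00.0,pcie=1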
I have a Linux bridge running on a 25Gbps LAN interface shared between OPNsense and TrueNAS. I get 22-23 Gbps in iperf3/speed tests from TrueNAS. MTU 9000, no firewall, multiqueue enabled.
Haven’t tested 100GbE, but 25GbE was super easy to set up.
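Roughly what the test looks like, in case it helps (interface names and address are just examples):

# jumbo frames on the bridge member and the bridge itself
ip link set enp1s0f0 mtu 9000
ip link set vmbr1 mtu 9000
# server side
iperf3 -s
# client side, a few parallel streams to fill the link
iperf3 -c 192.168.10.2 -P 4 -t 30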
Core is dead (OK, Patrick, “mostly dead”). It’s been on life support for the last several years, and iX has finally confirmed they’re pulling the plug. Do not use it in new deployments.
I’m using a Chelsio T62100-CR on Windows 11 Pro and TrueNAS CORE (bare metal): direct connection, no switch between the two machines.
This is an old card (PCIe Gen3 x16, so you can’t reach 100GbE on both ports at once) with 2 QSFP28 ports.
It is easy to find updated drivers for both Linux (22-11-2024) and Windows (07-05-2024) on the Chelsio web site.
I can play Call of Duty from an iSCSI disk without problems (I’m not a pro player, so I can’t say whether there are latency issues).
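If you want to reproduce that, the usual way is a zvol used as a device extent for the iSCSI target; a minimal sketch from the shell (pool, name, and size are just examples, and the GUI wizard sets up the same thing along with the target/extent config):

# sparse 1 TiB zvol on pool "tank" to back the iSCSI extent
zfs create -s -V 1T tank/games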
Best Regards,
Antonio
Yeah, I’m starting to understand the limitations of the motherboard I have.
With both PCIe slots filled, they will run at x8. I don’t mind running the RTX 3090 at x8, as the difference is negligible.
The ConnectX-6 uses PCIe 4.0, and most of those cards are 4.0 x16. If I got a single-port card and ran it at 4.0 x8, would I still get 100GbE? Or is there a specific model I should get?
For now the PC will connect directly to the server. In the future I’ll look at buying the Mikrotik.
You will NOT get 100GbE because this is actually four 25GbE links in a single package, so any single client is basically limited to 25Gb/s, and your spinning drive array is not going anywhere near delivering 100 Gb/s.
I’m sure this is a dumb question (since I see a number of folks with the same goal), but what benefit do you expect from this? iSCSI is going to be a single-client system, and you’re never going to get the same kind of performance across a network (even 100 GbE) as you will with the storage being local. A striped pool isn’t going to give you any redundancy. So what’s the benefit of all the added cost?
I have 8 NVMe disks (4 mirrors) installed in 2 PCIe cards.
The goal is to have a small disk in each computer used only for boot and the OS.
All data/programs that must be accessed quickly go on an iSCSI disk.
Big, slow-access data goes on SMB and NFS.
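In ZFS terms that is a pool of four 2-way mirror vdevs, roughly like this (device names are placeholders):

zpool create fast mirror nvd0 nvd1 mirror nvd2 nvd3 mirror nvd4 nvd5 mirror nvd6 nvd7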
But now I’m trying to learn how TrueNAS and fiber connections work.
Best Regards,
Antonio
I get the objective, I guess, but why? You’re adding complexity, you’re adding (fairly significant) cost, and you’re reducing performance. What do you get in return? As far as you’ve described, you only have two clients, and very distinct sets of data for each. It seems a much simpler and safer course of action would be to simply put the drives in the respective computers and use them locally.
There really isn’t anything unique to TrueNAS in that regard–use a decent NIC (Mellanox is OK; Intel or Chelsio are better) and you’re good to go. No doubt there’d be some tuning that could be done, but that gets you set up.
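If you do want to chase the last bit of throughput later, the usual knobs are jumbo frames and larger TCP buffers; on SCALE that is something like the following (interface name and values are ballpark examples, not gospel):

ip link set enp65s0 mtu 9000
sysctl -w net.core.rmem_max=67108864
sysctl -w net.core.wmem_max=67108864
sysctl -w net.ipv4.tcp_rmem="4096 87380 67108864"
sysctl -w net.ipv4.tcp_wmem="4096 65536 67108864"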
Of course, you’ll need appropriate optics and fiber patch cables, but there’s nothing unique to TrueNAS in that regard. A good source for both is fs.com.