Another 100GbE journey

I’m in the process of upgrading my network from 10GbE to 100GbE.

I mainly game on my Windows 11 machine and would like to install my Steam/game libraries on the server and play off it, or maybe just use it as storage and copy games over to local NVMe when I want to play.

Mac-based equipment will be used for video editing and music production. Music production uses a lot of heavy VST libraries that I want fast access to.

The plan is to direct-connect the Windows PC to the server via 100GbE, and direct-connect the MacBook/Mini to the server via Thunderbolt 4.

Both systems will use a striped pool of the NVMe drives.
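
For reference, in ZFS terms that would be something like the following from a shell (a rough sketch only; the pool name and device names are placeholders, and on TrueNAS the pool would normally be created through the web UI instead):

# stripe the four NVMe drives into a single pool - fast, but no redundancy at all
zpool create -o ashift=12 nvmepool nvme0n1 nvme1n1 nvme2n1 nvme3n1
zpool status nvmepool

A stripe maximises capacity and throughput, but a single failed drive takes the whole pool with it, so anything that matters needs to live elsewhere too.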

I have the following hardware:

Gaming PC (Windows 11):
CPU - AMD Ryzen™ 9 7950X
Motherboard - ProArt X670E-CREATOR WIFI
RAM: G.Skill Trident Z5 64 GB DDR5-6400

Mac Mini M4 & MacBook Pro M3

Server/NAS (Proxmox, TrueNAS Core or Scale?)
CPU - Intel® Core™ i5-12500 Processor
Motherboard - Asus ProArt Z690-CREATOR WIFI
RAM: 64GB DDR4
12× 14TB SATA HDD
4× 2TB PCIe 3.0 x4 NVMe

I’m missing 100GbE NICs and would like advice on purchasing them.

I’ve been looking at Mellanox ConnectX-4 EDR+100GbE cards, which I can buy locally here in Japan at two for USD 186, or Mellanox ConnectX-5 EDR+100GbE at two for USD 319.

  1. Will either work fine with Windows 11?
  2. Will either work fine with TrueNAS? Also, should I go for Core or Scale? I’ve read several posts saying Core is faster, but Scale has the ability to expand vdevs, which would be handy.
  3. Windows will connect through iSCSI for the Steam library.
  4. I will try the Mac through iSCSI, otherwise SMB.

Which one is it?
To offload games or play older games over iSCSI, TrueNAS is fine.
For modern games, latency is way too high compared to a local NVMe.

Not gonna work with TrueNAS. TrueNAS does not support Thunderbolt.

TrueNAS is a great NAS and a not so great Hypervisor.
Proxmox is a great Hypervisor and a not so great NAS.

This mobo has only 8 SATA ports, not 12.

Yes.

Core seems to be currently faster when it comes to SMB.

Yes.

macOS has no iSCSI support, so I would go with SMB or NFS.
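
If it ends up being SMB, mounting the share from Terminal over the direct link is one command (a sketch only; the IP, share name, user and mount point are placeholders):

mkdir -p ~/nas-audio
mount_smbfs //user@10.10.10.1/audio ~/nas-audio    # prompts for the password, then the share behaves like a local folder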

I was hoping that with 100GbE I could play modern games through iSCSI.

Even with IP over Thunderbolt?

Sorry, I will run TrueNAS on Proxmox.

The server has a backplane the HDDs are connected to via HBA.

What’s EDR+?

The ConnectX-4 (non-Lx) version supports dual-port 100GbE.

I noticed you have consumer-grade motherboards, plus presumably a GPU and at least 2 NVMe drives.

Did you bother to check whether you would have enough PCIe lanes for 100GbE? PCIe 3.0 would require x16, and PCIe 4.0 x8, per port.
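
Rough numbers: a PCIe 3.0 lane carries about 0.985 GB/s, so x8 is roughly 63 Gbit/s (not enough for a 100GbE port) and x16 roughly 126 Gbit/s; PCIe 4.0 doubles that, which is why 4.0 x8 is enough. You can check what a slot actually negotiated with lspci (a sketch; the 01:00.0 address is just an example):

lspci | grep -i ethernet                               # find the NIC's bus address
sudo lspci -s 01:00.0 -vv | grep -E 'LnkCap|LnkSta'    # compare maximum vs. negotiated speed/width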

What about switching? Where are you going to plug 100GbE into?
MikroTik does make cheap 100GbE switches, though.

100GbE is fine for bandwidth, but latency will be poor.

No, that works. But I am unaware of anything over 10 Gbit/s.

I would not recommend it. It’s way too complicated; the unnecessary layers make it more failure-prone and more complex to troubleshoot.
For example, simply getting half-decent performance will be a challenge with Linux bridges. And it doesn’t even make sense hardware-wise IMHO, since the hardware each needs is quite different. And both will battle for ARC.
Sure, you can work around these things, but you could run into a wall sooner than you think, for example bifurcation support on that consumer motherboard.

If you still want to go down that road: Yes, You Can (Still) Virtualize TrueNAS

This is completely false information.

  1. No one forces you to use a Linux bridge for 100GbE traffic; you can PCIe-passthrough individual ports and connect them via a 100GbE switch.

  2. I have a Linux bridge running on a 25 Gbps LAN interface shared between OPNsense and TrueNAS. I get 22-23 Gbps iperf3/speedtest speed from TrueNAS. MTU 9000, no firewall, multiqueue enabled.
    I haven’t tested 100GbE, but 25GbE was super easy to set up (see the sketch after the test results below).

# bin/speedtest -s 43030

   Speedtest by Ookla

      Server: Init7 AG - Winterthur (id: 43030)
         ISP: Init7
Idle Latency:     1.44 ms   (jitter: 0.08ms, low: 1.42ms, high: 1.70ms)
    Download: 23453.57 Mbps (data used: 23.0 GB)
                  3.30 ms   (jitter: 3.67ms, low: 1.13ms, high: 26.84ms)
      Upload: 22000.78 Mbps (data used: 22.7 GB)
                  1.25 ms   (jitter: 0.11ms, low: 1.08ms, high: 1.94ms)
 Packet Loss:     0.0%
iperf3 -c speedtest.init7.net -P 8

Connecting to host speedtest.init7.net, port 5201
[SUM]   0.00-10.00  sec  25.1 GBytes  21.6 Gbits/sec  18819             sender
[SUM]   0.00-10.00  sec  25.1 GBytes  21.6 Gbits/sec                  receiver
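
For what it’s worth, both approaches are only a couple of commands on the Proxmox host (a sketch; the VM ID 100, the PCI address, the bridge name and the queue count are assumptions for illustration):

# option 1: pass one NIC port straight through to the TrueNAS VM
qm set 100 -hostpci0 0000:01:00.0

# option 2: share the port via a Linux bridge with jumbo frames and multiqueue
# (also set "mtu 9000" on the vmbr0 stanza and its physical port in /etc/network/interfaces)
qm set 100 -net0 virtio,bridge=vmbr0,queues=8,mtu=9000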

Core is dead (OK, Patrick, “mostly dead”). It’s been on life support for the last several years, and iX has finally confirmed they’re pulling the plug. Do not use it in new deployments.


Hello,

I’m using a Chelsio T62100-CR on Windows 11 Pro and TrueNAS Core (bare metal), with a direct connection (no switch between the two machines).
This is an old card (PCIe Gen3 x16, so you can’t reach 100GbE on both ports at once) with 2 QSFP28 ports.
It is easy to find updated drivers for both Linux (22-11-2024) and Windows (07-05-2024) on the Chelsio website.
I can play Call of Duty from an iSCSI disk without problems (I’m not a pro player, so I can’t say whether there are latency problems).
Best Regards,
Antonio

I’m using an RTX 3090, which is PCIe 4.0 x16.

Yeah, I’m starting to understand the limitations of the motherboard I have.

With both PCIe slots filled, they will run at x8. I don’t mind running the RTX 3090 at x8, as the difference is negligible.

As for the ConnectX-6, which uses PCIe 4.0 (most of them are 4.0 x16): if I got a single-port card and ran it at 4.0 x8, would I still get 100GbE? Or is there a specific model I should get?

For now the PC will connect directly to the server. In the future I’ll look at buying the MikroTik.

You will NOT get 100GbE because this is actually four 25GbE links in a single package, so any single client is basically limited to 25Gb/s, and your spinning drive array is not going anywhere near delivering 100 Gb/s.


What’s this obsession with 100GbE?

You’re running very average hardware in terms of CPU and connectivity, but want to have 100GbE? What’s the point?

You can buy a ConnectX-4 Lx dual-port 25GbE card for $50 on eBay, and that will be plenty of bandwidth for your hardware.

If you really want 100GbE, I would upgrade to EPYC or Threadripper first.


I’m sure this is a dumb question (since I see a number of folks with the same goal), but what benefit do you expect from this? iSCSI is going to be a single-client system, and you’re never going to get the same kind of performance across a network (even 100 GbE) as you will with the storage being local. A striped pool isn’t going to give you any redundancy. So what’s the benefit of all the added cost?

This is my experience:

 .\iperf3.exe -c 192.168.252.10 -l 1M -P 1
Connecting to host 192.168.252.10, port 5201
[  4] local 192.168.252.20 port 63299 connected to 192.168.252.10 port 5201
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-1.00   sec  3.18 GBytes  27.3 Gbits/sec
[  4]   1.00-2.00   sec  3.15 GBytes  27.1 Gbits/sec
[  4]   2.00-3.00   sec  3.15 GBytes  27.1 Gbits/sec
[  4]   3.00-4.00   sec  3.15 GBytes  27.0 Gbits/sec
[  4]   4.00-5.00   sec  3.14 GBytes  27.0 Gbits/sec
[  4]   5.00-6.00   sec  3.10 GBytes  26.7 Gbits/sec
[  4]   6.00-7.00   sec  3.13 GBytes  26.9 Gbits/sec
[  4]   7.00-8.00   sec  3.13 GBytes  26.9 Gbits/sec
[  4]   8.00-9.00   sec  2.99 GBytes  25.7 Gbits/sec
[  4]   9.00-10.00  sec  2.99 GBytes  25.7 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-10.00  sec  31.1 GBytes  26.7 Gbits/sec                  sender
[  4]   0.00-10.00  sec  31.1 GBytes  26.7 Gbits/sec                  receiver

iperf Done.

With a big buffer length (-l) I can go over 25 Gb/s.

Best Regards,
Antonio

I have 8 NVMe disks (4 mirrors) installed on 2 PCIe cards.
The goal is to have a small disk in each computer used only for boot and the OS.
All data/programs that must be accessed fast go on an iSCSI disk.
Big, slow-access data goes on SMB and NFS.
But for now I’m trying to learn how TrueNAS and fibre connections work.
Best Regards,
Antonio

Try replacing -l 1M with -P 4 or 8.

A 1M buffer is pointless in real-life applications, but multiple connections are a standard thing with SMB or NFS. Are you using MTU 9000?
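
Something along these lines, reusing the 192.168.252.10 address from your earlier run; the ping is just a quick way to confirm jumbo frames actually pass end to end (Linux ping syntax shown, the flags differ on Windows and FreeBSD):

iperf3 -c 192.168.252.10 -P 8        # 8 parallel streams, default buffer
iperf3 -c 192.168.252.10 -P 8 -R     # same test, with the server sending
ping -M do -s 8972 192.168.252.10    # 8972 bytes payload + 28 bytes headers = 9000, must not fragment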

I get the objective, I guess, but why? You’re adding complexity, you’re adding (fairly significant) cost, and you’re reducing performance. What do you get in return? As far as you’ve described, you only have two clients, and very distinct sets of data for each. It seems a much simpler and safer course of action would be to simply put the drives in the respective computers and use them locally.

There really isn’t anything unique to TrueNAS in that regard–use a decent NIC (Mellanox is OK; Intel or Chelsio are better) and you’re good to go. No doubt there’d be some tuning that could be done, but that gets you set up.

Of course, you’ll need appropriate optics and fiber patch cables, but there’s nothing unique to TrueNAS in that regard. A good source for both is fs.com.


Going above -P 4 doesn’t seem to change much:

.\iperf3.exe -c 192.168.252.10  -P 4
...
- - - - - - - - - - - - - - - - - - - - - - - - -
[  4]   8.00-9.00   sec  1.83 GBytes  15.7 Gbits/sec
[  6]   8.00-9.00   sec  1.82 GBytes  15.7 Gbits/sec
[  8]   8.00-9.00   sec  1.80 GBytes  15.5 Gbits/sec
[ 10]   8.00-9.00   sec  1.79 GBytes  15.4 Gbits/sec
[SUM]   8.00-9.00   sec  7.24 GBytes  62.2 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[  4]   9.00-10.00  sec  1.81 GBytes  15.5 Gbits/sec
[  6]   9.00-10.00  sec  1.80 GBytes  15.5 Gbits/sec
[  8]   9.00-10.00  sec  1.78 GBytes  15.3 Gbits/sec
[ 10]   9.00-10.00  sec  1.77 GBytes  15.2 Gbits/sec
[SUM]   9.00-10.00  sec  7.16 GBytes  61.5 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-10.00  sec  18.6 GBytes  16.0 Gbits/sec                  sender
[  4]   0.00-10.00  sec  18.6 GBytes  16.0 Gbits/sec                  receiver
[  6]   0.00-10.00  sec  18.5 GBytes  15.9 Gbits/sec                  sender
[  6]   0.00-10.00  sec  18.5 GBytes  15.9 Gbits/sec                  receiver
[  8]   0.00-10.00  sec  18.4 GBytes  15.8 Gbits/sec                  sender
[  8]   0.00-10.00  sec  18.4 GBytes  15.8 Gbits/sec                  receiver
[ 10]   0.00-10.00  sec  18.3 GBytes  15.7 Gbits/sec                  sender
[ 10]   0.00-10.00  sec  18.3 GBytes  15.7 Gbits/sec                  receiver
[SUM]   0.00-10.00  sec  73.9 GBytes  63.5 Gbits/sec                  sender
[SUM]   0.00-10.00  sec  73.9 GBytes  63.5 Gbits/sec                  receiver

iperf Done.

And yes, I’m using MTU 9000 (“Jumbo Packet 9014” in the Chelsio driver).

Best Regards,
Antonio

So a 25GbE card will be the same speed as a 100GbE card for a single client connection?

What?

You’re now pushing 63.5 Gbps instead of 26.7; that’s a 2.4x speedup.

No, a 100GbE card will achieve 100GbE if both the client and the receiver are able to handle it; a 25GbE card will only do 25GbE.