Another 100GbE journey

@bugacha @dan You make very good points. I was thinking in terms of running NVMe drives in one place so my PC and Mac can both access fast storage.

Very much the same use case as @Tony-1971.

The PC can have direct NVMe storage, but I would also like to give my Mac more fast storage.

The price difference between a 25GbE card and a 100GbE card is not too vast for me, so I was thinking of buying once and not needing to buy again in the future.

https://www.akitio.com/expansion/node-duo

…and other similar devices.

Probably you are right: with only 2 PCs it doesn’t make much sense. In return I have a “centralized” disk with all the pros of ZFS.

I’m trying to say that I’m not much of an expert in TrueNAS.
I’m also no expert in fiber connections (for example, right now I’m using multimode duplex fiber, which is fine for PCs that are close by, but it seems that single mode is better for high speed). But if I change the fiber I must change the transceivers too.

Best Regards,
Antonio

I’m aware of these devices, but I didn’t want to take up desk space with them. I already have a server rack and a NAS server, so I thought that putting NVMe drives in one place that my PC and Mac can both access would save money and space.

Both multimode and single mode fiber come in Simplex and Duplex variants.

Single mode fiber is not better for high speed: fiber is fiber, it carries light, and light travels at light speed, so the speed of both is identical.

The main differences between single mode and multimode fiber in real life are:

  1. Working distance - single mode fiber can transmit light over much longer distances
  2. Price of SFPs - because of the above, single mode is usually more expensive; not the fiber itself, but the transceivers
  3. Diameter of the cable and, most importantly, the bend radius

In 2025, the price difference starts to become significant above 25G, especially if you require more than 10 km of distance.

I use single mode fiber at home because I run 25Gbps at most and the price is identical, BUT single mode fiber comes in very small diameters, allowing me to run it through the wall tubes easily.

26.7 Gbps using .\iperf3.exe -c 192.168.252.10 -l 1M -P 1
63.5 Gbps using .\iperf3.exe -c 192.168.252.10 -P 4
So I think that something is not going well: 63.5 seems too low.
I see one core on the Windows client going up to 100% usage (with -P 4).

Best Regards,
Antonio

I’m not following. To me 26.7 is very low for a 100GbE connection.

Try -P 8

Also try both directions via -R
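
In case it helps, this is the kind of test matrix I would run, assuming the server side already has iperf3 listening on 192.168.252.10 as in the earlier runs (the -t and -w values are just illustrative starting points):

  # single stream, forward then reverse
  .\iperf3.exe -c 192.168.252.10 -t 30
  .\iperf3.exe -c 192.168.252.10 -t 30 -R

  # eight parallel streams to spread the work across more cores
  .\iperf3.exe -c 192.168.252.10 -t 30 -P 8
  .\iperf3.exe -c 192.168.252.10 -t 30 -P 8 -R

  # larger socket buffer, sometimes relevant at these speeds
  .\iperf3.exe -c 192.168.252.10 -t 30 -P 8 -w 4M

If a single reverse-direction stream pins one Windows core at 100%, that core is probably the limit rather than the link itself.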

I’ve found the MCX651105A-EDAT, which runs at PCIe 4.0 x8. This could fit the system, with the GPU only taking a little dip?

I have it ready now for the future, when I’m ready to upgrade to EPYC or Threadripper.

My understanding is that copy-protection mechanisms prevent games from being run out of SMB or NFS shares; iSCSI works because it presents as a local disk.
But I’m always dumbfounded to see people throwing iSCSI, and the corresponding costs, at the issue of hosting a game library on their NAS… :astonished:

Sure, makes sense as far as it goes (well, as much as any attempt at copy-protection makes sense, anyway).

Corresponding costs and limitations. It’s still going to be a single-client arrangement. And while for some purposes (e.g., VM storage) iSCSI makes sense, I just don’t see how it does here. Particularly when the pool’s going to be striped, thus losing many of the benefits of ZFS. And OpenZFS on macOS is reasonably mature, so whatever of those benefits would exist on a striped pool could still be achieved with local drives.

There are lots of ways of getting “fast additional storage” that don’t involve building a server and paying for 100 GbE networking (and especially for everything that would make 100 GbE work sensibly).

Nowadays 100GbE network cards are not very expensive, and it is also easy to find used ones (I buy them used on eBay). Good SSDs are expensive, and I’m not comfortable buying used ones.
Also, new switches are not too expensive (Mikrotik, QNAP), but it seems that they don’t support technologies like RDMA, etc.

Best Regards,
Antonio

Neither do I…
The games library would do just fine on the Windows machine where the games are played, using the 8 lanes that would otherwise go to a 100G NIC. One or two (refurbished) U.2 drives could provide the capacity rather than four M.2.

12*14 TB (which HBA?) is enough HDDs to warrant TrueNAS, and its GUI, rather than dealing with OpenZFS on the command line—especially if the storage may be used by two Macs and possibly the Windows PC as well. But then I’d prefer to see ECC RAM for the NAS :wink:

Never said so.

My argument was that you may run into bottlenecks before you reach 100 GBit/s, and you counter that with an example of you not reaching line speed (23,500) but only 22,000 Mbps?

Especially when considering that you can buy a 5 GB/s NVMe drive like the Kingston NV4 for under $100.

Well in theory you could work with cloned snapshots for multiple clients. Not that this makes any sense in 2025, but still.
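
A rough sketch of what that could look like, with made-up pool and dataset names (a games dataset on a pool called tank, one writable clone per client):

  # snapshot the master game library once
  zfs snapshot tank/games@golden

  # give each client machine its own writable clone of that snapshot
  zfs clone tank/games@golden tank/games-client1
  zfs clone tank/games@golden tank/games-client2

Each clone only consumes space for the blocks that diverge from the snapshot, so the overhead stays small until the clients start writing.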

Yes, 26.7 Gbps also doesn’t seem very good, but in this case I don’t understand the reason. Both CPUs (client and server) are well below 100% usage. It’s probably something related to “Tunables” in Core: I have read different articles but haven’t found a solution yet.
And as soon as I add -R to .\iperf3.exe, I have one core on Windows at 100% (even with -P 1).
Best Regards,
Antonio

I used iSCSI as I thought it would be the best option for speed, but let’s say in the future I do want centralized storage where two or three separate game machines can play games from the NAS, in which case I would use SMB.

I’m a software engineer, so having DBs, Docker containers, VMs, etc. in one storage location that I can access from the PC, Mac, etc. would also be nice. These things would be nice to have as an available option, and since 100GbE cards are not too expensive, I can over-provision.

Also, I chose striped because of performance, but if RAIDZ1 is not too much of a penalty then having one drive of parity would be nice.

And then the Macs can access the fast storage for video editing and music production.

The HBA is a Lenovo 430-16i (LSI-based), connected to the backplane that holds the 12x 14TB HDDs.

4x 2TB NVMe drives are connected to the motherboard.
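
For reference, the two pool layouts I’m weighing for those four NVMe drives would look roughly like this (device names are just placeholders):

  # striped: full capacity and speed of all four drives, no redundancy
  zpool create fast nvme0 nvme1 nvme2 nvme3

  # raidz1: one drive’s worth of parity, roughly three drives of usable capacity
  zpool create fast raidz1 nvme0 nvme1 nvme2 nvme3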

Using symbolic links, Windows can re-direct a local folder to a remote folder. Unless a program very specifically looks at every directory in the path and queries it to see if it is a reparse point and then looks at the target to see if it is truly a non-local drive, the program will never know the data isn’t on a local disk.
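
For example, a directory symlink created from an elevated Command Prompt (the share path here is made up):

  REM point a local games folder at a folder on the NAS
  mklink /D C:\Games\Steam \\truenas\games\Steam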

The semantics of this are such that even Explorer doesn’t see it. If you drag a file from a directory that really exists on your C: drive to a directory that ends up on a remote drive via a symlink, the file gets moved, not copied, because both source and destination appear to be on the same disk. But, the “move” eventually does the right thing and copies the data because deep inside of Windows, it knows the truth.

Even weirder is that if you now delete the file that is sitting on a remote system, you will not be told that you will delete it forever (due to no network recycle bin), because it will be moved (via copy/delete) to the C-drive Recycle Bin.

This network card has a QSFP56 connector… different from the QSFP28 used in my card. I’m not sure whether this is more or less common (if you want to buy second-hand transceivers, for example).
Best Regards,
Antonio

All QSFP is the same form factor… the only difference is the max speed the device can negotiate. Internally, QSFP+, QSFP28, QSFP56, and QSFP56-DD are different, but you can always plug a slower-speed transceiver into a faster device and it will run at the slower speed. You can sometimes plug a faster-speed transceiver into a slower device and have it work at the lower speed.

All share the 4x channel architecture, so a QSFP28 that advertises as 100Gbps is actually 4x 25Gbps channels. How data flows through those channels depends on every layer of the communication, so that some applications will be able to get 100Gbps in a single stream, while others will be limited to 25Gbps per stream.

Note that there is a QSFP28 variant with only 2 channels of 25Gbps each, but I believe it uses the exact same signaling method and wiring as the 4x 25Gbps with only 2 channels used.

iSCSI is block storage and demands mirrors for best performance (or for any semblance of performance to begin with, in addition to lots of RAM).
Raidz is well-suited to sharing large files through SMB or NFS.
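
To make the contrast concrete, a rough sketch with made-up names: an iSCSI extent is backed by a zvol, which you would normally put on (striped) mirrors, while an SMB or NFS share is a regular dataset that sits comfortably on raidz:

  # striped mirrors for block storage; the zvol is exported as the iSCSI extent
  zpool create blockpool mirror nvme0 nvme1 mirror nvme2 nvme3
  zfs create -s -V 2T blockpool/games-extent

  # raidz1 for large files shared over SMB/NFS
  zpool create filepool raidz1 sda sdb sdc sdd
  zfs create filepool/media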

Stripes would, if anything, give better performance, though of course with no redundancy. But certainly any form of parity RAID is a poor choice.