10GBase-T: Best to avoid it if you can

This resource was originally created by user jgreco on the TrueNAS Community Forums Archive.

I recently wrote a resource on high-speed network performance tuning, and in it I commented “Do not try to use copper 10GBase-T”. @Elliot Dierksen asked about this, and I cranked out a pretty comprehensive post on the topic. Here it is in somewhat expanded form.

Elliot Dierksen said:

I am curious about the “Do not try to use copper 10GBase-T” comment in the resource (which was very helpful). I am sure that there are scars associated with that comment.

Actually not; I own a number of bits of gear with 10GBase-T ports, including a very nice X9DR7-TF+ board and some Dell PowerConnect 8024F switches. These were basically incidental acquisitions rather than something I deliberately sought out, and I generally use the ports as conventional 1G copper ports.

The most immediate arguments against 10GBase-T are:

  1. that it consumes more power than an equivalent SFP+ or DAC setup. That might seem like a shrug, except that once you get to the point of burning several extra watts per port on a 48-port switch, it becomes a meaningful ongoing operational expense. Newer estimates are two to five watts per 10GBase-T port, whereas SFP+ runs about 0.5-0.7 watts. It is also worth noting that in a data center environment, if you burn five extra watts in equipment, there is usually about a five to ten watt cost to provide cooling as well. The electrical costs for 10GBase-T add up; a rough worked example follows this list.

  2. that it experiences higher latency than an equivalent SFP+ or DAC setup. SFP+ latency is typically about 300 nanoseconds. 10GBase-T, on the other hand, uses PHY block encoding, so there is roughly a 3 microsecond step (perhaps 2.5 us more than the SFP+). This shows up as additional latency in 10GBase-T based networks, which is undesirable, especially in the context of this thread's topic, performance maximization. I'm sure someone will point out that it isn't a major hit. True, but it is there regardless.

  3. that people argue for 10GBase-T because they've already got copper physical plant. The problem is that this is generally a stupid argument. Unless you installed Cat7 years ago, your copper plant is unlikely to be suitable for carrying 10GBase-T at distance, and that janky old Cat5e or Cat6 needs to be replaced. Today's kids did not live through the trauma of the '90s, when we went from Cat3 to Cat5 to Cat5e as Ethernet evolved from 10 to 100 to 1000Mbps. Replacing physical plant is not trivial, and making 10GBase-T run at 100 meters from the switch is very rough: Cat6 won't cut it (only 55m); you need Cat6A or Cat7 and an installer with testing/certification gear, because all four pairs have to be virtually perfect, and a problem on any one pair can render the connection dead.
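
To put very rough numbers on points 1 and 2, here is a quick back-of-the-envelope sketch in Python. The per-port wattage and latency figures are the ones quoted above; the 48-port count, the cooling factor, and the $0.12/kWh electricity rate are my own illustrative assumptions, so plug in your own numbers.

```python
# Back-of-the-envelope math for points 1 and 2 above. Wattage and latency
# figures are the ones quoted in the list; port count, cooling factor, and
# electricity rate are illustrative assumptions only.

PORTS = 48
WATTS_10GBASE_T = 3.5      # midpoint of the 2-5 W/port estimate above
WATTS_SFP_PLUS = 0.6       # midpoint of the 0.5-0.7 W/port estimate above
TOTAL_FACTOR = 2.0         # equipment watts plus an equal amount of cooling
                           # (the low end of the cooling range above)
PRICE_PER_KWH = 0.12       # assumed electricity rate; adjust for your region

extra_watts = (WATTS_10GBASE_T - WATTS_SFP_PLUS) * PORTS * TOTAL_FACTOR
extra_kwh_per_year = extra_watts * 24 * 365 / 1000
extra_dollars_per_year = extra_kwh_per_year * PRICE_PER_KWH

print(f"Extra draw: {extra_watts:.0f} W")                  # ~278 W
print(f"Extra energy: {extra_kwh_per_year:.0f} kWh/year")  # ~2439 kWh/year
print(f"Extra cost: ${extra_dollars_per_year:.0f}/year")   # ~$293/year

# Latency penalty per 10GBase-T hop vs. SFP+/DAC, from the figures above
extra_latency_us = 3.0 - 0.3
print(f"Added latency: ~{extra_latency_us:.1f} us per hop, each direction")
```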

By way of comparison, fiber is very friendly. OM3 will take 10G to 300 meters and OM4 to 400 meters very efficiently. It's easy to work with, patch cables in assorted lengths are inexpensive to stock, and you can get it in milspec variants that are resistant to damage. In many cases you can run it well past the standards-specified maximum length.

On the flip side, 10GBase-T has the advantage of being a familiar sort of plug-and-play that generally doesn't require extra bits like SFP+ modules, and it may be easier to work with inside a rack.

Elliot Dierksen said:

I am starting to see more of my customers wanting to use 10GBase-T

I think the big driver for many people is the familiarity thing I just mentioned; they can wrap their heads around 10GBase-T because, at a superficial level, it feels very much like 1GbE. There's a lot of FUD that has slowly percolated through the industry over the last few decades about fiber and fiber installers, because terminating fiber in the field is specialist work that requires expensive gear and supplies. These days, however, you can often cheat and buy factory-terminated prebuilt assemblies that avoid field termination entirely. Very easy to work with.

On the flip side, Category cable, while familiar on the face of it, gets riskier as speeds increase. Category cable is all about twisted-pair characteristics, and if you'll allow me a little liberty with accuracy for the sake of a layman's explanation, there are RF components to the issue as well as physical factors:

Category 3 cable, which any monkey should be able to terminate, operates at 16 MHz and handles 10Mbps. Because of its long twist length, it was very common to find people untwisting excessive amounts of cable and just punching it down. This continued at least into the Cat5 100Mbps era, and, among other things, both 10Mbps and 100Mbps Ethernet only used two of the four pairs, which made sloppiness somewhat more forgiving: screwing up a pair often still left you operable.

[Image: badtermination.jpg]

However, with 1GbE, better signal processing led to the use of 4D-PAM5 encoding, AND all four pairs being used simultaneously in both directions.
This is most of how we got to 1GbE without a significant increase in cable bandwidth; Cat5e was only 100MHz-350MHz, depending on the era. Crosstalk (RF interference between pairs) and delay skew (differences in transmission time due to differing pair lengths) became significant issues, though, and installers had to up their game on the quality of field terminations. You had to bring the twist almost all the way up to the terminals, and make sure you weren't causing pair lengths to differ or shortening one conductor of a pair more than the other. Getting this wrong would cause weird problems and failures.
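
If you want to see the arithmetic behind that, here is a quick sketch using the nominal 1000BASE-T figures; this is my illustration of the encoding math, so treat it as a sanity check rather than gospel.

```python
# How 1000BASE-T reaches 1 Gbps on 100 MHz-class cable (nominal figures;
# a rough sketch to illustrate the paragraph above).

pairs = 4                   # all four pairs, used in both directions at once
symbol_rate_mbaud = 125     # millions of symbols per second, per pair
data_bits_per_symbol = 2    # 4D-PAM5 nets 2 information bits per symbol

throughput_mbps = pairs * symbol_rate_mbaud * data_bits_per_symbol
print(throughput_mbps)      # 1000 Mbps

# The fundamental frequency on each pair is only ~62.5 MHz (half the symbol
# rate), which is why 100 MHz-class cable suffices -- the cleverness is in
# the encoding and echo cancellation, not in raw cable bandwidth.
```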

With 10GbE, we have once again boosted the required cable bandwidth, to 500 MHz, and moved to a much more complicated encoding strategy that uses 16 discrete signal levels. That makes it even more sensitive to field termination errors, and you really need perfectionist-grade punch technique followed by a thorough cable test/certification pass to get it working reliably.
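
The same sanity-check arithmetic for 10GBase-T, again a sketch using the nominal figures as I understand them, shows why the tolerances get so much tighter.

```python
# How 10GBase-T reaches 10 Gbps (nominal figures; illustrative only).

pairs = 4
symbol_rate_mbaud = 800        # 800 Msymbols/s per pair, vs 125 for 1GbE
data_bits_per_symbol = 3.125   # 16-level PAM plus coding overhead nets
                               # roughly 3.125 data bits per symbol

throughput_mbps = pairs * symbol_rate_mbaud * data_bits_per_symbol
print(throughput_mbps)         # 10000.0 Mbps

# Sixteen signal levels packed into the same voltage swing means each level
# step is tiny, so crosstalk, noise, and sloppy terminations that 1GbE would
# shrug off can push 10GBase-T into heavy error correction or a dead link.
```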

If you want to know more, I found a very nice summary at https://www.curtisswrightds.com/sit…rnet-Physical-Layer-Standards-white-paper.pdf

Finally, 10GBase-T gear is quite costly on the used market compared to SFP+. SFP+ has been with us for about two decades, and lots of older gear is being upgraded to 25Gbps or 100Gbps, which makes the used market a fantastic place to shop for deals on 10Gbps gear. It doesn't make a lot of sense to spend good money on new 10GBase-T gear when it isn't even the better technology.
