
Optimizing Ethernet For Speed, Power, Reach, and Latency – Data Center Frontier

Data center architectures are evolving to support increasing Ethernet transfer rates. (Image: Shutterstock)

In a new white paper, Anritsu discusses Ethernet usage trends in data center networks and explores the technologies that help operators meet growing bandwidth demands and verify network speed, power, latency, and performance.


“Growing demand for information has created an explosion in data center traffic,” according to a new white paper from Anritsu. They say this demand is increasing the need for data center architectures to support ever higher Ethernet transfer rates. As operators seek to “optimize Ethernet media types for speed, power, reach, and latency,” they’re being forced to reevaluate some long-held assumptions in these areas, according to the paper.

The authors explain that the need to reduce latency is increasingly important as data centers transform into edge computing networks. They say, “as computing resources move closer to the edge, the latency key performance indicator (KPI) tightens. This KPI is application-service dependent. Latency affects the user experience for applications and must be considered when deploying Ethernet connects.”
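The idea that the latency KPI is application-service dependent can be illustrated with a small sketch. The per-application budget values below are hypothetical, chosen only to show the concept; they are not figures from the Anritsu paper.

```python
# Sketch: checking measured one-way latency against per-application KPI
# budgets. Budgets are hypothetical illustrations of the point that the
# latency KPI tightens differently for different application services.

LATENCY_BUDGET_US = {            # hypothetical budgets, in microseconds
    "industrial_control": 100,   # edge-hosted, very tight
    "cloud_gaming": 5_000,       # interactive, moderately tight
    "video_streaming": 50_000,   # buffered, relatively loose
}

def meets_kpi(app: str, measured_us: float) -> bool:
    """Return True if the measured latency fits the application's budget."""
    return measured_us <= LATENCY_BUDGET_US[app]

# Hypothetical measurements: the same 450 us link latency that is fine
# for streaming or gaming fails the industrial-control budget.
print(meets_kpi("cloud_gaming", 450))        # within budget
print(meets_kpi("industrial_control", 450))  # budget exceeded
```

As computing moves toward the edge, it is the tightest budgets in a table like this that drive where workloads must be placed.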

As data center network operators move to 400 Gigabit Ethernet and beyond, they will face new challenges such as signal integrity, network interoperability, and maintaining service level agreements (SLAs) for different applications. – Anritsu, “Ethernet in Data Center Networks”

To address concerns around power and speed, operators are turning to optical transceivers and high-speed breakout cables, but, according to the paper, these technologies are not without their challenges. The authors note that “not all 400G Ethernet optics are created equal and their performance on forward error correction (FEC) KPI thresholds varies.” Likewise, high-speed breakout cables are less expensive but come with performance and distance limitations.
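One way to make the FEC KPI idea concrete is to compare a transceiver's measured pre-FEC bit error ratio against a pass/fail threshold. The sketch below uses a threshold commonly cited for the RS(544,514) “KP4” FEC used in 400GBASE-R links as an illustrative assumption; the specific BER readings are hypothetical and not taken from the Anritsu paper.

```python
import math

# Illustrative pre-FEC BER limit often cited for RS(544,514) "KP4" FEC;
# used here as an assumption, not a figure from the white paper.
PRE_FEC_BER_THRESHOLD = 2.4e-4

def fec_margin(measured_ber: float,
               threshold: float = PRE_FEC_BER_THRESHOLD) -> float:
    """Return the margin in decades (orders of magnitude) below the threshold.

    Positive margin: errors should be correctable by FEC.
    Negative margin: errors likely survive correction.
    """
    return math.log10(threshold / measured_ber)

# Hypothetical readings from two optics of the same nominal type,
# illustrating "not all 400G Ethernet optics are created equal":
for name, ber in [("optic_A", 1.0e-5), ("optic_B", 5.0e-4)]:
    m = fec_margin(ber)
    verdict = "PASS" if m > 0 else "FAIL"
    print(f"{name}: pre-FEC BER={ber:.1e}, margin={m:+.2f} decades -> {verdict}")
```

Expressing the result as margin rather than a bare pass/fail mirrors how test equipment reports headroom: two optics can both pass while leaving very different amounts of room for aging and temperature drift.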

The paper goes on to explain how networking equipment manufacturers are turning to testing solutions to measure the signal integrity of new high-speed optical interfaces.

Anritsu also explores how “with multi-access edge computing and network virtualization, data center providers can maintain different SLAs for different applications.”

Download the full report for more information on technologies that can verify network performance at high speeds.

This UrIoTNews article is syndicated from Google News
