Separating Fact from Fiction in Multi-Access Edge Computing Featured – The Fast Mode

The IT industry looks poised for the next big disruption, as key pillars of the marketplace – cloud providers, communication service providers (CSPs), enterprises, and others – turn their focus to the edge. With multi-access edge computing (MEC), they aim to push application processing out closer to users, enabling better network performance, lower latencies, and application experiences that weren’t possible before.

A new generation of low-latency use cases now seems within reach, including immersive gaming and augmented reality (AR), industrial automation, autonomous vehicles, and many others. But can real-world networks actually deliver the performance emerging edge applications need? What’s fact in this rapidly evolving space, and what’s fiction? Which MEC use cases can we expect to see soonest, and what technical barriers remain to be overcome?

As a leading global provider of network testing solutions, Spirent is well-positioned to find answers. We recently conducted a study of the state of MEC, examining roughly a dozen CSP networks and interviewing 150 prospective MEC customers around the world. The big takeaway: suppliers are betting big on MEC services, and customers are eager to use them. But determining which use cases initial MEC offerings will actually support – and which will have to wait for networks to catch up with customers’ wish lists – remains a work in progress.

The “killer app” for MEC: low latency

For prospective MEC customers, the biggest draw for new edge services – and the highest-priority demand – is lower latency. Achieving it will enable a wide range of new use cases, from immersive virtual reality (VR) gaming to AR-enabled training, to precision automated robotics, and more. But the actual latencies that MEC suppliers need to deliver vary a great deal depending on application.

In the gaming industry, for instance, customers envision immersive mobile gaming experiences that use AR, VR headsets, haptic controls, real-time scene rendering, and more—capabilities that will require consistently low latency on the order of 7-15 milliseconds (ms). The industrial space shows even more variability, from process automation applications that can tolerate 50-ms latency, to precise cooperative robotic motion-control applications that require latencies below 1 ms.

We identified a subset of early MEC use cases – led by AR/VR, cloud gaming, and video analytics – that should be mature enough to operationalize, with latencies that networks should be able to deliver sooner. The big challenge for MEC suppliers, however, will be not merely providing average latencies within the right range, but delivering them consistently. This is a critical requirement for the most eagerly anticipated use cases, which can be extremely sensitive to latency fluctuations. And it remains the biggest technical challenge that MEC suppliers will need to overcome.

Mismatched expectations

If there’s one word to describe the current understanding of MEC possibilities among suppliers and customers, it’s “uncertainty.” Suppliers are making significant investments in upgrading edge capabilities, but there remains a disconnect between the latencies customers want to achieve for different edge applications and what networks can actually deliver. Suppliers and customers have many outstanding questions:

  • Which latency rates are actually required for the most in-demand use cases, in both the near and long term?
  • How consistent and deterministic does latency actually need to be?
  • Which latency-driven services can be delivered now, with existing infrastructure?
  • How will suppliers assure (and monetize) new edge services?

These are complicated questions – especially when initial trials are ongoing, and it’s not yet clear what latencies early edge use cases actually need. Indeed, some suppliers are skeptical that the latency targets customers have set – based on 3GPP surveys of various market sectors – are accurate. The upshot: suppliers are moving forward rolling out MEC service offerings, and customers are planning to use them. But there remains a gap between what customers believe they need and what operators are actually preparing to deliver.

Looking ahead

In testing current networks, we measured mean latencies that support some MEC use cases outlined in 3GPP Release 16. But we also found significant latency fluctuations, by time and across regions, and asymmetry between uplink and downlink latencies – issues that will be nonstarters for several of the most in-demand gaming and video use cases. Optimizing infrastructures to address these issues should be the top priority for MEC suppliers looking to monetize the edge.

The good news is that, once suppliers can deliver services with consistent low latencies, customers will pay for them. In our survey, 56% of prospective MEC customers said they’d pay a premium for service-level agreements (SLAs) that guarantee latency that always remains within a predefined window. Our benchmarking also revealed that edge investments need not be as geographically distributed as suppliers initially anticipated. In multiple cases, we found that just a few edge clouds across a vast region could significantly lower latency. Adopting 5G Standalone networks and architectures (deploying edge clouds at peering points in major cities, in central offices, and in private MEC solutions on the customer premises) will go even further.
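The SLA framing above can be illustrated with a small sketch: given a set of latency samples, report the mean, a high percentile, and the fraction of samples that fall inside a predefined latency window. The window bounds, sample values, and function name below are hypothetical, chosen to mirror the 7-15 ms gaming target mentioned earlier; real SLA evaluation would of course run against live measurements, not a hard-coded list.

```python
from statistics import mean, quantiles

def sla_compliance(samples_ms, window):
    """Fraction of latency samples that fall inside a predefined SLA window."""
    low, high = window
    in_window = [s for s in samples_ms if low <= s <= high]
    return len(in_window) / len(samples_ms)

# Hypothetical round-trip samples (ms) with occasional spikes --
# a low mean can hide exactly the fluctuations the article describes.
samples = [8.2, 9.1, 7.9, 8.4, 31.0, 8.8, 9.5, 8.1, 27.5, 8.6]

print(f"mean latency : {mean(samples):.1f} ms")
print(f"p99 latency  : {quantiles(samples, n=100)[98]:.1f} ms")
print(f"SLA compliance (7-15 ms window): {sla_compliance(samples, (7.0, 15.0)):.0%}")
```

Note how the mean sits comfortably in range while the high percentile and compliance figures expose the spikes – the distinction between average and consistent latency that the in-demand use cases hinge on.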

The underlying truth, however, is that many factors can impact latency: overhead in hybrid 4G/5G networks, inefficiencies in air interfaces and wired transport, applications themselves. Some are not caused by the network at all, and customers will need to be educated about issues outside the operator’s control. But if CSPs want to support the most lucrative MEC use cases under SLAs, they should take steps to manage latency end-to-end across RAN, transport, and core networks. These efforts should include:

  • Making sure that MEC requirements are aligned: Enterprises may overestimate the latencies they need, and suppliers may underestimate what they can support. For early use cases, suppliers should expect to collaborate closely with customers and conduct extensive trials to determine precise real-world needs.
  • Striving for continuous visibility: MEC suppliers can’t fix what they can’t see. They should conduct ongoing benchmarking and impact assessments to help identify sources of extra latency inside and outside the network.
  • Prioritizing consistency: Given the requirements of the most in-demand early MEC use cases, testing should prioritize consistency – not just achieving low average latency. And to ensure consistent performance for all users, suppliers should perform testing in all target markets.
  • Measuring from the end-user perspective: Improving latency starts with understanding the application as experienced by the end-user or -device. Active testing can play a key role in both emulating the profile and behavior of expected MEC application traffic, and segmenting the network to isolate sources of high or inconsistent latency.
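As a rough illustration of the end-user-perspective measurement described above, the sketch below times TCP connection setup to an application endpoint and reports mean latency and jitter. The hostname is a placeholder, and handshake time is only a crude proxy for the application-level latency a real MEC test agent would emulate and measure.

```python
import socket
import time
from statistics import mean, pstdev

def probe_latency(host, port, count=5, timeout=2.0):
    """Measure TCP connect (handshake) time as a rough proxy for network RTT."""
    samples_ms = []
    for _ in range(count):
        start = time.perf_counter()
        try:
            with socket.create_connection((host, port), timeout=timeout):
                samples_ms.append((time.perf_counter() - start) * 1000.0)
        except OSError:
            pass  # drop failed probes rather than skew the sample set
    return samples_ms

# Hypothetical edge endpoint; replace with a real MEC application host.
samples = probe_latency("edge.example.com", 443)
if samples:
    print(f"mean {mean(samples):.1f} ms, jitter (stdev) {pstdev(samples):.1f} ms")
```

Run periodically from each target market, probes like this support the continuous benchmarking and per-segment isolation recommended above.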

This UrIoTNews article is syndicated from Google News.