Like companies all around the world, US fast-food chain Taco Bell responded to the pandemic’s commercial impact by accelerating its shift to the cloud. As customers’ traditional patterns of restaurant and drive-through consumption changed rapidly – and permanently – to include kiosk, mobile and web ordering, often through third-party delivery services, Taco Bell moved the remainder of its group IT to cloud services.
But this 100% cloud-based approach stops at the restaurant door. Given that many of its 7,000 outlets don’t have fast and/or reliable internet connections, the company has recognised the limitations of the public cloud model and augmented its approach with edge computing. This set-up enables the company to process data near the physical point at which it is created, with only a periodic requirement to feed the most valuable material back to the cloud and receive updates from it.
Taco Bell is just one of thousands of firms seeking to exploit the fast-evolving – and much-hyped – distributed IT capability that edge computing can offer.
“Edge computing is getting so much attention now because organisations have accepted that there are things that cloud does poorly,” observes Bob Gill, vice-president of research at Gartner and the founder of the consultancy’s edge research community.
Issues of latency (time-lag) and limited bandwidth when moving data are key potential weaknesses of the centralised cloud model. These drive a clear distinction between the use cases for cloud and edge computing. But the edge is also a focus for many organisations because they want to add intelligence to much of the equipment that sits within their operations – and to apply AI-powered automation at those end points.
Early adopters include manufacturers implementing edge computing in their plants as part of their Industry 4.0 plans; logistics groups seeking to give some autonomy to dispersed assets; healthcare providers with medical equipment scattered across hospitals; and energy companies operating widely dispersed generation facilities.
“For such applications to be viable and efficient, their data must be processed as close to the point of origin or consumption as possible,” says George Elissaios, director of product management at Amazon Web Services. “With edge computing, these applications can have lower latency, faster response times and give end customers a better experience. Edge computing can also aid interconnectivity by reducing the amount of data that needs to be backhauled to data centres.”
Combining cloud and edge computing
In some ways, the emergence of edge computing represents a new topology for IT. So says Paul Savill, global practice leader for networking and edge computing at Kyndryl, the provider of managed infrastructure services that was recently spun out of IBM.
Companies are looking at the edge as “a third landing spot for their data and applications. It’s a new tier between the public cloud and the intelligence at an end device – a robot, say,” he explains.
But most organisations don’t expect their edge and cloud implementations to exist as distinct entities. Rather, they want to find ways to blend the scalability and flexibility they have achieved with the cloud with the responsiveness and autonomy of internet-of-things (IoT) and satellite processors installed at the edge.
Gill believes that “cloud and edge are pure yin and yang. Each does things the other doesn’t do well. When put together effectively, they are highly symbiotic.”
They will need to be, as more and more intelligence is moved to the edge. More than 75 billion smart digital devices will be deployed worldwide by 2025, according to projections by research group IHS Markit. And it is neither desirable nor realistic for these to be interacting continuously with the cloud.
“When you start to add in multiple devices, you see a vast increase in the volume, velocity and variety of the data they generate,” says Greg Hanson, vice-president of data management company Informatica in EMEA and Latin America. “You simply can’t keep moving all of that data into a central point without incurring a significant cost and becoming reliant on network bandwidth and infrastructure.”
In such situations, edge IT performs a vital data-thinning function. Satellite processors sitting close to the end points filter out the most valuable material, collate it and dispatch it to the cloud periodically for heavyweight analysis, the training of machine-learning algorithms and longer-term storage. Processors at the edge can also apply data security and privacy rules locally to ensure regulatory compliance.
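That data-thinning function can be sketched in a few lines of Python: filter locally, collate a batch into a compact summary, and dispatch only the distilled record over the backhaul link. The sensor names, threshold and dispatch stub below are illustrative assumptions, not any vendor’s API:

```python
import json
from statistics import mean

# Hypothetical values for illustration: a reading worth escalating to the
# cloud is anything at or above this threshold.
ANOMALY_THRESHOLD = 90.0

def thin(readings):
    """Keep only the readings valuable enough to backhaul to the cloud."""
    return [r for r in readings if r["value"] >= ANOMALY_THRESHOLD]

def summarise(readings):
    """Collate a local batch into one compact record for periodic dispatch."""
    values = [r["value"] for r in readings]
    return {
        "count": len(values),
        "mean": round(mean(values), 2),
        "anomalies": thin(readings),
    }

def dispatch_to_cloud(payload):
    # Stand-in for an upload over a slow or intermittent connection.
    print(json.dumps(payload))

if __name__ == "__main__":
    batch = [{"sensor": "unit-1", "value": v} for v in (71.2, 93.5, 70.8, 95.1)]
    dispatch_to_cloud(summarise(batch))
```

Instead of shipping every raw reading, the edge node sends one summary record per window – the kind of reduction in backhauled data volume that Elissaios describes.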
Gill notes that edge computing has shifted quickly “from concept and hype to successful implementations. In many vertical industries, it is generating revenue, saving money, improving safety, enhancing the customer experience and enabling entirely new applications and data models.”
Before achieving such gains, many edge pioneers are likely to have surmounted numerous significant challenges. Given that the technology is immature, there are few widely accepted standards that businesses can apply to it. This means that they’re often faced with a bewilderingly wide range of designs, covering everything from sensors and operating systems to software stacks and data-management methods.
Such complexity is reflected in a widespread shortage of specialist expertise. As Savill notes: “Many companies don’t have all the skills they need to roll out edge computing. They’re short of people with real competence in the orchestration of these distributed application architectures.”
The goal may be to blend cloud and edge seamlessly into a unified model, but the starting points can be very different. There are two fundamentally different – though not totally contradictory – schools of thought, according to Gill. The ‘cloud out’ perspective, favoured by big cloud service providers such as Amazon, Microsoft and Google, views the edge as an extension of the cloud model that broadens the reach of their products.
The other approach is known as ‘edge in’. In this case, organisations develop edge-native applications that occasionally reach up to the cloud to, say, pass data on to train a machine-learning algorithm.
Adherents of either approach are seeing significant returns on their investments – when they get it right.
“We may be in the early phase of exploiting that combination of IoT, edge and cloud, but the capabilities enabling these distributed architectures – the software control and orchestration tools and the integration capabilities – have already reached the point where they’re highly effective,” Savill reports. “Some companies that are figuring this out are seeing operational savings of 30% to 40% compared with more traditional configurations.”
In doing so, they are also heralding a large-scale resurgence of the edifice that cloud helped to tear down: on-premises IT – albeit in a different form.
“In the next 10 to 20 years, the on-premises profile for most companies will not be servers,” Elissaios predicts. “It will be connected devices – and billions of them.”