
When to architect for the edge – InfoWorld

Edge computing refers to geographically locating infrastructure in proximity to where data is generated or consumed. Instead of pushing this data to a public or private cloud for storage and computing, the data is processed “on the edge,” using infrastructure that can be simple commodity servers or sophisticated platforms like AWS for the Edge, Azure Stack Edge, or Google Distributed Cloud.

Computing “at the edge” has a second meaning as well: operating at the upper boundaries of performance, reliability, safety, and other operating and compliance requirements. To support these edge requirements, shifting compute, storage, and bandwidth to edge infrastructure can enable scaling apps that aren’t feasible when architected for a centralized cloud.

Mark Thiele, CEO of Edgevana, says, “Edge computing offers the business leader a new avenue for developing deeper relationships with customers and partners and obtaining real-time insights.”

The optimal infrastructure may be hard to recognize when devops teams are in the early stages of developing low-scale proofs of concept. But waiting too long to recognize the need for edge infrastructure may force teams to rearchitect and rework their apps, increasing development costs, slowing timelines, or preventing the business from achieving targeted outcomes.

Arul Livingston, vice president of engineering at OutSystems, agrees, “As applications become increasingly modernized and integrated, organizations should account for edge technologies and integration early in the development process to prevent the performance and security challenges that come with developing enterprise-grade applications.”

Devops teams should look for early indicators of these needs, even before the platform’s infrastructure requirements can be modeled accurately. Here are five reasons to consider the edge.

1. Improve performance and safety in manufacturing

What’s a few seconds worth on a manufacturing floor when a delay can cause injury to workers? What if the manufacturing requires expensive materials and catching flaws a few hundred milliseconds earlier can save significant money?

Thiele says, “In manufacturing, effective use of edge can reduce waste, improve efficiency, reduce on-the-job injuries, and increase equipment availability.”

A key factor for architects to consider is the cost of failure due to a failed or delayed decision. If there are significant risks or costs, as can be the case in manufacturing systems, surgical platforms, or autonomous vehicles, edge computing may offer higher performance and reliability for applications requiring greater safety.
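The cost-of-failure trade-off described above can be framed as back-of-the-envelope arithmetic: compare the expected cost of failed or delayed decisions under cloud latency versus edge latency against the extra cost of running edge infrastructure. A minimal sketch, where every figure and the helper function are illustrative assumptions, not numbers from the article:

```python
# Hypothetical model: expected daily cost of failed/delayed decisions.
# All numbers below are illustrative assumptions.

def expected_failure_cost(decisions_per_day: int,
                          p_failure_per_decision: float,
                          cost_per_failure: float) -> float:
    """Expected daily cost of decisions that fail or arrive too late."""
    return decisions_per_day * p_failure_per_decision * cost_per_failure

# Assume slower cloud round trips miss the safety window more often.
cloud_cost = expected_failure_cost(10_000, 0.001, 500.0)    # 5000.0 / day
edge_cost = expected_failure_cost(10_000, 0.0001, 500.0)    # 500.0 / day

daily_edge_premium = 1_000.0  # assumed extra cost of edge infrastructure
if cloud_cost - edge_cost > daily_edge_premium:
    print("edge infrastructure pays for itself")
```

Under these assumed figures the avoided failure cost (4,500 per day) exceeds the edge premium, so the edge option wins; with cheaper failures or pricier infrastructure the answer flips.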

2. Reduce latency for real-time actions

Sub-second response time is a fundamental requirement for most financial trading platforms, and this performance is now expected in many applications that require a quick turnaround from sensing a problem or opportunity to responding with an action or decision.  

Amit Patel, senior vice president at Consulting Solutions, says, “If real-time decision making is important to your business, then improving speed or reducing latency is critical, especially with all the connected devices organizations are using to collect data.”

The technological challenge of providing consistent low-latency experiences is magnified when there are thousands of data sources and decision nodes. Examples include connecting thousands of tractors and farm machines deployed with machine learning (ML) on edge devices or enabling metaverse or other large-scale business-to-consumer experiences.

“If action needs to be taken in real time, start with edge computing,” says Pavel Despot, senior product manager at Akamai. “Edge infrastructure is right-fit for any workload that needs to reach geographically distributed end-users with low latency, resiliency, and high throughput, which runs the gamut for streaming media, banking, e-commerce, IoT devices, and much more.”

Cody De Arkland, director of developer relations at LaunchDarkly, says another use case is global enterprises with many office locations or those supporting hybrid work at scale. “The value of working closer to the edge is that you’re more able to distribute your workloads even closer to the people consuming them,” he says. “If your app is sensitive to latency or ‘round-trip time’ back to the core data center, you should consider edge infrastructure and think about what should run at the edge.”
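The round-trip sensitivity described in this section can be expressed as a simple latency-budget check: if the round trip to a central data center plus processing time exceeds the workload’s response-time budget, the workload is a candidate for the edge. A sketch, with all latencies below being assumed example values:

```python
# Hypothetical latency-budget check. Latency figures are assumptions
# for illustration, not measurements from the article.

def needs_edge(budget_ms: float, round_trip_ms: float,
               processing_ms: float) -> bool:
    """True if the cloud round trip plus processing blows the budget."""
    return round_trip_ms + processing_ms > budget_ms

# e.g. a real-time action due within 100 ms, with a 120 ms round trip
# back to the core data center:
print(needs_edge(budget_ms=100, round_trip_ms=120, processing_ms=15))  # True
```

In practice, teams would substitute measured round-trip times from their user populations; the point is that the budget comparison, not the absolute numbers, drives the edge decision.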

3. Increase the reliability of mission-critical applications

Jeff Ready, CEO of Scale Computing, says, “We’ve seen the most interest in edge infrastructure from industries such as manufacturing, retail, and transportation where downtime simply isn’t an option, and the need to access and utilize data in real time has become a competitive differentiator.”

Consider edge infrastructure when downtime is costly, repairs take a long time, or a failure in centralized infrastructure would impact multiple operations.

Ready shares two examples. “Consider a cargo ship in the middle of the ocean that can’t rely on intermittent satellite connectivity to run their critical onboard systems, or a grocery store that needs to collect data from within the store to create a more personalized shopping experience.” If a centralized system goes down, it may impact multiple ships or stores, whereas a highly reliable edge infrastructure can reduce the risk and impact of downtime.

4. Enable local data processing in remote locations or to support regulations

Even if performance, latency, and reliability aren’t major design considerations, edge infrastructure may still be needed to satisfy regulations governing where data is collected and consumed.

Yasser Alsaied, vice president of Internet of Things at AWS, says, “Edge infrastructure is important for local data processing and data residency requirements. For example, it benefits companies that operate workloads on a ship that can’t upload data to the cloud due to connectivity, work in highly regulated industries that restrict data residing within an area, or possess a massive amount of data that requires local processing.”

A fundamental question devops teams should answer is where data will be collected and consumed. Compliance departments should provide regulatory guidelines on data restrictions, and leaders of operational functions should be consulted on physical and geographic limitations.
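One way teams encode the answer to that question is as a set of routing rules supplied by compliance: each data source’s region maps to a permitted processing location. A minimal sketch, where the regions, rule table, and function are hypothetical illustrations, not regulatory guidance:

```python
# Hypothetical data-residency router. Compliance supplies rules mapping
# a data source's region to an allowed processing location. The regions
# and rules here are illustrative assumptions only.

RESIDENCY_RULES = {
    "eu": "edge-eu",    # e.g. data must be processed within the region
    "us": "cloud-us",   # no restriction: centralized cloud is fine
}

def processing_target(source_region: str) -> str:
    """Pick a processing location; default to local edge when no rule exists."""
    return RESIDENCY_RULES.get(source_region, "edge-local")

print(processing_target("eu"))        # edge-eu
print(processing_target("offshore"))  # edge-local: keep unclassified data local
```

Defaulting unknown regions to local edge processing is a conservative choice: data stays where it was generated until compliance explicitly clears it to move.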

5. Optimize costs, especially bandwidth on enormous data sets

Smart buildings with video surveillance, facility management systems, and energy tracking systems all capture high volumes of data by the second. Processing this data locally in the building can be a lot cheaper than centralizing the data in the cloud.

JB Baker, vice president of marketing at ScaleFlux, says, “All industries are experiencing surging data growth, and adapting to the complexities requires an entirely different mindset to harness the potential of enormous data sets. Edge computing is a part of the solution, as it moves compute and storage closer to data’s origin.”

AB Periasamy, CEO and cofounder of MinIO, offers this recommendation: “With the data getting created at the edge of the network, it creates distinct challenges in application and infrastructure architectures.” He suggests, “Treat bandwidth as the highest cost item in your model, while capital and operating expenditures operate differently at the edge.”
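Treating bandwidth as the highest cost item can be illustrated by aggregating high-frequency readings at the edge and uploading only summaries. A sketch, where the reading rates and record sizes are assumed for illustration:

```python
# Hypothetical bandwidth comparison: upload raw readings vs. one
# per-minute summary computed at the edge. Rates and sizes are
# illustrative assumptions, not figures from the article.

def uploaded_bytes(readings_per_min: int, bytes_per_reading: int,
                   aggregate: bool) -> int:
    """Bytes sent upstream per minute, raw vs. a single summary record."""
    if aggregate:
        return bytes_per_reading  # e.g. one min/max/mean summary record
    return readings_per_min * bytes_per_reading

raw = uploaded_bytes(600, 64, aggregate=False)        # 38400 bytes/min
summarized = uploaded_bytes(600, 64, aggregate=True)  # 64 bytes/min
print(f"bandwidth reduced {raw // summarized}x")      # 600x
```

The reduction factor scales with sampling frequency, which is why bandwidth dominates the cost model precisely in the high-volume scenarios (surveillance video, sensor fleets) this section describes.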

In summary, when devops teams see apps that require an edge in performance, reliability, latency, safety, regulatory compliance, or scale, modeling an edge infrastructure early in the development process can point to smarter architectures.

This UrIoTNews article is syndicated from Google News
