The AI Edge: Why Edge Computing and AI Strategies Must Be Complementary – MeriTalk

By: John Dvorak, Chief Architect, North America Public Sector, Red Hat

If you’re like many decision-makers in Federal agencies, you’ve begun to explore edge computing use cases. There are many reasons to push compute capability closer to the source of data and closer to end users.

At the same time, you may be exploring or implementing artificial intelligence (AI) or machine learning (ML). You’ve recognized the promise of automating discovery and gaining data-driven insights while bypassing the constraints of traditional datacenters.

But if you aren’t actively combining your edge and AI strategies, you’re missing out on transformative possibilities.

Edging Into AI at Federal Agencies

There are clear indicators that edge and data analytics are converging. For starters, data creation at the edge will grow at a 33 percent compound annual rate through 2025, when it will account for more than one-fifth of all data, according to IDC. And by 2023, data analytics professionals will focus more than 50 percent of their efforts on data created and analyzed at the edge, Gartner predicts.

It’s no wonder that 9 in 10 Federal technology leaders say edge solutions are very or extremely important to meeting their agency’s mission, according to Accenture. And 78 percent of those decision-makers expect edge to have the greatest effect on AI and ML.

Traditionally, agencies have needed to transmit remote data to a datacenter or commercial cloud to perform analytics and extract value. That is increasingly challenging in edge environments because of growing data volumes, limited or nonexistent network access, and rising demand for real-time decision-making.

But today, the availability of enhanced small-footprint chipsets, high-density compute and storage, and mesh-network technologies is laying the groundwork for agencies to deploy AI workloads closer to the source of data production.

Getting Started With Edge AI

To enable edge AI use cases, identify where near-real-time data decisions can significantly enhance user experiences and achieve mission objectives. More and more, we’re seeing edge use cases focused on next-generation flyaway kits to support law enforcement, cybersecurity and health investigations. Where investigators once collected data for later processing, newer deployment kits include the advanced tools to process and explore data onsite.

Next, identify where you’re transmitting large volumes of edge data. If you can process data at the remote location, then you only need to transmit the results. By moving only a small fraction of the data, you can free up bandwidth, reduce costs and make decisions faster.
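The filter-then-transmit pattern described above can be sketched in a few lines of Python. This is a minimal illustration, not a specific product API: the sensor readings, the anomaly threshold, and the summary payload are all hypothetical stand-ins.

```python
# Minimal sketch of processing data at the edge and transmitting only results.
# Readings, threshold, and payload shape are illustrative assumptions.

def summarize_readings(readings, threshold=75.0):
    """Process raw readings locally; return only a small result payload."""
    anomalies = [r for r in readings if r > threshold]
    return {
        "count": len(readings),
        "mean": sum(readings) / len(readings) if readings else 0.0,
        "anomalies": anomalies,  # only the out-of-range values travel
    }

# The raw samples stay at the edge; only the summary crosses the network.
raw = [70.1, 71.3, 88.9, 69.8, 92.4, 70.5]
payload = summarize_readings(raw)
print(payload)  # six readings reduced to two anomalies plus two statistics
```

In a real deployment the raw list would be a continuous sensor stream and the payload would go to a message queue or ground station, but the bandwidth arithmetic is the same: the result is orders of magnitude smaller than the data that produced it.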

Take advantage of loosely coupled edge components to achieve necessary compute power. A single sensor can’t perform robust processing. But high-speed mesh networks allow you to link nodes, with some handling data collection, others processing, and so on. You can even retrain ML models at the edge to ensure continued prediction accuracy.
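Retraining at the edge can be as lightweight as incrementally nudging a model with each new local observation. The sketch below uses a deliberately tiny example, a one-parameter linear model updated by stochastic gradient descent in plain Python; the data, learning rate, and model are illustrative assumptions, not a particular ML framework's API.

```python
# Illustrative sketch of on-site model retraining at an edge node.
# Model, data, and learning rate are hypothetical.

def sgd_step(w, x, y, lr=0.01):
    """One stochastic-gradient update for a 1-D linear model y ~ w * x."""
    err = w * x - y
    return w - lr * err * x

# Each new local observation updates the model in place, so prediction
# accuracy tracks conditions at the edge without a round trip to the core.
w = 0.0
for x, y in [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)] * 200:  # underlying rule: y = 2x
    w = sgd_step(w, x, y)
print(round(w, 2))  # converges to 2.0
```

The same loop shape applies when a processing node in the mesh receives batches from collection nodes: collection stays cheap at the sensors, while a better-provisioned neighbor carries the training work.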

Infrastructure as Code for Far-Flung AI

A best practice for AI at the edge is infrastructure as code (IaC). IaC allows you to manage network and security configurations through configuration files rather than through physical hardware. With IaC, configuration files include infrastructure specifications, making it easier to change and distribute configurations, and ensuring that your environment is provisioned consistently.
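The core IaC idea, desired state in a version-controlled file and an idempotent process that reconciles the environment toward it, can be sketched in pure Python. The node names and settings below are hypothetical, and the dict stands in for a YAML or JSON configuration file; real deployments would use an IaC tool rather than hand-rolled code.

```python
# Sketch of the IaC reconciliation loop: compare desired state (from a
# config file under version control) against actual state, and emit only
# the changes needed. Node names and settings are hypothetical.

DESIRED = {
    "edge-node-01": {"firewall": "strict", "log_level": "warn"},
    "edge-node-02": {"firewall": "strict", "log_level": "info"},
}

def reconcile(desired, actual):
    """Return the per-node changes needed to bring `actual` in line with `desired`."""
    changes = {}
    for node, cfg in desired.items():
        current = actual.get(node, {})
        diff = {k: v for k, v in cfg.items() if current.get(k) != v}
        if diff:
            changes[node] = diff
    return changes

# A drifted node gets only its drifted settings corrected; a converged
# environment yields no changes, which is what makes re-running provisioning
# safe and consistent.
actual = {"edge-node-01": {"firewall": "open", "log_level": "warn"}}
print(reconcile(DESIRED, actual))
```

Because the desired state is plain text, it can be diffed, reviewed, and distributed to far-flung edge sites exactly like application code.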

Also consider packaging workloads as microservices, running them in containers, and automating the iterative deployment of ML models into production at the edge with DevSecOps capabilities such as CI/CD pipelines and GitOps. Containers give you the flexibility to write code once and run it anywhere.

You should seek to use consistent technologies and tools at the edge and core. That way, you don’t need specialized expertise, you avoid one-off problems, and you can scale more easily.

Edge AI in the Real World (and Beyond)

Agencies from military to law enforcement to those managing critical infrastructure are performing AI at the edge. One exciting and extreme-edge example is space and, specifically, the International Space Station (ISS).

The ISS includes an onsite laboratory for performing research and running experiments. In one example, scientists are focused on the DNA genome sequencing of microbes found on the ISS. Genome sequencing produces tremendous amounts of data, but scientists need to analyze only a portion of it.

In the past, the ISS transmitted all data to ground stations for centralized processing – typically many terabytes of data with each sequence. At traditional transmission rates, the data could take weeks to reach Earth-based scientists. But using the power of containers and AI, research is completed directly on the ISS, with only the results being transmitted to the ground. Now analysis can be performed the same day.

The system is simple to manage in an environment where space and power are limited. Software updates are pushed to the edge as necessary, and ML model training takes place onsite. And the system is flexible enough to handle other types of ML-based analysis in the future.

Combining AI and edge can enable your agency to perform analytics in any footprint and any location. With a common framework from core to edge, you can extend and scale AI in remote locations. By placing analytics close to where data is generated and users interact, you can make faster decisions, deliver services more rapidly and extend your mission wherever it needs to go.
