Since it was established as a field of study in 1956, Artificial Intelligence has gone through periods of optimism and pessimism in equal measure. There is no doubt that today we are witnessing one of great optimism.
Data Science is the third most sought-after job worldwide. In fact, in our recent study on the State of Edge Computing in Spain, Data Scientist is the most in-demand professional among Spanish companies, in a market that is experiencing exponential growth and is expected to reach 190 billion dollars by 2025.
Such is the prominence of AI in the industry that it no longer makes sense to speak of it as a single technology, but rather as many branches that serve different purposes in different industries.
Among the trends identified as the most mature and closest to the production stage are the ones we encounter in our daily routine: for example, natural language processing, which we use when we talk with increasingly human-like chatbots; machine vision, which makes it possible to automate real-time video processing; and semantic search, which leads to better search results.
At the other extreme are more futuristic technologies that will not emerge for at least 10 years. Some interesting examples are AI TRiSM (Trust, Risk and Security Management) technologies, which make it possible to govern AI models and make them more resilient to security and privacy attacks, and transformers, which make it possible to adapt AI models to context and will have a great impact on improving applications such as translation, automatic document creation, and the analysis of biological sequences.
Between the two extremes are other enabling technologies that will take two to five years from deployment to market maturity, and which can be called the “Near Future of AI.” Among these are human-centered AI, generative AI, the orchestration and automation of AI, and, leading all the others on the maturity curve, AI on the Edge, also known as “Edge AI.” In 2021, Edge AI emerged as the technology most likely to mature in the near future.
Edge AI and the Distributed Intelligence Revolution in the Industrial World
Edge AI, or AI on the Edge, can be summed up as the ability to execute artificial intelligence algorithms on devices (IoT devices, edge devices) located very close to the source of the data.
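The idea can be illustrated with a minimal sketch: a model trained elsewhere is shipped to the device as a handful of coefficients, and inference runs locally, so only the decision leaves the node. The coefficient values and the fault-detection scenario here are purely hypothetical.

```python
# Minimal sketch of Edge AI inference: a pre-trained model (here just
# logistic-regression coefficients exported by an offline training job)
# runs directly on the device, so raw sensor data never leaves it.
import math

# Hypothetical coefficients produced offline, then deployed to the node.
WEIGHTS = [0.8, -1.2]   # temperature, vibration
BIAS = -0.5

def predict_fault(temperature: float, vibration: float) -> bool:
    """Run inference locally on the edge node; only the decision is reported."""
    z = WEIGHTS[0] * temperature + WEIGHTS[1] * vibration + BIAS
    probability = 1.0 / (1.0 + math.exp(-z))
    return probability > 0.5

# The node reacts in place instead of round-tripping every reading to the cloud.
reading = {"temperature": 2.1, "vibration": 0.3}
alert = predict_fault(reading["temperature"], reading["vibration"])
```

The point of the sketch is the data flow, not the model: whatever algorithm is used, only its output crosses the network boundary.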
This technology is growing exponentially, supported by a daunting statistic: more than 60 percent of industrial organizations do not have a Cloud infrastructure in place that helps them innovate efficiently.
So, if we take a magnifying glass to Edge AI projects, what are the most disruptive trends that we will witness in 2022 and 2023?
Below is a summary of our top 5:
1. Critical Industries Will Be Major Drivers: from SCADA to Edge AI
At Barbara IoT we are finding repeated patterns in the industries at the forefront of Edge AI: all of them handle many critical distributed assets. In other words, they are industries that face great challenges from technological fragmentation, scalability, and cybersecurity, challenges that can be minimized by executing AI algorithms at the edge. We can forecast that these industries will develop very ambitious and transformative use cases.
SCADA systems, in use since the 1980s, serve similar purposes in terms of data capture and processing. However, they need to be complemented by more modern technologies to respond to increasingly demanding requirements for interoperability, openness, and security. This is where Edge AI can help: by multiplying the value of these systems.
2. Thin Edge Will Complement Thick Edge
There are different interpretations of what “edge” means when we refer to Edge AI. Traditionally, the edge has been identified with the network operator infrastructure closest to the user. For example, when we talk about 5G networks, we refer to operators that are rolling out a multitude of nodes called “Multi-access Edge Computing” nodes, used for processing data close to where it is generated. These nodes are installed on servers very similar to those found in a data center designed to host cloud services, and they have the capacity to process complex AI algorithms. This is what some analysts call the “Thick” Edge.
Recently, however, Edge nodes of another type have begun to be deployed: nodes directly connected to sensors and switches that, when installed on low-power devices such as gateways or concentrators, run simpler AI algorithms with response times closer to real time. This new type of Edge, called the “Thin” Edge, will make it possible to rapidly and flexibly tackle larger-scale projects that involve remote locations or requirements for high security and data isolation.
3. Edge Mesh as the New Paradigm to Enable Distributed Artificial Intelligence
Edge AI is traditionally based on decision models trained on large datasets. The model, consisting of a series of mathematical formulas, is installed on Edge nodes. From there, each node is able to make its own decisions based on the data it receives and the model that has been installed.
The new paradigm, known as Edge Mesh, makes it possible for one node’s decision to be conditioned by another node’s decision, as if in a lattice network. A good example for understanding the power of this architecture is a smart traffic system.
An Edge node can decide the timing of a traffic light using AI algorithms that take into account the number of cars and people detected by sensors. However, this decision could be usefully complemented by the decisions being made by other nodes in nearby streets.
The aim of Edge Mesh is to distribute intelligence amongst various nodes in order to offer better performance, response times, and fault tolerance than with more traditional architectures.
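The traffic-light example above can be sketched in a few lines: each node decides from its own sensors, but folds in the last decisions published by its neighbours. All node names, formulas, and tuning constants here are illustrative assumptions, not a real Edge Mesh protocol.

```python
# Sketch of an Edge Mesh: each node decides locally, but weighs in the
# decisions published by its neighbours (all names and constants illustrative).

class TrafficNode:
    def __init__(self, name: str):
        self.name = name
        self.neighbors: list["TrafficNode"] = []
        self.local_queue = 0          # vehicles detected by this node's sensors
        self.green_seconds = 30       # last computed green-light duration

    def link(self, other: "TrafficNode") -> None:
        self.neighbors.append(other)
        other.neighbors.append(self)

    def decide(self) -> int:
        """Base the green phase on local sensing plus neighbours' last decisions."""
        base = 20 + 2 * self.local_queue
        # If neighbours are holding long green phases, traffic is heading our
        # way: extend our own phase slightly to absorb it.
        if self.neighbors:
            neighbor_avg = sum(n.green_seconds for n in self.neighbors) / len(self.neighbors)
            base += int(0.25 * (neighbor_avg - 30))
        self.green_seconds = max(10, min(90, base))
        return self.green_seconds

main_st, side_st = TrafficNode("main"), TrafficNode("side")
main_st.link(side_st)
main_st.local_queue, side_st.local_queue = 25, 3
main_st.decide()   # heavy local traffic -> long green phase
side_st.decide()   # light traffic, but the neighbour's long phase nudges it up
```

The design choice to show is that no central controller exists: coordination emerges from each node reading its neighbours' latest outputs.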
4. Lifecycle Management Using MLOps Is Increasingly More Important
As the industry moves towards rolling out Edge AI with more distributed nodes and more complex training algorithms, the ability to maintain the lifecycle of these trained models, and the devices that execute them, will be key to the future of this technology.
In this sense, the projects and companies that apply the DevOps philosophy to the development, roll-out, and maintenance of AI algorithms will have an advantage.
This way of working is called MLOps, a combination of Machine Learning and DevOps.
But what exactly is it? In essence, it aims to reduce the development, testing, and deployment times of Edge AI models through the continuous integration of development, testing, and operations teams and environments.
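One recurring MLOps building block is an automated promotion gate: a candidate model is rolled out to edge devices only if it beats the currently deployed one on a held-out evaluation set. The sketch below is a toy illustration of that idea; the function names, registry, and threshold models are all hypothetical.

```python
# Minimal sketch of an MLOps promotion gate (all names hypothetical): a
# candidate model is only promoted if it outperforms the deployed model
# on a held-out evaluation set, the kind of check a CI/CD pipeline runs.

def accuracy(model, eval_set) -> float:
    correct = sum(1 for features, label in eval_set if model(features) == label)
    return correct / len(eval_set)

def promote_if_better(candidate, deployed, eval_set, registry: dict) -> bool:
    """Automated gate run before any roll-out to edge nodes."""
    if accuracy(candidate, eval_set) > accuracy(deployed, eval_set):
        registry["production"] = candidate   # stand-in for pushing to devices
        return True
    return False

# Toy models: classify a sensor reading as faulty above a threshold.
old_model = lambda x: x > 5.0
new_model = lambda x: x > 3.0
eval_set = [(2.0, False), (4.0, True), (6.0, True)]

registry = {"production": old_model}
promoted = promote_if_better(new_model, old_model, eval_set, registry)
```

In a real pipeline the registry would be a model store and the roll-out a fleet update, but the gating logic is the same.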
5. Edge AI Enables Sovereign Data Exchange
There is no doubt that data sharing will be paramount for improving processes in industry sectors with many stakeholders within the value chain.
Let’s look at the near-future electricity grid model: the Smart Grid. To offer a better service, it is essential for suppliers to be able to analyze and process information from a number of stakeholders, such as prosumers, operators, distributors, and aggregators. Without transparent, agile data exchange, it will be impossible to reach the required grid optimization by 2050.
With Edge AI, decentralized data processing becomes possible, which will help overcome some of the obstacles the industry currently faces, such as data security, privacy, and sovereignty.
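A simple way to picture sovereignty-preserving exchange: each stakeholder's edge node processes its raw meter data on-premises and shares only an aggregate, so the underlying measurements never leave the owner. The functions and figures below are an illustrative sketch, not a real grid protocol.

```python
# Sketch of sovereign data exchange: raw readings stay on each edge node;
# only aggregates cross organisational boundaries (illustrative names only).

def local_summary(raw_readings: list[float]) -> dict:
    """Computed on-premises; this summary is all that is shared."""
    return {
        "count": len(raw_readings),
        "total_kwh": sum(raw_readings),
    }

def grid_demand(summaries: list[dict]) -> float:
    """The grid operator optimises on aggregates, never on raw data."""
    return sum(s["total_kwh"] for s in summaries)

prosumer = local_summary([1.2, 0.8, 1.5])   # raw meter data stays local
factory = local_summary([10.0, 12.5])
total = grid_demand([prosumer, factory])    # grid-wide demand in kWh
```

Each party keeps control of its raw data while the operator still gets the grid-wide figure it needs to optimise.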