Top Hardware Trends for Artificial Intelligence in IoT Edge Computing

Top Hardware Trends: The Internet of Things has seen explosive growth in the last few years and has provided the platform for the rise of edge computing. Edge computing has grown in popularity because of the benefits it delivers to the Internet of Things: it brings computing closer to where data is generated. As edge computing evolves, the nature of computing at the edge has grown from mere storage and processing to the capability to run machine learning and build artificial intelligence at the edge.

There is a race to build the software and hardware that will propel edge computing to the next generation. Designing software and hardware for the edge is completely different from the way it is done for the cloud, and these Top Hardware Trends provide a good perspective. In a typical IoT deployment, data from the edge devices is sent to the cloud: server farms with enormous processing power and troves of data. Clouds use GPUs to run AI and ML workloads, which makes them extremely fast but also power-hungry. In the cloud, the data is used to train machine learning models, and actions based on their inferences are driven down to the edge devices. The cloud has vast processing resources and an enormous amount of data to generate training and inferences, and it is not constrained by power and price the way the edge is.
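The cloud/edge split described above can be illustrated with a minimal sketch: the "cloud" side fits a trivial model on pooled device data, then ships only the learned parameter to the "edge," which runs lightweight inference locally. The model, threshold, and function names here are purely illustrative assumptions, not a real pipeline.

```python
# Hedged sketch of the train-in-cloud, infer-at-edge pattern.
# All names and numbers are illustrative.

def cloud_train(samples):
    """Cloud side: fit y = a*x by least squares over pooled device data."""
    num = sum(x * y for x, y in samples)
    den = sum(x * x for x, _ in samples)
    return num / den  # the single learned parameter "a"

def edge_infer(a, x, threshold=10.0):
    """Edge side: apply the shipped parameter and act locally,
    without a round trip to the cloud."""
    return "alert" if a * x > threshold else "ok"

data = [(1, 2.1), (2, 3.9), (3, 6.0)]  # (sensor reading, label) pairs
a = cloud_train(data)                   # training happens in the cloud
print(edge_infer(a, 4))                 # inference runs on the device
```

The key design point is that only the compact model parameters cross the network; the raw sensor stream never has to leave the device once the model is deployed.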

With edge computing, machine learning and artificial intelligence move to the edge system. Edge computing brings real-time processing to the Internet of Things, and real-time processing requires machine learning and artificial intelligence to be present at the edge. Imagine examples such as detecting an intruder at home and triggering an alarm, identifying defects or machine failures on a factory floor, finding lost hikers in remote places, or a self-driving car. All of these examples need real-time processing, and there is latency if the data must be sent to the cloud to run AI.

In some cases, the situation cannot afford that delay, so the AI needs to happen on the edge. AI support on the edge can take two forms: running just the inference on the edge, or running the full ML functions on the edge. Sometimes it is also about data privacy: the situation does not allow the data to be sent to the cloud, so even the training happens at the edge. There have been several advancements in both hardware and software to make AI on the edge a reality.

When we talk about hardware in edge computing, it includes processors, sensors, and cameras. One of the critical pieces of equipment for edge devices to support AI and ML is the processing chip.

Two primary considerations play a crucial role in moving AI to the edge. Edge hardware is not designed to handle compute-intensive workloads; these are low-power, mostly battery-operated devices. To bridge the gap, the processor chip needs to be custom-designed to tackle these constraints and meet the market requirement.

Depending on whether the edge is designed to support only inference or also machine learning, the processor used can vary. Various types of processors enable AI on edge devices: CPUs, GPUs, FPGAs, and ASICs. The compute power needed depends on whether the device runs just the inference or the full machine learning functions. When designing edge hardware to support AI and ML, the processor must deliver high compute power. At the same time, edge devices are endpoints, like a thermostat, camera, doorbell, or sensor in an autonomous car. These devices are constrained in power, size, and cost, so a chip designed to do ML on them should consume little power, cost little, and be small. Software customization is also necessary to support AI on the edge: ML frameworks like Caffe and TensorFlow have customized versions with a reduced code footprint to run on edge devices.
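One technique behind the reduced footprint mentioned above is post-training quantization: mapping float32 weights to int8, which shrinks model size roughly 4x and enables cheap integer arithmetic on constrained chips. The sketch below is a simplified, illustrative version of this idea, assuming a basic symmetric quantization scheme; it is not the API of any real framework.

```python
# Minimal sketch of symmetric post-training quantization, the kind of
# footprint reduction edge-oriented frameworks apply to models.
# Function names are illustrative assumptions.

def quantize_int8(weights):
    """Map float weights to int8 values in [-127, 127] plus a scale factor."""
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127.0 if max_abs else 1.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 representation."""
    return [v * scale for v in q]

weights = [0.82, -1.27, 0.05, 0.4]
q, scale = quantize_int8(weights)
print(q)                       # small integers, storable in 1 byte each
print(dequantize(q, scale))    # close to the original float weights
```

The trade-off is a small rounding error per weight (at most half the scale factor), which deep networks usually tolerate with little accuracy loss.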

Major chipmakers, including Intel, Nvidia, Qualcomm, Arm, AMD, MediaTek, and Samsung, have released custom processing chips for the edge AI market. Edge processors optimized for AI will drive edge computing into its next phase; shipments of edge AI processors are expected to reach 1.5 billion units by 2023. Several types of processors are used for ML and AI: CPUs, GPUs, NPUs, FPGAs, ASICs, or special SoCs capable of running ML and AI workloads.

  • CPUs are central to computing in the system and play a pivotal role in running the ML workloads necessary today at the edge.
  • GPUs, initially intended for graphics processing, play a significant role in edge AI. They can run highly parallel fixed workloads for processing images and videos.
  • NPUs, or neural processing units, are designed specifically for hyper-efficient AI computing and are very task-specific. NPUs speed up the execution of AI applications and are very quick at dense vector and matrix computations.
  • FPGAs provide the high performance required for deep learning and can be energy efficient.
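The dense vector and matrix computations mentioned for NPUs boil down to multiply-accumulate operations: small-integer inputs multiplied pairwise and summed into a wide accumulator. The following is a hedged software sketch of that core operation, with illustrative names; real NPUs execute this in parallel fixed-function hardware rather than in a loop.

```python
# Sketch of the matrix-vector multiply-accumulate at the heart of a
# neural-network layer, the operation NPUs accelerate in hardware.
# int8-range inputs with a wide accumulator mirror how NPUs keep
# precision while staying power-efficient.

def matvec_int8(matrix, vector):
    """Multiply an int8-range weight matrix by an int8-range activation
    vector, accumulating each dot product in a wide integer."""
    return [sum(w * x for w, x in zip(row, vector)) for row in matrix]

W = [[1, -2, 3],
     [0, 4, -1]]   # layer weights (one row per output neuron)
x = [5, 6, 7]      # input activations
print(matvec_int8(W, x))  # [14, 17]
```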

Here is a look at some of the available AI processors explicitly designed for edge devices.

Nvidia's Jetson processors are equipped with GPUs that support hardware acceleration at the edge. They are specifically suited for embedded applications like security cameras, robots, and sensor platforms with low power consumption. The Jetson family includes the Jetson Nano and Jetson TX2. Nvidia has also released JetPack, a software pack with drivers and runtime libraries for running AI on the edge.

Arm – Arm recently announced the Ethos-N77, a small-die, low-power 4-TOPS ML NPU, along with the 2-TOPS Ethos-N57 and the 1-TOPS Ethos-N37 for edge AI. Arm also has the Mali-G72 chips.

Intel released the Movidius Myriad VPU, an SoC that ships with a dedicated neural compute engine for hardware acceleration of deep learning inference at the edge. The Myriad 2 comes packaged in a USB thumb drive and can be plugged into any device to run inferencing.

Xilinx has announced the Xilinx AI platform, which has a 7nm Versal processor with support for neural networks designed explicitly for edge networks.

Samsung is developing an NPU technology called “on-device AI.” MediaTek will be shipping the octa-core (A73 and A53) MediaTek i500, which has an AI processor. Rockchip has integrated an NPU into its RK3399Pro. Google has also released a Tensor Processing Unit for the edge, an ASIC designed to run AI at the edge. Several startups are releasing custom hardware for enabling AI on the edge as well: GreenWaves Technologies has released its GAP8 processor, which is low-power enough to run AI on battery-operated IoT systems. These Top Hardware Trends give an overall picture of what is building in the market.

The Top Hardware Trends here define the market today. As AI becomes the norm in edge computing, specialized AI hardware will drive the better performance and lower power consumption that edge computing requires. Mobodexter’s Edge Marketplace aims to incorporate products from these Top Hardware Trends.


  • Mobodexter, Inc., based in Redmond, WA, builds Internet of Things solutions for enterprise applications with highly scalable Kubernetes edge clusters.
  • Check our Edge Marketplace for our Edge Innovation. 
  • Join our newly launched affiliate program here.
  • We publish weekly blogs on IoT & Edge Computing: Read all our blogs or subscribe to get our blogs in your Emails. 

This UrIoTNews article is syndicated from Mobodexter.