Automation in the Age of IT-OT Convergence
Companies are constantly looking for better ways to augment their existing operational workloads, optimize production processes, and reduce overall Carbon Dioxide (CO2) emissions. Operational Technology (OT) and Information Technology (IT) have helped enterprises better control their operations by monitoring valuable assets, reducing repetitive and routine tasks, and enhancing quality control. However, IT and OT have traditionally been developed separately, leaving enterprises unable to exploit operations and production data to make more informed decisions, optimize workflows, and plan production and maintenance processes.
The emergence of the Internet of Things (IoT) in recent years has blurred the boundaries between the two systems, leading to more convergent solutions. IoT devices belonging to the IT domain can collect operational and production data from the field and communicate it to OT systems.
The convergence of IT and OT will allow workers to do more and go further with their improvements, striking the right balance between the cost of business and strategic technology investment. Furthermore, this convergence will enable enterprises to accelerate their digital transformation and optimize their existing workflows, all without needing to scale up rapidly.
One of the technologies that best represent IT-OT convergence is operations management through robotics automation. With automated processes, companies can analyze data, obtain valuable insights, and gain improved visibility of their production sites’ performance, helping them make data-driven decisions. However, robotics development has traditionally been very complex and challenging. As demand for robots is expected to increase rapidly, the current rate of innovation can be accelerated with the proper hardware and software offerings.
The Rise of Robotics
In general, robots are being deployed to host new functions that increase workforce safety, reduce strenuous and hazardous tasks for human employees, accelerate e-commerce fulfillment and delivery, and enhance business flexibility and resilience. These functions require high-accuracy sensors that fulfill functional safety and risk prevention requirements; cameras for detection, localization, and navigation; and robotics middleware for application onboarding. In recent years, key advancements in robotics hardware have allowed robotics Original Equipment Manufacturers (OEMs) to develop robots that can see and sense their environments:
- Computing Processors – Computing processors have become more powerful in recent years, allowing robot OEMs to run resource-intensive applications directly on the robot. For example, Graphics Processing Units (GPUs) have proven ideal for executing data processing based on Machine Learning (ML) algorithms, allowing robots to perform object recognition and sensor fusion. Further advances have come from developing Application-Specific Integrated Circuits (ASICs) that excel in specific ML applications, such as video processing and speech recognition.
- Sensor Technologies – The proliferation of sensors, such as Two-Dimensional (2D) and Three-Dimensional (3D) cameras, 2D and 3D Light Detection and Ranging (LiDAR) sensors, Inertial Measurement Units (IMUs), and proximity sensors, enables highly accurate machine vision and sensor fusion algorithms. A typical ground-based autonomous robot may have multiple High-Definition (HD) cameras, 3D depth sensors, and LiDAR sensors. Coupled with continual improvements in ML-based sensor fusion technology, these sensors become vital enablers of functional safety, Simultaneous Localization and Mapping (SLAM) systems, and risk prevention capabilities.
- Edge Computing – The ability to process information in industrial gateways and on-premises servers means robots can collect, process, and store information at the edge. This greatly reduces the latency and connectivity requirements for robotics operations, while alleviating security and privacy concerns.
- Connectivity – A robot generates and collects a lot of information. It is estimated that an autonomous robot can generate up to 500 Gigabytes (GB) of data per hour when including the input and output from AI processes, such as computer vision and path planning. Therefore, a successful robotics deployment needs a reliable connectivity solution that transfers data with high bandwidth and low latency.
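As a toy illustration of the sensor fusion these components enable, a complementary filter blends a drifting gyroscope with a noisy but drift-free accelerometer. This is a deliberately simplified stand-in for the ML-based fusion pipelines described above; all numbers below are made up for the sketch:

```python
def complementary_filter(angle_prev, gyro_rate, accel_angle, dt, alpha=0.98):
    """Fuse a gyroscope rate with an accelerometer-derived angle.

    The gyro integrates smoothly but drifts over time; the accelerometer
    is noisy but drift-free. Blending them with weight alpha keeps the
    best property of each sensor.
    """
    return alpha * (angle_prev + gyro_rate * dt) + (1.0 - alpha) * accel_angle

# Simulated readings: the robot holds a constant 10-degree pitch while
# the gyro reports a small constant bias (a drift source).
true_pitch = 10.0
pitch = 0.0
dt = 0.01  # 100 Hz sample rate
for _ in range(2000):
    gyro_rate = 0.5           # biased gyro reports rotation that is not happening
    accel_pitch = true_pitch  # accelerometer sees the true (in practice, noisy) angle
    pitch = complementary_filter(pitch, gyro_rate, accel_pitch, dt)

# The accelerometer term pulls the estimate close to the true 10 degrees
# despite the gyro bias; a small residual offset from the bias remains.
print(round(pitch, 1))
```

In a real robot the same idea scales up: each sensor contributes the quantity it measures best, and the fusion stage weights them accordingly.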
At the same time, advancements in software and services also deserve much attention:
- Open-Source Software – Many modern robots use the Robot Operating System (ROS) and its successor, ROS 2, as middleware for robotics development. Technically more middleware than an Operating System (OS), the open-source ROS includes capabilities for hardware abstraction and message passing to integrate various data sources. The ROS ecosystem also includes Gazebo, a high-fidelity, physically accurate 3D simulation engine that can be employed to develop, train, and test robot control software.
- Motion Planning and Navigation – Advances in machine vision provide robots with a new range of cognitive capabilities. After years of relying on magnetic tape and fiducial markers, Visual SLAM (vSLAM) technology is now mature, providing superior localization and navigation capabilities. Further development has come in swarm intelligence for multi-robot coordination, especially for Automated Guided Vehicles (AGVs) and Autonomous Mobile Robots (AMRs) deployed in fulfillment centers.
- Cybersecurity – As robots become increasingly connected to the cloud, isolating safety-critical components through microkernels, dedicated real-time OSs for individual safety functions, virtual machines, and hypervisors is the most efficient and safest way to minimize cybersecurity risk in robotics.
- Robot Operations – Finally, companies are looking for a simplified way to deploy, operate, monitor, and reconfigure robots. A single platform that aggregates all robot and sensor data gives them a bird's-eye view of their robotics operations. Robot operations, also known as RoboOps, enables remote intervention, end-to-end security, predictive maintenance, continuous improvement, and data integration with other IoT devices.
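The message-passing role that ROS-style middleware plays can be sketched with a toy publish/subscribe bus. This is an illustration of the pattern only, not the real ROS API; the topic name and message shape are invented for the example:

```python
from collections import defaultdict
from typing import Any, Callable


class MiniBus:
    """Toy publish/subscribe bus illustrating topic-based message passing.

    ROS-style middleware decouples producers (sensor drivers) from
    consumers (navigation, perception) so nodes can be swapped freely.
    """

    def __init__(self) -> None:
        self._subscribers = defaultdict(list)

    def subscribe(self, topic: str, callback: Callable[[Any], None]) -> None:
        self._subscribers[topic].append(callback)

    def publish(self, topic: str, message: Any) -> None:
        # Deliver the message to every callback registered on this topic.
        for callback in self._subscribers[topic]:
            callback(message)


bus = MiniBus()
received = []

# A navigation node consumes LiDAR scans without knowing which driver produced them.
bus.subscribe("/scan", lambda msg: received.append(msg))

# A (simulated) LiDAR driver publishes a scan; the bus decouples the two nodes.
bus.publish("/scan", {"ranges": [1.2, 0.8, 2.5]})

print(received[0]["ranges"])
```

The decoupling is the point: the same navigation callback works unchanged whether the scan comes from real hardware or from a simulator.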
Through these key advancements, robots can now work alongside humans safely and reliably. Apart from industrial robotic arms, more form factors have emerged in recent years, such as Collaborative Robots (cobots), AGVs, AMRs, Automated Storage and Retrieval Systems (ASRS), and Unmanned Aerial Vehicles (UAVs).
A common denominator across all these robots is their ability to perceive and make sense of their surrounding environment. This autonomy is enabled through several ML models found in the robots, such as object detection and segmentation, localization and collision avoidance, motion planning for navigation and manipulation, pose estimation, and sensor integration.
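One of the capabilities listed above, motion planning with collision avoidance, can be sketched in its simplest form as a shortest-path search over an occupancy grid. Real planners use far richer maps and cost functions; the grid and coordinates here are invented for the example:

```python
from collections import deque


def plan_path(grid, start, goal):
    """Breadth-first search on an occupancy grid: 0 = free, 1 = obstacle.

    Returns the shortest 4-connected path from start to goal as a list
    of (row, col) cells, or None if the goal is unreachable.
    """
    rows, cols = len(grid), len(grid[0])
    frontier = deque([start])
    came_from = {start: None}
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            # Walk the parent links back to the start to recover the path.
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from):
                came_from[(nr, nc)] = cell
                frontier.append((nr, nc))
    return None  # goal unreachable


# A wall with one gap: the planner routes around the obstacle.
grid = [
    [0, 0, 0, 0],
    [1, 1, 1, 0],
    [0, 0, 0, 0],
]
path = plan_path(grid, (0, 0), (2, 0))
print(path)
```

Production-grade planners replace the uniform grid with costmaps and the breadth-first search with algorithms such as A*, but the structure, a map, a search, and a recovered path, is the same.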
Edge ML in Robotics
Robotics OEMs have embedded edge ML into their robots to help with performing critical functions, including sensor processing, odometry, localization and mapping, vision and perception, and path planning. These ML models hosted inside robots automatically process data collected by the robots and generate an output that dictates the robots’ actions. Edge ML holds several key advantages over its cloud counterpart:
- Latency – Industrial robots execute mission-critical functions. They require reliable, high-speed, and low-latency communication and processing when working in a multi-robot environment and/or alongside human employees. They cannot afford to have any delay in their responses and reactions during obstacle detection and navigation.
- Data Protection and Privacy – Robots collect and generate a large volume of data, so they are prone to cybersecurity risks. On-device ML processing allows companies to reduce their reliance on the cloud by minimizing data transfer outside the production environment. This keeps robots compliant with data security and privacy requirements, preventing unauthorized access and control as well as the misuse of enterprise and personal data.
- Cloud Computing Cost – Cloud computing infrastructure has garnered massive popularity due to its flexibility and adaptability. Instead of procuring, deploying, and orchestrating their own infrastructure, companies can move workloads to the cloud. However, this still comes at a cost. For example, sending all telemetry data, operating status, and operational information from a robot to be processed and stored in the cloud can become very expensive once companies scale up their fleet of robots.
- Connectivity Cost – Likewise, there are costs associated with the connectivity technologies that support the data transfer to the cloud. The larger the robot fleet is, the more data bandwidth is required. For robots that operate outdoors, companies need to rely on high-quality public cellular networks or invest in their own private network.
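The cloud and connectivity cost arguments above can be made concrete with back-of-the-envelope arithmetic based on the article's figure of up to 500 GB generated per robot-hour. The fleet size, duty cycle, and per-GB price below are illustrative assumptions, not quoted rates:

```python
# Rough monthly cloud data bill if a fleet streamed everything off-device.
GB_PER_ROBOT_HOUR = 500   # article's upper estimate per autonomous robot
FLEET_SIZE = 20           # assumed fleet size
HOURS_PER_DAY = 16        # assumed two-shift operation
DAYS_PER_MONTH = 30
USD_PER_GB = 0.05         # assumed blended transfer + storage price

monthly_gb = GB_PER_ROBOT_HOUR * FLEET_SIZE * HOURS_PER_DAY * DAYS_PER_MONTH
monthly_cost = monthly_gb * USD_PER_GB

# Sustained uplink needed per robot if all data were streamed:
# 500 GB/hour = 500 * 8 Gbit / 3600 s, roughly 1.1 Gbit/s.
gbit_per_s_per_robot = GB_PER_ROBOT_HOUR * 8 / 3600

print(f"{monthly_gb:,} GB/month -> ${monthly_cost:,.0f}")
print(f"~{gbit_per_s_per_robot:.1f} Gbit/s sustained per robot")
```

Even with generous filtering, numbers of this magnitude explain why processing at the edge and uploading only summaries is usually the economical choice.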
Edge ML enables robotics users to make sense of the mountain of data they collect from their assets and make much better business decisions based on daily operations, usage trends, and customer behaviors. To achieve seamless edge ML deployment, robotics OEMs require the right type of processors to resolve concerns around data privacy, power efficiency, and low latency, while providing strong on-device computing performance. In addition, OEMs rely on edge ML software support from these processor suppliers to accelerate edge ML deployment. A comprehensive edge ML solution from established vendors can reduce complexity and accelerate edge ML design and operations, while also providing workload orchestration, training and testing simulation, and model retraining support.
Tools and Services Speeding Robotics Development
Advanced industrial robots are packed with multiple features capable of executing a number of mission-critical functions. The execution and orchestration of these functions require highly sophisticated, dense, and scalable processing solutions that can run multiple concurrent applications, workloads, and AI inference pipelines without constant reliance on cloud computing resources. These solutions should also support high-speed interfaces to handle the multiple sensors featured in modern industrial robots.
NVIDIA is one of the key processor platform suppliers to dedicate particular attention to this area. At GTC 2022, NVIDIA launched the Jetson AGX Orin developer kit and System-on-Module (SOM) based on the Ampere GPU architecture with up to 2,048 parallel CUDA cores, up to 64 Tensor Cores, and up to 2 Deep Learning Accelerator (DLA) engines. This solution is designed to handle the ever-increasing workload and multi-concurrency demands by enabling up to 275 Tera Operations per Second (TOPS) of processing power, 8X higher than Jetson AGX Xavier, its predecessor.
To accelerate time-to-market, NVIDIA also offers Isaac Nova Orin, which features two Jetson AGX Orin SOMs that provide up to 550 TOPS of AI compute, and a sensor suite consisting of up to six cameras, three LiDAR sensors, and eight ultrasonic sensors. This provides a reference design for companies that want their robots to leverage the full capabilities of Jetson AGX Orin.
That said, edge ML deployment in robots remains complex. Higher computational capability alone is not sufficient. Companies need pre-trained models, application development and optimization, and ML applications for more hands-on end users. First launched in 2018, NVIDIA Isaac was designed to support robotics development through an application framework, software packages with ML algorithms, an upgraded robotics simulation platform, and various reference designs. In September 2021, NVIDIA and Open Robotics, the developer of ROS, entered into an agreement enabling interoperability between Open Robotics’ Ignition Gazebo and NVIDIA Isaac Sim. In addition, for developers that look at existing models to simplify their model development process, the NVIDIA TAO Transfer Learning Toolkit makes it easier for them to further adapt pre-trained ML models by NVIDIA for specific use cases.
With the software support from NVIDIA, robotics OEMs and end users can train and optimize robots for a breadth of tasks virtually. Isaac Sim provides a realistic environment to train navigation and manipulation models. In cases where real-world data are rare and hard to obtain, real data can be augmented with synthetic data to reduce model training time. Companies operating a large fleet of AMRs at production sites can use the NVIDIA DeepMap platform's cloud-based Software Development Kit (SDK) to speed robot mapping of extensive facilities from weeks to days, the NVIDIA cuOpt Application Programming Interface (API) to enable near real-time routing optimizations, and the NVIDIA Metropolis platform to integrate off-the-shelf video cameras and sensors with AI-enabled video analytics.
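The idea of stretching scarce real data with synthetic variants can be sketched in a few lines. This is only a conceptual stand-in for the far richer domain randomization a simulator performs, and it is not the Isaac Sim API; the sample values and noise bound are invented:

```python
import random


def augment(samples, copies=5, noise=0.02, seed=0):
    """Expand a small set of real sensor readings with synthetic variants.

    Each real sample spawns `copies` perturbed versions by adding bounded
    random noise, a crude form of the domain randomization simulators use
    to diversify training data.
    """
    rng = random.Random(seed)  # fixed seed keeps the expansion reproducible
    synthetic = []
    for sample in samples:
        for _ in range(copies):
            synthetic.append([x + rng.uniform(-noise, noise) for x in sample])
    return synthetic


# Three scarce real depth readings become an 18-sample training set.
real = [[0.51, 0.49, 0.52], [0.61, 0.60, 0.58], [0.40, 0.42, 0.41]]
training_set = real + augment(real)
print(len(training_set))  # 18
```

A simulator does much more, randomizing lighting, textures, and physics, but the payoff is the same: more varied training data than the real world alone would yield.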
In addition, NVIDIA has built a growing ecosystem that possesses domain expertise in building robots with the Jetson platform. This includes 105 companies specializing in AI software, hardware and application design services, sensors and peripherals, developer tools, development systems, and more, providing complementary and value-added solutions and services. Leading partners include SICK, LIPS, FRAMOS, Universal Robots, and e-con Systems. Through this ecosystem, robotics OEMs and end users can expect end-to-end, integrated, and tailored experiences based on a deep understanding of their needs.
Commercial Opportunities Abound
While robotics deployment is still heavily concentrated in industrial settings, technological innovations across hardware, software, and business models are accelerating adoption across all major market verticals. As a result, the intralogistics market for mobile robots is expected to grow from US$9 billion in 2022 to top US$36 billion by 2030. Both AGVs and AMRs are deployed in brownfield and greenfield warehouses for material handling.
AMRs and autonomous forklifts are now used for material handling and mobile manipulation in manufacturing, a market expected to grow from US$2.3 billion in 2022 to US$36.4 billion by 2030. Moving forward, AMRs and quadruped robots are expected to become more prominent in delivery, data collection, security, and cleaning. The last-mile delivery and retail robotics markets are expected to grow from less than US$1 billion and US$1.3 billion in 2022 to US$16.2 billion and US$8.4 billion by 2030, respectively.
Robotics and ML
As companies continue to digitalize and automate their current workflows, they should not overlook the importance of robotics and ML-based automation. The emergence of a data-driven business environment, edge ML technologies, dedicated robotics development platforms, and robust partner ecosystems is creating new opportunities for robot adoption across various markets.
Undoubtedly, current and near-term robotics adoption is heavily weighted toward larger companies. Still, emerging technologies present an opportunity for robotics providers to lower the adoption barrier for small and medium businesses. A comprehensive hardware and software solution, like the one offered by NVIDIA, makes these technologies more accessible to both robotics OEMs and end users. Furthermore, partnering with an experienced company whose robotics ecosystem spans from the edge ML chipset layer to the software and applications layer allows robotics OEMs to focus on perfecting their hardware design and expanding their market presence.
About the Author
Lian Jye Su, Principal Analyst at ABI Research, is responsible for orchestrating research related to robotics, Artificial Intelligence (AI), and Machine Learning (ML). He leads research in emerging and key trends in these industries, diving deeply into advancements in key components, regional dynamics in robotics and AI adoptions, and their future impacts and implications.