Industry 5.0 Has Arrived, Thanks to the AMR

By Rich Nass

Contributing Editor

Embedded Computing Design

April 28, 2021



Note: This is the third in a series of four blogs (and associated podcasts). In the first blog, I looked at what it means to be operating a platform at the edge of the AIoT, where designers should start the process, and what path they should take to develop those systems. In the second, I went through some of the subsystems that go into that AIoT platform, drilling down into the nuts and bolts.

We spend a lot of time discussing artificial intelligence (AI) at the Edge of the IoT, a combination now commonly referred to as the AIoT. One key element of a successful AIoT deployment is the autonomous mobile robot (AMR). Operated in a factory setting, the benefits can be huge: AMRs can work 24/7/365, stopping only to recharge their batteries or for maintenance.

To some, the use of AMRs in the factory, where they would potentially interact with humans, is the definition of Industry 5.0. While it might seem obvious that robots and people need to be in the same environment, keeping both safe and secure requires some use of AI in the AMR.

While the benefits of AMRs are obvious, so is the complexity required to design and build them. It starts with a vision system, and whenever imaging is involved, the volume of data that comes into play can be huge.

Other aspects to consider include a precise navigation system, motor control, and wireless communications. There should also be some form of human-machine interface (HMI), assuming the AMR interacts with people.

Powering Your AMR

Another important, and sometimes overlooked, technology is how you power the AMR. In just about every case, the robot must be untethered and must operate for a reasonable period of time before requiring a recharge.

Another critical design consideration is choosing a platform capable of handling all these tasks, based on the kind of control that's required. One potential starting point is the NVIDIA Jetson edge AI platform, which the company bills as "the AI platform for autonomous everything." Jetson is a complete system-on-module (SOM), with CPU, GPU, PMIC, DRAM, and flash storage.

It can be combined with NVIDIA's Isaac robotics platform, which includes Isaac Sim, used to simulate and train robots in virtual environments before the AMRs are deployed in the real world. The platform also includes the Isaac SDK, an open software framework for efficient processing on Jetson that makes it easier to add AI for perception and navigation. It also lets developers add custom behaviors and capabilities, accelerating robot development that would normally take months, if not years, of engineering effort.
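The details of the Isaac SDK are beyond this blog, but the general pattern such frameworks provide, a chain of perception and navigation stages that you can extend with custom behaviors, can be sketched in a few lines. This is a purely illustrative sketch; the names and structure are my own, not Isaac APIs.

```python
from typing import Callable, Dict, List

# A "frame" here is just a dict carrying sensor data and derived results
# as it flows through the pipeline. Real SDKs use richer message types.
Frame = Dict[str, object]


class Pipeline:
    """A chain of processing stages (perceive -> plan -> act), each a plain
    callable. Custom behaviors are added by appending a stage, mirroring how
    robotics frameworks let you extend a stock perception/navigation graph."""

    def __init__(self) -> None:
        self.stages: List[Callable[[Frame], Frame]] = []

    def add(self, stage: Callable[[Frame], Frame]) -> "Pipeline":
        self.stages.append(stage)
        return self  # allow chaining: Pipeline().add(a).add(b)

    def run(self, frame: Frame) -> Frame:
        for stage in self.stages:
            frame = stage(frame)
        return frame


# Hypothetical stages: detect obstacles from a scan, then pick a plan.
def detect_obstacles(frame: Frame) -> Frame:
    scan = frame["scan"]  # list of range readings, meters
    frame["obstacles"] = sum(1 for r in scan if r < 1.0)
    return frame


def plan_path(frame: Frame) -> Frame:
    frame["plan"] = "replan" if frame["obstacles"] else "continue"
    return frame


pipeline = Pipeline().add(detect_obstacles).add(plan_path)
result = pipeline.run({"scan": [0.4, 2.5, 0.8]})
```

The point of the pattern is that a new capability (say, a person detector) drops in as one more stage, without touching the rest of the graph, which is where much of the claimed engineering-time savings comes from.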

“NVIDIA provides a platform that serves as a great starting point,” says Zane Tsai, Director of the Platform Product Center at ADLINK Technology. “It provides the needed control along with computing efficiency for all aspects of the robot. But it’s the power efficiency that really makes it stand out.”

Lots of Sensors

AMRs need to incorporate a variety of sensors, including 2D cameras, time-of-flight sensors, LIDAR devices, and inertial measurement units (IMUs), all connected over high-speed I/O. All of that sensor data must be processed in real time on the robot itself so it can navigate autonomously in a complex, dynamic factory environment. And in theory, thanks to the integrated AI functionality, the robot's capabilities can be upgraded dynamically over time, minimizing the need for field upgrades.
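To give a flavor of what "fusing sensors in real time" means in practice, here is a minimal sketch of one classic technique: a complementary filter that dead-reckons heading from the IMU's gyro at high rate and blends in an occasional absolute heading fix (e.g., from LIDAR scan matching). The filter choice and all names here are my own illustration, not taken from any particular AMR stack.

```python
import math
from dataclasses import dataclass
from typing import Optional


@dataclass
class ImuSample:
    gyro_z: float  # yaw rate, rad/s
    dt: float      # sample interval, s


def fuse_heading(heading: float, imu: ImuSample,
                 lidar_heading: Optional[float] = None,
                 alpha: float = 0.98) -> float:
    """Complementary filter: integrate the gyro for a fast prediction,
    then nudge toward an absolute heading fix when one is available.
    alpha close to 1.0 trusts the gyro; (1 - alpha) weights the fix."""
    predicted = heading + imu.gyro_z * imu.dt
    if lidar_heading is None:
        return predicted  # no fix this cycle: pure dead reckoning
    # Wrap the error into (-pi, pi] so the correction takes the short way round.
    err = math.atan2(math.sin(lidar_heading - predicted),
                     math.cos(lidar_heading - predicted))
    return predicted + (1.0 - alpha) * err


# Example: ten 100 Hz... sorry, ten 10 Hz IMU steps at 0.1 rad/s
h = 0.0
for _ in range(10):
    h = fuse_heading(h, ImuSample(gyro_z=0.1, dt=0.1))
```

A real AMR runs loops like this per sensor pair at hundreds of hertz, which is why the data has to be processed on the robot rather than shipped to a server.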

According to Amit Goel, the Director of Product Management for autonomous machines at NVIDIA, “The simulation environment is a key component in the development and deployment of AMRs. Developing the actual hardware adds some constraints and affects how quickly they can be designed, and frankly, how many people can be working on the AMR at the same time. Once you move the design into simulation, your development team can be anywhere.”

One question designers now face is what type of processor to deploy, or whether it should be a combination of available processors. There's no simple answer, but if the AMR needs to ingest and interpret information about its environment and then control its behavior accordingly, a GPU makes the most sense. The processor should also offer the flexibility to scale up (and potentially down).

To hear more detail about designing AMRs, specifically using ADLINK Technology platforms and NVIDIA GPUs, I suggest you check out the podcast below, featuring the two experts quoted here.


Rich Nass is a regular contributor to Embedded Computing Design. He has appeared on more than 500 episodes of the popular Embedded Executive podcast series, and is a regular contributor to the Embedded Insiders podcast.

Rich has been in the engineering OEM industry for more than 35 years, and is a recognized expert in the areas of embedded computing, Edge AI, industrial computing, the IoT, and cyber-resiliency and safety and security issues. He writes and speaks regularly on these topics and more.

Rich is currently the Liaison to Industry for the Embedded World North America Exhibition and Conference, and has held similar positions with the global Embedded World Conference and Exhibition.

Previously, Rich was the Brand Director for UBM's award-winning Design News property. Prior to that, he led the content team for UBM Canon's Medical Devices Group, as well as all custom properties and events. In earlier stints, he led the content team at EE Times, handling the Embedded and Custom groups and the TechOnline DesignLine network of design engineering websites.

Nass holds a BSEE degree from the New Jersey Institute of Technology.
