Machine Vision Leads to an Automated Factory

By Rich Nass

Contributing Editor

Embedded Computing Design

May 19, 2022

Sponsored Story


Machine vision is the concept of using cameras as an aid in industrial applications, including factory automation. “Aid” can mean different things to different people. For example, if you are responsible for product quality on an assembly line, you can use machine vision to inspect each product as it rolls off the line to ensure that it is being manufactured to the required specifications and that there are no defects.
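To make the inspection idea concrete, here is a minimal sketch of an automated pass/fail check. It is not Mitac's or any vendor's actual pipeline; it simply compares each product frame against a known-good reference image and flags the product if too many pixels deviate. The threshold values are hypothetical.

```python
import numpy as np

def inspect_frame(frame, reference, diff_threshold=30, max_bad_pixels=50):
    """Flag a product frame as defective if it deviates too much from a
    known-good reference image (both 8-bit grayscale NumPy arrays).
    diff_threshold and max_bad_pixels are illustrative values only."""
    # Absolute per-pixel deviation from the golden reference
    diff = np.abs(frame.astype(np.int16) - reference.astype(np.int16))
    # Count pixels whose deviation exceeds the tolerance
    bad_pixels = int(np.count_nonzero(diff > diff_threshold))
    return bad_pixels <= max_bad_pixels, bad_pixels

# A clean frame (identical to the reference) passes; a frame with a
# simulated bright blemish fails.
reference = np.full((64, 64), 128, dtype=np.uint8)
good = reference.copy()
bad = reference.copy()
bad[10:20, 10:20] = 255  # simulated defect: 100 out-of-spec pixels

print(inspect_frame(good, reference))  # (True, 0)
print(inspect_frame(bad, reference))   # (False, 100)
```

A production system would replace the golden-reference comparison with a trained model, but the structure (capture, compare, decide) is the same.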

If you are responsible for security, you could use machine vision to monitor the people in a given area to ensure that only the proper personnel are in the room or facility. In that same facility, you may be using cameras to monitor your equipment. If something seems amiss, like a machine vibrating more than it should, a light that's out, or a printer printing blanks, you not only know that you have a problem, but there's a good chance you have a solution ready.

Aid can also refer to the artificial intelligence (AI) that's applied to the application to enhance productivity and/or efficiency. At the same time, machine vision can be applied to autonomous mobile robots (AMRs) for a significant boost in factory output, especially when paired with a 5G connection.

Machine Vision and Industry 4.0

Returning to the factory-automation example, machine vision plays a key role in Industry 4.0. Such automation involves meshing a wide range of technologies that reduce human intervention while raising productivity. What should (in theory) be taken out of human hands are the decisions and related actions that must be made in a split second.

While it takes a high-quality camera to produce images at the required resolution, it's equally vital to have a back end that can handle the resulting data throughput. Think about multiple cameras streaming simultaneously, and the system then having to sort through and process all of that data.

In addition to conventional cameras, machine-vision systems rely on digital sensors, and the data from all of these inputs must be normalized. The result is a huge amount of data. What's more, these systems must guarantee robustness, reliability, and temperature stability, while maintaining the highest levels of security.
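Normalizing data across heterogeneous inputs typically means mapping each sensor's native range onto a common scale so the values are comparable downstream. A minimal sketch, with made-up sensor ranges for illustration:

```python
import numpy as np

def normalize(readings, lo, hi):
    """Map raw sensor readings from their native range [lo, hi] onto
    [0, 1], so cameras, vibration sensors, and temperature probes can
    be compared and fused on a common scale."""
    x = np.asarray(readings, dtype=np.float64)
    return np.clip((x - lo) / (hi - lo), 0.0, 1.0)

# Hypothetical ranges: a 12-bit camera pixel, an accelerometer that
# reads 0-2 g, and a temperature probe rated for -10 to 90 degrees C.
pixels = normalize([0, 2048, 4095], lo=0, hi=4095)
vibration = normalize([0.1, 1.5], lo=0.0, hi=2.0)
temp = normalize([25.0, 70.0], lo=-10.0, hi=90.0)
```

Real systems also resample in time and align units, but range normalization is the common first step.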

Applying AI to Industry 4.0 could introduce new opportunities to smart factories by adding automation, predictive maintenance, computer vision, or robotics, particularly with the introduction of 5G. Along with machine-vision capabilities, Edge AI brings key functions such as object detection, inspection tracking, recognition, and classification. Such functions are vital for making faster and better decisions on embedded computers with edge video analysis. AI-based machine-vision systems could potentially help organizations in less conventional industries, including construction, manufacturing, healthcare, and retail.
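As a toy illustration of the classification step such an Edge AI system performs, the sketch below assigns a feature vector to the nearest known class centroid. The labels, centroids, and feature values are invented for the example; a deployed system would run a trained neural network instead.

```python
import numpy as np

# Hypothetical class centroids in a 2-D feature space. In practice,
# features would come from a vision model, not hand-picked numbers.
CENTROIDS = {
    "bolt": np.array([0.9, 0.1]),
    "nut":  np.array([0.1, 0.9]),
}

def classify(features):
    """Return the label whose centroid is closest to the feature vector
    (nearest-centroid classification)."""
    dists = {label: float(np.linalg.norm(features - c))
             for label, c in CENTROIDS.items()}
    return min(dists, key=dists.get)

print(classify(np.array([0.8, 0.2])))  # bolt
print(classify(np.array([0.2, 0.8])))  # nut
```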

Driving the Network

The engines typically capable of handling this throughput reside in the Cloud, where the information is processed, analyzed, and measured against various characteristics for proper decision making. Unfortunately, getting the information to and from the Cloud in time is likely not realistic. Hence, the solution is to process the data within the facility, at the Edge of the IoT or Industrial IoT (IIoT). And thanks to the influx of algorithms aimed at AI applications, a big piece of the puzzle is now available.

A further reason for moving this operation to the Edge of the IIoT is for control purposes. It’s great to have the data and have it processed and analyzed, but if you can’t act on decisions in real time, the capability is far less valuable. Again, this can be accomplished in a Cloud-based architecture, but not in real time.
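The real-time argument above comes down to a simple latency budget: capture time plus inference time plus any network round trip must fit inside the control deadline. A sketch with hypothetical millisecond figures:

```python
def meets_deadline(capture_ms, inference_ms, network_rtt_ms, deadline_ms=50):
    """Check whether an end-to-end vision pipeline can act within a
    real-time control deadline. network_rtt_ms is ~0 for on-premise
    Edge processing; a Cloud round trip adds tens of milliseconds or
    more. All figures here are illustrative, not measured."""
    return capture_ms + inference_ms + network_rtt_ms <= deadline_ms

# Edge inference fits a 50 ms budget; the same pipeline routed
# through a distant Cloud does not.
print(meets_deadline(10, 25, 0))   # True  (Edge)
print(meets_deadline(10, 25, 80))  # False (Cloud round trip)
```

This is why the decision-making loop, not just the data collection, has to live at the Edge.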

The quality of the end product coming off the line of an automated factory is likely to be higher, thanks to increased quality control for the client and potentially better cost control from the supplier: products are ordered only when necessary and in more precise amounts, eliminating oversupply (or worse, undersupply). In addition, product defects are found quickly and rectified.

 

The Mitac MP1-11TGS fanless embedded system has the chops to implement Edge-based machine vision, helping make true factory automation a reality.

One product that's aimed squarely at Edge-based machine vision is Mitac's MP1-11TGS fanless embedded system. The platform is built around Intel's Tiger Lake-UP3 Core i7/i5/i3/Celeron ULV microprocessors, running at up to 4.4 GHz. Additional features that suit the embedded computer for machine-vision applications include:

  • integrated Intel Iris Xe graphics
  • Intel vPro technology support
  • one DDR4 SO-DIMM, with support for up to 32 Gbytes
  • support for four displays via HDMI
  • a hot-swappable SSD/HDD slot
  • two Gigabit Ethernet LANs (with an option for two more)
  • multiple USB interfaces, including 2.0 and 3.1
  • an 8- to 24-V operating voltage range

Such features come in handy when you're applying machine vision in an Edge-based AI application or when deploying AMRs in your factory. A platform like the MP1-11TGS can take Edge-based machine vision a step further than described here, to a level that includes AMRs. In that scenario, your camera could be moving around the facility, permitting instant responses to various tasks. And if the AMR is connected via Wi-Fi or 5G wireless, it can be monitored and controlled from anywhere the administrator happens to be.

Note that Mitac recently engaged with Hailo to bring the company’s Hailo-8 VPU module into a platform like the MP1-11TGS. That module would further enhance the AI inferencing and vision capabilities.

While some may think a factory equipped with machine vision is something that’s further down the road, Mitac experts can show you how you can engage with this technology today.

Rich Nass is a regular contributor to Embedded Computing Design. He has appeared on more than 500 episodes of the popular Embedded Executive podcast series, and is a regular contributor to the Embedded Insiders podcast.

Rich has been in the engineering OEM industry for more than 35 years, and is a recognized expert in the areas of embedded computing, Edge AI, industrial computing, the IoT, and cyber-resiliency and safety and security issues. He writes and speaks regularly on these topics and more.

Rich is currently the Liaison to Industry for the Embedded World North America Exhibition and Conference, and has held similar positions with the global Embedded World Conference and Exhibition.

Previously, Rich was the Brand Director for UBM's award-winning Design News property. Prior to that, he led the content team for UBM Canon's Medical Devices Group, as well as all custom properties and events. In earlier stints, he led the Content Team at EE Times, handling the Embedded and Custom groups and the TechOnline DesignLine network of design engineering web sites.

Nass holds a BSEE degree from the New Jersey Institute of Technology.
