Machine Vision Leads to an Automated Factory

By Rich Nass

Executive Vice President

Embedded Computing Design

May 19, 2022

Sponsored Story

Machine vision is the concept of using cameras as an aid in industrial applications, including factory automation. “Aid” can mean different things to different people. For example, if you are responsible for product quality on an assembly line, you can use machine vision to inspect each product as it rolls off the line to ensure that it is being manufactured to the required specifications and that there are no defects.
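The decision step of such an inspection can be sketched in a few lines. This is a minimal illustration, assuming dimensional measurements have already been extracted from the camera frames; the spec names and tolerances are hypothetical.

```python
# Hypothetical spec: acceptable (min, max) ranges per measured dimension.
SPEC = {"width_mm": (24.8, 25.2), "height_mm": (9.9, 10.1)}

def inspect(measurements: dict) -> list:
    """Return the names of out-of-spec (or missing) measurements; empty list = pass."""
    defects = []
    for name, (lo, hi) in SPEC.items():
        value = measurements.get(name)
        if value is None or not (lo <= value <= hi):
            defects.append(name)
    return defects

print(inspect({"width_mm": 25.0, "height_mm": 10.0}))  # → []
print(inspect({"width_mm": 25.5, "height_mm": 10.0}))  # → ['width_mm']
```

In a real line, the measurement-extraction front end (edge detection, contour fitting, calibration) is where the camera quality discussed below matters; the pass/fail logic itself stays this simple.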

If you are responsible for security, you could be using machine vision to monitor the people in a given area to ensure that only the proper personnel are in the room or facility. In that same facility, you may be using cameras to monitor your equipment. If something seems amiss, like a machine is vibrating more than it should, a light is out, a printer is printing blanks, and so on, you not only know that you have a problem, but that there’s a good chance you have a solution ready.

Aid can also refer to the artificial intelligence (AI) that’s applied to the application to enhance productivity and/or efficiency. At the same time, machine vision can be applied to autonomous mobile robots (AMRs) for a significant boost in factory output, especially when paired with a 5G connection.

Machine Vision and Industry 4.0

Going back to the factory-automation example, this is where Industry 4.0 comes into play, and machine vision plays a key role. Such automation involves meshing a wide range of technologies that reduce human intervention while raising productivity. What should, in theory, be taken out of human hands are the split-second decisions and the actions that follow from them.

While it takes a high-quality camera to produce images at the required resolution, it’s vital to have a back-end that can handle such high amounts of data throughput. Think about having multiple cameras streaming simultaneously, then having to sort through and process all the data.

In addition to the conventional cameras, machine-vision systems rely on digital sensors and then must normalize the data between all the inputs. The result is a huge amount of data. What’s more, these systems must guarantee robustness, reliability, and temperature stability, while maintaining the highest levels of security.
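The normalization step mentioned above can be sketched as a simple rescaling of each input to a common range. This is a hedged illustration only; the sensor names and operating ranges here are hypothetical, and a real system would also handle calibration and timestamp alignment.

```python
# Hypothetical operating ranges for heterogeneous inputs.
SENSOR_RANGES = {
    "vibration_g": (0.0, 5.0),        # accelerometer, g-force
    "temperature_c": (-20.0, 120.0),  # thermal sensor
    "pixel_intensity": (0, 255),      # 8-bit camera channel
}

def normalize(sensor: str, raw: float) -> float:
    """Map a raw reading onto a common 0.0–1.0 scale for downstream processing."""
    lo, hi = SENSOR_RANGES[sensor]
    clamped = min(max(raw, lo), hi)  # guard against out-of-range spikes
    return (clamped - lo) / (hi - lo)

print(normalize("pixel_intensity", 128))  # ≈ 0.502
```

Normalizing to one scale is what lets the back end fuse camera frames and digital-sensor readings into a single decision pipeline, which is also why the data volume grows so quickly.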

Applying AI to Industry 4.0 could introduce new opportunities to smart factories by adding automation, predictive maintenance, computer vision, or robotics, particularly with the introduction of 5G. Along with machine-vision capabilities, Edge AI brings key functions such as object detection, inspection tracking, recognition, and classification. Such functions are vital for making faster and better decisions on embedded computers with edge video analysis. AI-based machine-vision systems could help organizations in a range of industries, including construction, manufacturing, healthcare, and retail.
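The edge-side decision step behind those detection and classification functions often reduces to filtering a model’s raw output. The sketch below assumes a vision model has already produced (label, confidence) pairs; the defect labels and confidence threshold are hypothetical, not part of any specific product.

```python
CONFIDENCE_THRESHOLD = 0.85                      # hypothetical cutoff
DEFECT_LABELS = {"scratch", "dent", "missing_part"}  # hypothetical classes

def triage(detections: list) -> dict:
    """Split model detections into actionable defects and a pass/fail verdict."""
    defects = [(label, conf) for label, conf in detections
               if label in DEFECT_LABELS and conf >= CONFIDENCE_THRESHOLD]
    return {"defects": defects, "pass": not defects}

result = triage([("scratch", 0.91), ("label_ok", 0.97), ("dent", 0.40)])
print(result)  # {'defects': [('scratch', 0.91)], 'pass': False}
```

Running this triage on the embedded computer, next to the camera, is what makes the real-time response discussed in the next section possible.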

Driving the Network

The engines that are typically capable of handling the high throughput would be in the Cloud, where the information needs to be processed, analyzed, and measured against various characteristics for proper decision making. Unfortunately, getting the information to and from the Cloud is likely not realistic. Hence, the solution is to process the data within the facility, at the Edge of the IoT or Industrial IoT (IIoT). And thanks to the influx of algorithms aimed at AI applications, a big piece of the puzzle is now available.

A further reason for moving this operation to the Edge of the IIoT is for control purposes. It’s great to have the data and have it processed and analyzed, but if you can’t act on decisions in real time, the capability is far less valuable. Again, this can be accomplished in a Cloud-based architecture, but not in real time.

The quality of the end product coming off the line of an automated factory is likely to be higher thanks to increased quality control by the client, and a potentially better level of cost control from the supplier, as products are only ordered when necessary and more precise amounts can be ordered, eliminating oversupply (or worse, undersupply). In addition, product defects are found quickly and rectified.

The Mitac MP1-11TGS fanless embedded system has the chops to implement Edge-based machine vision. Hence, it makes true factory automation a reality.

One product that’s aimed squarely at Edge-based machine vision is Mitac’s MP1-11TGS fanless embedded system. The platform is designed with Intel’s Tiger Lake-UP3 Core i7/i5/i3/Celeron ULV microprocessors, running at up to 4.4 GHz. Additional features that suit the embedded computer for machine-vision applications include:

  • integrated Intel Iris Xe graphics
  • Intel vPro technology support
  • one DDR4 SO-DIMM, with support for up to 32 Gbytes
  • support for four displays via HDMI
  • a hot-swappable SSD/HDD slot
  • two Gigabit Ethernet LANs (with an option for two more)
  • multiple USB interfaces, including 2.0 and 3.1
  • an 8- to 24-V operating voltage range

Such features can come in handy when you’re applying machine vision in an Edge-based AI application or when deploying AMRs in your factory setting. A platform like the MP1-11TGS can take Edge-based machine vision further than already described here, to a level that includes AMRs. In that scenario, your camera could be moving around the facility, permitting instant responses to various tasks. Assuming the AMR is connected via WiFi or 5G wireless, it can also be monitored and controlled from anywhere the administrator happens to be.

Note that Mitac recently engaged with Hailo to bring the company’s Hailo-8 VPU module into a platform like the MP1-11TGS. That module would further enhance the AI inferencing and vision capabilities.

While some may think a factory equipped with machine vision is something that’s further down the road, Mitac experts can show you how you can engage with this technology today.

Richard Nass’ key responsibilities include setting the direction for all aspects of OSM’s ECD portfolio, including digital, print, and live events. Previously, Nass was the Brand Director for Design News. Before that, he led the content team for UBM’s Medical Devices Group and all custom properties and events. Nass has been in the engineering OEM industry for more than 30 years. In prior stints, he led the content teams at EE Times, Embedded.com, and TechOnLine. Nass holds a BSEE degree from NJIT.
