Models Help Achieve Maximum Operational Efficiency

By Rich Nass

Contributing Editor

Embedded Computing Design

March 23, 2021

Blog

(Image courtesy of Pixabay)

A typical design goal is to maximize the operational efficiency of your system. Today, that likely means designing around the latest GPUs, and that can be a daunting task.

GPUs, such as those offered by NVIDIA, can be quite complex. While the vendor may supply a fair amount of documentation, sample code, and other details, the engineer is still left to deploy the chip, board, or system and make it operate at maximum efficiency.

One good course of action is to deploy a software development kit (SDK) or other tools provided by the GPU supplier. But if you need to venture outside what’s offered by those tools, it can get hairy. For example, NVIDIA is very good at assisting with the AI portion of the design. But when you need to make the proper connections, handle security, and ensure that the system can handle the rigors of a rugged environment, you may need to turn to other sources.

A good example is a system that's connected to multiple cameras and needs AI to perform some sort of image detection. Think of an airport, a train station, or even a manufacturing facility. Images can be very data intensive, and when multiple cameras are connected, the amount of data that requires processing can grow very quickly.

Here, the business objective can be clearly outlined: are you checking luggage, inspecting products coming off an assembly line, or something else? To solve that problem, you turn those needs into a deployable solution that drives some action or outcome.

Identifying the Desired Outcome

The next step is to identify the desired action or outcome. Then, you choose what data is needed, which leads to the type of processing that’s required. Not until you answer these questions can you start the design and deployment process. While this path may seem obvious, you’d be surprised at how many people try to start at the last step, then end up redesigning their system because the objectives were not outlined at the start.

Take, for example, the "smart space." Here, you want to detect objects, and if something seems out of place, sound an alarm. That "alarm" may be audible, or it may be a message sent to an operator. Either is a far better solution than filling a room with monitors and having one or more people constantly watch those monitors for something out of place.
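The rule at the heart of such a smart space can be sketched in a few lines. This is a minimal, hypothetical illustration only: the class names, the "allowed" set, the confidence threshold, and the alert format are all assumptions for the example, not part of any vendor's product.

```python
# Hypothetical sketch: turn per-frame object detections into operator alerts.
# Labels, thresholds, and the allowed set below are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Detection:
    label: str        # e.g. "person", "luggage"
    confidence: float
    camera_id: str

ALLOWED_LABELS = {"person", "luggage", "cart"}  # objects expected in the space
MIN_CONFIDENCE = 0.6                            # ignore low-confidence noise

def alerts_for(detections):
    """Return an alert message for each confident, out-of-place detection."""
    return [
        f"ALERT camera={d.camera_id}: unexpected '{d.label}' ({d.confidence:.2f})"
        for d in detections
        if d.confidence >= MIN_CONFIDENCE and d.label not in ALLOWED_LABELS
    ]

frame = [
    Detection("person", 0.92, "cam-3"),           # expected, no alert
    Detection("unattended_bag", 0.81, "cam-3"),   # out of place, alert
]
alerts = alerts_for(frame)
print(alerts)
```

In practice the alert would be routed to a speaker, a dashboard, or a messaging service rather than printed, but the decision logic is the same.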

ADLINK Technology handles this situation through data capture, specifically by using the NVIDIA DeepStream SDK. The SDK captures the video and writes it to disk so that you have data to train on. Once you've collected enough data, you can move to the next phase: training.
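The capture phase boils down to saving frames until a target amount of training data exists. The sketch below shows that loop in generic Python; the frame generator is a stand-in assumption for the real camera/SDK feed, and the directory layout and file naming are illustrative, not DeepStream specifics.

```python
# Minimal sketch of the data-capture phase: save incoming frames to disk
# until enough samples exist to begin training. The frame source here is a
# placeholder; in a real deployment it would be the camera or SDK pipeline.

import itertools
import tempfile
from pathlib import Path

def fake_frames():
    """Stand-in for a camera feed: yields encoded-image placeholder bytes."""
    for i in itertools.count():
        yield f"frame-{i}".encode()

def capture(out_dir: Path, target_count: int, frames) -> int:
    """Write frames to out_dir, stopping once target_count are saved."""
    out_dir.mkdir(parents=True, exist_ok=True)
    saved = 0
    for data in frames:
        (out_dir / f"{saved:06d}.jpg").write_bytes(data)
        saved += 1
        if saved >= target_count:
            break
    return saved

with tempfile.TemporaryDirectory() as d:
    n = capture(Path(d) / "train", target_count=5, frames=fake_frames())
print(n)
```

The stopping condition here is a simple frame count; a real project would likely stop based on coverage of the scenarios the model needs to learn.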

Another assist from NVIDIA comes in the form of the Transfer Learning Toolkit, which gets you to a deployable processing element far faster than starting from scratch, potentially (and hopefully) resulting in a greater ROI. To that end, ADLINK has put together a set of use cases that, according to the company, should give developers a great starting point with optimized, domain-specific models. In other cases, the models may come from NVIDIA, such as those for public safety, smart cities, or product-quality inspection. Where no model is available, the developer may be able to start with an existing model that's similar to their use case and adapt it to their specific needs.

To drive actions or outcomes, the unstructured data is run through processing models such as those in DeepStream, resulting in structured data. From there, the information can be sent through a pipeline to other applications that take an action.
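That unstructured-to-structured step can be pictured as flattening raw model output into records a downstream application can consume, for instance as JSON messages on a queue. This is a generic illustration only; the field names and the shape of the raw detections are assumptions, not the actual DeepStream message schema.

```python
# Illustrative only: convert raw per-frame detections (the unstructured
# output of an inference model) into a structured, JSON-serializable event
# that downstream applications can act on. Field names are assumptions.

import json
import time

def to_structured(camera_id, raw_detections, ts=None):
    """Flatten (label, score, bbox) tuples into one structured event record."""
    return {
        "camera": camera_id,
        "timestamp": ts if ts is not None else time.time(),
        "objects": [
            {"label": label, "score": round(score, 3), "bbox": list(bbox)}
            for label, score, bbox in raw_detections
        ],
    }

raw = [
    ("person", 0.91, (10, 20, 110, 220)),
    ("luggage", 0.78, (130, 40, 180, 90)),
]
event = to_structured("cam-7", raw, ts=1616500000.0)
print(json.dumps(event))
```

Once the data is structured like this, publishing it to a message broker or database is routine plumbing, which is the point: the hard part is the model, not the pipeline.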

To hear more about ADLINK’s medical-imaging products and technologies, check out the talk delivered by Toby McClean, Vice President of AIoT Technology and Innovation, as part of NVIDIA’s upcoming GTC.

Rich Nass is a regular contributor to Embedded Computing Design. He has appeared on more than 500 episodes of the popular Embedded Executive podcast series, and is a regular contributor to the Embedded Insiders podcast.

Rich has been in the engineering OEM industry for more than 35 years, and is a recognized expert in the areas of embedded computing, Edge AI, industrial computing, the IoT, and cyber-resiliency and safety and security issues. He writes and speaks regularly on these topics and more.

Rich is currently the Liaison to Industry for the Embedded World North America Exhibition and Conference, and has held similar positions with the global Embedded World Conference and Exhibition.

Previously, Rich was the Brand Director for UBM's award-winning Design News property. Prior to that, he led the content team for UBM Canon's Medical Devices Group, as well as all custom properties and events. In earlier stints, he led the content team at EE Times, handling the Embedded and Custom groups and the TechOnline DesignLine network of design-engineering websites.

Nass holds a BSEE degree from the New Jersey Institute of Technology.
