The Recipe for Designing in Deeply Embedded AI

September 16, 2019

There are two key aspects of artificial intelligence (AI) that you should be aware of. First, it's being designed into a growing share of embedded systems at the deep edge of the network, from industrial controls to automotive applications to consumer/mass-market devices. So there's a good chance you'll need a primer on how to work with these AI-related components.

The second aspect is that designing around AI can be a complex endeavor. That's where we come in: we'll walk through the recipe for an AI design and point you in the right direction so your probability of success is high.

The recipe for an AI design can begin just like any other embedded system's, though the choice of microprocessor or microcontroller should take into account the availability of an "AI-friendly" ecosystem. In this case, we'll start with an STM32. Its ecosystem includes STM32Cube.AI, a package within the ST toolkit that interoperates with deep-learning libraries to automatically convert pre-trained artificial neural networks and map them onto just about any STM32 microcontroller (MCU).

The next ingredient in your AI recipe is open-source deep-learning software. Various frameworks are available, the most popular being TensorFlow, Keras, PyTorch and Caffe. Within your chosen framework, you build your neural-network model, a task simplified by the pre-trained models ST offers in its AI application packs.

Using Keras or TensorFlow, for example, you create a topological model that represents your neural network as a graph of nodes. Each node is an operation over tensors with varying levels of complexity, from a simple math function (e.g., add) up to a complex multi-variable nonlinear equation.

The operations pass the data they produce along the edges of the network graph. Where it gets a little tricky is that an operation can consume and produce data with more than two dimensions; these multi-dimensional arrays are the tensors. That topic gets deep quickly and is beyond the scope of this article, but there are good references available.
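To make this concrete, here is a minimal sketch of such a topological model in Keras. The accelerometer-style input, the layer choices and the four output classes are illustrative assumptions for a motion-sensing application, not values taken from any ST example.

```python
# A minimal sketch of a topological model in Keras (TensorFlow 2.x).
# The layer sizes and the 3-axis accelerometer input are illustrative
# assumptions, not values prescribed by ST.
import tensorflow as tf
from tensorflow.keras import layers, Model

# Input node: each sample is a rank-2 tensor of shape (timesteps, channels),
# e.g. a window of 128 readings from a 3-axis accelerometer.
inputs = tf.keras.Input(shape=(128, 3), name="accel_window")

# Each layer below is a node in the graph: an operation that consumes
# one tensor and produces another, often changing its rank and shape.
x = layers.Conv1D(16, kernel_size=5, activation="relu")(inputs)   # -> (124, 16)
x = layers.MaxPooling1D(pool_size=4)(x)                           # -> (31, 16)
x = layers.Flatten()(x)                                           # -> (496,)
x = layers.Dense(32, activation="relu")(x)                        # -> (32,)
outputs = layers.Dense(4, activation="softmax", name="activity")(x)  # 4 classes

model = Model(inputs=inputs, outputs=outputs)
model.summary()  # prints the graph of nodes and the tensor shape at each edge
```

Each layer call adds a node to the graph, and model.summary() shows the tensor shape flowing out of every node, which is exactly the graph-of-operations view described above.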

Such a conversion is performed by a tool that generates a library you can integrate into your project; STM32Cube.AI and its output libraries run on any STM32 MCU. To further ease integration for its customers, ST has created end-to-end application examples in individual function packs for motion, audio and image analysis.

Now that you have your underlying hardware and software, the next step is to acquire some data, either with your nascent embedded system or from another source. That data trains the neural network using tools such as Keras or TensorFlow. As you'd expect, this is an ongoing, iterative process: the model is continually refined, updated and improved until you achieve the level of accuracy you need. The training process produces a model that the STM32Cube.AI tool can automatically convert into optimized run-time libraries for the STM32 MCU.
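As a rough sketch of that step, and continuing with the hypothetical model above, training and export might look like the following. The placeholder data and hyperparameters stand in for your own recordings and tuning, and the saved Keras .h5 file is one of the formats the STM32Cube.AI tool can import.

```python
import numpy as np

# Placeholder data standing in for real sensor captures: windows of
# 128 samples x 3 axes, with integer labels 0-3. Replace with your own data.
x_train = np.random.randn(1000, 128, 3).astype("float32")
y_train = np.random.randint(0, 4, size=(1000,))
x_val = np.random.randn(200, 128, 3).astype("float32")
y_val = np.random.randint(0, 4, size=(200,))

# Compile and train the model built in the previous sketch; the optimizer,
# epoch count and batch size are placeholders to be tuned iteratively.
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

history = model.fit(x_train, y_train,
                    validation_data=(x_val, y_val),
                    epochs=30,
                    batch_size=64)

# Save the trained model in a format the STM32Cube.AI tool can import.
model.save("activity_model.h5")
```

From there, the saved model file is handed to STM32Cube.AI, which performs the conversion into an optimized library as described above.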

The STM32L476 family of MCUs provides the main ingredient for the AI recipe.

Ready to start your AI design? If so, you can use any of a wide range of MCUs, depending on your application. ST has posted numerous videos demonstrating its MCUs running a range of AI applications. While your performance requirements may differ and lead to a different choice, you could run object classification on a high-performance STM32H7, or wearable/wellness applications on an 80-MHz STM32L476JGY or similar microcontroller.

The bottom line is that AI is very likely in your future, if it's not already in your present. So if you aren't already familiar with how to incorporate it into your designs, it's time to learn. One important note: AI ecosystems are advancing rapidly, so it's wise to choose vendors whose roadmaps show they understand the pace of change and whose investments demonstrate a willingness to keep up.