Ten Things to Consider When Developing ML at the Edge

By Yann LeFaou

Associate Director, Touch and Gesture Business Unit

Microchip Technology

November 27, 2023


From predictive maintenance and image recognition to remote asset monitoring and access control, the demand for industrial IoT applications capable of running machine learning (ML) models on local devices, rather than in the cloud, is growing rapidly.

As well as supporting environments where sensor data must be collected far from the cloud, these so-called ‘ML at the Edge’ or Edge ML deployments offer advantages that include low-latency real-time inference, reduced communication bandwidth, improved security, and lower cost. Of course, implementing Edge ML is not without its challenges, whether that is limited device processing power and memory, the availability or creation of suitable datasets, or the fact that most embedded engineers do not have a data science background. The good news, however, is that a growing ecosystem of hardware, software, development tools, and support is helping developers address these challenges.

In this article we will take a closer look at the challenges and identify ten key factors that embedded designers should consider.

Introducing Edge ML

Fundamental to delivering artificial intelligence (AI), machine learning (ML) uses algorithms to draw inferences from both live and historic data. To date, most ML applications have been implemented with the majority of the data processing performed in the cloud. Edge ML reduces or eliminates that cloud dependency by enabling local IoT devices to analyze data, build models, make predictions, and take action. Moreover, the machine can continually improve its efficiency and accuracy, automatically and with little or no human intervention.

Edge ML has the potential to give Industry 4.0 a great boost, with real-time edge processing improving manufacturing efficiency, while applications ranging from building automation to security and surveillance also stand to benefit. As a result, the potential of ML at the edge is huge, as reflected in a recent study by ABI Research, which forecasts that the edge ML enablement market will exceed US$5 billion by 2027 [1]. Also, while ML was once the domain of the mathematics and scientific communities, it is increasingly part of the engineering process and, in particular, an important element of embedded systems engineering.

As such, the challenges associated with implementing Edge ML are not so much about “Where do we start?” but rather, “How do we do this quickly and cost-effectively?” The following ten considerations should help answer this question.

1. Data Capture

To date, most ML deployments have been implemented in powerful computers or cloud servers. However, Edge ML needs to be implemented in space- and power-constrained embedded hardware. The use of smart sensors that perform some level of pre-processing at the data capture stage will make data organization and analysis much easier, as they cater for the first two steps in the ML process flow (see figure 1).

Figure 1 – The machine learning process flow

A smart sensor can run one of two types of model: those trained to solve simple classification problems and those trained to solve regression-based problems.
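To make the distinction concrete, here is a minimal Python sketch (synthetic data and a hypothetical feature set, purely for illustration) that trains one model of each type on the kind of summary features a smart sensor might extract on-device:

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

# Synthetic stand-in for pre-recorded sensor data: 200 windows of
# 3-axis accelerometer samples, shape (windows, samples, axes).
rng = np.random.default_rng(seed=0)
windows = rng.normal(size=(200, 128, 3))

def extract_features(window):
    """The kind of pre-processing a smart sensor can do on-device:
    condense a raw window into a handful of summary features."""
    return np.concatenate([
        window.mean(axis=0),  # DC offset per axis
        window.std(axis=0),   # vibration energy per axis
    ])

X = np.array([extract_features(w) for w in windows])

# Classification: predict a discrete label per window (e.g. normal/fault).
y_class = rng.integers(0, 2, size=len(X))
classifier = LogisticRegression().fit(X, y_class)

# Regression: predict a continuous value per window (e.g. wear level).
y_reg = rng.normal(size=len(X))
regressor = LinearRegression().fit(X, y_reg)
```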

2. Interfaces

ML models must be deployable, and that requires interfaces between the constituent (software) parts of the machine. The quality of these interfaces will govern how efficiently the machine works and how well it can self-learn. The boundary of an ML model comprises inputs and outputs. Catering for all input features is relatively easy. Catering for the model’s predictions is less so, particularly in an unsupervised system.
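As an illustrative sketch (all names below are hypothetical), one way to keep that boundary clean is to express the model’s inputs and outputs as explicit types, so the rest of the system never depends on the model’s internals:

```python
from dataclasses import dataclass

@dataclass
class ModelInput:
    """Every input feature the model expects, with units made explicit."""
    temperature_c: float
    vibration_rms_g: float
    speed_rpm: float

@dataclass
class ModelOutput:
    """The prediction plus a confidence score the caller can act on."""
    fault_detected: bool
    confidence: float  # 0.0 .. 1.0

class FaultDetector:
    """A thin boundary around the deployed model: callers see only typed
    inputs and outputs, never tensors or framework details."""

    def predict(self, x: ModelInput) -> ModelOutput:
        # Placeholder rule standing in for real inference.
        score = min(1.0, x.vibration_rms_g / 2.0)
        return ModelOutput(fault_detected=score > 0.5, confidence=score)

print(FaultDetector().predict(ModelInput(25.0, 1.8, 1500.0)))
```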

Of course, interfaces also relate to the physical connection between elements of the hardware. These may be as simple as connections for USB or external memory or more sophisticated interfaces that support connections for video streams and user-specific inputs. By definition, Edge ML applications are space-, power- and cost-constrained, so consideration should be given to the minimum number and type of interfaces needed.

3. Creating Optimized Datasets

The use of commercially available datasets (collections of data that have already been arranged in some kind of order) is a good way to fast-track an Edge ML development project. The dataset needs to be optimized for use, according to the purpose of the Edge ML device.

For example, consider a security scenario in which the behavior of people must be monitored and suspicious behavior automatically flagged. If a local monitoring device has embedded vision and the ability to recognize what people are doing – standing, sitting, walking, running or leaving a bag/case unattended for instance – decisions can be made at the source of the data.

Rather than training the device from scratch, part of the input dataset could be an existing training set (see figure 2) such as MPII Human Pose, which includes around 25,000 images extracted from online videos. The data is labeled, so it can be used for supervised machine learning.

Figure 2 – The input dataset includes training data.
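A first practical step with any such labeled dataset is to partition it so the model can later be validated on examples it never saw during training. A minimal scikit-learn sketch (the file names and labels are hypothetical):

```python
from sklearn.model_selection import train_test_split

# Hypothetical labelled samples: (image path, activity label) pairs,
# in the spirit of a pose/activity dataset such as MPII Human Pose.
samples = [
    ("frames/0001.jpg", "standing"),
    ("frames/0002.jpg", "walking"),
    ("frames/0003.jpg", "running"),
    ("frames/0004.jpg", "sitting"),
] * 50  # repeated only so the split below has enough data

paths = [path for path, _ in samples]
labels = [label for _, label in samples]

# Hold back a validation set; stratify so every activity appears in
# both splits in the same proportion.
train_paths, val_paths, train_labels, val_labels = train_test_split(
    paths, labels, test_size=0.2, stratify=labels, random_state=42
)
```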

4. Requirements for Processing Power

The computational power required for Edge ML varies depending on application. Image processing, for example, needs more computational power than an application based on polling a sensor or conditioning an input feed.

ML models deployed on smart devices work best if they are small and the tasks required of them are simple. As models grow in size and tasks in complexity, the need for processing power grows steeply; unless that need is satisfied, there will be a system performance penalty in terms of speed and/or accuracy. However, the ability to use smaller chips for ML is being aided by improvements in algorithms, open-source models and frameworks (such as those emerging from the TinyML community), and modern IDEs that help engineers produce efficient designs.
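One standard way to shrink a model to fit such constraints is post-training quantization. The sketch below uses TensorFlow Lite’s converter on a toy Keras model (the model itself is purely illustrative):

```python
import tensorflow as tf

# A small Keras model stands in here for whatever is being deployed.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(4, activation="softmax"),
])

# Post-training quantization: trade a little accuracy for a much
# smaller, faster model on a constrained device.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open("model_quant.tflite", "wb") as f:
    f.write(tflite_model)
print(f"Converted model size: {len(tflite_model)} bytes")
```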

5. Semiconductors/Smart Sensors

Many Edge ML applications require edge processing for image and audio recognition. MPUs and FPGAs capable of supporting cloud-based processing for such applications have been available for some time, but now the availability of low-power semiconductors that integrate this functionality is making edge-based applications much simpler to develop.

For example, Microchip’s 1 GHz SAMA7G54 (figure 3) is the industry’s first single-core MPU with a MIPI CSI-2 camera interface and advanced audio features. This device integrates complete imaging and audio subsystems, supporting camera sensors of up to 8 Mpixels, 720p video at 60 fps, up to four I2S interfaces, one S/PDIF transmitter and receiver, and a four-stereo-channel audio sample rate converter.

Figure 3: Microchip’s SAMA7G54 with integrated video and audio capabilities

What’s more, a dedicated device is no longer needed to deliver advanced edge-based processing. Engineers are finding that advances in semiconductor technologies and ML algorithms are coming together to make commercially available 16-bit and even 8-bit MCUs an option for effective Edge ML. For many applications, the use of such low-power, small form factor devices is a prerequisite for delivering battery-powered, sensor-based industrial IoT Edge ML systems.

6. Open-Source Tools, Models, Frameworks, & IDEs

In any development the availability of open-source tools, models, frameworks, and well-understood integrated development environments will simplify and speed design, testing, prototyping, and minimize all-important time-to-market. In the case of Edge ML, the emergence of ‘Tiny Machine Learning’ or TinyML is particularly important. As per the definition of the tinyML Foundation, this is the “fast growing field of machine learning technologies and applications including hardware (dedicated integrated circuits), algorithms and software capable of performing on-device sensor (vision, audio, IMU, biomedical, etc.) data analytics at extremely low power, typically in the mW range and below, and hence enabling a variety of always-on use-cases and targeting battery operated devices.”

Thanks to the TinyML movement, recent years have seen an exponential growth in the availability of tools and support that make the job of the embedded design engineer easier.

Good examples of such tools are those from Edge Impulse and SensiML™. Providing ‘TinyML as a service,’ in which the application of ML can be delivered in a footprint of just a few kilobytes, these tools are fully compatible with the TensorFlow™ Lite library for deploying models on mobile, microcontroller, and other edge devices.

By choosing such tools, developers can deliver rapid classification, regression, and anomaly detection; simplify the collection of real sensor data; perform live signal processing from raw data to neural networks; and speed testing and subsequent deployment to a target device.
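To give a feel for the deployment end of that pipeline, the sketch below runs a converted TensorFlow Lite model through the Python interpreter; the model path is illustrative, and the same set-input/invoke/read-output flow is what the device-side runtime performs in C:

```python
import numpy as np
import tensorflow as tf

# Load a converted .tflite model and prepare its tensors.
interpreter = tf.lite.Interpreter(model_path="model_quant.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# A dummy input shaped and typed to match what the model expects.
x = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])

interpreter.set_tensor(input_details[0]["index"], x)
interpreter.invoke()
prediction = interpreter.get_tensor(output_details[0]["index"])
print(prediction)
```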

7. Development Kits

The increased availability of development kits is another factor helping to accelerate the implementation of Edge ML applications. Many products that go to market are based on the hardware and firmware found within (and the drivers, software modules, and algorithms running on) embedded system development kits, and kits suitable for developing ML applications are available from a host of vendors. For example, the Raspberry Pi 4 Model B is based on the Broadcom® BCM2711 quad-core Cortex®-A72 64-bit SoC (clocked at 1.5 GHz), has a Broadcom VideoCore® VI GPU and 1/2/4 GB of LPDDR4 RAM, and can deliver 13.5 to 32 GFLOPS of computing performance.

In creating a design, it is worth spending some time researching the parts used on the development kits, as it can be beneficial to build the final application using the same silicon. For instance, if the ML application requires embedded vision, Microchip’s PolarFire® SoC FPGAs are well suited to compute-intensive edge vision processing, supporting up to 4K resolution with low-power 12.7 Gbps SERDES transceivers.

8. Data Security

When it comes to security, the good news is that with Edge ML, there is far less data being transferred to the cloud, meaning the potential surface for cyberattacks is significantly reduced. That said, an Edge ML deployment introduces new challenges as all edge devices—whether ML-enabled or not—no longer have the inherent security of the cloud and must be independently protected just like any other IoT device or embedded system connected to a network.

Security considerations include:

  • How easy is it for hackers to change the data (being input and/or used for training) or the ML model?
  • How secure is the data? Can it be accessed before encryption? Note that encryption requires keys to be kept in a safe (not obvious) place.
  • How secure is the network? Is there a risk of unauthorized (or seemingly authorized) devices connecting and doing harm?
  • Can the Edge ML device be cloned?

The level of security required will of course depend on the application (it could be safety-critical, for example) and/or the nature of the ‘bigger system’ of which the Edge ML device is a part.
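Addressing the first point on the checklist above, here is a minimal sketch of one possible mitigation: verify the model file’s integrity with a keyed hash before loading it. The key handling is deliberately simplified for illustration; on a real device the key and expected tag would live in secure storage such as a secure element.

```python
import hashlib
import hmac
from pathlib import Path

def model_is_authentic(model_path: str, expected_tag: bytes, key: bytes) -> bool:
    """Refuse to load a model file whose HMAC tag does not match the one
    provisioned with the device -- a guard against an attacker swapping
    or tampering with the model on disk."""
    data = Path(model_path).read_bytes()
    tag = hmac.new(key, data, hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected_tag)

# Illustrative call; the tag and key values here are placeholders.
if model_is_authentic("model_quant.tflite", expected_tag=b"\x00" * 32, key=b"demo-key"):
    print("Model verified; safe to load")
```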

9. In-House Capabilities

Within a typical engineering team there will be varying degrees of ML and AI understanding. Open-source tools, development kits, and off-the-shelf datasets mean that embedded engineers do not need an in-depth understanding of data science or deep learning neural networks. However, when embracing any new engineering discipline or methodology (or investing in tools), time spent on training can lead to shorter development times, fewer design spins, and improved output-per-engineer in the long run.

The wealth of information about ML available online in the form of tutorials, whitepapers, and webinars (as well as the fact that engineering tradeshows run ML seminars and workshops) provides many opportunities to enhance development team capabilities. More formal courses include MIT’s Professional Certificate Program in ML and AI, while Imperial College London runs an online course that includes a module on how to develop and refine ML models using Python and industry-standard tools to measure and improve performance.

Finally, it is now possible to augment an engineering team’s capabilities with generative AI tools, allowing relative novices to code complex applications, while weighing ML-based approaches against directly coded ones can also lead to shorter development times, fewer re-spins, and improved outputs.

10. Supplier Support and Partnerships

Developing an ML-enabled application for a device is made much easier with support from suppliers already active in the field. For example, for cloud-based ML, AWS has its popular ‘Machine Learning Competency Partners’ program. In the case of Edge ML specifically, it is advisable to look beyond simply the product being supplied and consider the potential benefits of the supplier’s existing collaborations. Microchip, for instance, has invested significant resources in establishing relationships with partners ranging from sensor vendors to tool providers to ensure customers have access to everything from basic advice and guidance through to the delivery of turnkey solutions.

Conclusion

While each of the ten points discussed briefly above could easily warrant an article of its own, the objective of this article was to help embedded systems designers identify some key factors to take into account before embarking on an Edge ML project.

By considering each of these issues carefully at the outset, it should be possible to form a strategy that creates optimized solutions meeting size, power, cost, and performance objectives while de-risking the project, minimizing re-spins, and reducing overall time-to-market.

[1] ABI Research, “Edge ML Enablement: Development Platforms, Tools and Solutions,” application analysis report, June 2022.


Yann LeFaou is Associate Director for Microchip’s touch and gesture business unit. In this role, LeFaou leads a team developing capacitive touch technologies and also drives the company’s machine learning (ML) initiative for microcontrollers and microprocessors. He has held a series of technical and marketing roles at Microchip, including leading the company’s global marketing activities for capacitive touch, human machine interface, and home appliance technology. LeFaou holds a degree from ESME Sudria in France.

