Embedded Edge Devices are Getting Smarter, More Efficient

By Ken Briodagh

Senior Technology Editor

Embedded Computing Design

July 03, 2024



The Edge is nothing new. Edge networking, Edge processing, and most recently Edge intelligence have been where many companies have been innovating and expanding capabilities and product lines for years.

The reasons for this are many, and the Edge is not the exclusive home of innovation, but a merging of historically disparate horizontal layers is driving much of the innovation there. These layers are IoT, Embedded, and AI, and they’re combining into one fused horizontal technical layer that will likely inform every deployment or upgrade at the Edge for the foreseeable future. The reason is simple: enterprises and end users want their applications to be unified and easy to operate and manage, but also powerful, functional, and capable. That means they need the power of embedded processing and efficient energy management, sensor fusion for IoT connectivity and data collection, and command-and-control automation from AI and ML algorithms. Customers are demanding that all of this flow through a single UI, and that is what’s driving this horizontal convergence.

Let’s look at some of the new innovations at the edge that are contributing to this evolution.


Ceva recently announced an extension of its Intelligent Edge IP with new TinyML optimized NPUs for AIoT devices. The Ceva-NeuPro-Nano NPUs are designed to be ultra-low power and deliver powerful performance in a small, Edge-friendly area for consumer, industrial, and enterprise products, the company said.

The market for TinyML is growing along with the horizontal technology convergence. ABI Research has predicted that by 2030, more than 40 percent of TinyML shipments will be powered by dedicated TinyML hardware rather than all-purpose MCUs. Ceva has said that it intends to be a leader in that space and is starting now with the new NeuPro-Nano NPU.

“OEMs are trying to force more features into SoCs, but running out of compute,” said Chad Lucien, VP and GM of the sensors and audio business unit at Ceva. “We’re constantly running up against situations where a customer is using an MCU that might not fit the function.”

The company’s new Embedded AI NPU architecture reportedly is fully programmable and executes neural networks, feature extraction, control code, and DSP code, and supports most advanced machine learning data types and operators, including native transformer computation, sparsity acceleration, and fast quantization. This brings intelligence into any Edge device without sacrificing power efficiency, because its optimized, self-sufficient architecture is designed to deliver superior power efficiency with a smaller silicon footprint.
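Fast quantization is worth unpacking, since it is central to how TinyML workloads fit on small NPUs. Ceva has not published its scheme, so purely as an illustration of the general technique, here is a minimal sketch of symmetric int8 post-training quantization; the function names and sample weights are hypothetical, not anything from Ceva’s IP.

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric per-tensor int8 quantization: map floats into [-127, 127]."""
    scale = np.max(np.abs(weights)) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float values from the int8 codes."""
    return q.astype(np.float32) * scale

# Toy weight tensor; real models have thousands to millions of weights.
w = np.array([0.02, -0.51, 0.33, 1.27], dtype=np.float32)
q, s = quantize_int8(w)
err = np.max(np.abs(dequantize(q, s) - w))  # worst-case rounding error
```

The payoff at the Edge is that each weight shrinks from 4 bytes to 1, and inference can run on integer MAC units instead of floating-point hardware, which is where the power-efficiency claims for TinyML silicon come from.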

"Ceva-NeuPro-Nano opens exciting opportunities for companies to integrate TinyML applications into low-power IoT SoCs and MCUs and builds on our strategy to empower smart edge devices with advanced connectivity, sensing and inference capabilities,” said Lucien. “The Ceva-NeuPro-Nano family of NPUs enables more companies to bring AI to the very edge, resulting in intelligent IoT devices with advanced feature sets that capture more value for our customers."

Ceva-NeuPro-Nano NPUs are available for licensing today.


The most popular and desired IoT application right now is vision, be it via LiDAR, motion sensing, or most commonly, cameras. With that in mind, STMicroelectronics has unveiled a new image sensor ecosystem for advanced camera performance called ST BrightSense.

According to the recent announcement, BrightSense is designed to enable quicker and smarter designs of compact power-efficient products for factory automation, robotics, AR/VR, and medical applications, all of which are arguably the most critical non-consumer Edge use cases.

The company also released a set of plug-and-play hardware kits, evaluation camera modules, and software at the same time, reportedly to ease development with the BrightSense global-shutter image sensors. BrightSense image sensors sample all pixels simultaneously, ST said, which means the global-shutter sensors can capture images of fast-moving objects without distortion and can reduce power when coupled to a lighting system, unlike a conventional rolling shutter.
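To see why sampling all pixels at once avoids the distortion ST describes, consider a toy model (purely illustrative, not ST’s implementation): a rolling shutter reads each row at a slightly later instant, so a moving object lands at a different horizontal position in each row and the image skews, while a global shutter samples every row at the same moment.

```python
# Toy model: a vertical bar moves right at constant speed across the frame.
# A global shutter samples every row at t=0; a rolling shutter samples
# row r at time r * LINE_DT, so the bar's captured position skews per row.

ROWS = 8          # rows in our toy sensor
SPEED = 100.0     # bar speed, pixels per second
LINE_DT = 0.001   # rolling-shutter row readout interval, seconds

def bar_x(t: float) -> float:
    """True horizontal position of the bar at time t."""
    return 10.0 + SPEED * t

global_capture = [bar_x(0.0) for _ in range(ROWS)]          # straight bar
rolling_capture = [bar_x(r * LINE_DT) for r in range(ROWS)] # skewed bar

skew = rolling_capture[-1] - rolling_capture[0]  # distortion in pixels
```

In the global capture every row records the bar at the same position; in the rolling capture the bar drifts by `SPEED * LINE_DT` pixels per row, which is exactly the skew artifact that matters for barcode reading and fast-moving robotics scenes.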

ST said its CMOS sensors use backside-illuminated pixel technology and deliver high image sharpness to capture fine details, even in motion. Applications include barcode reading, obstacle avoidance in mobile robots, and face recognition. Since form factor is always a key concern at the edge, the company used a 3D-stacked construction to allow for a small die area to ease integration anywhere space is limited.



Processing and power management are the heart of embedded, and they make up the engine driving both IoT and AI at the Edge. Staying with vision: robotics in manufacturing and warehousing, along with ADAS and other automated driving systems, make vision systems critical infrastructure. The processing needs are enormous, while form factors must remain compact and robust for mobile edge devices operating in sometimes intense conditions.

One recent innovation in this area comes from indie Semiconductor, which recently announced its iND880xx product line, designed specifically for the specialized requirements of Advanced Driver Assistance Systems (ADAS) and vision sensing applications.

The company said that it has seen latency in initialization and processing hinder ADAS camera systems and keep them from achieving real-time safety capabilities. That’s why indie Semiconductor says its iND880xx family was built to attack latency specifically, through proprietary technology that reportedly supports simultaneous low-latency processing of four independent sensor inputs and can deliver throughput of up to 1,400 megapixels per second.
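As a rough back-of-envelope on what that headline number means (the sensor resolution below is illustrative, not indie’s published spec), 1,400 megapixels per second shared evenly across four streams leaves each camera a generous frame-rate budget:

```python
# Back-of-envelope: frames per second available per camera, given
# 1400 MP/s of aggregate throughput split across four sensor streams.
# The 2.1 MP frame size is an assumed example (~1920x1080), not a spec.

THROUGHPUT_MP_S = 1400.0   # aggregate throughput, megapixels per second
STREAMS = 4                # independent sensor inputs
MP_PER_FRAME = 2.1         # assumed per-frame size in megapixels

per_stream_mp_s = THROUGHPUT_MP_S / STREAMS   # 350 MP/s per camera
fps = per_stream_mp_s / MP_PER_FRAME          # ~166.7 frames per second
```

Even under this even-split assumption, each 1080p-class stream could in principle be serviced well above typical 30 to 60 fps automotive camera rates, which is why the aggregate figure is pitched at real-time safety workloads.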


Of course, these aren’t the only examples, just some of the most recent. It’s important for engineers and developers to consider all three of these pillars of IoT, AI, and Embedded when designing new products or upgrading existing lines. It’s becoming ever clearer that keeping these layers siloed is going the way of the dodo and Blockbuster. It’s time to think about how you’re going to get to Blu-ray.

Ken Briodagh is a writer and editor with two decades of experience under his belt. He is in love with technology and if he had his druthers, he would beta test everything from shoe phones to flying cars. In previous lives, he’s been a short order cook, telemarketer, medical supply technician, mover of the bodies at a funeral home, pirate, poet, partial alliterist, parent, partner and pretender to various thrones. Most of his exploits are either exaggerated or blatantly false.

More from Ken