Imagimob Announces tinyML for Sound Event Detection Applications on Synaptics AI Chip
June 14, 2022
Imagimob announced the availability of the Imagimob tinyML platform integrated with Synaptics’ DBM10L AI chip.
Synaptics’ existing machine learning algorithms for Sound Event Detection (SED) are fully integrated into the Imagimob platform, giving customers an end-to-end platform for developing and deploying production-ready SED applications on the Synaptics DBM10L AI chip.
Imagimob AI is a development platform for machine learning on edge devices. It allows developers to go from data collection to deployment on an edge device in minutes. Imagimob AI is used by many customers to build production-ready models for a range of use cases including audio, gesture recognition, human motion, predictive maintenance, material detection, and more.
Sound Event Detection (SED) is the task of recognizing sound events, together with their start and end times, in real time. Glass break, baby cry, gunshot, and the sound of a microwave oven are all examples of sound events. The Synaptics SED algorithms are based on many years of research and have proven robust in noisy environments and efficient when running on the DBM10L.
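To illustrate the concept, the following is a minimal sketch of the post-processing step in a typical SED pipeline: converting per-frame event probabilities (such as a neural network might emit) into timestamped event segments. The function name, frame length, and threshold are illustrative assumptions, not part of the Imagimob or Synaptics APIs.

```python
# Hypothetical sketch: turn per-frame event probabilities into
# (start_s, end_s) segments. Frame length and threshold are assumptions.
def detect_events(frame_probs, frame_len_s=0.02, threshold=0.5):
    """Return (start_s, end_s) pairs where probability >= threshold."""
    events = []
    start = None
    for i, p in enumerate(frame_probs):
        if p >= threshold and start is None:
            start = i  # event onset
        elif p < threshold and start is not None:
            events.append((start * frame_len_s, i * frame_len_s))
            start = None
    if start is not None:  # event still active at end of stream
        events.append((start * frame_len_s, len(frame_probs) * frame_len_s))
    return events

# Example: a "glass break" probability track over 8 frames (20 ms each)
probs = [0.1, 0.2, 0.9, 0.95, 0.8, 0.3, 0.1, 0.1]
print(detect_events(probs))  # one event, roughly 0.04 s to 0.10 s
```

A production system would add smoothing and minimum-duration rules, but the core idea of reporting each event with its temporal extent is the same.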
The Synaptics DBM10L is a cost-effective, low-power, small-form-factor, artificial intelligence (AI) and machine learning (ML) dual-core system-on-chip (SoC) based on a digital signal processor (DSP) and a neural network (NN) accelerator. Optimized for voice, audio, and sensor processing, it is suitable for battery-operated devices such as smartphones, tablets, and smart home devices such as remote controls, as well as wearables and hearables, including true wireless stereo (TWS) headsets.
Imagimob will demonstrate the solution at Embedded World 2022 on June 21-23 in Nuremberg, Germany.
For more information, visit: www.imagimob.com