MLCommons announced new results from two industry-standard MLPerf™ benchmark suites: Training v3.0, which measures the performance of training machine learning models, and Tiny v1.1, which measures how quickly a trained neural network can process new data on low-power devices in small form factors.
Machine learning (ML) requires industry-standard performance benchmarks to aid in the creation and competitive assessment of numerous ML-related software and hardware solutions.
In this week’s Embedded Insiders, Brandon and Rich try to decide whether data sheet specs are reliable, or whether industry benchmarks are the only reasonably accurate measure of component performance short of testing the parts yourself.
Classic DSP, New Neural Networks & Better Benchmarks Improve Local Voice Activation at the Edge - July 19, 2021
If you’ve ever used a virtual assistant, you likely assumed you were talking to a device so smart it could answer almost any question you asked it. Well, actually, Amazon Echos, Google Homes, and other devices like them usually have no idea what you’re talking about.
MLPerf Tiny Inference Benchmark Lays Foundation for TinyML Technology Evaluation, Commercialization - July 2, 2021
The speed with which edge AI ecosystems like TinyML are evolving has made standardization difficult, to say nothing of creating the performance and resource-utilization benchmarks that could simplify technology evaluation. Such benchmarks would be hugely beneficial to the ML industry, helping to accelerate solution comparison, selection, and productization.
MLCommons released new results for MLPerf Training v1.0, the organization's machine learning training performance benchmark suite.
MLCommons launched a new benchmark, MLPerf Tiny Inference, to measure how quickly a trained neural network can process new data on low-power devices in small form factors; the suite also includes an optional power measurement.
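To make concrete what an inference-latency measurement involves, here is a minimal sketch that times repeated invocations of a TensorFlow Lite model in Python. It is not the official MLPerf Tiny harness, which runs C/C++ reference implementations on actual microcontroller hardware; the model path, input shape, and float32 input type are placeholder assumptions for illustration.

```python
# Minimal sketch of inference-latency timing, NOT the MLPerf Tiny harness.
# "model.tflite" is a placeholder; a float32 input model is assumed.
import time
import numpy as np
import tflite_runtime.interpreter as tflite

interpreter = tflite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]

# Random data standing in for real benchmark inputs (e.g. audio features).
data = np.random.random_sample(inp["shape"]).astype(np.float32)

# Warm-up run so one-time setup costs don't skew the measurement.
interpreter.set_tensor(inp["index"], data)
interpreter.invoke()

runs = 100
start = time.perf_counter()
for _ in range(runs):
    interpreter.set_tensor(inp["index"], data)
    interpreter.invoke()
elapsed = time.perf_counter() - start

print(f"Mean latency: {1000 * elapsed / runs:.2f} ms per inference")
```

A standardized benchmark suite like MLPerf Tiny goes well beyond this kind of ad hoc timing loop, fixing the models, datasets, accuracy targets, and measurement rules so that results from different vendors can be compared fairly.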