Cadence Accelerates Intelligent SoC Development with Comprehensive On-Device Tensilica AI Platform

By Tiera Oliver

Assistant Managing Editor

Embedded Computing Design

September 13, 2021


Cadence Design Systems announced its Tensilica AI Platform for accelerating AI SoC development, including three supporting product families optimized for varying data and on-device AI requirements.

Spanning the low, mid, and high end, the comprehensive Cadence Tensilica AI Platform is designed to deliver scalable and energy-efficient on-device to edge AI processing for today's ubiquitous AI SoCs. According to the company, a new companion AI neural network engine (NNE) consumes 80% less energy per inference and delivers more than 4X TOPS/W compared to standalone Tensilica DSPs, while neural network accelerators (NNAs) deliver suitable AI performance and energy efficiency in a turnkey solution.

Targeting intelligent sensor, internet of things (IoT) audio, mobile vision/voice AI, IoT vision, and advanced driver assistance systems (ADAS) applications, the Tensilica AI Platform delivers optimized power, performance, and area (PPA) and scalability through a common software platform. Built upon the application-specific Tensilica DSPs, the Tensilica AI Platform product families include:

  • AI Base: Includes the Tensilica HiFi DSPs for audio/voice, Vision DSPs, and ConnX DSPs for radar/lidar and communications, combined with AI instruction-set architecture (ISA) extensions.
  • AI Boost: Adds a companion NNE, initially the Tensilica NNE 110 AI engine, which scales from 64 to 256 GOPS and provides concurrent signal processing and efficient inferencing.
  • AI Max: Encompasses the Tensilica NNA 1xx AI accelerator family—currently including the Tensilica NNA 110 accelerator and the NNA 120, NNA 140 and NNA 180 multi-core accelerator options—which integrates the AI Base and AI Boost technology. The multi-core NNA accelerators can scale up to 32 TOPS, while future NNA products are targeted to scale to 100s of TOPS.

All of the NNE and NNA products include random sparse compute to improve performance, run-time tensor compression to decrease memory bandwidth, and pruning plus clustering to reduce model size.
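To make those optimization terms concrete, the sketch below (an illustrative example, not Cadence's implementation) shows magnitude pruning and weight clustering on a small weight tensor, the general techniques that sparsity-aware hardware can exploit: pruned zeros can be skipped at run time, and clustered weights need only a small codebook plus per-weight indices in memory.

```python
def prune(weights, threshold):
    """Magnitude pruning: zero out weights whose magnitude falls below
    the threshold, producing a sparse tensor."""
    return [w if abs(w) >= threshold else 0.0 for w in weights]

def cluster(weights, centroids):
    """Weight clustering: snap each weight to its nearest centroid, so
    only the centroid codebook and per-weight indices must be stored."""
    return [min(centroids, key=lambda c: abs(c - w)) for w in weights]

weights = [0.91, -0.02, 0.48, 0.03, -0.55, 0.01, 0.87, -0.49]

sparse = prune(weights, threshold=0.1)            # 3 of 8 weights become zero
quantized = cluster(sparse, [0.0, 0.5, -0.5, 0.9])  # 4-entry codebook

print(sparse)     # [0.91, 0.0, 0.48, 0.0, -0.55, 0.0, 0.87, -0.49]
print(quantized)  # [0.9, 0.0, 0.5, 0.0, -0.5, 0.0, 0.9, -0.5]
```

After both steps, the eight distinct floating-point values collapse to four codebook entries, and the zeros compress away entirely, which is the kind of reduction the platform's pruning and clustering support targets.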

Per the company, comprehensive common AI software addresses all target applications, streamlining product development and enabling smooth migration as design requirements evolve. This software includes the Tensilica Neural Network Compiler, which supports the industry-standard TensorFlow, ONNX, PyTorch, Caffe2, TensorFlow Lite, and MXNet frameworks for automated end-to-end code generation; the Android Neural Network Compiler; TFLite Delegates for real-time execution; and TensorFlow Lite Micro for microcontroller-class devices.

The NNE 110 AI engine and the NNA 1xx AI accelerator family support Cadence's Intelligent System Design strategy, which enables pervasive intelligence for SoC design, and are expected to be generally available in the fourth quarter of 2021.

For more information, visit www.cadence.com/go/TensilicaAI.

Tiera Oliver is the assistant managing editor at Embedded Computing Design. She is responsible for web content editing, product news, and story development. She also manages, edits, and develops content for ECD podcasts, including Embedded Insiders.

She utilizes her expertise in journalism and content management to oversee editorial content, coordinate with editors, and ensure high-quality output across web, print, and multimedia platforms. She manages diverse projects, assists in the production of digital magazines, and hosts company podcasts by conducting in-depth interviews with industry leaders to deliver engaging and insightful discussions.

Tiera attended Northern Arizona University, where she received her bachelor's in journalism and political science. She was also a news reporter for the student-led newspaper, The Lumberjack. 
