Webcast

Sep 20, 2022, 9 AM EDT
Deep learning models are advancing at a fast pace, creating a difficult dilemma for system developers. When you begin developing an edge AI system, you select the best available model for your needs. But by the time you’re ready to deploy your product, your original model is obsolete. You’d like to upgrade your model, but your neural network accelerator was designed with previous-generation models in mind and struggles to deliver top performance and efficiency on state-of-the-art models. The solution is hardware that adapts to the needs of whatever algorithms you choose. Hardware programmability enables Lattice FPGAs to support the latest models and techniques with astounding efficiency, typically consuming less than 200 mW while running visual AI workloads at 30+ frames per second. In this talk, we’ll show how Lattice FPGAs, coupled with our production-proven sensAI solution stack, are being used to quickly develop super-efficient AI implementations that enable groundbreaking features in smart edge devices.
Moderator: Rich Nass, Executive Vice-President, Embedded Franchise, OpenSystems Media
Presented by: Hussein Osman, Director of Segment Marketing, Lattice Semiconductor