Webcast

Sponsored by: Intel
Sep 20, 2022 9AM EDT
Did you know that more than 1 billion tons of food are wasted every year? What if we could reduce that? What if optimized AI were the key? Building AI applications to solve real-world problems is our common challenge as developers, and for a challenge like food waste, optimization is essential. In this session, we’ll show you how to optimize and accelerate the performance of your deep learning neural network model to achieve fast AI inference at the edge. This is made possible with Intel’s OpenVINO™, an open-source toolkit that enables neural network model optimization and easy deployment across multiple hardware platforms.

3 Takeaways:
• How to run fast AI inference on your existing hardware
• How to set up and run AI inference with just 6 lines of code
• How to use the OpenVINO optimization tools to run faster AI inference
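The “just 6 lines of code” takeaway refers to the OpenVINO™ Runtime Python API. A minimal sketch of what that looks like is below; the model path "model.xml" and the input shape are assumptions for illustration, and any OpenVINO IR model (e.g. one produced by the Model Optimizer) would work in their place.

```python
import numpy as np

def run_inference(model_path: str, input_data: np.ndarray) -> np.ndarray:
    """Load an OpenVINO IR model and run one inference on the CPU."""
    from openvino.runtime import Core    # OpenVINO Runtime (2022.1+ API)

    core = Core()                                 # 1. create the runtime
    model = core.read_model(model_path)           # 2. read the IR model (.xml + .bin)
    compiled = core.compile_model(model, "CPU")   # 3. compile for a target device
    result = compiled([input_data])               # 4. run a single inference
    return result[compiled.output(0)]             # 5. fetch the output tensor
```

Swapping "CPU" for another device name (such as "GPU") is how the same code targets different hardware. A call might look like `run_inference("model.xml", np.zeros((1, 3, 224, 224), dtype=np.float32))` for a typical image model; the shape here is purely illustrative.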
Moderator: Rich Nass, Executive Vice-President, Embedded Franchise, OpenSystems Media
Presented by: Anisha Udayakumar, AI Software Evangelist, Intel