Accelerating AI Inference Models for Computer Vision and Voice in Edge Devices

By Saumitra Jagdale

Freelance Technology Writer

April 04, 2022

Accelerating AI inference models has become an essential task as we progress toward more sophisticated and efficient AI applications. Flexible, complete AI acceleration is a critical component for success in the fast-growing AI sector.

Credits: Twitter - SiFive

RISC-V is designed to be modular, consisting of a base ISA and a set of optional extensions. The base and its standard extensions are the result of a collaborative effort between industry, academia, and the research community. The base specifies instructions and their encoding, control flow, registers, memory and addressing, logic (integer) manipulation, and ancillaries. With complete software support, including a general-purpose compiler, the base alone is enough to build a simplified general-purpose computer. The standard extensions are designed to work with every standard base, and with each other, without conflicts.
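As a small illustration of this base-plus-extensions structure, here is a minimal sketch in Python, assuming a RISC-V Linux target whose kernel reports an "isa" field in /proc/cpuinfo (as mainline kernels commonly do). It splits the reported ISA string, for example rv64imafdc, into its base and the extensions appended to it; the helper name is hypothetical.

import re

def riscv_isa_parts(cpuinfo_path="/proc/cpuinfo"):
    # Find a line such as "isa : rv64imafdc" and split it into the base
    # integer ISA (e.g. rv64i) and the extensions that follow (e.g. mafdc).
    with open(cpuinfo_path) as f:
        for line in f:
            if line.lower().startswith("isa"):
                isa = line.split(":", 1)[1].strip()
                match = re.match(r"(rv\d+i)(.*)", isa)
                if match:
                    return match.group(1), match.group(2)
    return None

print(riscv_isa_parts())  # e.g. ('rv64i', 'mafdc') on a typical RV64GC core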

The Deep Vision ARA-1 processor, built around a full software tool suite, is a clear choice for edge applications that demand high computation, model flexibility, and energy efficiency. The solution is also designed to support today's most demanding edge AI applications cost-effectively while consuming minimal power.

Credits: Fraunhofer IMS

SiFive, the founder and leader of RISC-V computing, recently announced that Deep Vision, an edge AI processor company, will incorporate SiFive RISC-V processor IP into its next-generation inference accelerators to enable more capable computer vision and voice applications in edge devices. Deep Vision will license SiFive Intelligence X280 and SiFive Essential S7 processor IP, combining the two to improve the flexibility and functionality of its solutions and allowing customers to create applications for smart city, smart retail, automotive, and other industries.

Thanks to the SiFive Intelligence X280 processor, Deep Vision accelerators can now support a broader range of neural network models and include floating-point optimizations. The SiFive Intelligence Extensions on SiFive Intelligence processors support TensorFlow Lite and a wide range of AI/ML data formats, including BFLOAT16.
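As a rough illustration of the deployment flow this enables, here is a minimal sketch in Python that converts a trained TensorFlow model to TensorFlow Lite with reduced-precision weights for an edge inference target. The saved-model path is hypothetical, and BFLOAT16 execution on the X280 would be handled by the vendor toolchain rather than by this generic conversion step.

import tensorflow as tf

# Load a trained model from a (hypothetical) SavedModel directory.
converter = tf.lite.TFLiteConverter.from_saved_model("models/vision_net")

# Optimize for size and latency while keeping float16 weights, a common
# setup for floating-point-capable edge accelerators.
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.float16]

tflite_model = converter.convert()
with open("vision_net.tflite", "wb") as f:
    f.write(tflite_model)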

The SiFive Intelligence X280 processor's flexibility and programmability as an application-class AI processor extend the Deep Vision accelerators' proven capabilities and performance, allowing them to quickly support new and evolving AI inference models in hardware, as well as AI pre-processing tasks such as image scaling, color conversion, and white noise subtraction.
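To make those pre-processing steps concrete, here is a minimal sketch in Python, using OpenCV and NumPy, of the scaling, color conversion, and noise suppression a vision pipeline typically performs before inference. On Deep Vision hardware these stages would be mapped to the X280 and accelerator rather than run like this on a host, and the function below is only an assumed example.

import cv2
import numpy as np

def preprocess(frame: np.ndarray, size=(224, 224)) -> np.ndarray:
    # Image scaling to the model's expected input resolution.
    resized = cv2.resize(frame, size, interpolation=cv2.INTER_LINEAR)
    # Color conversion: OpenCV delivers BGR frames, most models expect RGB.
    rgb = cv2.cvtColor(resized, cv2.COLOR_BGR2RGB)
    # Simple noise suppression (a stand-in for white noise subtraction).
    denoised = cv2.GaussianBlur(rgb, (3, 3), 0)
    # Normalize pixel values to [0, 1] as float32 for the model input.
    return denoised.astype(np.float32) / 255.0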

Along with other SiFive RISC-V processors, the SiFive Essential S7 processor provides real-time, deterministic processing to enable command-and-control applications in a heterogeneous compute cluster. The SiFive Essential S7 architecture offers enhanced security and real-time determinism while delivering area-efficient performance per watt, making it well suited for latency-sensitive edge compute applications.

Combining the characteristics of SiFive Intelligence RISC-V processors with Deep Vision inference accelerators is a natural way to bring new hardware to the fast-growing AI processor market quickly and efficiently. The SiFive Intelligence Extensions and RISC-V Vectors, as implemented in the SiFive Intelligence X280, are well matched to the needs of current AI applications. Thanks to its industry-leading RISC-V processor IP portfolio, SiFive is uniquely positioned to help AI firms build market-focused systems efficiently.

Further, with a best-in-class 11+ SPECint2k6/GHz benchmark result, the SiFive Performance P650 extends SiFive's reach into the extreme high-end sector as the highest-performance licensable RISC-V processor. With multi-core, multi-cluster CPU configurations of up to 16 cores and the recently ratified Hypervisor extension, the SiFive Performance P650 provides scalable performance. Over time, SiFive plans to expand its roadmap to include multi-cluster capabilities that enable 128 or more cores in a single SoC.

RISC-V Accelerating Today's AI Inference Models

Deep Vision is incorporating SiFive RISC-V processor IP into its next-generation AI inference accelerators, bringing faster and more efficient compute to computer vision and voice in edge devices. The high flexibility and rich functionality of the RISC-V processors allow customers to create sophisticated AI applications for smart city, smart retail, and automotive use cases with greater simplicity and at a much faster rate than with conventional technologies.

Credits: Author 

With AI markets proliferating, accelerating inference performance is crucial for the sector, and the combination of RISC-V with AI inference accelerators seems to be one of the best solutions available today.

Saumitra Jagdale is a Backend Developer, Freelance Technical Author, Global AI Ambassador (SwissCognitive), open-source contributor to Python projects, leader of the TensorFlow Community India, and a passionate AI/ML enthusiast.
