New RISC-V Innovations Lead AI to Open Standard
August 21, 2024
If the growing number of new RISC-V announcements isn't proof enough of the open, license-free instruction set architecture's momentum, there is a mountain of analyst predictions, trend research, and market analysis that seems to be getting ever more optimistic about the space.
Research firm Omdia recently forecast that RISC-V-based processor shipments could increase by nearly 50 percent per year through 2030. Although the largest RISC-V opportunities are likely to be in the automotive and industrial sectors, AI applications look set to drive that growth in almost every vertical.
"The rise of AI, the increase in use cases and capabilities, means a lot of new territory is being revealed and all of it has potential for RISC-V," said Edward Wilford, Senior Principal Analyst for IoT, Omdia. "The growth of RISC-V is concurrent with the rise of AI and especially edge AI, and that will provide a massive opportunity for ISA."
With AI in the driver’s seat, it’s no surprise that many recent RISC-V announcements have centered on advanced intelligent capabilities and applications.
SiFive
SiFive recently announced a new RISC-V processor tailored for datacenter AI workloads. The SiFive Performance P870-D datacenter processor is reportedly designed for parallelizable infrastructure workloads, and the company says that when it is combined with products from the SiFive Intelligence family, datacenter architects will be able to build high-performance, energy-efficient compute subsystems for AI-powered applications.
The P870-D supports the open AMBA CHI protocol, allowing users to scale the number of clusters and boost performance while minimizing power consumption. By harnessing a standard CHI bus, SiFive said, the P870-D can scale up to 256 cores, using protocols like Compute Express Link (CXL) and CHI chip-to-chip (C2C) to enable coherent, high-core-count heterogeneous SoCs and chiplet configurations.
“SiFive brings a clean, modern approach to the AI era with our broad portfolio of RISC-V solutions. The new P870-D enhances our proven performance architecture to bring new levels of performance, flexibility, and scalability,” said John Ronco, SVP of Product, SiFive. “The full solution offering from SiFive… combined with our intelligence processors for dedicated AI compute makes it easy for our customers to achieve the most effective performance/watt/dollar metrics on AI and Datacenter workloads.”
The P870-D processor is sampling to lead customers now, with final production release expected by the end of 2024.
Akeana
Coming out of stealth this month, Akeana has revealed a full suite of RISC-V IP products targeted directly at meeting or exceeding ARM’s offerings at every level, but at a lower cost.
The company is backed by Kleiner Perkins, Mayfield, and Fidelity Ventures, and although its products span the entire range of processing levels, the company is offering an AI-specific solution for lowering latency without sacrificing processing power. The AI Matrix computation engine is designed to offload Matrix Multiply operations for AI acceleration, the company said. It is reportedly configurable in size, supports several data types, and attaches to the coherent cluster cache block like a core for optimal data sharing.
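For readers less familiar with the workload, the sketch below shows the kind of dense matrix-multiply kernel that dominates AI inference and training, and that an engine like the one described would offload from the general-purpose cores. The function name and layout are purely illustrative and are not Akeana's API.

```c
/*
 * Illustrative only: a naive dense matrix multiply, C = A * B, the kind of
 * kernel an AI matrix engine offloads from general-purpose cores.
 * A is m x k, B is k x n, C is m x n, all row-major float32.
 */
#include <stddef.h>

void matmul_f32(const float *a, const float *b, float *c,
                size_t m, size_t k, size_t n)
{
    for (size_t i = 0; i < m; i++) {
        for (size_t j = 0; j < n; j++) {
            float acc = 0.0f;
            for (size_t p = 0; p < k; p++) {
                acc += a[i * k + p] * b[p * n + j];  /* multiply-accumulate */
            }
            c[i * n + j] = acc;                      /* one output element */
        }
    }
}
```

Every output element is an independent multiply-accumulate chain, so the work parallelizes cleanly; that independence is what makes it attractive to hand off to a dedicated matrix engine sitting next to the coherent cluster cache, where operands can be shared without extra data movement.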
Akeana's launch comes three years after its founding. The company has reportedly raised more than $100 million in capital, and its product line is available now. The line includes three microcontroller lines, Android clusters, AI vector cores and subsystems, and compute clusters for networking and data centers. In addition to the Matrix Multiply AI Accelerator, the company has launched:
- Akeana 100 Series: a line of configurable processors with 32-bit RISC-V cores that support applications like embedded microcontrollers, edge gateways, and personal computing devices.
- Akeana 1000 Series: a processor line that includes 64-bit RISC-V cores and an MMU to support rich operating systems while maintaining low power and a small die area. These processors support in-order or out-of-order pipelines, multi-threading, the vector extension, the hypervisor extension, and other extensions that are part of recent and upcoming RISC-V profiles, as well as optional AI computation extensions.
- Akeana 5000 Series: a line of extreme-performance processors. This line provides 64-bit RISC-V cores optimized for demanding applications in next-gen devices, laptops, data centers, and cloud infrastructure. These processors are compatible with the Akeana 1000 Series but offer much higher single-thread performance.
- Processor System IP: a collection of IP blocks needed for creation of processor SoCs, including a Coherent Cluster Cache, I/O MMU, and Interrupt Controller IPs. In addition, Akeana provides Scalable Mesh and Coherence Hub IP (compatible with AMBA CHI) to build large coherent compute subsystems for Data Centers and other use cases.
"Our team has a proven track record of designing world-class server chips, and we are now applying that expertise to the broader semiconductor market as we formally go to market,” said Rabin Sugumar, Akeana CEO. “With our rich portfolio of customizable cores and special security, debug, RAS, and telemetry features, we provide our customers with unparalleled performance, observability, and reliability. We believe our products will revolutionize the industry."
Akeana is a member of the RISC-V Board of Directors and is also participating in the RISE project to accelerate the availability of software for RISC-V.
This is just a sampling of the many RISC-V product and solution announcements built to address the demand for integrated AI and ML in embedded computing. The upcoming RISC-V Summit in Santa Clara will, in all likelihood, be just as abuzz about AI, and that’s no surprise at all.