Product of the Week: Flex Logix InferX X1M Edge Inference Accelerator
March 29, 2022
Every type of edge AI has three hard and fast technical requirements: low power, small form factor, and high performance. Of course, what constitutes “small,” “power efficient,” or “high performance” varies by use case and can describe everything from small microcontrollers to edge servers, but usually you must sacrifice at least one to get the others.
However, one solution that can address everything from edge clouds to endpoints without sacrifice is the FPGA.
FPGAs have been used for decades to deliver low-power, high-performance design flexibility regardless of application or form factor, but they’re not exactly user friendly – a fact that’s compounded by the ongoing evolution of complex AI models and algorithms. However, where there’s challenge there’s usually also opportunity. For edge AI use cases in industrial automation, smart city, transportation, healthcare, agriculture, and other markets quickly adopting capabilities like computer vision, that opportunity has arrived in the form of the Flex Logix InferX X1M AI Accelerator.
The Flex Logix X1M AI Accelerator targets real-time, high-resolution computer vision use cases that run small-batch deep learning workloads based on models like YOLOv3, YOLOv4, and YOLOv5. To deliver visual edge inferencing at a higher throughput-per-dollar than devices like the NVIDIA Tesla T4, Xavier NX, or Jetson TX2, the new X1M M.2 module leverages Flex Logix’s InferX X1 architecture, which combines 4K INT8 MAC cores into 64 x 64 tensor processor arrays supported by 8 MB of SRAM and 4 GB of 16 MTps LPDDR4X DRAM.
All of this while drawing just 2.5 A of current and consuming 8.25 W of power.
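To put those figures in perspective, a back-of-the-envelope calculation shows what a 64 x 64 INT8 MAC array can deliver. The 4K-MAC count and 8.25 W figure come from the specs above; the clock frequency below is purely an assumed placeholder for illustration, not a published Flex Logix number:

```python
# Back-of-the-envelope peak-throughput estimate for a 64 x 64 MAC array.
# MAC count and board power come from the article; the clock frequency is
# an ASSUMED example value, not a published spec.
MACS = 64 * 64          # 4,096 INT8 MAC units in the tensor array
OPS_PER_MAC = 2         # one multiply + one accumulate per cycle
CLOCK_HZ = 933e6        # assumed example frequency for illustration only
BOARD_POWER_W = 8.25    # quoted power consumption

peak_tops = MACS * OPS_PER_MAC * CLOCK_HZ / 1e12
print(f"Peak INT8 throughput: {peak_tops:.1f} TOPS (at assumed clock)")
print(f"Efficiency: {peak_tops / BOARD_POWER_W:.2f} TOPS/W")
```

Whatever the actual clock, the arithmetic makes the design point clear: thousands of INT8 MACs in a single-digit-watt envelope is what lets an M.2 module compete with PCIe-card-class accelerators on throughput-per-watt.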
Complementing the onboard memory, the X1M AI Accelerator natively supports two lanes (x2) of PCI Express Gen 3 or Gen 4 as the host bus protocol. PCIe support not only facilitates high-speed data transfers between the tensor array and data or models in memory and storage, it also enables compliance with the M.2 2280 B+M-Key form factor specification, which measures 22 mm (W) x 80 mm (L) x 17 mm deep (with included heat sink).
At roughly the size of a stick of chewing gum and consuming little more power than a clock radio, the X1M AI Accelerator truly occupies the center of technical power-performance-size Venn diagrams.
The InferX X1M Edge Inference Accelerator in Action
The platform’s tensor array enables it to handle deep neural networks with hundreds of layers, dozens of parallel channels, and multiple operator types, which, unlike GPUs, it can apply to megapixel images in batch sizes as small as 1.
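The batch-size-1 point matters for latency. A GPU pipeline often waits for several frames to accumulate before running a batch, so the first frame in the batch sits idle while the rest arrive. The sketch below illustrates the effect with assumed round-number timings (not measurements of any device):

```python
# Illustrative latency comparison: batched vs. batch-1 inference.
# All timings below are ASSUMED round numbers for illustration,
# not measurements of the X1M or any GPU.
def batched_latency_ms(batch, compute_ms, frame_interval_ms):
    # The first frame captured must wait for the batch to fill
    # before compute can even start.
    fill_wait = (batch - 1) * frame_interval_ms
    return fill_wait + compute_ms

FRAME_INTERVAL_MS = 33.3  # ~30 fps camera

# Batch-8 pipeline: worst-case frame latency includes batching delay.
print(batched_latency_ms(8, 40.0, FRAME_INTERVAL_MS))

# Batch-1 pipeline: each frame is processed as soon as it arrives.
print(batched_latency_ms(1, 25.0, FRAME_INTERVAL_MS))
```

Even with a faster per-batch compute time, the batched pipeline's end-to-end latency is dominated by the time spent waiting for frames, which is why batch-1 operation is the relevant metric for real-time vision at the edge.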
Despite exhibiting the performance characteristics of an ASIC, the InferX X1M has capabilities that are unique to FPGAs. These include a reconfigurable datapath that allows the device hardware to adapt to new and different model technologies, even after it has been deployed in the field. In essence, this makes the device future-proof.
Importantly, users can access these and other features like control logic without needing to understand hardware description languages or manually reprogram the FPGA bitstream. This is possible thanks to APIs that give users internal access to low-level platform control and monitoring functions, as well as external APIs that can be used for application configuration or model deployment.
Moreover, compatibility with the Open Neural Network eXchange (ONNX) format allows InferX X1M tools to optimally and automatically map any model represented in the framework to the X1 accelerator.
The solution supports development in both Windows and Linux operating environments.
Getting Started with the Flex Logix InferX X1M Accelerator
Aside from the benefits listed above, maybe the InferX X1M Accelerator’s biggest advantage is that it frees edge AI and computer vision OEMs and systems integrators from having to design their own custom board. These M.2 modules are designed to perform reliably across a 0°C to 50°C temperature range and between 10% and 90% relative non-condensing humidity, all at a competitive cost.
Contact Flex Logix for pricing and availability at [email protected]flex-logix.com or check out the resources below.
- Flex Logix Website: https://flex-logix.com
- InferX X1 Product Page: https://flex-logix.com/inference/inferx-products.html
- Contact Flex Logix: [email protected]flex-logix.com