ADLINK Launches the DLAP x86 Series, a Deep Learning Acceleration Platform for Smarter AI Inferencing at the Edge

By Tiera Oliver

Assistant Managing Editor

Embedded Computing Design

February 02, 2021



A compact, high-performing, GPU-enabled deep learning acceleration platform for deploying AI at the edge across industrial applications

ADLINK has launched the DLAP x86 series, a compact, GPU-enabled deep learning acceleration platform. The series targets volume deployment of deep learning at the edge, where data is generated and actions are taken, and is optimized to deliver AI performance across industrial applications by accelerating compute-intensive, memory-hungry AI inferencing and learning tasks.

The DLAP x86 series features:
• Heterogeneous architecture for high performance: Intel processors paired with NVIDIA Turing architecture GPUs deliver GPU-accelerated computation with optimized performance per watt and per dollar (a minimal inference sketch follows this list).
• Compact size starting at 3.2 liters: well suited to mobile devices or instruments where physical space is limited, such as mobile medical imaging equipment.
• Rugged design for reliability: withstands operating temperatures up to 50 degrees Celsius with 240 watts of heat dissipation, vibration up to 2 Grms, and shock up to 30 Grms, for dependable operation in industrial, manufacturing, and healthcare environments.
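
As a rough illustration of the kind of GPU-accelerated inferencing workload the platform targets, the minimal sketch below loads a pre-trained image classifier and runs it on an NVIDIA GPU when one is available. PyTorch, torchvision, the ResNet-18 model, and the input size are illustrative assumptions, not part of ADLINK's announcement.

# Minimal sketch of GPU-accelerated inference on an edge platform with an
# NVIDIA GPU. PyTorch, torchvision, and ResNet-18 are illustrative choices,
# not part of ADLINK's announcement.
import torch
from torchvision import models

# Fall back to the CPU if no CUDA-capable GPU is present.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Load a pre-trained classifier and move it to the target device for inference.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval().to(device)

# Stand-in for a preprocessed camera or imaging frame (batch of 1).
frame = torch.randn(1, 3, 224, 224, device=device)

with torch.no_grad():
    logits = model(frame)
    prediction = logits.argmax(dim=1).item()

print(f"Predicted class index: {prediction} (device: {device})")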

Delivering an optimal mix of size, weight, and power (SWaP) and AI performance in edge AI applications, the DLAP x86 series helps transform operations in healthcare, manufacturing, transportation, and other sectors. Examples of use include:
• Mobile medical imaging equipment: C-arm, endoscopy systems, surgical navigation systems
• Manufacturing operations: object recognition, robotic pick and place, quality inspection 
• Edge AI servers for knowledge transfer: combining pre-trained AI models with local data sets (see the sketch after this list)
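
The "knowledge transfer" use case above refers to fine-tuning a pre-trained model on data collected locally at the edge. The sketch below shows the general pattern: freeze the pre-trained backbone, replace the classifier head, and train only the new head on a local data set. PyTorch, the ResNet-18 backbone, and the placeholder local data set are assumptions for illustration only.

# Sketch of transfer learning at the edge: adapt a pre-trained model to a
# local data set. PyTorch and ResNet-18 are assumptions; the random tensors
# are a hypothetical placeholder for site-specific data.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
from torchvision import models

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Pre-trained backbone; freeze its weights so only the new head is trained.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False

# Replace the classifier head to match the local task (e.g., 5 defect classes).
num_local_classes = 5
model.fc = nn.Linear(model.fc.in_features, num_local_classes)
model.to(device)

# Placeholder local data set: random tensors stand in for locally captured images.
local_dataset = TensorDataset(
    torch.randn(64, 3, 224, 224), torch.randint(0, num_local_classes, (64,))
)
loader = DataLoader(local_dataset, batch_size=16, shuffle=True)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# Brief fine-tuning loop over the local data.
model.train()
for epoch in range(3):
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()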

For more information, visit: www.adlinktech.com
 

Tiera Oliver, Assistant Managing Editor for Embedded Computing Design, is responsible for web content edits, product news, and constructing stories. She also develops content for and produces ECD podcasts such as Embedded Insiders. Before working at ECD, Tiera graduated from Northern Arizona University, where she received her B.S. in journalism and political science and worked as a news reporter for the university's student-led newspaper, The Lumberjack.
