ADLINK Launches the DLAP x86 Series, a Deep Learning Acceleration Platform for Smarter AI Inferencing at the Edge

By Tiera Oliver

Assistant Managing Editor

Embedded Computing Design

February 02, 2021

A compact, high-performing, GPU-enabled deep learning acceleration platform for deploying AI at the edge across industrial applications

ADLINK has launched the DLAP x86 series, a compact, GPU-enabled deep learning acceleration platform. The series targets volume deployment of deep learning at the edge, where data is generated and actions are taken. It is optimized to deliver AI performance across industrial applications by accelerating compute-intensive, memory-hungry AI inferencing and learning tasks.

The DLAP x86 series features:
• Heterogeneous architecture for high performance: Intel processors paired with NVIDIA Turing architecture GPUs deliver GPU-accelerated computation with optimized performance per watt and per dollar.
• Compact size starting at 3.2 liters, making the series well suited to mobile devices or instruments where physical space is limited, such as mobile medical imaging equipment.
• Rugged design for reliability: the series withstands operating temperatures up to 50 degrees Celsius with 240 watts of heat dissipation, vibration up to 2 Grms, and shock up to 30 Grms, for dependable operation in industrial, manufacturing, and healthcare environments.

Delivering an optimal mix of SWaP (size, weight, and power) and AI performance in edge AI applications, the DLAP x86 series helps transform operations in healthcare, manufacturing, transportation, and other sectors. Examples of use include:
• Mobile medical imaging equipment: C-arm, endoscopy systems, surgical navigation systems
• Manufacturing operations: object recognition, robotic pick and place, quality inspection 
• Edge AI servers for knowledge transfer: combining pre-trained AI models with local data sets

For more information, visit: www.adlinktech.com

Tiera Oliver is the assistant managing editor at Embedded Computing Design. She is responsible for web content editing, product news, and story development. She also manages, edits, and develops content for ECD podcasts, including Embedded Insiders.

She utilizes her expertise in journalism and content management to oversee editorial content, coordinate with editors, and ensure high-quality output across web, print, and multimedia platforms. She manages diverse projects, assists in the production of digital magazines, and hosts company podcasts by conducting in-depth interviews with industry leaders to deliver engaging and insightful discussions.

Tiera attended Northern Arizona University, where she received her bachelor's in journalism and political science. She was also a news reporter for the student-led newspaper, The Lumberjack. 