Axiomtek Leverages Fanless Embedded AI System with Intel Core Ultra Processor

By Tiera Oliver

Assistant Managing Editor

Embedded Computing Design

October 22, 2025


Image Credit: Axiomtek

The demand for modern, rugged computing solutions for industrial automation and machine vision applications is increasing. These advancing applications, deployed in robotics and factory environments, require robust performance, wide operating temperature ranges, reliable networking, flexible storage, and acceleration for AI workloads.

Axiomtek's eBOX630B embedded AI system delivers these features in a fanless, IP40-rated aluminum chassis.

Powered by an Intel Core Ultra processor with an Intel Arc GPU and Intel AI Boost Neural Processing Unit (NPU), the embedded AI system combines multiple compute engines to accelerate AI and inference processing for local analytics, machine vision, and other advanced AI workloads.

Dual-channel DDR5-5600 SO-DIMM slots support up to 96GB of memory, while two front-access, swappable 2.5" HDD/SSD bays and one M.2 2280 M-key SSD slot provide fast, easily accessible storage for data-intensive tasks, further supporting the solution's performance and flexibility.

The eBOX630B system provides a wide range of I/O, including six USB ports and two USB Type-C ports with DisplayPort support for connecting multiple devices. It also supports multiple display interfaces, with one HDMI 1.4b port and three DisplayPort outputs (two via USB Type-C), as well as high-speed industrial networking through three 2.5GbE LAN ports with TSN support and one GbE port.

The fanless embedded system features an operating temperature range of -40°C to +60°C and a power input range from 9 to 60 VDC.

For additional software support in AI computing applications, the eBOX630B supports the Intel Open Edge Platform and the Metro AI Suite, along with the OpenVINO toolkit and Intel oneAPI for AI computing.
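To illustrate how a deployment might target the system's multiple compute engines (NPU, GPU, CPU), the sketch below shows a minimal, hypothetical device-selection helper. The `pick_device` function and its preference order are illustrative assumptions, not part of Axiomtek's or Intel's shipped APIs; the commented lines show how the chosen device would feed into the real OpenVINO runtime, assuming it is installed.

```python
# Minimal sketch of NPU/GPU/CPU device selection for edge inference.
# Hypothetical helper that mirrors the idea behind OpenVINO's AUTO
# device plugin: prefer accelerators, fall back to the CPU.

def pick_device(available, preference=("NPU", "GPU", "CPU")):
    """Return the first preferred compute engine found in `available`."""
    for device in preference:
        if device in available:
            return device
    raise RuntimeError("no supported inference device found")

# With the real OpenVINO runtime (assumption: openvino package installed),
# the chosen device would be passed to compile_model, e.g.:
#   from openvino import Core
#   core = Core()
#   device = pick_device(core.available_devices)
#   compiled = core.compile_model(core.read_model("model.xml"), device)

if __name__ == "__main__":
    print(pick_device(["CPU", "GPU"]))  # prefers GPU when no NPU is listed
```

In practice, OpenVINO's built-in "AUTO" device string performs this kind of selection for you; the explicit helper simply makes the fallback order visible.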

Whether you're developing applications for industries such as education, healthcare, manufacturing and robotics, retail, transportation, video, or safety and security, it's important to rely on solutions that deliver low-latency processing, TPM 2.0 hardware-based security, and more in a rugged, compact design built for ever-evolving AI applications.

Intel’s AI Edge Initiative

This blog is part of a series showcasing Intel's AI Edge initiative, designed to highlight the latest innovations in AI and edge computing. Intel recently unveiled its Intel® AI Edge Systems, Edge AI Suites, and Open Edge Platform. These solutions are designed to integrate AI into partners' existing infrastructure, kickstarting development while enhancing system reliability and strengthening security.

Intel is co-innovating with its software partners on AI creation and optimization for edge applications, as illustrated by this series of blog posts. Furthermore, Intel is driving innovation alongside its hardware platforms to optimize AI Edge systems for key workloads, offering best-fit performance across a range of power levels, sizes, and performance options.

To find out more, click here to visit Intel.

Tiera Oliver is the assistant managing editor at Embedded Computing Design. She is responsible for web content editing, product news, and story development. She also manages, edits, and develops content for ECD podcasts, including Embedded Insiders.

She utilizes her expertise in journalism and content management to oversee editorial content, coordinate with editors, and ensure high-quality output across web, print, and multimedia platforms. She manages diverse projects, assists in the production of digital magazines, and hosts company podcasts by conducting in-depth interviews with industry leaders to deliver engaging and insightful discussions.

Tiera attended Northern Arizona University, where she received her bachelor's in journalism and political science. She was also a news reporter for the student-led newspaper, The Lumberjack. 
