Combine the Latest in GPU Compute and Display Technologies for Today’s AI Edge-Based Applications
March 30, 2022
Blog
Precise and accurate high-resolution imaging is a requirement in many applications, such as medical and healthcare. You certainly wouldn’t want your surgeon looking at a display that wasn’t as crisp, clear, and fast-reacting as possible.
Thanks to a host of innovations from ADLINK and some partners, those types of displays are available today.
ADLINK’s embedded computers use NVIDIA GPUs to make AI-enabled platforms a reality. According to the company, ADLINK integrates the technology to effectively increase the bandwidth for transmitting image data both into the processor and out to the display. The underlying NVIDIA technology, known as GPUDirect®, is essentially a way to move imaging data by remote direct memory access (DMA), bypassing the usual CPU-managed path. ADLINK’s value-add is the ability to sequence that data in a way that optimizes the visualization.
An example of such a product family, shown here, is the EGX-MXM-A2000 mobile PCI Express module, designed around an embedded NVIDIA RTX™ A2000 GPU. It’s built to the MXM Type A specification, measuring 82 by 70 mm, and offers a peak performance of 9.3 TFLOPS.
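To put that 9.3 TFLOPS peak figure in perspective, here is a quick back-of-the-envelope calculation. The frame rate and resolution below are illustrative assumptions for a demanding imaging workload, not ADLINK specifications:

```python
# Per-frame compute budget for a module with 9.3 TFLOPS peak throughput.
# The 4K-at-60-fps workload is an assumed example, not a vendor spec.
PEAK_FLOPS = 9.3e12            # peak throughput from the module's data sheet

fps = 60                       # assumed frame rate
flops_per_frame = PEAK_FLOPS / fps

pixels_4k = 3840 * 2160        # assumed 4K frame
flops_per_pixel = flops_per_frame / pixels_4k

print(f"{flops_per_frame / 1e9:.0f} GFLOP available per frame")
print(f"~{flops_per_pixel:.0f} peak FLOPs per pixel per frame")
```

Even at 4K/60, that leaves on the order of 155 GFLOP of peak headroom per frame, which is why a module this size can run inference alongside rendering.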
Combining expertise from the likes of ADLINK and AUO helps designers construct AI-enabled systems that offer the latest in display technology; the EGX-MXM-A2000 mobile PCI Express module family is one great example.
In addition, by partnering with AUO, an expert in optoelectronics, ADLINK gets the best of both worlds. AUO offers leading-edge display technology, including various touch-panel solutions that can dramatically shrink the size of the display or reduce power consumption for mobile applications. Depending on where the platform is deployed, such features can be critical.
Sidenote: To learn far more about Edge visualization technologies from both ADLINK and AUO, check out the Tech Forum hosted by the two companies on April 6 and 7.
One of the most important features in applications that take advantage of these displays is reliability, especially because these platforms are often found in challenging environments. A manufacturing floor, for example, can be dusty or dirty and subject to a wide temperature range, while any aspect of transportation (automobiles, rail, and so on) adds shock and vibration.
Adhering to Regulations
Many of these application spaces also have regulatory requirements. Working with partners like ADLINK and AUO can smooth that process considerably, as most of their platforms have already gone through the required regulatory and certification processes. Alternatively, ADLINK works with systems integrators and equipment builders to assist in getting the products certified. This is where the partner network (ecosystem) comes into play.
Medical imaging is a prime example of where it is imperative to have the latest compute and display technologies available. That’s an area where the ADLINK/AUO partnership shines.
These are applications where perceptible latency is unacceptable. Again, think about the surgeon wielding a knife. That’s a key reason why decisions must be made at the Edge of the IoT (or Industrial IoT) rather than in the Cloud. In applications like medical, the images likely contain large amounts of data, in some cases three-dimensional renderings.
There are also invasive medical applications, such as endoscopy, in which a camera must enter the body. Real-time responses are obviously critical here, with those responses measured in milliseconds or faster.
When you combine the image rendering with AI operations, which are becoming more commonplace, you’ve upped the ante significantly in the need for both compute power and display characteristics. In the medical space, this is known as computer-aided or image-based diagnostics, which relies on AI inference. It aids the doctor in making a timely and accurate diagnosis.
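The per-frame pattern behind such a diagnostics pipeline can be sketched as a simple loop with a latency budget. This is a hypothetical outline only: `capture_frame`, `run_inference`, and `overlay_findings` stand in for the real camera, model, and display APIs, which are not specified in this article, and the 33 ms deadline is an assumed ~30 fps budget:

```python
import time

# Assumed end-to-end budget per frame (~30 fps); real systems would tune this.
DEADLINE_MS = 33.0

def process_stream(capture_frame, run_inference, overlay_findings):
    """Hypothetical frame loop: capture, infer, annotate, check the deadline."""
    while True:
        frame = capture_frame()
        if frame is None:                # end of stream
            break
        start = time.perf_counter()
        findings = run_inference(frame)              # AI inference on the frame
        annotated = overlay_findings(frame, findings)
        elapsed_ms = (time.perf_counter() - start) * 1000
        if elapsed_ms > DEADLINE_MS:
            # A real system would trigger a quality-of-service fallback here.
            print(f"frame over budget: {elapsed_ms:.1f} ms")
        yield annotated
```

The point of the sketch is the deadline check: if inference plus annotation cannot fit inside the frame budget, the display falls behind the instrument, which is exactly what these Edge platforms are built to avoid.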
ADLINK believes that its design can increase the data bandwidth enough to reduce latency by about 40%, a game changer for some applications. It does this by removing bottlenecks in the CPU processing pipeline: data moves directly to the GPU without first passing through the system’s main memory. An architecture like this frees the main CPU for other tasks; alternatively, it allows the OEM to deploy a less powerful (and more cost-effective) CPU in the design. The result is a cost-effective platform capable of handling AI at the Edge of the IoT.
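To see why skipping the host-memory hop matters, here is a minimal back-of-the-envelope model. All of the frame-size and bandwidth figures are illustrative assumptions, not ADLINK measurements; with these particular numbers, removing the extra hop happens to save on the same order as the roughly 40% figure the company cites:

```python
# Illustrative model of the two data paths. Frame size and link speeds
# below are assumptions for the sake of arithmetic, not measured values.
FRAME_BYTES = 3840 * 2160 * 3        # assumed uncompressed 4K RGB frame

def staged_path_ms(frame_bytes, link_gbps=10, pcie_gbps=16):
    """Camera -> system RAM -> GPU: two transfers back to back."""
    to_host = frame_bytes * 8 / (link_gbps * 1e9)
    to_gpu = frame_bytes * 8 / (pcie_gbps * 1e9)
    return (to_host + to_gpu) * 1000

def direct_path_ms(frame_bytes, link_gbps=10):
    """Camera -> GPU via direct DMA: the host-memory hop disappears."""
    return frame_bytes * 8 / (link_gbps * 1e9) * 1000

staged = staged_path_ms(FRAME_BYTES)
direct = direct_path_ms(FRAME_BYTES)
print(f"staged: {staged:.1f} ms, direct: {direct:.1f} ms, "
      f"saved: {1 - direct / staged:.0%}")
```

With these assumed link speeds the direct path drops roughly 38% of the transfer latency per frame, simply because one of the two copies is gone; the exact percentage depends entirely on the relative speeds of the two hops.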