Xilinx Zynq stack enables machine-learning applications

By Rich Nass

Contributing Editor

Embedded Computing Design

March 21, 2017
Our readers, who I believe are a great representation of the embedded community, have told us that they really like the Xilinx Zynq programmable processor architecture. It’s useful in a wide array of applications, particularly those in the IoT and Industrial IoT spaces. So, I was a little surprised when I visited with the Xilinx folks and they explained how they were going all in on vision applications.

Not that vision isn’t very important, but it is limiting. After hearing the complete story, I’m somewhat a believer (and if you know me, “somewhat of a believer” is generally the best you get). When I say “vision,” that includes machine-learning applications as well, and that’s becoming very important in our space.

To that end, the company announced its reVISION stack last week at Embedded World. The stack enables a broader set of developers, even those with limited hardware design experience, to more easily develop intelligent, vision-guided systems that combine machine learning, computer vision, sensor fusion, and connectivity.

Potential applications include high-end consumer, automotive, industrial, medical, and aerospace/defense systems, as well as drones and autonomous vehicles. They also include a term I recently learned, co-bots, which is short for collaborative robots—robots that work alongside humans. This is actually harder than it sounds, and there are some pretty stringent standards you have to adhere to when implementing the technology.

According to Xilinx, reVISION enables up to 6X better images/s/W in machine-learning inference, 40X better computer-vision processing, and one-fifth the latency of competing embedded GPUs and typical SoCs. The stack, available in the second quarter of this year, includes support for the most popular neural networks, including AlexNet, GoogLeNet, SqueezeNet, SSD, and FCN.

Rich Nass is a regular contributor to Embedded Computing Design. He has appeared on more than 500 episodes of the popular Embedded Executive podcast series, and is a regular contributor to the Embedded Insiders podcast.

Rich has been in the engineering OEM industry for more than 35 years, and is a recognized expert in the areas of embedded computing, Edge AI, industrial computing, the IoT, and cyber-resiliency and safety and security issues. He writes and speaks regularly on these topics and more.

Rich is currently the Liaison to Industry for the Embedded World North America Exhibition and Conference, and has held similar positions with the global Embedded World Conference and Exhibition.

Previously, Rich was the Brand Director for UBM's award-winning Design News property. Prior to that, he led the content team for UBM Canon's Medical Devices Group, as well as all custom properties and events. In prior stints, he led the content team at EE Times, handling the Embedded and Custom groups and the TechOnline DesignLine network of design engineering websites.

Nass holds a BSEE degree from the New Jersey Institute of Technology.
