Q&A with Jim Douglas, President and CEO, Wind River

February 26, 2019

With Embedded World upon us, I thought it was a great time to sit and chat with one of the leaders in the embedded/industrial space. In this case, that’s Jim Douglas, the President and CEO of Wind River. Jim claims that Wind River plays in any industry that touches “critical infrastructure.” That would include market segments ranging from aerospace to industrial, defense to medical, and networking to automotive. In Jim’s words, “it’s systems that can’t fail.”

Embedded Computing Design: If we look at applications that don’t have the luxury of failure or downtime, where are we today, and where are we headed?

Jim Douglas: First, it’s clear that we’re headed toward higher levels of automation and autonomy. This is the path that leads to the big benefits of the Internet of Things (IoT) that were promised when the pundits were all beating their chests about all the trillions of dollars we'd capture through the IoT.

The buildup to that is a two-phase approach. First, in these critical infrastructure industries, you've got a lot of equipment that, by design, was separated from enterprise networks (and the internet) because of safety, security, and reliability concerns. That separation comes with limitations, like inadequate analytics, so we started connecting those devices. It’s a slow process and it’s incrementally getting better.

The second phase is adding higher levels of automation and autonomy to systems. This is happening in each of the industries we’re involved with. A lot of the early investment was aimed at intense processing on the cloud, including artificial intelligence (AI). The business case for that was simple: you had firms that had the three key things you needed—access to lots of processing power, access to lots of data, and a business model that could really leverage the investment.

Today, and going forward, to achieve the autonomy in the markets we’re in, we need to drive a high percentage of that intelligence down to the edge. From a technical standpoint, that's a relative paradigm shift.

ECD: Let’s talk about AI. You don't necessarily need AI to do autonomy.

Douglas: Let’s start by defining the term. AI is an umbrella term that covers many forms: machine learning, deep learning, and a variety of other technologies and techniques. Based on the type of data and the desired outcome, you need to choose the form(s) of AI that are right for the system.

ECD: Let’s talk about the cloud. People are hesitant to send data to the cloud for lots of reasons, including the expense and the potential security risks. With that in mind, are your latest offerings predicated on being able to do more on premises?

Douglas: Most definitely. I’ll also add latency to the list of reasons to stay on premises. Many critical applications have to operate in real time. Add all those things together, and there’s an obvious desire and need to push more intelligence to the edge. At Wind River, we’re creating platforms at the edge that are the landing zone for that intelligence.

ECD: So what is Wind River's play at the edge?

Douglas: Our belief is that moving intelligence to the edge will require heterogeneous systems, both hardware and software. On the hardware front, different forms of AI will require a variety of specialized accelerators. On the software front, workloads will potentially have mixed levels of criticality or specific performance requirements. For example, you could have a workload that’s a safety function, maybe something that must adhere to a regulatory requirement. That requires a particular type of operating environment, and potentially specific hardware.

To this end, Wind River has introduced the Helix Virtualization Platform. It supports multiple criticality levels and multiple operating environments, including RTOSs and Linux. With the platform, you can create an environment that lets you map the right type of operating environment to the right workload.

We also know that many users have legacy equipment that they want to maintain and use, such as operating systems they built years ago, and/or things like Windows, which might be tied to an HMI, for example.

ECD: So if I’m Joe Developer and I’m tasked with designing an autonomous system based on AI, where do I begin?

Douglas: Traditionally, the boxes built for our industries were single-purpose, single-task, or maybe a few tasks, with hardware and software that was customized for that task. But that’s now changing. We’re driving more workloads onto one box. That shift began in the enterprise sector and is now moving to the industrial arena. Embedded developers are now looking to leverage the cloud-scale application development techniques found in the enterprise domain. Microservices, for example, use automation to deploy applications in containers, speeding both development and deployment.
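As an illustrative sketch only (not Wind River’s implementation), a microservice in this style is a small, single-purpose service with a narrow API that can be packaged into a container and deployed alongside other workloads on the same box. The service name, version, and port below are hypothetical.

```python
# Minimal microservice sketch: a single-purpose HTTP service exposing a
# /health endpoint, the kind of small unit that would be packaged into
# a container and deployed via automation. All names are illustrative.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


def health_payload(service: str, version: str) -> bytes:
    """Build the JSON body returned by the /health endpoint."""
    return json.dumps(
        {"service": service, "version": version, "status": "ok"}
    ).encode("utf-8")


class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/health":
            body = health_payload("sensor-gateway", "1.0.0")
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)


# To run the service (e.g., as a container entry point), uncomment:
# HTTPServer(("0.0.0.0", 8080), HealthHandler).serve_forever()
```

In a container deployment, the port and version would typically come from configuration injected at deploy time rather than being hard-coded.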

ECD: Last question. If we were having this same discussion a year from now, what would you say would be the state of the art from Wind River?

Douglas: We’d be on the path that I laid out, where the embedded world is doing a better job of leveraging the kind of scale we’re seeing in the enterprise domain, while still adhering to the rigor of performance, reliability, and availability that’s necessary in critical infrastructure. That’s at the system level, but in parallel, it’s also how we build applications.

Second, we’re on a path to deliver scale economics and systems that are a lot more intelligent, particularly at the edge. By the end of 2020, we’ll be on an accelerated path, and we’ll just keep picking up steam as we move into 2021 and 2022.