M2M Evolution Interviews: Chris Gray, Director, Red Hat Embedded

By Brandon Lewis

Editor-in-Chief

Embedded Computing Design

August 18, 2014

Red Hat's Chris Gray discusses the evolution of open source and the Internet of Things.

At the M2M Evolution conference in Las Vegas, Chris Gray, Director of Red Hat Embedded, described how the Internet of Things (IoT) is driving open source software (OSS), and with it his company, from back-office IT closer to the embedded edge.

Given the rash of security breaches over the past year, where does that leave open source software?

GRAY: If you look at the history of open source, security has actually been one of its strong points, and the reasons for that are abundant. First, you have so many eyes on the code that problems are identified, and ultimately resolved, faster. Second, because it's designed and developed across such a disparate set of developers, everything has to be built in a very componentized, packaged manner. As a result, every component has to interoperate cleanly and play well with others, which is a discipline that proprietary companies don't necessarily have to maintain. That helps create natural boundaries that protect the rest of the system. Combine that with a company like Red Hat, which worked jointly with the NSA to create Security-Enhanced Linux (SELinux), a mandatory access control mechanism within the Linux kernel, and add our Security Response Team, which addresses 98 percent of critical security vulnerabilities within 24 hours, and you've got something that's pretty comprehensive in terms of addressing security concerns.

With Red Hat’s legacy in networking and IT, what is your position on some of the interoperability challenges that face M2M and IoT connectivity?

GRAY: It all starts with having open standards, and that's something Red Hat has subscribed to since the very beginning, because at the end of the day, if you have open standards, you have the capability for developers, organizations, and partners to all work together to create something that's greater than the sum of its parts. Then you're ultimately providing the customer the flexibility not only to choose vendors but to choose technologies, and to integrate various technologies, including legacy technologies built on other open standards. So I think open standards have to be a prerequisite going into the Internet of Things (IoT). To do anything else is just going to send you into another monolithic, proprietary lock-in scenario like we've seen in the past.

On top of that, we also have technologies such as our Fuse Enterprise Service Bus (ESB) product that allow you to transform various messaging protocols into a common, consistent language. So you can take a signal coming from a legacy sensor, put it side by side with a newer, more intelligent sensor, and use Fuse to transform those packets so that they come in looking the same. You can split apart the various parts of the message that control or carry different pieces of information, and do apples-to-apples comparisons of legacy versus new-generation sensors.
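JBoss Fuse builds on Apache Camel, so a normalization rule like the one Gray describes is typically written as a Camel route. The sketch below is a minimal illustration only; the endpoints, payload formats, and field names are invented for the example, not taken from a real deployment.

// Hedged sketch: fold a legacy CSV sensor feed and a JSON-native feed
// into one canonical format with an Apache Camel route.
import org.apache.camel.CamelContext;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;

public class SensorNormalizer {
    public static void main(String[] args) throws Exception {
        CamelContext ctx = new DefaultCamelContext();
        ctx.addRoutes(new RouteBuilder() {
            @Override
            public void configure() {
                // Legacy sensor emits CSV ("id,volts"); rewrite it as canonical JSON.
                from("direct:legacy").process(exchange -> {
                    String[] f = exchange.getIn().getBody(String.class).split(",");
                    exchange.getIn().setBody(
                        "{\"sensor\":\"" + f[0] + "\",\"volts\":" + f[1] + "}");
                }).to("direct:canonical");

                // Newer sensor already speaks the canonical JSON; pass it through.
                from("direct:modern").to("direct:canonical");

                // Downstream consumers see one consistent message format.
                from("direct:canonical").to("log:canonical");
            }
        });
        ctx.start();
        ctx.createProducerTemplate().sendBody("direct:legacy", "u7,118.4");
        ctx.createProducerTemplate().sendBody("direct:modern",
            "{\"sensor\":\"n2\",\"volts\":119.1}");
        ctx.stop();
    }
}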

Figure 1: The open source Red Hat JBoss Fuse Enterprise Service Bus allows applications, data, and services to be easily integrated across various disparate systems.

So all of those things come together to give our customers more flexibility, so they don't have to throw the baby out with the bathwater. They can continue to use their existing networks, which are certainly prevalent out there, yet still interoperate with the next generation.

So where would a product like that reside in the network?

GRAY: Where we typically see that is within this new, emerging class of gateway servers or controllers. Generally, what we've seen as we've looked at the IoT space from an enterprise perspective is a migration from what's predominantly a two-tier architecture in consumer IoT use cases to more of a three-tier architecture. To expand on that a little for context: my Nest thermostat connects directly to the cloud, ties back into my smartphone, and lets me do all the things we know and love, like adjust the temperature. That works in a consumer world because, one, I'm paying for the bandwidth as the consumer, so a couple of extra, superfluous signals aren't going to change my data plan or break the bank. And two, the time to a decision isn't that critical; if it takes a couple of minutes for the Nest thermostat to adjust, I'm going to be okay.

But what we've found is that enterprises come to us with a completely different set of requirements. Instead of user experience and convenience, the concerns in consumer-led applications, it's much more around security, minimizing transmission costs, and making decisions faster. That's creating this third tier of what we call a gateway server or controller, which can take action on the data coming from the sensors that are geographically close to it. That allows the enterprise to conduct real-time data processing and analytics, choose to take an action at that point if it wants, and intelligently decide which data needs to be transmitted all the way back to the datacenter and thus incur the transmission cost.
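To make that third tier concrete, here is a minimal sketch of the kind of decision logic a gateway might run. The threshold, sensor IDs, and uplink stub are all hypothetical; the point is the pattern: act locally in real time, and let only exceptional readings incur a transmission cost.

// Hypothetical gateway-tier logic: act locally, forward only exceptions.
public class GatewayTier {
    static final double UNDERVOLTAGE = 110.0; // illustrative threshold

    // A reading arrives from a geographically close sensor (tier one).
    static void onReading(String sensorId, double volts) {
        if (volts < UNDERVOLTAGE) {
            actLocally(sensorId);                  // real-time decision at the edge
            forwardToDatacenter(sensorId, volts);  // only this incurs transmission cost
        }
        // Normal readings are consumed here; nothing goes over the uplink.
    }

    static void actLocally(String sensorId) {
        System.out.println("local action for " + sensorId);
    }

    // Stub standing in for the uplink to the datacenter (tier three).
    static void forwardToDatacenter(String sensorId, double volts) {
        System.out.printf("uplink: %s volts=%.1f%n", sensorId, volts);
    }

    public static void main(String[] args) {
        onReading("u7", 118.4); // normal: no traffic generated
        onReading("u9", 104.2); // undervoltage: local action plus one uplink message
    }
}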

What are the advantages of leveraging Red Hat's experience in the cloud and the datacenter and extending it down to these gateways?

GRAY: As we continue to see more and more functionality driven toward them, these gateway servers are becoming less of a pass-through. They're becoming much more functionally capable, much more intelligent, and they're actually taking actions on their own. They're doing things like complex event processing, where the gateway doesn't just look at a single signal coming in, but looks at the combination of signals coming from multiple sensors, and at the signals from those sensors over time, and it has to do that in a real-time, in-memory data analytics scenario. That use case ends up looking and smelling exactly like your traditional enterprise server; a lot of these gateway servers are now becoming as functionally capable as a server sitting on a raised floor was a couple of years ago. So, all of a sudden, the need to conduct more and more computing there at the edge, to be more intelligent, and, most importantly, to do so in a secure manner means that Red Hat's legacy and expertise in the enterprise computing space is very much applicable in that gateway market.
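As a hedged illustration of that kind of complex event processing (the window size, threshold, and quorum below are invented, and a production system would more likely use a dedicated CEP engine than hand-rolled code), this sketch keeps a short in-memory window per sensor and flags an event only when several sensors trend low together over time:

// Hypothetical sliding-window check: raise an event only when several
// sensors read low on average across their recent windows.
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

public class WindowedCep {
    static final int WINDOW = 5;      // readings kept per sensor (illustrative)
    static final double LOW = 110.0;  // volts threshold (illustrative)
    static final int QUORUM = 2;      // sensors that must agree (illustrative)

    final Map<String, Deque<Double>> windows = new HashMap<>();

    // Returns true when the combined, time-windowed condition holds.
    boolean onReading(String sensorId, double volts) {
        Deque<Double> w = windows.computeIfAbsent(sensorId, k -> new ArrayDeque<>());
        if (w.size() == WINDOW) w.removeFirst();  // keep a fixed-size window
        w.addLast(volts);

        long lowSensors = windows.values().stream()
            .filter(d -> d.size() == WINDOW)      // only sensors with full history
            .filter(d -> d.stream().mapToDouble(Double::doubleValue)
                          .average().orElse(LOW) < LOW)
            .count();
        return lowSensors >= QUORUM;  // a pattern across sensors and across time
    }

    public static void main(String[] args) {
        WindowedCep cep = new WindowedCep();
        for (int i = 0; i < 5; i++) {
            cep.onReading("u1", 105.0);
            if (cep.onReading("u2", 106.0))
                System.out.println("combined undervoltage event at reading " + (i + 1));
        }
    }
}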

What are some of the challenges and solutions as far as harvesting and storing data?

GRAY: This is something that our customers are absolutely all struggling with, and there are two elements to the answer. The first is that most data isn't going to be thrown away, but you still have to gain the benefit of that data. What we're finding is that that doesn't mean you need every piece of information; you need every piece of knowledge. If you look at the information lifecycle, there's a process that distills data into knowledge, and the more you can push that distillation toward the edge of the network, the better. For example, we're working with a number of smart grid companies, and if I have hundreds of undervoltage sensors along a particular grid, I don't need to store the status of each one of those sensors at every moment in time, forever. What is helpful is a gateway that aggregates those signals and is intelligent enough to transmit only when an actual event occurs, such as a sensor tripping, or when some type of action is necessary. Then you're dramatically reducing the amount of data that has to be stored long term without impacting the knowledge you derive from that data. Through our middleware technologies we have the capability to do that event processing and determine when you should transmit versus when you shouldn't. That's number one: you've got to be smart about what data you keep. It needs to be just the data that is ultimately required to help you build knowledge around the status of your particular use case.
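A minimal sketch of that distillation idea, reusing the illustrative undervoltage scenario: the gateway remembers each sensor's last state and emits a record only on a state change, so long-term storage grows with events rather than with raw readings. The threshold and names are assumptions for the example, not anything from Red Hat's stack.

// Hypothetical edge distillation: record state changes, not raw samples.
import java.util.HashMap;
import java.util.Map;

public class EventOnlyUplink {
    static final double LOW = 110.0; // illustrative undervoltage threshold

    final Map<String, Boolean> lastLow = new HashMap<>();
    long readingsSeen = 0, recordsSent = 0;

    void onReading(String sensorId, double volts) {
        readingsSeen++;
        boolean low = volts < LOW;
        Boolean prev = lastLow.put(sensorId, low);
        if (prev == null || prev != low) {   // first sight, trip, or recovery
            recordsSent++;
            System.out.println(sensorId + " state=" + (low ? "UNDERVOLTAGE" : "NORMAL"));
        }
    }

    public static void main(String[] args) {
        EventOnlyUplink gw = new EventOnlyUplink();
        double[] feed = {120, 121, 119, 104, 103, 105, 118, 119}; // trip then recovery
        for (double v : feed) gw.onReading("u7", v);
        System.out.printf("readings seen: %d, records kept long term: %d%n",
            gw.readingsSeen, gw.recordsSent);
    }
}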

Just to give you another example of how that plays in: many industries, especially areas like healthcare and energy, have regulatory requirements stipulating that if you write anything to a database, you have to keep that data for at least six months. There's a huge cost associated with that. So if you can use something like our JBoss Data Virtualization product to analyze that data and conduct complex event processing at the gateway level without ever writing it to storage, basically doing short-term, in-memory data analytics on it and transmitting just the summary data back to the datacenter, then you get away from having to store that data for long periods of time.
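Gray points to JBoss Data Virtualization for this pattern; as a plain-Java stand-in (with invented values and field names), the sketch below folds a window of readings into running statistics in memory and sends back a single compact summary instead of persisting every row:

// Hypothetical in-memory summarization: raw rows are never written to storage;
// only a compact summary record leaves the gateway.
import java.util.DoubleSummaryStatistics;

public class InMemorySummary {
    public static void main(String[] args) {
        DoubleSummaryStatistics stats = new DoubleSummaryStatistics();
        double[] readings = {118.4, 119.1, 104.2, 121.0, 117.7}; // illustrative window

        for (double v : readings) stats.accept(v); // analysis stays in memory

        // One small record goes back to the datacenter instead of five rows,
        // so nothing here triggers a write-to-database retention requirement.
        System.out.printf("uplink: {\"count\":%d,\"min\":%.1f,\"max\":%.1f,\"avg\":%.2f}%n",
            stats.getCount(), stats.getMin(), stats.getMax(), stats.getAverage());
    }
}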

Figure 2: JBoss Data Virtualization, an integration solution that allows multiple data sources to be treated as a single source.

So that’s one half of the answer. The second half of the answer really ties into our storage story. Especially as it relates to Big Data, we’re seeing more and more of the monolithic, proprietary storage solutions being replaced by software-defined storage (SDS) solutions. If you look at Red Hat’s acquisition of Gluster, our recent acquisition of Inktank, we’re obviously very dedicated to helping our customers be able to build out commodity-based, scale out storage that can be done at a much lower price point, and that can also handle unstructured data stores in a much more comprehensive manner.

Speaking of industries with strict regulations, how do you ensure security, privacy, etc. for applications that demand HIPAA compliance, for example?

GRAY: Whether you're talking about HIPAA, PCI DSS, Sarbanes-Oxley, or whatever the case may be, what we've found is that working with Red Hat, and leaning on us to ensure that the security of those devices is maintained, that they're properly updated, and that security fixes are managed in an appropriate manner, gives our customers a trusted source and a trusted partner. That allows them to consume open source technologies that ultimately give them more flexibility in their business, while still having the security and confidence of working with an enterprise vendor.

Red Hat

www.redhat.com

[email protected]

@RedHatNews

LinkedIn

Google+

Facebook

YouTube

Brandon Lewis, Technology Editor

Brandon is responsible for guiding content strategy, editorial direction, and community engagement across the Embedded Computing Design ecosystem. A 10-year veteran of the electronics media industry, he enjoys covering topics ranging from development kits to cybersecurity and tech business models. Brandon received a BA in English Literature from Arizona State University, where he graduated cum laude. He can be reached at [email protected].
