Virtualizing legacy control systems for an efficient, scalable, low-cost IIoT
April 06, 2017
Blog
The irony of the Industrial Internet of Things (IIoT) is that it requires architectures that enable quick, in some cases real-time iteration and change from markets that often rely on industrial control systems (ICSs) that aren’t modified, upgraded, or replaced for years; in some cases, decades.
There are several factors that contribute to the long lifecycles of ICSs, among them the high availability, reliability, and redundancy requirements of safety- and mission-critical systems; an “if it ain’t broke, don’t fix it” mentality; and vendor lock-in. Concerning lock-in, industries such as telecom and networking have been using commercial off-the-shelf (COTS) hardware and software technology for years to mitigate its effects. While datacenter systems are typically not governed by the strict constraints around jitter, determinism, and latency that could result in injury or death in the industrial sector, innovation and cost reductions in the datacenter space are comparatively stratospheric, with software-defined networking (SDN) and network functions virtualization (NFV) being prime examples.
However, advances in virtualization technology are now emerging that provide a migration path from brownfield ICSs that rely largely on legacy ISA-95 or Purdue Enterprise Reference Architecture (PERA) models to intelligent IIoT system constructs. Jim Douglas, President of Wind River, explains.
“When you look at a factory floor you get systems strung together with serial cables that have been there for 30 years, so it’s really hard to get on that innovation train,” Douglas says. “Whether it be a refinery or a batch manufacturing plant, as you look to retrofit or completely overhaul your domain, how do you change that paradigm? How do you get on a path that allows you to constantly refresh without adding risk?
“If you look at topologies in a setting like that, you’ve got actuation at the bottom; you’ve got programmable controllers on top of that; and then you’ve got the control plane and applications,” he continues. “There’s a big opportunity at those top two layers to take what has been very dedicated, custom, proprietary hardware and move a lot of it into software. It doesn’t all become software because you still need hardware platforms, but you can move into a more open COTS environment and start to virtualize a lot of that load. Those layers then provide an avenue to constantly innovate and update. A lot of that innovation will obviously be at the application layer, but now you’ve got a platform that’s dynamic and allows those applications to constantly evolve.”
IIoT virtualization: Decoupling the control plane and applications from hardware through software
With the advent of multicore virtualization over the past several years, hypervisors are now routinely used in industrial systems to separate real-time functions from a general-purpose operating system (GPOS) that runs non-deterministic applications. While the real-time operating systems (RTOSs) and functions typically remain in virtual machines (VMs) on the target hardware platform when deployed, such designs also allow the GPOS to run off-device to save resources, with virtualized applications issuing commands to the control layer of a connected system just as any other real-time input would. One such connected, “outcome-optimized” controller is GE’s Rx3i CPE400, announced as part of the company’s Industrial Internet Control System (IICS) last year (Figure 1).
[Figure 1 | GE’s PACSystem Rx3i CPE400 is part of the company’s Industrial Internet Control System (IICS) and utilizes virtualization to offload and isolate general-purpose applications from resources executing real-time control tasks.]
“Virtualization is already playing a very big role in the Rx3i CPE400 architecture,” says Vibhoosh Gupta, leader of the Automation and Controls Industrial Automation product portfolio at GE. “For example, we used a hypervisor on the platform – which is essentially real-time virtualization – in order to run an RTOS like Wind River VxWorks and a GPOS such as Linux on the same multicore processor. Two cores are dedicated to running real-time deterministic control and the remaining two cores are dedicated to a standard Linux distribution.
“When you talk about connectivity or industrial apps that provide commands to the real-time side to make it more optimized, you don’t want to consume precious real-time resources. You want to do that in a very secure fashion on the same box with a standard interface between the real-time and non-real-time sides, such as the Open Platform Communications Unified Architecture (OPC-UA),” Gupta continues. “What’s interesting about this approach is, because it’s virtualized, whether you’re running that Linux distribution side-by-side on a controller or on an industrial PC (IPC), from a technical point of view there is no difference. But what it allows you to do, in the case of a legacy control system that is already deployed, is put the entire Linux distribution with an app execution engine on an IPC next to the installed machine and get the same benefits. If you want to take your installed base and connect it to the Internet or run edge applications right next to the asset, we have a standalone Field Agent technology for brownfield cases (Figure 2). Because of this virtualization, we’re able to solve both cases.”
[Figure 2 | As part of GE’s Industrial Internet Control System (IICS), Field Agent technology provides a rugged, pre-configured solution for secure data collection, conveyance, and cloud-enabled analytics.]
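The split Gupta describes, where apps on the non-real-time side issue commands to the real-time side through a narrow standard interface, can be sketched in miniature. The following Python toy uses an in-process queue as a stand-in for a real boundary such as OPC-UA; the class and method names are invented for illustration:

```python
import queue

# Illustrative sketch only: a non-real-time "app" side issues setpoint
# commands to a real-time control side through a narrow, well-defined
# interface (standing in for something like an OPC-UA server boundary).
class CommandInterface:
    """Narrow boundary between GPOS-side apps and the RT control loop."""
    def __init__(self):
        self._commands = queue.Queue()

    def publish_setpoint(self, value):      # called from the non-RT side
        self._commands.put(("setpoint", value))

    def poll(self):                         # called from the RT side
        try:
            return self._commands.get_nowait()
        except queue.Empty:
            return None

class RealTimeLoop:
    """Deterministic control task; consumes commands like any other input."""
    def __init__(self, interface):
        self.interface = interface
        self.setpoint = 0.0

    def tick(self):
        cmd = self.interface.poll()
        if cmd and cmd[0] == "setpoint":
            self.setpoint = cmd[1]
        return self.setpoint

iface = CommandInterface()
rt = RealTimeLoop(iface)
iface.publish_setpoint(42.0)   # issued by a virtualized app on the GPOS side
print(rt.tick())               # RT side picks it up on its next cycle -> 42.0
```

Because the boundary is explicit, the app side can live in a VM on the same box or on a separate IPC without the control loop noticing the difference, which is the point Gupta makes about brownfield deployments.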
Gupta mentions that while the company invested heavily in ensuring that virtualized real-time functions still provide the determinism required to run the control application, virtualized GPOS applications are now also integrating real-time market data from enterprise systems that advise the control layer. Two such examples are the use of weather forecasts in a waste water treatment plant that can be used to open or close flood gates in anticipation of upcoming weather events, as well as the synchronization and optimization of connected turbines in a wind farm to increase efficiency. In short, the input provided could originate from any other device, cloud, or process previously inaccessible to a disconnected controller.
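As a toy illustration of the weather-forecast example, the decision rule below maps a rain forecast and reservoir level to a gate command; the thresholds and logic are invented for the sketch and are not drawn from any real plant:

```python
# Illustrative only: a toy decision rule showing how an enterprise-side
# forecast feed could advise a control layer. The thresholds are made up
# for the example, not taken from any deployed system.
def gate_command(forecast_rain_mm: float, reservoir_level_pct: float) -> str:
    """Advise the flood-gate controller from a weather forecast."""
    if forecast_rain_mm > 50 and reservoir_level_pct > 70:
        return "open"        # pre-drain ahead of a heavy rain event
    if forecast_rain_mm < 5 and reservoir_level_pct < 40:
        return "close"       # retain water when the forecast is dry
    return "hold"

print(gate_command(80, 85))  # -> open
```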
Sidebar 1 | Industrial storage and memory advance, OEM priorities remain the same
Embedded systems and devices must first and foremost support the sometimes-rigid requirements of target applications – including factory automation equipment prone to frequent shock and vibration or control units deployed in harsh-temperature environments.
According to Scott Phillips, Vice President of Marketing at Virtium, industrial-grade solid-state storage and memory solutions must be designed specifically for extreme conditions to maintain high reliability; meet the specific read/write patterns of the target application to maintain endurance; support robust security features to protect data (Figure 1); and provide rich device health reporting through remote alerts, diagnostics, and tools.
[Figure 1 | Virtium’s StorFly self-encrypting drives (SEDs) allow OEMs and system designers to add advanced encryption technology without burdening the host with complex key exchange software.]
Phillips explains that consumer storage is typically designed around an assumption of 70 percent read and 30 percent write, where most data stored is not accessed very frequently. Meanwhile, in industrial storage, the assumption is practically reversed – closer to 30 percent read and 70 percent write – with most data refreshed frequently. These industrial embedded devices are deployed at the edge and capture data on sensors, machine usage, transactions, and more, with data ingest ranging from small data dumps and text strings to massive feeds.
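The practical consequence of that reversed read/write mix is endurance. A back-of-the-envelope calculation, using invented numbers rather than any Virtium specification, shows how much faster a write-heavy profile consumes a drive's rated terabytes written (TBW):

```python
# Back-of-the-envelope endurance math; the TBW rating and daily I/O volume
# are illustrative assumptions. A write-heavy 70% profile consumes rated
# terabytes written (TBW) far faster than a read-heavy consumer profile.
def endurance_years(tbw_rating_tb, io_per_day_gb, write_fraction):
    writes_per_day_tb = io_per_day_gb * write_fraction / 1000
    return tbw_rating_tb / writes_per_day_tb / 365

# Same hypothetical drive, same daily I/O volume, two workload mixes:
consumer   = endurance_years(600, 100, 0.30)  # 30% writes
industrial = endurance_years(600, 100, 0.70)  # 70% writes

print(round(consumer, 1), round(industrial, 1))  # -> 54.8 23.5
```

More than doubling the write fraction cuts the drive's useful life by the same factor, which is why industrial-grade parts are specified against the workload rather than against raw capacity alone.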
“Storage capacity in the industrial embedded market entails a much wider capacity range – from single-digit gigabytes to terabytes. This is because it is important to provide the correct storage capacity to match the purpose of the device and store data that matters. These capacities impact both cost and design in terms of power consumption and performance,” Phillips says. “For example, jet engines and in-flight monitoring generate terabytes of data each flight. For those applications, capacity requirements are huge and need to be scalable.”
Accommodating this wide range of requirements is likely what will shape the next generation of industrial storage. According to Phillips, in addition to improved capacity, future products will likely be “smart storage solutions” that can perform preliminary analytics at the edge, allowing devices to determine what to discard, what to retain, and what to move to the datacenter. But while the development of larger storage capacity and smarter, more integrated solutions is inevitable, reliability and endurance will ultimately remain the top priority for original equipment manufacturers (OEMs).
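That discard/retain/move triage could look something like the following sketch, where the classification rules and record fields are assumptions made up for the example:

```python
# A minimal sketch of "smart storage" edge triage: each record is classified
# as discard, retain locally, or move to the datacenter. The rules and field
# names here are invented for illustration.
def triage(record):
    if record.get("duplicate") or record.get("quality", 1.0) < 0.2:
        return "discard"     # noise or redundant data never hits storage
    if record.get("anomaly"):
        return "move"        # escalate interesting data to the datacenter
    return "retain"          # keep routine telemetry at the edge

readings = [
    {"sensor": "vib-01", "quality": 0.9},
    {"sensor": "vib-01", "quality": 0.9, "anomaly": True},
    {"sensor": "vib-02", "quality": 0.1},
]
print([triage(r) for r in readings])  # -> ['retain', 'move', 'discard']
```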
“Storage – and to an extent memory – will be fully integrated into system-on-a-chip designs and multi-function, multi-chip solutions that will supplant traditional drive and module solutions. By this time, the bit volume of storage and memory will have grown exponentially and will be integrated into everything,” says Phillips. “More advanced software solutions will be needed to effectively manage the reliability and endurance of mainstream storage technology to enable reliable functionality under the most extreme application and environmental conditions.”
But cloud-like virtualization concepts have also begun to take hold on IIoT plant floors as industrial stakeholders realize the benefits of decoupling hardware from software, evident in the uptake of Wind River’s Titanium Control platform.
Titanium Control is an on-premise cloud infrastructure suite based on open source software such as OpenStack and the Linux Kernel-based Virtual Machine (KVM), as well as other standard building blocks and components that are tuned for real-time and low-latency performance on the order of 10 µs or less (Figure 3). These power a high-performance software switch that allows virtualized applications, control functions, workloads, data, and eventually entire ICSs to be transported from server to server based on resource requirements, availability, or utilization – all on top of generic compute hardware with determinism afforded via real-time service buses such as Time-Sensitive Networking (TSN) or other Ethernet variants.
[Figure 3 | Wind River Titanium Control is based on open source virtualization technologies from the IT industry that have been enhanced with real-time extensions to lower cost and increase scalability in IIoT systems.]
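One of the decisions such an infrastructure layer makes, placing or migrating a virtualized workload onto the server with the most headroom, can be reduced to a few lines. This is a deliberately simplified sketch; real schedulers such as OpenStack's weigh many more factors, and the server names and utilization metric below are invented:

```python
# Toy placement decision of the kind an on-premise cloud layer makes when
# it migrates a virtualized workload: pick the server with the most
# headroom. Names and the single utilization metric are assumptions.
def place(workload_load, servers):
    """servers: {name: current utilization in [0, 1]}"""
    candidates = {n: u for n, u in servers.items()
                  if u + workload_load <= 1.0}   # keep only servers with room
    if not candidates:
        raise RuntimeError("no server has capacity for this workload")
    return min(candidates, key=candidates.get)   # least-loaded wins

servers = {"server-a": 0.80, "server-b": 0.35, "server-c": 0.60}
print(place(0.25, servers))  # -> server-b (lowest utilization with room)
```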
Today, Titanium Control is being leveraged in industrial segments alongside physical control devices to provide layers of redundancy as a virtual failover instead of additional hardware. Thanks to its performance, it has also been used in plant-wide simulation where its capacity to spin up virtualized “digital twins” enables near-real-time modeling.
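The virtual-failover idea can be sketched as a supervisor that watches heartbeats from the physical controller and activates a standby virtual instance when they stop. The timing values and names below are illustrative assumptions, not details of Titanium Control:

```python
# Toy failover sketch: a supervisor watches heartbeats from a physical
# controller and switches to a standby virtual instance when beats stop.
# Timeout value and state names are invented for the example.
class Supervisor:
    def __init__(self, timeout_s=1.0):
        self.timeout_s = timeout_s
        self.last_beat = 0.0
        self.active = "physical"

    def heartbeat(self, t):
        self.last_beat = t                 # physical controller checks in

    def check(self, now):
        if self.active == "physical" and now - self.last_beat > self.timeout_s:
            self.active = "virtual"        # spin up the virtual failover
        return self.active

sup = Supervisor(timeout_s=1.0)
sup.heartbeat(0.0)
print(sup.check(0.5))   # -> physical (beat still fresh)
print(sup.check(2.0))   # -> virtual (missed heartbeat, failover)
```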
Outside of the platform’s inherent advantages in terms of overall cost reduction, future-proofing on generic hardware, reduced installation complexity, software scalability, and flexibility over traditional implementations, Gareth Noyes, Chief Strategy Officer at Wind River, believes that such cloud-like virtualization can also afford longer-term benefits during the IIoT transformation.
“What can we achieve if we enable this real-time virtualization control system?” Noyes asks. “One is you can start extending control loops beyond physical devices. Today you typically have a controller that may reside on a robot or a motor, but when that control function is virtualized you can run many of them in a single server and start consolidating your control infrastructure on the factory floor. So you could envisage having a cabinet on your factory floor based on TSN or standard Ethernet where all your control systems reside.
“Once you start aggregating control loops on a single virtual platform you can extract contextual data from multiple control loops and start optimizing your overall system processes versus individual control loops,” Noyes continues. “You’re going to want to expose data from those multiple control loops and as many use cases as possible into a large artificial intelligence (AI) learning loop, which will likely be cloud-based or on the factory floor. This is the concept of having two different loops: one doing edge control and your inference model, and the top one doing your training or learning loop.
“Today you can look at a piece of equipment and physically see the controller that’s performing the control task. In the future that will be abstracted away in a virtual control system, so having very good tools that give you visibility into system performance or insights into failures or system usage is very important,” he adds.
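The two-loop concept Noyes outlines, a fast edge inference loop feeding a slower training loop that pushes updated models back down, can be sketched with a deliberately trivial "model" (a mean plus a tolerance band). Everything here is invented for illustration:

```python
# Sketch of the two-loop idea: fast edge loops run a fixed inference model
# on each sample, while a slow "training" loop periodically refits the
# shared model from the data the edge loops have reported. The model is
# deliberately trivial: a mean plus a tolerance band anomaly detector.
class EdgeLoop:
    def __init__(self, mean, tolerance):
        self.mean, self.tolerance = mean, tolerance
        self.reported = []

    def infer(self, sample):
        self.reported.append(sample)    # data exposed to the learning loop
        return abs(sample - self.mean) > self.tolerance  # anomaly?

def training_loop(edge_loops):
    """Refit the shared model from all edge data, then push it back down."""
    data = [s for loop in edge_loops for s in loop.reported]
    new_mean = sum(data) / len(data)
    for loop in edge_loops:
        loop.mean = new_mean            # update every edge inference model

edges = [EdgeLoop(mean=10.0, tolerance=2.0), EdgeLoop(mean=10.0, tolerance=2.0)]
for s in (11.0, 12.0):
    edges[0].infer(s)
for s in (13.0, 12.0):
    edges[1].infer(s)
training_loop(edges)
print(edges[0].mean)  # model refit from both loops' data -> 12.0
```

The key property is the one Noyes highlights: the training loop sees contextual data from multiple control loops at once, so it can optimize across them rather than tuning each loop in isolation.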
Outcome-based platforms from edge to cloud
A final key to virtualizing ICSs and other IIoT infrastructure is the insight it affords at various layers of the information technology (IT) and operational technology (OT) convergence. For industrial organizations, for instance, not only can this be advantageous in allowing them to deploy infrastructure and resources where it best suits their goals, it also provides the opportunity to visualize and act upon analytics where, when, and how they will be most impactful.
On this subject, Gupta recognizes that “only one percent of data being collected is actually being used to deliver any outcomes today,” meaning “data collection is not the problem.”
“The connected controller is just one piece of the equation,” Gupta says. “The second piece, and the most differentiating piece of the equation, is to create a platform where the producer of intellectual property (IP) can actually share value with industrial customers around the world in a much more seamless way.”
This is the concept of an industrial “app marketplace” that GE has promoted in the industrial space through its cloud-based Predix platform. Now, the aforementioned IICS represents the company’s vision of such a platform for the IIoT edge that leverages insights from the Predix analytics suite but also provides on-site data aggregation, storage, analysis, visualization, and control for industrial systems and applications.
A software development kit (SDK) that allows GE customers, original equipment manufacturers (OEMs), and partners to create their own applications for use with the IICS or Predix is currently under development and scheduled for completion by the end of this year.
“We are working to make sure that we have an app store customers can use to create, house, and deliver those applications to thousands of edge device controllers in the field,” Gupta concludes.