Defining Fog Computing: For those who thought it was just deploying some logic on an edge gateway

June 03, 2016

The first installment of this blog series sparked many engaging online discussions. Many of the comments reinforced the interest in Fog Computing as an essential paradigm for a large class of Internet of Things (IoT) applications, while others highlighted that a few misconceptions still exist around this new paradigm. Thank you very much for engaging and contributing; I hope to see as many, if not more, comments on this installment.

We have already discussed how cloud-centric architectures fall short for a large class of IoT applications, motivating the need for Fog Computing to address the connectivity, bandwidth, latency, cost, and security challenges they impose. In this installment I will explain some additional industry trends that further motivate this paradigm shift and formulate a more precise definition of Fog Computing. Hopefully this will help clear up most of the confusion that exists around it.

Two trends that are at the core of the Industrial IoT revolution are Softwarization and Digital Twins.

Softwarization is a trend that is disrupting several industries. Its mantra is the replacement of specialized hardware implementations, such as a programmable logic controller (PLC) on an industrial floor, with software running in a virtualized environment.

Digital Twins, as the name hints, are digital representations (computerized models) of physical entities, such as a compressor or a turbine, that are "animated" by the live data coming from their physical counterparts. Digital Twins have several applications, such as monitoring, diagnostics, and prognostics. Additionally, Digital Twins provide useful insights to R&D teams for improving next-generation designs, as well as for continuously improving the fidelity of their models.
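To make the idea concrete, here is a minimal, hypothetical sketch of a Digital Twin: an in-memory model of a compressor that is animated by live telemetry samples and then queried for a simple diagnostic. The class name, fields, and threshold are illustrative assumptions, not any specific product's model:

```python
from dataclasses import dataclass


@dataclass
class CompressorTwin:
    """Hypothetical digital twin of a compressor, animated by live telemetry."""
    asset_id: str
    rpm: float = 0.0
    inlet_temp_c: float = 0.0
    readings: int = 0

    def ingest(self, sample: dict) -> None:
        # "Animate" the model with a live sample from the physical asset.
        self.rpm = sample["rpm"]
        self.inlet_temp_c = sample["inlet_temp_c"]
        self.readings += 1

    def overheating(self, limit_c: float = 90.0) -> bool:
        # A simple diagnostic derived from the twin's current state.
        return self.inlet_temp_c > limit_c


twin = CompressorTwin(asset_id="compressor-42")
twin.ingest({"rpm": 3000.0, "inlet_temp_c": 95.5})
print(twin.overheating())  # True for this sample
```

In a real deployment the `ingest` calls would be driven by a data-distribution layer streaming samples from the field, and the same twin state would feed monitoring dashboards and prognostic analytics.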

As Softwarization transforms specialized hardware into software it creates an opportunity for convergence and consolidation. If we take as an example Soft PLCs (i.e., softwarized PLCs), all of a sudden they can be deployed on commodity hardware in a virtualized environment and decoupled from the I/O logic that can remain closer to the source of data. As a side note, the general idea of Softwarization applied to the factory floor is often referred to as Software-defined Machines or Software-defined Automation.
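As an illustrative sketch (not any particular vendor's API), the decoupling described above might look as follows: the Soft PLC's control logic runs on commodity hardware, while a queue stands in for the fieldbus link to the I/O logic that stays close to the data source. The tank-level thresholds and message shapes are assumptions made for the example:

```python
from queue import Empty, Queue


def soft_plc_scan(inputs: Queue, outputs: Queue, cycles: int) -> None:
    """Hypothetical Soft PLC scan loop, decoupled from physical I/O.

    Sensor snapshots arrive on `inputs`; actuator commands are emitted
    on `outputs`. In a real system these queues would be a network link
    to remote I/O modules on the factory floor.
    """
    state = {"motor_on": False}
    for _ in range(cycles):
        try:
            sample = inputs.get(timeout=0.1)  # latest sensor snapshot
        except Empty:
            continue
        # Ladder-logic equivalent: start the pump motor when the tank
        # level is low, stop it when the level is high.
        if sample["tank_level"] < 20:
            state["motor_on"] = True
        elif sample["tank_level"] > 80:
            state["motor_on"] = False
        outputs.put(dict(state))  # command back to the I/O layer


inputs, outputs = Queue(), Queue()
inputs.put({"tank_level": 10})
inputs.put({"tank_level": 90})
soft_plc_scan(inputs, outputs, cycles=2)
results = [outputs.get(), outputs.get()]
print(results)  # [{'motor_on': True}, {'motor_on': False}]
```

The point of the sketch is the separation of concerns: the scan logic is just software, so it can be virtualized, consolidated, and redeployed independently of the I/O hardware it drives.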

As a result of Softwarization and Digital Twins, there is an opportunity to modernize the factory floor: consolidating its hardware, improving its availability, productivity, manageability, and resilience to failure, and increasing innovation agility. In essence, Softwarization creates the opportunity to manage these systems as a datacenter. As Softwarization is impacting a large class of industries, it is worth highlighting that the transformations described above, along with their benefits, are not limited to industrial automation.

But there is a catch! The catch is that the large majority of these systems – whether in the industrial, transportation, or medical domain – are subject to the performance constraints I discussed in the last blog. These systems interact with the physical world, and as such they need to react at the pace imposed by the physical entities with which they interact. As a consequence, while traditional cloud infrastructures would be functionally perfect for these use cases, they turn out to be inadequate because (1) they were not designed with these non-functional requirements in mind, and (2) they are often too heavyweight. Cloud infrastructures were designed for IT systems in which a delay in response time may create a bored or upset customer, but will not cause a robot arm to smash into a wall or another machine, or, even worse, hurt a human operator.

Now it should hopefully be clear that Fog Computing is not just about applying distributed computing to the edge. Fog Computing is about providing an infrastructure that, while virtualizing elastic compute, storage, and communication, is able to address the non-functional properties characteristic of these domains.

Fog Computing makes it possible to provision and manage softwarized components (e.g., a Soft PLC, Digital Twins, analytics, and anything else that may need to run on the system) while guaranteeing the proper non-functional requirements and delivering all the benefits listed above in terms of convergence, manageability, availability, agility, and efficiency.

In summary, Fog Computing provides a flexible infrastructure to provision, deploy, monitor, and manage software at the edge. This should clarify how just deploying some logic on an edge gateway isn’t Fog Computing, and neither is Fog Computing traditional distributed computing.

Angelo Corsaro, PhD, is Chief Technology Officer of ADLINK and PrismTech. In his role as CTO, he oversees technology strategy and innovation for ADLINK's Industrial Internet of Things (IIoT) platform and for PrismTech's Vortex IIoT data-sharing platform. Before joining PrismTech, Angelo served as a Scientist at the SELEX-SI and FINMECCANICA Technology Directorate, where he was responsible for the corporate middleware strategy, strategic standardization, and R&D collaborations with top universities.

Angelo Corsaro, Chief Technology Officer, PrismTech