IoT poses serious challenges to the data center infrastructure

March 04, 2016

The nature of the data passing through networks and data centers is changing dramatically. The two biggest examples of this are things that are part of many consumers’ lives – Netflix and IoT. The former is more obvious; most people I know have a Netflix account (or another streaming service), and all of those digital TV shows, songs, and movies mean heavy traffic of large packets that requires a massive network and data center infrastructure. IoT is having just as big an impact, and that will only increase as tens of millions more wirelessly enabled devices are put into service.

Collectively, these IoT devices will be sending and receiving billions of information bits alongside all the streaming episodes of Breaking Bad, and together they represent a huge and rapidly growing burden on the IT infrastructure that supports them. Whether it’s large packet streaming or billions of small IoT packets, the common theme between them is that they both require real-time processing on a continuous basis. That’s where data centers come in.

Due to deficiencies in their design, construction, and operation, many of today’s data center facilities simply aren’t up to the challenges posed by these demanding processing requirements. Put plainly, many existing data centers and their supporting networks weren’t designed and built to effectively process these heterogeneous volumes of data: delivering a video within the window a customer defines as acceptable while simultaneously performing the analytics a manufacturer needs to track its inventory status in real time.

What does that mean for you and your company? It means that no matter how well your wirelessly enabled devices are designed and how carefully they’re deployed alongside other IoT devices, the whole system might sputter and underperform and drive everyone mad, just like when you’re trying to stream the next episode and it buffers and buffers until you want to tear your hair out. This is why every company involved in designing and using IoT needs to be thinking about its data center strategy.

What’s the right data center strategy to provide the support needed by IoT devices? To answer that, let’s start by saying what doesn’t work for IoT, namely a centralized data center strategy. This looks exactly like it sounds: big, centralized facilities that are built to process huge amounts of data and that rely on the network to reach out to where customers and IoT devices are. That may work for certain types of applications, but you couldn’t design a worse data center infrastructure for IoT because it leads to significant latency, flexibility, and computing load issues. The bottom line is that IoT doesn’t like this kind of infrastructure.

What IoT devices and deployments want is a stratified system of data centers that includes edge facilities and even micro data centers very close to where the traffic is being sent and received. There’s still a central data center or two at the hub, but having these additional strata of localized facilities is a better fit for the processing, data exchange, storage, and other support that IoT depends on to work as expected.

Let’s start closest to a hypothetical IoT device in the real world and work our way out through the layers of this IoT-friendly data center structure. The facility closest to the device will, in the near future, be a micro data center (less than 150 kW) that serves as the initial point of interaction with end users, which in this case means the IoT-enabled device your company is designing, selling, or implementing. Because of its proximity and design, the micro data center avoids the latency issues inherent in centralized data centers. In this role, micro data centers will function as screening and filtering agents for edge facilities, moving information to and from a localized area. More specifically, they’ll determine which data or “requests” are passed to the data centers above them in the hierarchy and also deliver top-level (most common) content directly to end users. The primary impact of this delivery localization will be levels of latency below what’s currently achievable.
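The screening-and-filtering role described above can be sketched as a simple local-or-escalate decision: serve the most common content from a nearby cache, and pass everything else up the hierarchy. This is a conceptual illustration only; the class and content names are invented for the example, not drawn from any real product.

```python
# Illustrative sketch of a micro data center acting as a screening/
# filtering agent: serve "top-level" (most common) content locally,
# escalate everything else to the edge tier above it.
# All names here are hypothetical, invented for this example.

class MicroDataCenter:
    def __init__(self, top_level_content):
        # Pre-positioned copies of the most commonly requested items.
        self.local_cache = dict(top_level_content)
        self.escalated = []  # requests passed up to the edge tier

    def handle_request(self, key):
        """Serve from the local cache if possible; otherwise record
        the request for forwarding to the edge data center above."""
        if key in self.local_cache:
            return ("local", self.local_cache[key])
        self.escalated.append(key)
        return ("escalated", None)

micro = MicroDataCenter({"firmware-v2": b"...", "top10-show": b"..."})
print(micro.handle_request("top10-show")[0])      # served locally
print(micro.handle_request("sensor-history")[0])  # passed up the hierarchy
```

The point of the sketch is the division of labor: the micro tier answers the bulk of routine requests within a few milliseconds of the device, and only the remainder ever touches the wider network.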

The next level within an IoT-friendly, stratified architecture is the edge data center. The specific function of edge facilities is to serve as regional sites supporting one or more micro locations. As a result, they perform the processing and caching functions within the parameters of what would be defined as the lowest acceptable latency for their coverage region. As the regional point of aggregation and processing, edge facilities will require more capacity and reliability than their connected micro sites. Tier III certification will be a standard edge data center requirement due to the heightened reliability requirements, with average capacities coalescing around 1 MW.

One step further from the hypothetical IoT device is the centralized, major data center itself. The central facilities within a stratified structure house an organization’s mission-critical applications and serve as the central processing points for those applications within the architecture, with backups conducted less frequently than at their connected edge data centers. Due to their level of criticality, the applications within a centralized data center can’t be divided. From a practical perspective, this means it’s inefficient, in terms of both cost and overall system overhead, to run these functions in multiple facilities. The vast amount of bidirectional data flow prescribed in a stratified structure necessitates fat-pipe connectivity between the central facilities and the edge locations they support.
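Putting the three strata together, the lookup path can be thought of as a chain of caches with increasing capacity and increasing round-trip latency, where a miss at one tier falls through to the next. The tier contents and latency figures below are invented for illustration, not measurements from any real deployment.

```python
# Conceptual model of the stratified lookup path: micro -> edge -> central.
# Contents and per-hop latency figures (ms) are hypothetical examples.

TIERS = [
    ("micro",   {"top10-show"},                     5),   # closest to the device
    ("edge",    {"top10-show", "regional-report"},  20),  # regional aggregation
    ("central", {"top10-show", "regional-report",
                 "inventory-analytics"},            80),  # mission-critical apps
]

def resolve(key):
    """Walk the hierarchy from the nearest tier outward and return
    (tier_name, cumulative_latency_ms) for the first tier holding the key."""
    total = 0
    for name, contents, latency in TIERS:
        total += latency
        if key in contents:
            return name, total
    return None, total  # not found anywhere in the hierarchy

print(resolve("top10-show"))           # answered at the micro tier
print(resolve("inventory-analytics"))  # falls through to the central tier
```

The toy numbers make the article’s argument concrete: the more traffic that terminates at the micro and edge strata, the less the device ever pays the full round-trip cost to the central facility.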

This structure is the most effective for supporting IoT because it’s an effective division of labor. Each component has a specific role in the structure that promotes efficiency by reducing overhead and negating distance limitations. By leveraging each portion of the overall data center for the capabilities that are best suited to support IoT devices, companies can better address key issues such as:

• Locating computing/data center capabilities in the geography needed to minimize latency
• Bolstering the IoT deployment security
• Having greater flexibility in meeting growing IoT network computing needs
• Providing mission-critical support exactly where it’s needed
• Taking a more proactive approach to data center and capacity planning

The key takeaway here is that IoT devices require the right kind of data center infrastructure to work as intended. The massive volumes of data and the continually shrinking threshold of acceptable latency are placing new demands not just on individual data centers, but also on their aggregated network architectures.

Chris Crosby is the founder and CEO of Compass Datacenters and a former senior executive and cofounder of Digital Realty Trust. He has more than 20 years of technology experience. Crosby earned his degree in Computer Sciences from the University of Texas at Austin.

Compass Datacenters

[email protected]



