IoT and AI Technologies Require a Different Perspective on Edge Computing
Delivering service to the “edge” of the internet has been a key topic of discussion recently across nearly all industries and sectors. What hasn’t been discussed as much is the importance of delivering service from the edge back to the core.
New advances in connected devices and the Internet of Things, artificial intelligence and automation are neither practical nor possible without service that delivers data from the edge – where data is generated – to core infrastructure. Processing that data, through big data processing, reporting, data analytics and development, requires robust power and storage. Core infrastructure is purpose-built to support these loads efficiently and reliably.
In short, not all compute can be handled effectively with edge computing or colocation alone. A combination of the two is best.
Edge computing and hybrid data centers are not meant to replace large, core data centers. Instead, they augment them by providing a means to capture and cache data at the edge so it can be relayed to the core when appropriate.
It goes without saying that large colocation data centers will not be built in every Tier II or Tier III market; the costs are too high and the local demand too limited. Smaller edge data centers with substantial power capabilities are instead emerging to drive edge deployments that rely on federated communication with larger colocation environments. Micro data centers from providers such as Vapor IO and EdgeMicro will tether to edge data centers that can support more compute and communication back to the core. While a new generation of applications that will dictate distributed nodes in various locations near the edge is emerging, those applications are still dependent on core data centers.
The connection between hyperscale data centers and artificial intelligence
As we move into an automated world where AI will control everyday functions and tasks, it’s important to understand the role of hybrid data centers and their relationship to core and hyperscale data centers. They enable billions of packets of data to move back and forth, between billions of devices, across a well-connected and low-latency network.
It is estimated that over half of network traffic will be bidirectional by 2022. That’s a big change in comparison to the present landscape, where most data is delivered on a one-way route to end users. Further, IoT is expected to push internet traffic levels into the hundreds of zettabytes. This growth and the need to analyze, manage and quickly respond to data will pose problems that could cripple today’s infrastructure.
The edge will be critical to locally managing immediate data and controls, but delivery from the edge to the core will be just as critical. Network connectivity will need to scale to levels that were inconceivable three to five years ago. The ability to move analytical data and the next generation of content back and forth between large core data centers and the near-edge or micro-edge will require 100Gbps (if not Tbps) or more of backbone capability.
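A quick back-of-envelope calculation shows why backbone capacity at that scale matters. The figures below (one petabyte of aggregated edge data, ideal line-rate transfer) are illustrative assumptions, not measurements from the source:

```python
# Illustrative sketch: ideal time to move aggregated edge data to the core
# at a given backbone speed. Assumes full line rate with no protocol
# overhead; all figures are hypothetical examples.

def transfer_time_hours(data_bytes: float, link_bps: float) -> float:
    """Ideal transfer time in hours at full line rate."""
    return data_bytes * 8 / link_bps / 3600

PETABYTE = 10**15  # bytes (decimal)

# One petabyte over a 100 Gbps backbone:
hours_100g = transfer_time_hours(PETABYTE, 100e9)  # ~22.2 hours

# The same petabyte over a 1 Tbps backbone:
hours_1t = transfer_time_hours(PETABYTE, 1e12)     # ~2.2 hours

print(f"1 PB @ 100 Gbps: {hours_100g:.1f} h")
print(f"1 PB @ 1 Tbps:   {hours_1t:.1f} h")
```

Even under these idealized assumptions, a single petabyte takes nearly a full day to drain over 100Gbps, which is why burst-tolerant, Tbps-class backbones come into play as edge data volumes grow.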
Flexible, dynamic bandwidth will be required to compensate for large bursts of traffic from the edge, in addition to accommodating unforeseen growth trends. Connectivity to hyperscalers, subsea cables (for international connectivity) and metropolitan regions all work hand in hand with edge computing and core connectivity.
Flexible and scalable connectivity from the edge to the core requires hybrid network solutions, similar to hyperscale cloud environments. Flexential calls this HybridEdge connectivity. While fixed, high-bandwidth backbones will remain the foundation that supports infrastructure, managing workloads and data will become automated. AI will help automate traffic management and workloads that require connectivity to other devices or core compute. That all means even more data, more compute, more analytics, and of course, more edge-to-core connectivity.