The “edge” is emerging as a critical concept for reducing latency in network-based services in a world increasingly centralized around public cloud services. Consumption of services and the demand for analytics are shifting beyond core population centers, becoming local or even hyper-local within a region or city. As the online population grows and new services emerge, the ability to handle data traffic securely and close to the customer or application will become a common pattern in service evolution.

The goal of the edge is to deliver the heaviest and most latency-sensitive data close to the edge of the internet and to customers, often integrating with core or more centralized applications hosted in public clouds or corporate data centers. Edge data centers are connectivity-rich and carrier-dense, delivering critical business services over low-latency networks with response times of 5-7 milliseconds or less. Unlike the traditional network approach, which delivers outbound content, the edge also supports the bidirectional data flows driven by the Internet of Things and 5G cellular networks.

There are two types of edge data centers: near edge and far edge. Near-edge data centers are traditional data centers located close to users. Far-edge data centers are micro data centers, typically located at the base of cell towers.

Edge technologies and the resulting “edge-native” applications (following the pattern of cloud-native applications) will change the common deployment patterns used by CTOs and developers today. Enhanced real-time data and near-real-time decision-making should improve scale and availability, both critical to remaining competitive in today’s dynamic business-services landscape.
