
2019 Considerations for IT: Cloud Architectures, DR and Edge Computing

January 29, 2019

One of our favorite things to do late in the year is to reflect on what occurred over the past 12 months and try to predict what the game changers will be for the following year.

Sometimes it’s hard to get it right. Ethernet inventor Bob Metcalfe wrongly predicted in 1995 that the internet would catastrophically collapse in 1996, and Digital Equipment Corp. founder Ken Olsen stated confidently in 1977 that there was “no reason anyone would want a computer in their home.”

Undeterred by these cautionary tales, we offer five IT initiatives trending in 2019 that are helping enterprises stay relevant, reduce costs and optimize the customer experience.

1. More Virtualization

New data shows that over 80% of companies have adopted virtualization to reduce costs, meet higher data demands and more. Virtualization is one of the fundamentals of cloud computing, which was built on the idea that a layer of abstraction can remove complexity between layers and provide “fluidity.” This lets workloads and other functions shift without significant impact, downtime or human interaction.

This abstraction is still in the early stages of exploration at the network level. We’re familiar with appliances and physical deployments, often isolated in functionality from one another to provide defense in depth through multiple reinforcing layers. The challenge is the rate of change: how do you keep multiple, disparate systems safe and secure while patching and maintaining monolithic platforms? The risk of maintaining numerous, varied systems and technologies compounds whatever security risk already exists within an enterprise.

Enter network function virtualization (NFV). Mini network controllers, virtually deployed, can be targeted to solve discrete problems and remove single points of failure. Controllers can also be rapidly changed and re-deployed. Depending on the scale of the overall need, network functions can be segmented intelligently into services, applications or functions. This provides greater flexibility, ease of deployment and the ability to map costs to consumption. Some common NFV functions include network content filtering, firewall management, application load balancing and network threat management.
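To make the idea of a small, discrete network function concrete, here is a deliberately minimal Python sketch of a content filter. It is a toy: the blocklist and request shape are made up, and a real NFV deployment would sit in the packet path on a virtual appliance rather than in application code. The point is simply that a narrowly scoped function like this can be isolated, versioned and redeployed on its own.

```python
# Toy illustration only: a content-filtering "network function" reduced to a
# single, stateless check that could be packaged as a small virtual instance
# or container, then scaled or replaced independently of other functions.
# The blocklist and request shape below are hypothetical.

BLOCKED_DOMAINS = {"malware.example", "phishing.example"}

def filter_request(request: dict) -> bool:
    """Return True if the outbound request should be allowed."""
    host = request.get("host", "").lower()
    return not any(host == d or host.endswith("." + d) for d in BLOCKED_DOMAINS)

if __name__ == "__main__":
    for req in [{"host": "example.com"}, {"host": "cdn.malware.example"}]:
        verdict = "allow" if filter_request(req) else "block"
        print(req["host"], "->", verdict)
```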

Lightweight abstraction via containers allows developers and operators to manage discrete application and functional units above the operating system, without hypervisors. Conceptually, containers still have a management model, libraries and code updates that need to be managed as in any operating system (OS) deployment, but they can be constructed more simply around business and technical needs, without additional cost structures. Not all applications require full-fledged hypervisor-based virtualization; containers can work just fine.
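As a quick, hedged illustration (assuming the Docker SDK for Python is installed with pip install docker and a local Docker daemon is running), this is what a short-lived workload in a container looks like, with no hypervisor in the picture:

```python
# Minimal sketch, assuming the Docker SDK for Python and a local Docker
# daemon. It runs a short-lived task directly in a container -- an OS-level
# abstraction, no hypervisor involved.
import docker

client = docker.from_env()
output = client.containers.run(
    "python:3.11-slim",                      # image provides runtime and libraries
    ["python", "-c", "print('hello from a container')"],
    remove=True,                             # clean up the container afterward
)
print(output.decode().strip())
```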

2. Less Virtualization

The true frontier of edge compute deployment is being led by serverless functions—think AWS Lambda—and by deployments directly on bare-metal hardware. In these use cases, virtualization can add unnecessary management complexity, superfluous multi-tenancy features and extra cost in dollars or overhead.

Serverless functions are easy: they can help sort out the signal-to-noise issues that arise with too much data, operate quickly (because they can be optimized to the instruction set) and get out of your way. The only thing a developer needs to maintain is the algorithm and logic—no OS, binaries, libraries or state. So where’s the state? It lives elsewhere, accessed remotely over the network, which is a radical shift from many traditional development models. But serverless computing opens up new deployment paradigms with greater flexibility. There is some risk: serverless deployments often depend on other services that are too complex to run as functions themselves, or require additional stateful data services for querying and storage.
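To make that concrete, here is a minimal sketch of a stateless, Lambda-style handler in Python. The event shape, threshold and DynamoDB table name are hypothetical; the point is that the function itself holds no state, and everything persistent lives in an external service.

```python
# Minimal sketch of a stateless, Lambda-style handler. The event fields and
# the table name are hypothetical; the only persistent state lives in an
# external service (here DynamoDB), not in the function itself.
import json
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("sensor-readings")        # hypothetical table name

def handler(event, context):
    reading = json.loads(event["body"])          # assumed event shape
    if abs(reading.get("value", 0)) > 100:       # simple signal-vs-noise filter
        table.put_item(Item=reading)             # persist only the interesting data
        return {"statusCode": 202, "body": "stored"}
    return {"statusCode": 200, "body": "ignored"}
```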

Bare metal provides the flexibility to choose from different chipsets (e.g., ARM, Intel and NVIDIA) and architectures, and for many workloads it is the most cost-effective option. Many workloads “boomeranging” back from hyperscale cloud platforms are being optimized around very specific chip architectures and compute standards (e.g., Open19), with application needs dialed in to run very well on the whole server, with no virtualization. Do you have applications that can safely consume the whole box, scale horizontally and remain highly available based on the application architecture? If so, bare metal (whether hosted by a provider or deployed in your own colocation environment) may be the most efficient deployment option.

3. Disaster Recovery Planning

Have you tested your IT disaster recovery plan lately?

Most enterprises have an IT disaster recovery plan—at least on paper—but that doesn’t mean they’ve put it to the test. Here are three reasons to test your disaster recovery plan, so you’re ready when the rubber meets the road:

  • Full-scale disaster recovery implementations can be complicated and costly. Many are hedging their bets by paying for supplemental technology services like off-site tape, real-time data replication and disaster recovery as a service (DRaaS). Some organizations pay for additional network connectivity to ensure they’re always connected at primary or secondary locations. Protect your investment by testing your disaster recovery plan early and often to ensure that it’s effective and yields the results you’ve invested in.
  • Testing your people, processes and technology will increase shareholders’ confidence in your company’s disaster recovery plan, find any gaps and set your business up for success in the event of an actual incident.
  • The disaster recovery process demands decisions about more than just technology: considerations of timing and declaration are key. When it comes to disaster recovery, remove the emotion: view it as another tool in your toolbox, and get comfortable with the investment.

4. The Impact of Shifting Data Consumption Models on Applications

Got edge?

With more applications consuming larger amounts of data and more users driving traffic from devices scattered all over the world, architectures need to shift. Many applications, whether legacy or cloud-native, are written to be centralized, with a single database that can be either the crown jewel or the Achilles’ heel of the application architecture. Technology platforms also often come with monitoring tools that target a specific layer of the stack or domain rather than looking holistically at the customer experience.

As the data being generated, stored, analyzed, discarded and retained grows more complex, even larger and more powerful centralized network pipes simply won’t be able to keep up with demand. Application design will need to become more distributed and decentralized, with “store-and-forward” architectures that make decisions quickly at the local level and forward only the critical data that requires greater computation or long-term storage.
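A store-and-forward edge node can be sketched in a few lines of Python. Everything here is illustrative: the severity threshold, payload shape and central endpoint are placeholders, but the pattern of deciding locally and forwarding only critical data is the core idea.

```python
# Illustrative store-and-forward sketch: an edge node keeps routine readings
# in a local buffer and forwards only "critical" ones to a central endpoint.
# The threshold, payload shape and URL are hypothetical.
import json
import urllib.request
from collections import deque

LOCAL_BUFFER = deque(maxlen=10_000)                   # short-term local retention
CENTRAL_ENDPOINT = "https://central.example/ingest"   # placeholder endpoint

def handle_reading(reading: dict) -> None:
    LOCAL_BUFFER.append(reading)             # store locally, decide locally
    if reading.get("severity", 0) >= 8:      # forward only what central systems need
        req = urllib.request.Request(
            CENTRAL_ENDPOINT,
            data=json.dumps(reading).encode(),
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(req, timeout=2)
```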

As these architectures shift, it will be crucial to measure, maintain and optimize the customer experience. Enterprises should explore higher-level transactional monitoring and multi-site monitoring deployments—and even instrument their applications to provide dynamic feedback while re-architecting for the edge.
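As a rough example of what transaction-level, multi-site monitoring can look like, here is a small synthetic probe in Python that times a user-facing request and tags the result with its location. The URL and site label are placeholders; running the same probe from several sites gives a basic outside-in view of the customer experience.

```python
# Toy synthetic-transaction probe: time a full user-facing request and report
# the result, tagged with the probe's location. Deployed from several sites,
# the same script gives a rough multi-site view of customer experience.
# The URL and site label are placeholders.
import time
import urllib.request

SITE = "denver-edge"                          # hypothetical probe location
TARGET = "https://app.example/login"          # hypothetical user-facing endpoint

def probe() -> None:
    start = time.monotonic()
    try:
        status = urllib.request.urlopen(TARGET, timeout=5).status
    except Exception:
        status = "error"
    elapsed_ms = (time.monotonic() - start) * 1000
    print(f"site={SITE} status={status} latency_ms={elapsed_ms:.0f}")

if __name__ == "__main__":
    probe()
```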

5. Machine Learning Deployment

More and more enterprises considering edge and distributed computing architectures are turning to machine learning and artificial intelligence toolkits to analyze how—and where—data is being consumed. This is an excellent opportunity to gain experience in a growing and consequential area.
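One low-stakes way to start is to cluster consumption records and look for geographic hotspots. The sketch below uses scikit-learn’s KMeans on synthetic data; the coordinates and volumes are made up, and a real exercise would start from access logs or flow records.

```python
# Hedged sketch: cluster request records by origin and volume with
# scikit-learn's KMeans to see where data is being consumed.
# The data below is synthetic.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# columns: [approx. latitude, approx. longitude, MB transferred]
requests = np.vstack([
    rng.normal([39.7, -105.0, 40], [0.5, 0.5, 10], size=(200, 3)),  # Denver-ish
    rng.normal([51.5, -0.1, 15], [0.5, 0.5, 5], size=(200, 3)),     # London-ish
])

model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(requests)
for center in model.cluster_centers_:
    print(f"hotspot near ({center[0]:.1f}, {center[1]:.1f}), ~{center[2]:.0f} MB/request")
```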

There’s no guarantee that experimenting with machine learning will lead to immediate results, but with skills development, better toolkits (including access to powerful GPUs and easier-to-understand SDKs) and the continued proliferation of data, it will surely be a game changer. The healthcare industry, specifically, is already benefiting from solving more cases remotely via video, and with artificial intelligence (AI) and machine learning on the rise, diagnostic expertise can scale to solve even more. (Watson, anyone?) IT leaders should invest in machine learning now, or they risk missing significant opportunities to help their businesses.

With new technologies, more choice in the marketplace and significant platform changes converging, 2019 is shaping up to be an exciting year. Innovations like 5G, self-driving cars and drone-based package delivery will make this one of the most exciting times in history to be a technologist. Hopefully, more enterprises can move from merely exploring these opportunities to realizing the success and business advantages they can bring.

Jason Carolan, Chief Innovation Officer


Jason Carolan leads Flexential’s customer-driven innovation and advisory programs office, providing leading insights to product and technology evolution in a dynamic industry.
