Cloud Computing Predictions for 2019 with Rick Kilcoyne of CloudBolt

As 2018 comes to a close, prediction season is in full swing. Enterprise computing professionals across the IT space reevaluate this year's trends and missteps in preparation for next year, and in doing so, customers learn which solutions deserve their focus. We spoke with Rick Kilcoyne, VP of Solutions Architecture at CloudBolt, to get his cloud computing predictions for 2019.

CloudBolt offers a cloud management platform for hybrid cloud. Customers gain access to self-service IT, centralized management, lifecycle management, and more.

Containers will make serious inroads in 2019. Additionally, people will care less about where the cloud is running and more about how containers are running.

We’re moving to a more outcome-based model in which users and developers are less concerned about where and how their applications run and more interested in getting faster access to performant applications. Containers offer this speed of deployment and performance, but it comes with deployment complexity. To leverage the power of fast deployment with containers, IT teams need to be able to wrap these containers and their applications in a single bundle that can be easily consumed by end users and tracked by management as they’re deployed across multiple clouds for reasons of performance, cost, and access.
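The "single bundle" idea above can be sketched in a few lines. The following is a minimal, hypothetical Python model (the `Bundle` class and its methods are illustrative only, not any CloudBolt API): an application plus its container images is packaged as one unit, and its deployments across clouds are recorded so management can see where it runs.

```python
from dataclasses import dataclass, field

@dataclass
class Bundle:
    """An application and its container images, packaged as one deployable unit."""
    name: str
    images: list                              # image references, e.g. "registry/app:1.4"
    deployments: list = field(default_factory=list)  # (cloud, region) records

    def deploy(self, cloud: str, region: str) -> None:
        # A real platform would call the provider's API here;
        # this sketch only records where the bundle is running.
        self.deployments.append((cloud, region))

    def running_on(self) -> set:
        """Return the set of clouds this bundle is currently deployed to."""
        return {cloud for cloud, _ in self.deployments}

app = Bundle("web-frontend", images=["registry.example.com/web:2.1"])
app.deploy("aws", "us-east-1")
app.deploy("gcp", "europe-west1")
print(app.running_on())
```

The point of the sketch is the separation: end users consume `Bundle` as one thing, while the `deployments` record gives management a single place to track multi-cloud placement.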

Service providers must grapple with the notion that they no longer sell solutions; they sell commodities. Because of this, the industry faces an interesting cost-structure predicament. Enterprises want commoditization of the cloud, while vendors want to provide services that lock in customers.

The original vision of cloud was a pool of commodity compute, network, and storage. Enterprises still buy into this idea and want to leverage this commodity, yet cloud vendors are resistant to becoming a commodity whose price is a race to zero. To counter this demand, cloud providers are layering value-add services on top of their commodity IaaS platforms to increase margins. This sets up a power struggle between the enterprise and the cloud provider: the enterprise's interest in commodity compute versus the cloud provider's need to turn a profit.

“Infrastructure as code” will hit its trough of disillusionment. IT teams will start wrestling with the view that infrastructure is code, and they’ll have to manage it carefully. They must develop protocols to ensure scripts are version-controlled and wrapped so that everything executes the same way every time.

CODE is the key word in infrastructure-as-code. Code is easy to modify, but code is hard to maintain. When working with infrastructure-defining scripts, guarantees must be in place: only the approved version of the script should execute. What if I have tens of developers checking out these scripts from a git repository and running them? What mechanism ensures that changes to these scripts (code) are merged back to the source code repository so that everyone is using the same version? This will almost certainly become a major headache if it hasn’t already.
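One common guard against the drift Kilcoyne describes, assuming the "approved" version's hash is published somewhere developers can't edit, is to refuse to run any infrastructure script whose content no longer matches the blessed digest. A minimal sketch (the `APPROVED` allow-list and script name are hypothetical):

```python
import hashlib

# Hypothetical allow-list: script name -> SHA-256 digest of the approved version,
# published out of band (e.g. by a CI job on merge to the main branch).
APPROVED = {
    # digest of the literal bytes b"hello", standing in for real script content
    "provision.sh": "2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824",
}

def is_approved(name: str, content: bytes) -> bool:
    """True only if the script's bytes hash to the published, approved digest."""
    return hashlib.sha256(content).hexdigest() == APPROVED.get(name)

print(is_approved("provision.sh", b"hello"))            # approved content passes
print(is_approved("provision.sh", b"hello # tweaked"))  # a local tweak fails
```

A wrapper that calls `is_approved` before executing the script forces developers to merge changes back upstream: an unmerged local edit simply won't run, which is one answer to the "same version for everyone" problem raised above.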

Serverless computing is a trap door. It seems simple and enticing, but what manages the code? How do teams ensure the same code runs across cloud providers, and that it all works?

Many are tantalized by serverless computing, but moving code between serverless platforms is extremely difficult, all the more so because of cloud-vendor-specific libraries, paradigms, and IAM. Serverless computing is the technological equivalent of a snare trap: there’s virtually no way to easily migrate from one platform to another once committed.
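One partial mitigation, sketched below on the assumption that business logic can be kept free of provider SDKs, is to isolate the cloud-specific entry points behind thin adapters, so only the adapters need rewriting in a migration. The handlers here are illustrative (the AWS adapter follows Lambda's event/context convention; the GCP adapter assumes a request object exposing `get_json()`), and `process_order` stands in for real application code:

```python
import json

def process_order(order: dict) -> dict:
    """Pure business logic: no cloud SDKs, no provider-specific types."""
    total = sum(item["price"] * item["qty"] for item in order["items"])
    return {"order_id": order["order_id"], "total": total}

# Thin, provider-specific adapters. In a migration, only these change.

def aws_lambda_handler(event, context):
    # AWS Lambda-style entry point: parse the HTTP body, delegate, wrap the result.
    body = json.loads(event["body"])
    return {"statusCode": 200, "body": json.dumps(process_order(body))}

def gcp_http_handler(request):
    # Cloud Functions-style entry point; delegates to the same core logic.
    return json.dumps(process_order(request.get_json()))

order = {"order_id": "A1", "items": [{"price": 5.0, "qty": 2}, {"price": 1.5, "qty": 4}]}
print(process_order(order))  # {'order_id': 'A1', 'total': 16.0}
```

This doesn't escape the snare entirely: IAM policies, event formats, and any vendor services the logic calls still differ per platform. It only keeps the core code portable.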
