How to Eliminate Expensive and Unexpected Cloud Spend


Public clouds come with their fair share of complexity barriers. Many solution providers build their entire business model around helping users get more out of AWS, Azure, Google Cloud, and others. Workload and security solutions tend to take priority here, but cost optimization should never be overlooked.

Cloud costs often run higher than expected, for a variety of reasons. To learn how to better optimize cloud costs, we chatted with Unitas Global founder and CTO, Grant Kirkwood. Unitas Global offers managed services for a variety of cloud needs, along with solutions for hybrid cloud and performance monitoring. Read more about them at the bottom of this post.

How can executives balance cloud costs and desired outcomes?

It starts with clearly defining the desired outcomes and aligning the entire team around those goals. Companies often fail to gain alignment, which leads to scope creep and increased costs.

Outcomes must be specific and constrained. We find that companies look to leverage the cloud toward one of two broad themes, depending on their overall digital competency:

  • Operational improvement and/or efficiency: primarily tactical and practical in nature, e.g., shutting down existing data centers, reducing costs, optimizing workloads.
  • Transformation and new revenue: strategic in nature, e.g., leveraging cloud technology to transform the business, create new revenue opportunities, improve customer experiences, etc.

With clear outcomes and scope defined, a plan with specific milestones, timing, and budget can be created. Tracking progress against the plan ensures costs remain within budget.

What overlooked factors drive cloud costs up?

Studies suggest that 20-40 percent of cloud spend is wasted or under-utilized.

Over-provisioning

Traditional on-premises workloads are most often “thin provisioned”: virtual machines are purposely over-provisioned, with the unused capacity going back into the pool on a variable, real-time, minute-by-minute basis. In an environment where the company owns the underlying hardware, this makes perfect sense, as it allows for the most efficient use of the hardware while still allowing individual virtual machines to burst when needed. However, it also means that, measured VM by VM, individual virtual machines are consistently under-utilized relative to their provisioned resources.

Studies have suggested that the average CPU utilization of VMs in an enterprise environment may be no more than 30%. In a private environment, this doesn’t carry a cost. However, when migrating these traditional on-premises workloads to public cloud, companies often provision resources on a like-for-like, 1:1 basis, forgetting that they lose the benefit of “thin” provisioning. In other words, they’re paying for 100% of the provisioned VM capacity even if they’re only actually using 50%.
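The math behind that waste is simple to sketch. The snippet below is a hypothetical illustration, not real provider pricing: it totals the spend that goes to provisioned-but-unused capacity across a small fleet of VMs migrated 1:1 from on-premises sizes.

```python
# Hypothetical sketch: estimate spend wasted by provisioning cloud VMs
# like-for-like against on-prem sizes instead of right-sizing to usage.
# All costs and utilization figures below are illustrative assumptions.

def wasted_monthly_spend(vms):
    """vms: list of (monthly_cost, avg_utilization) pairs.

    In public cloud you pay for 100% of provisioned capacity, so the
    slice above average utilization is effectively money wasted.
    """
    return sum(cost * (1.0 - util) for cost, util in vms)

# Three VMs sized 1:1 from on-prem, averaging 25-40% CPU utilization.
fleet = [(300.0, 0.30), (150.0, 0.25), (450.0, 0.40)]
print(f"Wasted spend: ${wasted_monthly_spend(fleet):,.2f}/month")
```

On this made-up fleet, roughly two-thirds of the $900 monthly bill pays for idle capacity, which is why right-sizing before migration matters.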

Orphaned resources

In today’s fast-paced world, resources are provisioned on an as-needed basis. While this is convenient and helps accelerate development and deployment, it means that tracking resources requires strict adherence to process and enforcement of governance tools. Too often, one week’s project ends on a Friday with some “leftover” resources left online to be cleaned up later. Monday arrives, a new project lands on your desk with a looming deadline, and last week’s clean-up never happens, leaving orphaned resources online and costing money. Eliminating this takes diligent governance, processes, and tools, which companies often don’t have in place until it’s too late. Identifying orphans requires tooling and patience, but we consistently find that companies have more leftovers than they think.
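The governance check described above can be automated. Here is a minimal sketch, assuming resources carry a project tag and projects have known end dates; the field names, project names, and resource IDs are all illustrative.

```python
# Hypothetical sketch: flag resources whose owning project has already
# ended, or which were never tagged with a project at all.
from datetime import date

def find_orphans(resources, projects, today):
    """Return resources tied to no project, or to a project past its end date."""
    ended = {name for name, end in projects.items() if end < today}
    return [r for r in resources
            if r.get("project") is None or r["project"] in ended]

projects = {"proj-a": date(2024, 5, 3), "proj-b": date(2024, 12, 31)}
resources = [
    {"id": "vm-1", "project": "proj-a"},   # project ended last Friday
    {"id": "disk-7", "project": None},     # never tagged at all
    {"id": "vm-2", "project": "proj-b"},   # still active, not an orphan
]
for r in find_orphans(resources, projects, today=date(2024, 5, 6)):
    print("orphan:", r["id"])
```

Run on a schedule, a check like this turns the Monday-morning “clean up later” into a report that actually gets acted on; the hard part in practice is enforcing the tagging discipline the check depends on.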

Transfer costs

Moving data in, out of, and within the cloud costs money. The biggest cost is often data transfer to the internet, sometimes called egress. Transfer between regions within a cloud, or between services within the cloud, also drives up cost; data transfer to storage services can cost as much as internet egress. We often find companies surprised at how quickly those costs mount. Given how expensive egress can be, companies can save significant money with a dedicated connectivity strategy.
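A back-of-the-envelope estimate shows how these paths add up. The per-GB rates below are illustrative assumptions only; real provider pricing is tiered and varies by region and service.

```python
# Hypothetical sketch: rough monthly transfer-cost estimate from
# per-path volumes. Rates are illustrative, not real provider quotes.

RATES = {                      # $/GB, assumed for illustration
    "internet_egress": 0.09,
    "inter_region":    0.02,
    "to_storage":      0.09,   # can rival internet egress
}

def transfer_cost(gb_by_path):
    """Sum cost across transfer paths given GB moved on each."""
    return sum(gb * RATES[path] for path, gb in gb_by_path.items())

monthly = {"internet_egress": 5000, "inter_region": 2000, "to_storage": 1000}
print(f"Estimated transfer cost: ${transfer_cost(monthly):,.2f}/month")
```

Even at these modest volumes, internet egress dominates the total, which is the cost a dedicated connectivity strategy aims to reduce.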

Where are organizations lacking visibility to prepare for future needs?

Inventory management in public cloud is a challenge, particularly at scale. The cloud providers don’t make identifying under-utilized resources easy (one could argue that’s by design), so a whole cottage industry of companies has sprung up to build software tools that solve this problem.

There’s also a retroactive side to this. Most companies start small with cloud and then progressively add bit by bit. Before long, they’re overwhelmed by how much they’ve spent on cloud and have to scramble to bring costs down. Very few companies take the time up front to implement the tools needed to proactively manage cost and future growth, and it’s really difficult to do this after the fact. Without that up-front work, you have no visibility into what’s happening, which leads to cloud sprawl and wasted money.

Can you share more about Unitas Global and how it helps companies optimize cloud spend?

As more and more mission-critical applications move to the cloud, Unitas Global is designing high-performance network access strategies that connect users directly to their data and bypass the oft-overlooked, expensive egress charges. Here are three ways Unitas Global helps companies optimize cloud spend:

  • Unitas Global offers a cloud-readiness assessment service. Similar to a workshop, we sit down with clients to help them understand their cloud strategy, including implementation time and costs. As part of the offering, we make moving workloads to the cloud more efficient by running data-modeling tools against the workload to understand the actual requirement, rather than over-provisioning.
  • For clients just beginning to build public cloud infrastructure, Unitas Global implements cloud spend control tooling up front, before the sprawl starts to happen.
  • Unitas Atlas Tooling can be deployed into existing public environments. It does a complete discovery and analysis of resource consumption, which helps us find areas where we can reduce spend and make systems more efficient. Essentially, it’s a cross-platform, cross-cloud inventory and resource management tool that we can deploy after the fact.