Any cloud vendor that loudly proclaims itself to be the “fastest” or “best performing” deserves some skepticism. Speed and performance can be made to look impressive if conditions are aligned a certain way, but that doesn’t mean the results will hold up long term.
Companies evaluating cloud providers need solid data on performance, since it informs choosing the right host, planning for scalability, and spending effectively on resources.
Here are three tips companies can use to accurately gauge cloud performance and reliability:
1. Get the most out of every dollar.
Ideally, you want reliable performance at a fair price while avoiding any hidden charges that can erode the true ROI. You want servers that offer strong disk read and write performance that holds up under a variety of test scenarios.
For instance, ask for test data that shows the servers’ I/O profile for both large and small block sizes. Review several comparable pieces of hardware to find outstanding performers. Why does this type of performance matter?
Here’s an example: If you are running a SQL Server database that typically reads and writes in 64k blocks, you want a server that offers consistent, reliable storage performance and no additional charges for provisioned input/output operations per second (IOPS). You want the ideal combination of fewer required resources and transparent costs.
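As a rough local proxy for the kind of block-size test data described above, the sketch below times sequential writes at two block sizes. It is illustrative only: real evaluations should use a dedicated tool and the vendor's own published profiles, and the file size and block sizes here are arbitrary choices, not recommendations.

```python
import os
import tempfile
import time

def measure_write_throughput(block_size: int, total_bytes: int = 8 * 1024 * 1024) -> float:
    """Write `total_bytes` to a temp file in `block_size` chunks and
    return throughput in MB/s. A crude stand-in for a real benchmark."""
    block = os.urandom(block_size)
    blocks = total_bytes // block_size
    fd, path = tempfile.mkstemp()
    try:
        start = time.perf_counter()
        with os.fdopen(fd, "wb") as f:
            for _ in range(blocks):
                f.write(block)
            f.flush()
            os.fsync(f.fileno())  # force data to disk so the timing is honest
        elapsed = time.perf_counter() - start
        return (blocks * block_size) / elapsed / (1024 * 1024)
    finally:
        os.remove(path)

# Compare a small (4 KB) and a large (64 KB) block profile.
for size in (4 * 1024, 64 * 1024):
    print(f"{size // 1024}k blocks: {measure_write_throughput(size):.1f} MB/s")
```

Running the same comparison on several candidate servers shows which ones keep their throughput consistent across block sizes rather than excelling at only one profile.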
2. Focus on efficient decision making.
Remember the book “The Paradox of Choice,” which explores choice overload and why offering too many options is detrimental? The same issues apply to picking a server.
Some cloud providers offer many server choice options, often leading to customers selecting one that has too much RAM in order to meet another criterion, such as having enough CPUs. Look for a vendor that doesn’t bury you under a mountain of canned server sizes, but rather gives you the choice to define the server capacity that best fits your workload. You also want the flexibility to choose the number of CPUs or amount of memory that makes sense for your business – similar to how you would purchase traditional servers. Spend less time reviewing dozens of server configurations and more time focusing on app and services development.
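The over-provisioning problem above can be made concrete with a small sketch. The instance shapes and all prices below are hypothetical illustrations, not any vendor's real catalog; the point is only that fixed sizes force a CPU-heavy workload to pay for RAM it never uses.

```python
# Hypothetical catalog of fixed instance sizes: (name, vCPUs, RAM GB, $/hr).
# Shapes and prices are made up for illustration.
FIXED_SIZES = [
    ("small",   2,  8, 0.10),
    ("medium",  4, 16, 0.20),
    ("large",   8, 32, 0.40),
    ("xlarge", 16, 64, 0.80),
]

def cheapest_fixed(cpus_needed: int, ram_needed: int):
    """Cheapest canned size that satisfies both requirements."""
    candidates = [s for s in FIXED_SIZES
                  if s[1] >= cpus_needed and s[2] >= ram_needed]
    return min(candidates, key=lambda s: s[3])

# A CPU-heavy workload: 8 vCPUs but only 8 GB of RAM.
name, cpus, ram, price = cheapest_fixed(8, 8)
print(f"Fixed sizing forces '{name}': {cpus} vCPUs, {ram} GB at ${price}/hr")

# With per-resource pricing (say $0.03 per vCPU-hr plus $0.005 per GB-hr,
# also illustrative), the same workload costs less per hour:
custom = 8 * 0.03 + 8 * 0.005
print(f"Custom sizing: 8 vCPUs, 8 GB at ${custom:.2f}/hr")
```

Under these made-up numbers, the canned "large" shape costs $0.40/hr while a custom 8-vCPU, 8 GB configuration costs $0.28/hr; the extra 24 GB of RAM is pure waste for this workload.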
3. Scalability and predictable performance are crucial.
Performance testing can help you gauge the best way to scale an application, but you can’t come to a conclusion without knowing how the platform reacts to spikes in capacity.
Use performance metrics from a reputable third party such as CloudHarmony to compare various cloud servers to a bare metal reference system. You want to be sure performance scales linearly as CPU cores are added.
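One way to read that kind of benchmark data is to compute each measurement's efficiency against perfect linear scaling from the smallest configuration. The sketch below does exactly that; the throughput numbers are made-up illustrations, not real benchmark results.

```python
def scaling_efficiency(samples):
    """Given (cores, throughput) measurements, return each point's
    efficiency relative to perfect linear scaling from the first sample."""
    base_cores, base_tp = samples[0]
    per_core = base_tp / base_cores
    return [(cores, tp / (per_core * cores)) for cores, tp in samples]

# Hypothetical benchmark results: throughput at 1, 2, 4, and 8 cores.
measurements = [(1, 100.0), (2, 196.0), (4, 380.0), (8, 700.0)]
for cores, eff in scaling_efficiency(measurements):
    print(f"{cores} cores: {eff:.0%} of linear")  # e.g. 8 cores: 88% of linear
```

A platform whose efficiency stays near 100% as cores are added is one you can scale up with confidence; a steep drop-off means bigger VMs buy you less than their price suggests.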
This kind of data on server performance helps you get the most out of a cloud portfolio. Knowing in advance that you can add resources to a VM before needing more hardware, and that you can lower costs in the process, gives you peace of mind. Choose cloud hardware that can scale both up and out, and you’ll be able to plan your scaling events effectively.
While performance metrics only capture a moment in time, a long-term performance profile allows you to consistently make informed choices while lowering costs.
About the author: Richard Seroter is director of product management for CenturyLink Cloud. He is a Microsoft MVP, lead InfoQ editor for cloud computing, blogger, author, Pluralsight trainer and frequent public speaker.