My cloud costs what?!

March 13, 2019 Derrick Harris

Debates about the true cost of cloud computing have been happening since Amazon Web Services first debuted more than a decade ago. However, it’s only in the past few years—as early-adopter startups have grown into IPO-ready companies and large, traditional enterprises have really begun using IaaS resources in earnest—that we’re getting some useful glimpses into what it really costs to operate in the cloud. And those costs are often higher than what people expected to see. There have been some news stories lately highlighting just how expensive cloud bills can be, complemented by some good commentary on just how reasonable those expenditures are.

The easy reaction for public-cloud critics (especially to some of the more hyperbolic headlines) might be to shout, “I told you so!” If you look a little deeper, though, it’s clear that the answer to high cloud bills is more nuanced than simply pulling everything back into a corporate data center—if only because so many companies have already embraced the cloud as a major part of their IT strategy going forward. That cat is out of the bag for good.

Rather, the right discussions to be having are around things like right-sizing cloud infrastructure to workloads (over-provisioning happens a lot in the cloud, too); getting rogue instances under control; being able to predict future demand; and figuring out where to use cloud-provider services (e.g., managed databases) versus ISV or other third-party services. That last issue is often couched in terms of lock-in, but should also be discussed in terms of vendor priorities and business models. Nothing comes free in enterprise IT, so it’s wise to consider where you’re paying for higher value and where you’re just paying for more stuff.
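To make the right-sizing point concrete, here is a minimal sketch of the kind of check a cost tool might run. Everything in it is illustrative: the instance sizes, hourly prices, and thresholds are hypothetical, not real cloud-provider pricing.

```python
# Hypothetical right-sizing check: flag instances whose peak CPU
# utilization suggests a smaller (cheaper) size would suffice.
# Sizes, prices, and thresholds below are made up for illustration.

HOURLY_PRICE = {"xlarge": 0.192, "large": 0.096, "medium": 0.048}
DOWNSIZE = {"xlarge": "large", "large": "medium"}

def rightsize(instance_type, peak_cpu_pct, headroom_pct=20):
    """Suggest the next size down if peak utilization would still fit
    on half the capacity while leaving the requested headroom."""
    smaller = DOWNSIZE.get(instance_type)
    # Halving capacity roughly doubles the utilization percentage.
    if smaller and 2 * peak_cpu_pct <= 100 - headroom_pct:
        savings = HOURLY_PRICE[instance_type] - HOURLY_PRICE[smaller]
        return smaller, round(savings * 24 * 30, 2)  # monthly savings
    return instance_type, 0.0

# An xlarge that never exceeds 30% CPU fits comfortably on a large.
print(rightsize("xlarge", peak_cpu_pct=30))
```

The same utilization-versus-capacity arithmetic applies whether the resource is a VM, a managed database tier, or a container’s CPU request.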

And, interestingly, containers—which often are touted as a solution to over-provisioning, because you can theoretically pack many of them onto a single machine—are increasingly cited as a cause of additional spending. Some of this is an analogue to zombie VMs and cloud instances (they’re spun up but never spun back down), and some of it comes back to the idea of right-sizing the machine to fit the workload. But it all speaks to some 101-level cloud-native rules around investing in capabilities like automation, monitoring and container orchestration to truly maximize efficiency.
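The zombie pattern described above—resources spun up but never spun back down—is exactly what monitoring and automation are meant to catch. As a sketch, a scan might flag anything that is both old and effectively idle; the field names, ages, and CPU thresholds here are assumptions, not any particular provider’s API.

```python
# Hypothetical "zombie" scan: flag long-running instances or containers
# with negligible average CPU. Field names and thresholds are illustrative.
from datetime import datetime, timedelta

def find_zombies(instances, now, min_age_days=7, max_avg_cpu_pct=2.0):
    """Return IDs of instances that are both old and effectively idle."""
    cutoff = now - timedelta(days=min_age_days)
    return [
        i["id"]
        for i in instances
        if i["launched"] < cutoff and i["avg_cpu_pct"] < max_avg_cpu_pct
    ]

now = datetime(2019, 3, 13)
fleet = [
    {"id": "web-1", "launched": datetime(2019, 3, 1), "avg_cpu_pct": 41.0},
    {"id": "test-7", "launched": datetime(2019, 1, 5), "avg_cpu_pct": 0.3},
]
print(find_zombies(fleet, now))  # ['test-7']
```

In practice an orchestrator or cloud billing API would supply the inventory and metrics; the point is that this kind of hygiene has to be automated, because nobody remembers the test environment they launched in January.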

This post originally appeared as part of the March 7 Intersect newsletter. Sign up on the homepage to get it delivered to your inbox every Thursday.
