As part of my opening keynote March 31 at Cloud Connect in Las Vegas, I summarized the latest moves in cloud computing, including the recent price cuts by the major public-cloud vendors: Amazon Web Services, Google and Microsoft.
This still-unfolding cloud price war—one that affects hundreds of thousands of cloud-computing customers and their IT budgets—has grabbed plenty of headlines. Unfortunately, many pundits simply repeated talking points from press releases in trying to explain the price cuts’ significance, with little attempt to understand what really happened or why. In my Cloud Connect talk, I tried to dig deeper and look at the broader context for the price cuts, particularly AWS’s. My main takeaway: Not enough attention is being paid to the broader technical refresh AWS has had in the works for a while.
Given that conclusion, here are my main summary points for IT executives trying to make sense of the latest industry pricing action:
- AWS users should migrate from the obsolete m1, m2, c1, c2 families to the new m3, r3, c3 “instance types”—industry lingo for families of cloud-based virtual machines—to get better performance at lower prices with the latest Intel CPUs.
- Any competitor that cites benchmark or cost comparisons with the obsolete AWS m1 family as a basis should be called out as bogus “benchmarketing”.
- AWS and Google instance prices are now essentially the same for similar specs.
- Microsoft doesn’t appear to have the latest Intel CPUs generally available and only matches prices for obsolete AWS instances.
- IBM SoftLayer pricing is still higher than AWS and Google, especially on small instance types.
- Google’s statement that prices should follow Moore’s law implies that we should expect prices to halve every 18-24 months.
For those of you who want more background, here are the pricing Web pages from AWS, Google Compute Engine, Microsoft Azure, and IBM SoftLayer. I assembled my own spreadsheet summary of instance specifications from the above vendors: http://bit.ly/cloudinstances. Finally, RightScale published a broader analysis of the Google and AWS announcements.
Here are the facts. On Tuesday, March 25, Google announced some new features and steep price cuts to its cloud services. The next day, industry leader AWS also announced new features and matching price cuts. On Monday, March 31, Microsoft Azure followed suit and reduced prices.
In the following analysis, I’m only going to discuss instance types and on-demand prices. There was a lot more in the announcements, and in particular I will discuss other pricing models beyond on-demand in future blog posts.
Now, for the context. The most important aspect of all this is to understand that AWS has two generations of instance types, and is currently in a transition from Intel CPU technology the company introduced five or more years ago to a new generation introduced in the last year. The new-generation CPUs are based on an architecture known as Sandy Bridge. The latest tweak is called Ivy Bridge and boasts incremental improvements that offer more cores per chip and slightly higher performance. AWS states at http://aws.amazon.com/ec2/instance-types/ that m3 is based on “Intel Xeon E5-2670 (Sandy Bridge or Ivy Bridge) processors”, and c3 uses “2.8 GHz Intel Xeon E5-2680v2 (Ivy Bridge)”.
Since Google is a more-recent entrant to the public-cloud market, all its instance types are based on Sandy Bridge. So using the latest instance families, AWS prices and features can be directly compared with Google’s. AWS is encouraging the transition by pricing its newer, faster instances at a lower cost than the older, slower ones. This is the broader trend that I think a lot of commentators missed when they looked recently at the Google/AWS/Microsoft cloud price war. In its recent announcement, AWS cut its prices for obsolete instance-type families by a smaller percentage than for the newer families, so the gap has just widened; it’s cheaper now to buy the faster AWS technology.
Old AWS instance types have names starting with m1, m2 and c1, c2. They all have newer replacements known as m3, r3 and c3, except for the smallest one – the m1.small. The newer instances have a similar amount of RAM and CPU threads, but the CPU performance is significantly higher. The new equivalents also replace small, slow, local disks with smaller but far faster—and more reliable—solid-state disks, and the underlying networks move from 1Gbit/s to 10Gbit/s.
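The old-to-new family mapping above can be sketched as a small lookup helper. This is an illustrative sketch only: the family names come from this article, but the size-for-size pairing (e.g., m1.large to m3.large) is my assumption, not an official AWS migration table.

```python
# Hypothetical helper mapping obsolete AWS instance families to their
# newer-generation replacements, per the article. Size-for-size pairing
# is an illustrative assumption.
FAMILY_UPGRADES = {
    "m1": "m3",  # general purpose
    "m2": "r3",  # memory optimized
    "c1": "c3",  # compute optimized
    "c2": "c3",
}

def suggest_upgrade(instance_type):
    """Return a newer-generation equivalent, or None if there isn't one."""
    if instance_type == "m1.small":
        return None  # the one old type with no current-generation replacement
    family, _, size = instance_type.partition(".")
    new_family = FAMILY_UPGRADES.get(family)
    return f"{new_family}.{size}" if new_family else None

print(suggest_upgrade("m1.large"))   # m3.large
print(suggest_upgrade("m2.xlarge"))  # r3.xlarge
```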
Most people are much more familiar with the old-generation instance types. So when some AWS competitors write their press releases, they are able to get away with claiming that they are both faster and cheaper than AWS by comparing their own offerings against AWS’s older-generation products. This is an old “benchmarketing” trick—compare your new product against the competition’s older and more recognizable product. But obviously, it’s cherry-picking the results.
For the most commonly used instance types, there is a close specification match between the AWS m3 and the Google n1-standard. They are also exactly the same price per hour. Since AWS released its price changes after Google, this implies that AWS deliberately matched Google’s price. The big architectural difference between the vendors is that Google instances are diskless and all their storage is network-attached, while AWS’s instances have various amounts of SSD included.
The AWS hypervisor also makes slightly more memory available per instance, and ratings for the c3 confirm that AWS is supplying a higher CPU clock rate for that instance type. For the high-memory-capacity instance types, the situation is a little different. The Google n1-highmem instances have less memory available than the AWS r3 equivalents and cost a bit less. This makes intuitive sense, as this instance type is normally bought for its memory capacity. The AWS pricing page doesn’t yet list CPU performance data for the new r3 instances.
Microsoft’s and IBM’s offerings
Now we come to Microsoft. Previously, Microsoft had committed to match AWS’s prices. And in the company’s latest announcement, Microsoft’s new prices did indeed match the m1 range exactly. Its memory-oriented A5 instance is cheaper than the old AWS m2.xlarge, but the A5 is an older, slower CPU type. It’s also more expensive ($0.22 vs. $0.18) and has less memory (14GB vs. 15GB) than the AWS r3.large.
The common CPU options on Azure are aligned with the older AWS instance types. Azure does have Intel Sandy Bridge CPUs for compute-intensive use cases via the A8 and A9 instance types, but I couldn’t find list pricing for them, and they appear to be a low-volume, special option. The Azure pricing strategy ignores the current-generation AWS product, so the Microsoft-advertised “price-match guarantee” doesn’t deliver. In addition, the Google and AWS price changes were effective from April 1st, but Azure’s new pricing doesn’t take effect until May 1st.
Next I’ll turn to IBM SoftLayer. This product has a choose-what-you-want model, rather than a specific set of instance types. The smaller instances cost $0.10/hr, whereas the comparable AWS and Google n1-standard-1 instances are $0.07/hr. As you pick a bigger instance type on SoftLayer, the cost doesn’t scale up in a linear fashion. In contrast, Google and AWS double their prices each time the configuration doubles. The SoftLayer equivalent of the n1-standard-16 actually costs slightly less than Google’s. SoftLayer pricing for most instances is in the same ballpark as AWS and Azure instances were before the cuts, so I expect IBM will eventually have to cut prices to match the new level.
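The linear scaling just described can be shown in a few lines. As a sketch: the $0.07/hr rate for the smallest standard instance comes from this article; the larger sizes below are simply extrapolated from the price-doubles-when-the-configuration-doubles rule, not quoted from any vendor price list.

```python
# Illustrative sketch: on Google and AWS, on-demand price scales
# linearly with instance size (price doubles when the configuration
# doubles). Base rate of $0.07/hr is from the article; larger sizes
# are extrapolations, not quoted prices.
BASE_PRICE = 0.07  # $/hr for a 1-vCPU "standard" instance

def on_demand_price(vcpus):
    """Hourly price for a standard instance with the given vCPU count."""
    return BASE_PRICE * vcpus

for v in (1, 2, 4, 8, 16):
    print(f"n1-standard-{v}: ${on_demand_price(v):.2f}/hr")
```

SoftLayer’s build-your-own pricing doesn’t follow this straight line, which is why its small instances look expensive relative to AWS and Google while its largest configurations come out closer to parity.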
Gaps and missing features
The remaining anomaly in AWS pricing is the low-end m1.small. There is no newer-technology equivalent to this instance at present, so I wouldn’t be surprised to see AWS do something interesting in this space soon. Generally AWS has a much wider range of instances than Google, but AWS is missing an m3.4xlarge to match Google’s n1-standard-16. In addition, the Google n1-highcpu range has double the CPU-to-RAM ratio of the AWS c3 range, so the two aren’t directly comparable.
Google has no equivalent to the highest-memory and CPU AWS instances, and has no local disk or SSD options. Instead, the company has better attached-disk performance than AWS Elastic Block Store, but attached disk adds to the instance cost, and can never be as fast as opting for local SSD inside the instance.
Microsoft Azure needs to refresh its instance-type options. Right now, it has a much smaller range; older, slower CPUs; and no SSD options. Bottom line: This offering doesn’t look particularly competitive.
If you buy computer hardware and capitalize it over three years, you lock into the same monthly cost for three years. When your vendor cuts prices, you don’t get to reduce your monthly costs. Plus, as the three-year period goes on, your CPUs get old, leading to less-competitive response times and higher failure rates. Now, however, with the big public-cloud vendors driving costs down several times a year and upgrading their instances, your model of public vs. private costs needs to shift. You need to be factoring in something like Moore’s Law for cost reductions and count on technology refreshes more often than every three years. Google, in its recent pricing announcement, actually said we should expect Moore’s Law to apply to cloud pricing, which I interpret to mean that we can expect costs to halve about every 18-24 months. This isn’t a race to zero; it’s a proportional reduction every year. Over a three-year period, your monthly cost at the end will wind up being a third to a quarter of your cost at the start. It’s a good story for customers.
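The arithmetic behind that "a third to a quarter" figure is worth making explicit. A minimal sketch, assuming prices halve every 18 to 24 months as the Moore's-Law interpretation above suggests:

```python
# Back-of-the-envelope check of the "Moore's Law pricing" claim:
# if cloud prices halve every 18-24 months, what fraction of today's
# price are you paying at the end of a three-year (36-month) term?

def price_fraction(months, halving_period):
    """Fraction of the starting price remaining after `months`."""
    return 0.5 ** (months / halving_period)

print(price_fraction(36, 18))  # two halvings -> 0.25 (a quarter)
print(price_fraction(36, 24))  # 1.5 halvings -> ~0.35 (about a third)
```

Contrast that with capitalized hardware, where the fraction stays at 1.0 for all 36 months while the equipment ages.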
Yet I still hear CIOs fret that cloud vendor lock-in will let the big providers raise prices. This ruse is used to justify private-cloud investments. But I think this argument is ridiculous: Even without switching vendors, public-cloud customers will see repeated price reductions over the next several years for the systems they’re already using. AWS’s most recent price cut was its 42nd since the service began. I think the trend here is clear.
I’ve previously published presentation materials that include cost optimization with AWS. I’m researching this area and over the coming months will publish a series of posts on all aspects of cloud optimization.
The full presentation slides from my Cloud Connect talk, “The Good the Bad and the Ugly: Critical Decisions for the Cloud Enabled Enterprise”, are available via the new Powered by Battery site.