Penny Wise and Cloud Foolish

The two iron rules of cloud pricing introduced by AWS are:

  1. Prices never go up.
  2. We will absolutely soak you on data transfer charges.

Last week, Google Cloud published a post announcing that they are abandoning the first rule in favour of doubling down on the second.

The post was called “Unlock more choice with updates to Google Cloud’s infrastructure capabilities and pricing”, though the <title> was more straightforward: “Updates to Google Cloud’s Infrastructure pricing”. The main announcements were price changes to Cloud Storage, Cloud Load Balancing, and persistent disk snapshots, each covered in more detail below.

The common theme across the price changes is the introduction of data transfer fees, an increase in fees for active storage tiers, and a decrease in archive storage cost. From Google’s post:

So, today, we are announcing we will adjust our infrastructure product and pricing structure … [These changes] are also designed to better align with how other leading cloud providers charge for similar products, so customers can more easily compare services between leading cloud providers.

The main alignment here is Google adding data transfer charges to match AWS and breaking architectural promises they’ve made to their customers in the process. This is an incredibly short-sighted move and will damage customer trust in Google Cloud for many years.

Cloud Cost and Cloud Architecture

As Corey Quinn so eloquently put it, “all cloud cost is fundamentally about architecture”. When designing for the cloud, pricing is one of the most important signals to take into account. Pricing and quotas indicate how the cloud provider has designed the product, how they want you to think about it, and how you should use it.

The pricing changes Google is making strike at the heart of their customers’ applications and will force many customers to rearchitect their applications, or pay substantially more to keep their existing architecture.

One of GCP’s most differentiated features has been their multi-region and global services like Cloud Load Balancing, Cloud Storage, BigQuery, Cloud Spanner, and Cloud KMS. These let you operate on the same resources in multiple regions, giving you more resilience to an outage in a single region. As a bonus, Google’s multi-region services often come with strong consistency across regions so customers don’t have to deal with consistency at the application layer.

In contrast, most AWS services are region oriented, and when AWS provides multi-region services like DynamoDB Global Tables and S3 Cross-Region Replication, you need to handle eventual consistency yourself.

Google’s message was that multi-region services would give you higher availability. You too can run like Google. From the announcement of dual-region buckets:

With this new option, you write to a single dual-regional bucket without having to manually copy data between primary and secondary locations. No replication tool is needed to do this and there are no network charges associated with replicating the data, which means less overhead for you storage administrators out there. In the event of a region failure, we transparently handle the failover and ensure continuity for your users and applications accessing data in Cloud Storage.

People can argue about whether Google’s global/multi-regional control-plane provides more availability than AWS’s region-first approach, but it was a distinctive set of features that only Google had.
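
Part of the appeal was how little ceremony this took. Here’s a minimal Python sketch, using the google-cloud-storage client, of creating a dual-region bucket; the bucket name is a placeholder, and NAM4 is the location code for the Iowa + South Carolina dual-region pair.

    # Minimal sketch: create a dual-region Cloud Storage bucket with the
    # google-cloud-storage client. The bucket name is a placeholder.
    from google.cloud import storage

    client = storage.Client()

    # Passing a dual-region location code (NAM4 = Iowa + South Carolina)
    # makes Cloud Storage replicate data across both regions automatically --
    # no replication tooling, and, until these changes, no network charges.
    bucket = client.create_bucket("my-ha-bucket", location="NAM4")

    print(bucket.location, bucket.location_type)  # NAM4 dual-region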

Storage changes

The deepest architectural changes Google is making are to storage:

  1. Prices for active storage tiers are going up.
  2. Prices for archive storage are coming down.
  3. Replicating data into dual-region and multi-region buckets, previously free, will now incur data transfer charges.
  4. Reading from multi-region buckets will now incur data transfer charges.

The last two changes in particular make dual-region and multi-region buckets a much less attractive offer and invalidate a lot of architectural assumptions developers will have made. Businesses that have built a high-availability architecture around multi-region buckets are now faced with the unappealing options of:

  1. Paying a lot more for replication and reads from their buckets (a rough cost sketch follows this list).
  2. Rearchitecting their application and migrating to a single region, or perhaps a dual-region if they can make the data-replication pricing work.
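
To make option 1 concrete, here is a back-of-the-envelope sketch of the new bill shape in Python. Every rate below is an illustrative placeholder rather than a published Google price; what matters is that two line items which used to be zero now scale with traffic.

    # Back-of-the-envelope monthly bill for a multi-region bucket under the
    # new pricing. ALL rates below are illustrative placeholders, not Google's
    # published prices -- substitute the real rates for your locations.
    STORAGE_RATE = 0.026       # $/GB/month, multi-region Standard storage
    REPLICATION_RATE = 0.02    # $/GB, new charge on data written (replicated)
    READ_RATE = 0.01           # $/GB, new charge on cross-region reads

    def monthly_cost(stored_gb, written_gb, read_gb):
        storage = stored_gb * STORAGE_RATE
        replication = written_gb * REPLICATION_RATE  # was $0 before the change
        reads = read_gb * READ_RATE                  # was $0 before the change
        return storage + replication + reads

    # 10 TB stored, 1 TB written, 5 TB read per month:
    print(f"${monthly_cost(10_000, 1_000, 5_000):,.2f}")  # $330.00, up from $260.00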

Load balancing

Google is also increasing prices on Cloud Load Balancing, adding a $0.008-$0.012/GB charge for outbound data processing.

Starting October 1, 2022, we’ll apply an outbound data processing charge of $0.008 - $0.012 per GB (based on region) to all Cloud Load Balancing products in order to maintain consistency and alignment with the variable costs of the services across our Cloud Load Balancing portfolio.

Depending on your egress rates and network tier, this will amount to a 5-10% increase on internet egress, and an added cost for most kinds of internal load balancing.
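
Here’s roughly where that 5-10% figure comes from, assuming US first-terabyte list prices for internet egress and the $0.008/GB end of the new charge:

    # Rough effect of the new outbound data processing charge on US egress.
    # The egress rates are assumed first-TB US list prices; check the pricing
    # pages for your own regions and volumes.
    NEW_CHARGE = 0.008  # $/GB, US end of the $0.008-$0.012 range
    egress_rates = {"Premium tier": 0.12, "Standard tier": 0.085}  # $/GB

    for tier, rate in egress_rates.items():
        print(f"{tier}: +{NEW_CHARGE / rate:.1%} on top of ${rate}/GB egress")
    # Premium tier: +6.7% ...  Standard tier: +9.4% ...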

Persistent disks

Persistent disk snapshot pricing is also changing. I suspect this won’t be as big a deal for most customers as the previous changes.

The price changes mention a “us-central1 baseline” and “United States multi-region baseline”. This makes me think that prices in all other regions will be going up by an equivalent percentage.

Google also plans to introduce an archive snapshot tier later this year, priced at $0.019/GB/month but with a minimum billing period of 90 days.

As with Cloud Storage, previously free data transfer for creation and restoration of multi-region snapshots will now be charged at inter-region rates. Restoring a 100GB multi-region snapshot will now cost $1 in the US and Europe, and more in other regions. Depending on your architecture, these charges could add up!
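
Working through those numbers in a short sketch (the $0.01/GB restore rate is inferred from the $1-per-100GB figure; the archive rate and 90-day minimum are from the announcement):

    # Snapshot cost arithmetic. RESTORE_RATE is inferred from the $1/100GB
    # figure for the US and Europe; other regions will be higher.
    RESTORE_RATE = 0.01      # $/GB, inferred multi-region restore rate
    ARCHIVE_RATE = 0.019     # $/GB/month, announced archive snapshot tier
    ARCHIVE_MIN_MONTHS = 3   # 90-day minimum billing period

    snapshot_gb = 100
    print(f"Restore: ${snapshot_gb * RESTORE_RATE:.2f}")  # $1.00
    # Even an archive snapshot deleted after a day is billed for 90 days:
    print(f"Archive floor: ${snapshot_gb * ARCHIVE_RATE * ARCHIVE_MIN_MONTHS:.2f}")  # $5.70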

Why now?

On March 7th, 2022, The Information published “Why AWS Makes Money and Google Cloud Doesn’t”:

[…] In a positive sign, Google Cloud CEO Thomas Kurian last month told colleagues during an internal all-hands meeting that he expects the cloud unit to be profitable later this year, according to a person who viewed the event.

Another issue is that Google Cloud has fewer high-margin services—such as cloud database and analytics software—to sell to customers than AWS does. […] Former employees say AWS relies on these offerings for a large portion of its operating profit.

The Information doesn’t mention it here, but the highest margin product AWS sells is bandwidth. Charging for data transfer is as close as you can get to pure profit in the cloud. As Cloudflare has previously demonstrated, AWS likely marks up egress bandwidth by up to 8,000%, and it wouldn’t surprise me if inter-AZ and inter-region charges were also in that vicinity.
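
For a sense of scale, here is the markup arithmetic with AWS’s well-known $0.09/GB first-tier egress price and an assumed wholesale transit cost; the wholesale figure is a rough guess in the spirit of Cloudflare’s analysis, not a measured number.

    # Markup arithmetic. RETAIL is AWS's first-tier internet egress price;
    # WHOLESALE is an assumed blended transit cost, not a measured figure.
    RETAIL = 0.09       # $/GB
    WHOLESALE = 0.0011  # $/GB (assumption)

    markup = (RETAIL - WHOLESALE) / WHOLESALE
    print(f"Markup: {markup:.0%}")  # roughly 8,000%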

If Google Cloud wants to get to profitability by the end of the year, then introducing data transfer pricing makes a lot of sense. The extra revenue from data transfer will be nearly 100% profit and go straight to Google Cloud’s bottom line. It will be difficult for existing customers to rearchitect their applications, and most customers will probably just pay the extra charges.

However, if Google Cloud is trying to build the number two cloud business behind AWS, this move is an unforced error that will damage their credibility for years.

Killed By Google

Google has developed a reputation for killing consumer products, and even Google Cloud has made several price increases that changed architectural invariants, along with numerous deprecations.

These changes have damaged Google’s credibility with many customers and cast doubt on their commitment to building GCP into an equal of AWS and Azure. Google has been trying to build customer trust with their Enterprise API stability promise and by signing many 10-year deals with large enterprises like Deutsche Bank, Mayo Clinic, and Sabre.

However, the last few months have seen several short-sighted decisions:

  1. Charging for personal Google Workspace accounts (Google Workspace sits under Google Cloud). Many of the people affected make Google Cloud purchasing decisions, and they are not the people you want to remind of Google’s reputation for shutting down products.
  2. Laying off some Cloud technical support staff and outsourcing support to third-party vendors.
  3. Last week’s Cloud price increases.

Put together, these decisions paint a picture of an organisation myopically focused on short-term profitability over long-term strategy, one that doesn’t understand customer perceptions of Google Cloud or the “rules” of cloud pricing.

These changes are hard to even understand as a business strategy. Raising prices on locked-in customers feels like a move you’d see from Oracle, not from the third-ranked competitor in a rapidly expanding market.

My best guess as to why Google is doing this now is that Google Cloud set (or had set for them) an OKR to reach profitability. All of the teams are trying to increase profits, regardless of the long-term costs.

A different future

Google Cloud’s strength has always been its technology. It has fewer products than AWS, but those products are more flexible and cohesive. Their IAM model is far simpler than AWS’s, and, as previously outlined, Google’s global infrastructure has offered many unique, differentiated products.

At a time when it’s hard to even keep track of Azure’s many multi-tenant security vulnerabilities, and with multiple AWS outages still in recent memory, Google Cloud should have been pressing their advantages.

Instead of raising prices, I think a better strategy would have been to cut prices on data transfer to nearby regions and lean into multi-region services. Google owns massive amounts of dark-fibre and under-sea cables connecting their data centres and could afford to lower their prices. Once inter-region transfer is cheaper, many more applications become feasible to run in high-availability, multi-region configurations. This would be a strong offering to sell, and one that is hard for Amazon and Microsoft to replicate. Customers running compute in multiple regions and using higher-margin services like Spanner would help make back some of the revenue lost from lowering inter-region data transfer.

This would have been a compelling, differentiated offering, playing to Google’s strengths. Instead, Google has altered the deal. Their customers will now be praying that they don’t alter it further.