Why Docker and CoreOS' split was predictable
“I skate to where the puck is going to be, not where it has been” - Wayne Gretzky
There was a recent dustup between CoreOS and Docker over Docker increasing the scope of the products they are building to encroach on CoreOS’s territory of cluster management. This led to CoreOS starting their own container runtime, Rocket, to compete with Docker. Their actions are well predicted by Clayton Christensen’s Law of Conservation of Modularity:
One of the insights from our research about commoditization is that whenever it is at work somewhere in a value chain, a reciprocal process of de-commoditization is at work somewhere else in the value chain. … The reciprocality of these processes means that the locus of the ability to differentiate shifts continuously in a value chain as new waves of disruption wash over an industry. As this happens, companies that position themselves at a spot in the value chain where performance is not yet good enough will capture the profit. - Clayton Christensen, The Innovator’s Solution, Chapter 6.
N.B. This theory is related to, but separate from, Joel Spolsky’s famous article about commoditising your complements.
What’s happening with Docker and CoreOS today is neither new nor surprising; the same pattern has played out since the dawn of computing. The locus of the ability to differentiate is shifting, and there is a fight over who is going to hold that ability. Much like the puck in ice hockey, profit is captured by those who skate to where the value is going to be.
We’ve seen this cycle take place again and again. First, the mainframe computer wasn’t good enough for its customers’ needs, so it had to be built, manufactured, and sold in an integrated fashion, from components to operating system to sales force. IBM captured the value in this integrated system, and its suppliers eked out a miserable, profit-free existence. Skip forward a few years, and minicomputers and personal computers were starting to get good enough. The value moved from assembling the system (IBM and Compaq) to reside with the component makers: operating systems (Microsoft), processors (Intel), memory chips, and disk drives. Now it was the turn of the systems integrators to eke out a miserable, profit-free existence in a brutal war of attrition.
When the product isn’t good enough, being an integrated company is critical to success. As the most integrated company during the early era of the computer industry, IBM dominated its world. … [Its] products were based on the sorts of proprietary, interdependent value chains that are necessary when pushing the frontier of what is possible. …
When technological progress overshoots what mainstream customers can make use of, companies are forced to change … To compete on these new dimensions, companies must design modular products, in which the interfaces between components and subsystems are clearly specified. Ultimately, these interfaces coalesce into industry standards. … Once a modular architecture … has been defined, integration is no longer crucial to a company’s success. In fact, it becomes a competitive disadvantage in terms of speed, flexibility, and price, and the industry tends to dis-integrate as a consequence. - Christensen et al., Skate to Where the Money Will Be
In the desktop market, the processor and the operating system were never good enough, so they remained the locus of performance improvement and of value. The memory chip and disk drive manufacturers weren’t so fortunate. Their chips and disks became good enough and were modularised, and the money moved to the companies that supplied the equipment for making DRAM, and the heads and disks for hard drives.
Jumping forward again to 2005, Linux had become a good enough operating system (for server computing, at least; the Linux desktop might need to wait until next year). Companies were able to build systems like the Xen hypervisor and Amazon EC2 on top of the operating system, treating Linux as a modular component. These cloud computing services offered elastic scalability, and the computers beneath them were reduced to a commodity expressed in decimal compute units. The servers and operating system had been made modular to support the integrated virtual machine managers.
By 2013, virtual machines from cloud service providers were good enough and becoming a commodity. What wasn’t good enough now was the reproducible deployment of applications and the management of multiple servers. There were a number of solutions, including Puppet, Chef, and Ansible, but no clear breakout winner. Into the breach stepped Docker. From their description on GitHub:
Docker containers are both hardware-agnostic and platform-agnostic. This means that they can run anywhere, from your laptop to the largest EC2 compute instance and everything in between - and they don’t require that you use a particular language, framework or packaging system. That makes them great building blocks for deploying and scaling web apps, databases and backend services without depending on a particular stack or provider. - github.com/docker/docker
Viewed from the perspective of modularity and integration, we can see that Docker containers are designed to be the point of integration that everything else synchronises around. Docker takes everything below the container (the operating system, the virtual machine, the physical machine, and the server operator) and commoditises it, while providing a set of APIs that others can use to build on top. One part Docker doesn’t commoditise is the data centre; we’ll get to why that’s notable soon.
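One concrete surface for that building-on is Docker’s remote API, which the daemon exposes over a Unix socket. A minimal sketch, assuming a daemon listening at the default socket path (and curl 7.40+ for the --unix-socket flag):

```
# List the running containers via the Docker remote API.
curl --unix-socket /var/run/docker.sock http://localhost/containers/json
```

Anything that can speak HTTP can manage containers this way, which is part of what makes Docker such a convenient modular component to build on.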
From a developer’s perspective, running your applications in Docker containers lets you treat the cloud services they run on as modular, replaceable commodities. This is great because you can (in theory) move to another cloud service provider if you’re not happy with the deal you’re getting from your current one. This isn’t great for Amazon, Google, or the other cloud providers, as being a commodity is a quick path to a low-margin business. The value has moved from the cloud service providers supplying the VMs to the containers running on top of them.
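To make that concrete, here is a minimal, hypothetical sketch (the app, file names, and port are mine, not from Docker’s docs): the same image definition builds and runs identically on a laptop, an EC2 instance, or any other Docker host.

```
# Dockerfile for a hypothetical web service.
# Everything below the container boundary is interchangeable.
FROM python:2.7
COPY app.py /srv/app.py
EXPOSE 8000
CMD ["python", "/srv/app.py"]
```

Build it once with `docker build -t myapp .`, then `docker run -d -p 8000:8000 myapp` on whichever provider offers the best deal this month; the container neither knows nor cares.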
Containers are great, but you need more than just containers to run your applications. Very quickly, a number of companies sprang up to build on top of Docker’s modular containers and provide differentiated, integrated services. The best known of these is CoreOS. CoreOS provides a stripped-down version of Linux and tools to run Docker containers over a cluster of machines. They abstract away virtual machines and Docker containers, replacing both with a single cluster and commoditising the data centre in the process. The value has moved again, from Docker containers to the integrated services on top of Docker. Whether they’ll admit it or not, CoreOS and similar services are a threat to cloud service providers; they commoditise the cloud providers by building an integrated platform on top of them.
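At the time of writing, CoreOS’s scheduling tool is fleet, which distributes systemd units across the cluster. A hypothetical template unit (the service name, image, and port are illustrative) shows the shape of the abstraction:

```
# myapp@.service: a fleet template unit; fleet decides which machine
# in the cluster runs each instance.
[Unit]
Description=Hypothetical web app container
After=docker.service
Requires=docker.service

[Service]
ExecStartPre=-/usr/bin/docker rm -f myapp-%i
ExecStart=/usr/bin/docker run --rm --name myapp-%i -p 8000:8000 myapp
ExecStop=/usr/bin/docker stop myapp-%i

[X-Fleet]
# Keep instances of this unit on separate machines.
Conflicts=myapp@*.service
```

`fleetctl start myapp@1.service myapp@2.service` schedules two instances on two different machines; which machines they land on is fleet’s concern, not yours. The individual servers, and the data centre around them, have been reduced to a single schedulable pool.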
Given the threats to their business, it wasn’t a surprise that Amazon and Google recently launched new container services (EC2 Container Service and Google Container Engine) to manage and orchestrate Docker containers. These services attempt to recapture the value by integrating Docker container management on top of their commodity computing infrastructure. Something not explained by this theory is Google open-sourcing Kubernetes, their container cluster management tool. I’m still thinking about why they did that and its implications.
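Kubernetes was still pre-1.0 at the time, but the shape of its abstraction was already clear: declare the containers you want, and let the cluster place them. A minimal sketch of a pod manifest (written against the later, stable v1 API for readability; the names are illustrative):

```
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
  - name: myapp
    image: myapp:latest
    ports:
    - containerPort: 8000
```

Submitted with `kubectl create -f`, this is the same move fleet makes: the cluster, rather than the machine, becomes the thing you program against.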
Where does this leave Docker? From the beginning they have been a modular component for others to build on. This is great for others, but not so great for Docker, as they are by definition undifferentiated and capture very little value. Becoming a commodity is not a great way to make a living, and certainly not going to provide the 100x return on investment that Docker’s investors are looking for.
So we have CoreOS using Docker as a commodity component, and Docker realising that all of the value they are creating is being captured by companies like CoreOS. Docker can’t capture value in the OS layer they just commoditised, so the only way to go is up. Building an integrated offering to manage all aspects of building and running Docker containers in a cluster makes a lot of sense for them.
CoreOS is naturally concerned about this, as Docker is now a well-funded competitor to their business with a lot of developer goodwill and mindshare. The natural response is to build a competing container ecosystem to take some of Docker’s container market share and support their own offering. While Rocket may have technical benefits over Docker, this is primarily about CoreOS working to prevent Docker from locking them out of the container cluster management space.
In a few years’ time, cluster management services will be commoditised, the value will move somewhere else, and this cycle will repeat. The trick, as always, is to work out where that puck is going to be.