The emergence of the cloud and containers
Happily, the tech industry has begun to coalesce around a few key technologies that go a long way toward addressing these concerns. The first is what is known colloquially as The Cloud. Enterprises with a big technology footprint began to realize that managing data centers and infrastructure was not the main focus of their business and could be done more efficiently by third parties for which those tasks were a core competency.
Rather than staff their own data centers and manage their own hardware, they could rent expertly managed and maintained technology infrastructure, and they could scale their capacity up and back down based on real-time business needs. This was a game-changer on many levels. One positive outcome of this shift was a universal raising of the bar with regard to vulnerable and out-of-date software on the internet. As vulnerabilities emerged, the cloud providers could make the right thing to do the easiest thing to do. Managed databases that handle their own operating system updates and object storage that is publicly inaccessible by default are two examples that come immediately to mind. Another outcome was a dramatic increase in deployment velocity as infrastructure management was taken off developers’ plates.
As The Cloud became ubiquitous in the world of enterprise software and infrastructure capacity became a commodity, issues that had previously been obscured by infrastructure concerns took center stage. Developers could have something running perfectly in their development environment only to have it fall over in production. This problem became so common that it earned its own subgenre of programmer humor: It Works on My Machine.
Another of these issues was unused capacity. It had become so easy to stand up new server infrastructure that app teams were standing up large (and expensive) fleets, only to have them run nearly idle most of the time.
Containers
That brings us to the subject of containers. Many application teams would argue that they needed their own fleet of servers because they had a unique set of dependencies that needed to be installed on those servers, and those dependencies conflicted with the libraries and utilities required by other apps that might want to share the servers. It was the happy confluence of two technical streams that solved this problem, allowing applications with vastly different sets of dependencies to run side by side on the same server without even being aware of one another.
Container runtimes
The first stream was the concept of cgroups and kernel namespaces. These are abstractions built into the Linux kernel: cgroups give a process guarantees about how much memory and processor capacity will be available to it, while namespaces give it the illusion of its own process space, its own networking stack, and its own root filesystem, among other things.
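To make the namespace half of this concrete, here is a minimal, Linux-only Go sketch that re-runs a shell inside fresh UTS, PID, and mount namespaces. It assumes /bin/sh exists on the host and that you run it with sufficient privileges; it illustrates the kernel primitive, not a real container runtime.

```go
package main

// Minimal sketch: ask the Linux kernel for new namespaces when
// launching a child process. Linux-only; typically requires root.

import (
	"os"
	"os/exec"
	"syscall"
)

func main() {
	cmd := exec.Command("/bin/sh") // assumes a shell at /bin/sh
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
	// Request fresh namespaces for the child:
	// CLONE_NEWUTS -> its own hostname,
	// CLONE_NEWPID -> its own process space,
	// CLONE_NEWNS  -> its own mount table (the basis of a private root filesystem).
	cmd.SysProcAttr = &syscall.SysProcAttr{
		Cloneflags: syscall.CLONE_NEWUTS | syscall.CLONE_NEWPID | syscall.CLONE_NEWNS,
	}
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}
```

Inside that shell, echo $$ prints 1: the process believes it is the first process on the machine. The resource-limit half works similarly, with the kernel enforcing memory and CPU caps written into the cgroup filesystem under /sys/fs/cgroup.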
Container packaging and distribution
The second was an API by which you could package up an entire Linux root filesystem, complete with its own unique dependencies, store it efficiently, unpack it on an arbitrary server, and run it in isolation from other processes that were running with their own root filesystems.
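At bottom, such a package is a root filesystem serialized into one or more compressed tar layers plus some JSON metadata describing how to run it. As a rough sketch of that idea, the following Go program writes a directory out in the same gzipped-tar form that image layers use on disk; the directory name rootfs is a placeholder, and symlinks, ownership, and the metadata are omitted for brevity.

```go
package main

// Sketch: serialize a root filesystem directory as a gzipped tar
// layer, the on-disk format container image layers use.

import (
	"archive/tar"
	"compress/gzip"
	"io"
	"os"
	"path/filepath"
)

func main() {
	out, err := os.Create("layer.tar.gz")
	if err != nil {
		panic(err)
	}
	defer out.Close()

	gz := gzip.NewWriter(out)
	defer gz.Close()
	tw := tar.NewWriter(gz)
	defer tw.Close()

	root := "./rootfs" // placeholder: the filesystem tree to package
	err = filepath.Walk(root, func(path string, info os.FileInfo, err error) error {
		if err != nil {
			return err
		}
		// Skip symlinks, devices, etc. for brevity.
		if !info.Mode().IsRegular() && !info.IsDir() {
			return nil
		}
		rel, err := filepath.Rel(root, path)
		if err != nil || rel == "." {
			return err
		}
		hdr, err := tar.FileInfoHeader(info, "")
		if err != nil {
			return err
		}
		// Store paths relative to the rootfs so they unpack cleanly elsewhere.
		hdr.Name = rel
		if err := tw.WriteHeader(hdr); err != nil {
			return err
		}
		if info.Mode().IsRegular() {
			f, err := os.Open(path)
			if err != nil {
				return err
			}
			defer f.Close()
			if _, err := io.Copy(tw, f); err != nil {
				return err
			}
		}
		return nil
	})
	if err != nil {
		panic(err)
	}
}
```

Unpacking that archive on an arbitrary server and pointing a new mount namespace at it is, in essence, what a container runtime does when it starts a container.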
When these two streams were combined, developers found that they could stand up a single fleet of servers and safely run many heterogeneous applications on it, thereby using their cloud infrastructure much more efficiently.
Then, just as the move to the cloud exposed problems that led to the evolution of the container ecosystem, containers in turn created new problems of their own, which we'll cover in the next section, on Kubernetes.