Prior to the widespread adoption of DevOps, organizations would often commit to developing and delivering a software system within a specified time frame and, more often than not, miss the release deadline. These failures strained organizations financially and frequently bled capital from the business. Release deadlines in software organizations are missed for any number of reasons, but some of the most common are listed here:
- The time needed to complete pure development efforts
- The amount of effort involved in integrating disparate components into a working software title
- The number of quality issues identified by the testing team
- Failed deployments of software or failed installations onto customers' machines
The amount of extra effort (and money) required to complete a software title beyond its originally scheduled release date sometimes drained company coffers so thoroughly that it forced the organization into bankruptcy. Even studios that were once at the top of their industry, such as Apogee (later 3D Realms), faltered after years of missed release dates and an inability to compete, eventually fading into the background of failed businesses and dead software titles.
The primary risk of this era lay not so much in the amount of time engineering took to create a title, but in the amount of time it took to integrate, test, and release a software title after initial development was completed. Once initial development finished, there were oftentimes long integration cycles coupled with complex quality-assurance measures. The quality issues identified during this period demanded major rework before the software title was adequately defect-free and releasable. Eventually, the releases were duplicated onto disks or CDs and shipped to customers.
Some of the side effects of this paradigm were that during the development, integration, quality-assurance, and pre-release periods, the software organization could not capitalize on the software, and the business was often kept in the dark on progress. This inherently created a significant amount of risk, which could result in the insolvency of the business. With software-engineering risks at an all-time high and businesses averse to Vegas-style gambling, something needed to be done.
In an effort to codify the development, integration, testing, and release steps, companies strategized and created the software development life cycle (SDLC). The SDLC provided a basic outline of the process flow that engineering would follow in an effort to understand the current status of an under-construction software title. These process steps included the following:
- Requirements gathering
- Design
- Development
- Testing
- Deployment
- Operations
The process steps in the SDLC were found to be cyclic in nature, meaning that once a given software title was released, the next iteration (including bug fixes, patches, and so on) was planned, and the SDLC would be restarted. In the 90s, this meant a revision of the version number, major reworks of features, bug fixes, added enhancements, a new integration cycle, a quality-assurance cycle, and eventually a reprint of CDs or disks. From this process, the modern SDLC was born.
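The cyclic flow described above can be sketched in a few lines of Python. This is a hypothetical model for illustration only (the phase names come from the list above, but the function and variable names are assumptions): each release simply feeds back into the next iteration's requirements-gathering phase.

```python
# A minimal model of the cyclic SDLC. The iteration mechanics here are
# illustrative assumptions, not any specific methodology's tooling.
SDLC_PHASES = [
    "Requirements gathering",
    "Design",
    "Development",
    "Testing",
    "Deployment",
    "Operations",
]

def run_iteration(version: int) -> list:
    """Walk one full pass through the SDLC for a given version number."""
    return [f"v{version}: {phase}" for phase in SDLC_PHASES]

# Once v1 ships, the cycle restarts for v2 (bug fixes, patches, and so on).
history = run_iteration(1) + run_iteration(2)
```

The key property the sketch captures is that "Operations" for one version is immediately followed by "Requirements gathering" for the next, which is what made 90s release cycles (and CD reprints) so expensive to restart.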
An illustration of the SDLC is provided next:
Through the creation and codification of the SDLC, businesses now had an effective way to manage the software creation and release process. While this properly identified a repeatable software process, it did not mitigate the risk inherent in the integration phase. The major problem with the integration phase was the risk of merging. In the period before DevOps, CI, CD, and agile, software marching orders would traditionally be divided among teams, and individual developers would retreat to their workstations and code. They would progress in their development efforts in relative isolation until everyone was done, at which point a subsequent integration phase of development took place.
During the integration phase, individual working copies were cobbled together to eventually form one cohesive and operational software title. At the time, the integration phase posed the greatest risk to a business, as this phase could take as long as (or longer than) the process of creating the software title itself. During this period, engineering resources were expensive and the risk of failure was at its highest; a better solution was needed.
The risk the integration phase posed to businesses was oftentimes very high, and a unique approach was finally identified by a few software pundits, which would ultimately pave the way for the future. Continuous Integration (CI) is a development practice in which developers merge their local workstation development changes incrementally (and very frequently) into a shared source-control mainline. In a CI environment, basic automation would typically be created to validate each incremental change and ensure nothing was inadvertently broken. In the unfortunate event something did break, the developer could easily fix or revert the change. Continuously merging contributions meant that organizations would no longer need an integration phase, and QA could begin to take place as the software was developed.
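The practice just described can be sketched as a toy simulation. All names here are illustrative assumptions rather than any real CI server's API (real tools such as Jenkins automate this around a source-control system): each small change is merged into the shared mainline, validated immediately, and reverted if validation fails.

```python
# A toy model of Continuous Integration: the mainline is a list of
# changes, and validate() stands in for an automated build-and-test step.
# All names are illustrative assumptions, not a real CI tool's API.

def validate(mainline: list) -> bool:
    """Pretend automation: the build 'breaks' if any change is flagged bad."""
    return all(change["ok"] for change in mainline)

def integrate(mainline: list, change: dict) -> bool:
    """Merge one small change, validate the result, revert on failure."""
    mainline.append(change)
    if validate(mainline):
        return True          # change is now part of the shared mainline
    mainline.pop()           # easy fix: revert the offending change
    return False

mainline: list = []
results = [
    integrate(mainline, {"id": 1, "ok": True}),
    integrate(mainline, {"id": 2, "ok": False}),  # broken change is rejected
    integrate(mainline, {"id": 3, "ok": True}),
]
```

Because every change is validated the moment it lands, the mainline stays in a working state at all times, which is precisely what removes the need for a separate, high-risk integration phase.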
Continuous Integration would eventually be popularized through successful mainstream software-engineering implementations and through the tireless efforts of Kent Beck and Martin Fowler. These two industry pundits successfully scaled basic continuous-integration techniques during their tenure at the Chrysler Corporation in the mid-90s. The success of their CI solution showed that continuously integrating changes effectively eliminated the business risk posed by the integration phase, and they eagerly touted the newfound methodology as the way of the future. Not long after CI began to gain visibility, other software organizations took notice and successfully applied the same core techniques.