The History of DevOps

Prior to the widespread adoption of DevOps, organizations would often commit to developing and delivering a software system within a specified time frame and, more often than not, miss the release deadline. Missed deadlines strained organizations financially and often meant that the business bled capital. Release deadlines in software organizations are missed for any number of reasons, but some of the most common are listed here:

  • The time needed to complete pure development efforts
  • The amount of effort involved in integrating disparate components into a working software title
  • The number of quality issues identified by the testing team
  • Failed deployments of software or failed installations onto customers' machines

The amount of extra effort (and money) required to complete a software title beyond its originally scheduled release date sometimes drained company coffers so severely that it forced the organization into bankruptcy. Companies such as Epic MegaGames and Apogee were once at the top of their industry but quickly faltered, fading into the background of failed businesses and dead software titles as a result of missed release dates and a failure to compete.

The primary risk of this era lay not so much in the amount of time engineering took to create a title, but in the amount of time it took to integrate, test, and release that title after initial development was complete. Once initial development finished, there were often long integration cycles coupled with complex quality-assurance measures. As quality issues were identified, major rework would need to be performed before the software title was adequately defect-free and releasable. Eventually, the releases were replicated onto disk or CD and shipped to customers.

One side effect of this paradigm was that during the development, integration, quality-assurance, and pre-release periods, the software organization could not capitalize on the software, and the business was often kept in the dark about progress. This inherently created a significant amount of risk, which could result in the insolvency of the business. With software engineering risks at an all-time high and businesses averse to Vegas-style gambling, something needed to be done.

In an effort to codify the development, integration, testing, and release steps, businesses strategized and created the software development life cycle (SDLC). The SDLC provided a basic outline of the process flow that engineering would follow in an effort to understand the current status of an under-construction software title. These process steps included the following:

  • Requirements gathering
  • Design
  • Development
  • Testing
  • Deployment
  • Operations

The process steps in the SDLC were found to be cyclic in nature, meaning that once a given software title was released, the next iteration (including bug fixes, patches, and so on) was planned, and the SDLC would restart. In the 90s, this meant a revision of the version number, major reworks of features, bug fixes, added enhancements, a new integration cycle, a new quality-assurance cycle, and eventually a reprint of CDs or disks. From this process, the modern SDLC was born.

An illustration of the SDLC is provided next:

[Figure: The software development life cycle (SDLC) as a repeating cycle of requirements gathering, design, development, testing, deployment, and operations]

Through the creation and codification of the SDLC, businesses now had an effective way to manage the software creation and release process. While the SDLC properly identified a repeatable process, it did not mitigate the risk of the integration phase. The major problem with that phase lay in the risk of merging. In the time before DevOps, CI, CD, and agile, software marching orders would traditionally be divided among teams, and individual developers would retreat to their workstations and code. They would progress in their development efforts in relative isolation until everyone was done and a subsequent integration phase took place.

During the integration phase, individual working copies were cobbled together to eventually form one cohesive and operational software title. At the time, the integration phase posed the greatest risk to a business, as it could take as long as (or longer than) creating the software title itself. During this period, engineering resources were expensive and the risk of failure was at its highest; a better solution was needed.

The risk the integration phase posed to businesses was often very high, and a unique approach was finally identified by a few software pundits, one that would ultimately pave the way for the future. Continuous Integration (CI) is a development practice in which developers merge their local workstation changes incrementally (and very frequently) into a shared source-control mainline. In a CI environment, basic automation is typically created to validate each incremental change and ensure nothing is inadvertently broken. In the unfortunate event that something does break, the developer can quickly fix or revert the change. Continuously merging contributions meant that organizations no longer needed a distinct integration phase, and quality assurance could begin while the software was still being developed.
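To make the idea of per-change validation concrete, the following is a minimal sketch of the kind of automation a CI setup relies on. The repository path, the git and make commands, and the overall flow are illustrative assumptions, not the configuration of any particular CI product:

```python
# A minimal sketch of per-change validation in a CI loop (illustrative only).
# The repository location and test command are hypothetical assumptions.
import subprocess
import sys

REPO_DIR = "/srv/checkouts/my-app"  # hypothetical local clone of the shared mainline


def run(cmd):
    """Run a command inside the repository and return its exit code."""
    return subprocess.call(cmd, cwd=REPO_DIR)


def validate_latest_change():
    # Fetch the most recent commits from the shared mainline.
    if run(["git", "pull", "--ff-only"]) != 0:
        return False
    # Build and run the automated test suite; any failure flags the change.
    return run(["make", "test"]) == 0


if __name__ == "__main__":
    ok = validate_latest_change()
    print("change validated" if ok else "change broke the build - fix or revert it")
    sys.exit(0 if ok else 1)
```

A CI server essentially runs a check like this automatically against every merge to the mainline and reports the result back to the team, which is what removes the need for a separate integration phase.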

Continuous Integration would eventually be popularized through successful mainstream software-engineering implementations and through the tireless efforts of Kent Beck and Martin Fowler. These two industry pundits successfully scaled basic continuous-integration techniques during their tenure at the Chrysler Corporation in the mid-90s. From these early trials of their new CI solution, they observed that it eliminated the risk the integration phase posed to the business, and they eagerly touted the newfound methodology as the way of the future. Not long after CI began to gain visibility, other software organizations took notice and successfully applied the core techniques as well.

Strides toward the future

By the late 90s and early 2000s, Continuous Integration was in full swing. Software engineering teams were clamoring to integrate more frequently and verify changes faster, and they diligently worked to develop releasable software incrementally. In many ways, this was the golden era of engineering. It was at the height of the Continuous Integration revolution, in 2001, that 17 software engineering pundits met at a mountain resort in Snowbird, Utah, to discuss a new approach to software development. The result of this meeting of the minds, now known as agile development, is broken down into four central pillars:

  • Individuals and interactions over processes and tools
  • Working software over comprehensive documentation
  • Customer collaboration over contract negotiation
  • Responding to change over following a plan

That is, while there is value in the items on the right, we value the items on the left more.

This set of simple values, combined with the 12 supporting principles of agile development, would become known as the agile manifesto. The complete agile manifesto can be found at http://agilemanifesto.org/.

In 2001, the agile manifesto was officially published, and organizations soon began breaking work into smaller chunks and holding stand-up meetings instead of sit-down status meetings. Functionality was prioritized, and work items were divided across team members for completion. This meant that the team now worked to fixed timelines, with deliverables due every 2 to 4 weeks.

While this was a step in the right direction, it was limited in scope to the development group alone. Once the software system was handed from development to quality assurance, the development team would often remain hands-off as the software eventually made its way to release. The most notable problem of this era was large, complex deployments onto physical infrastructure, performed by people who had little to no understanding of how the software worked.

As software organizations evolved, so did their other departments. Quality assurance (QA) practices, for example, became more modern and automated: programmers began writing automated test suites to validate software changes. From this revolution in QA, modern practices such as Test-driven Development (TDD), Behavior-driven Development (BDD), and A/B testing evolved.
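As a brief illustration of the test-first idea behind TDD, the sketch below shows a test written before the code it exercises; the add() helper and its test are hypothetical examples, not anything from this book's codebase:

```python
import unittest

# In test-driven development, this test is written first and fails; only then
# is the add() helper implemented to make it pass (hypothetical example).
def add(a, b):
    return a + b

class TestAdd(unittest.TestCase):
    def test_adds_two_numbers(self):
        self.assertEqual(add(2, 3), 5)

if __name__ == "__main__":
    unittest.main()
```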

The agile movement came about in 2001 with the signing and release of the agile manifesto. The principles identified in the manifesto captured many of the core concepts that the DevOps movement has since adopted and extended. The agile manifesto represented a radical shift in development patterns when it was released: it argued for shorter iterative development cycles, rapid feedback, and higher levels of collaboration. Sound familiar?

It was also around this time that Continuous Integration began to take root in software organizations, and engineers started paying attention to broken builds, failed unit tests, and release engineering.
