Continuous Delivery with Docker and Jenkins, 3rd Edition
Create secure applications by building complete CI/CD pipelines

Author: Rafał Leszko
Product type: Paperback
Published in: May 2022
Publisher: Packt
ISBN-13: 9781803237480
Length: 374 pages

Understanding CD

The most accurate definition of CD is stated by Jez Humble and reads as follows:

"Continuous delivery is the ability to get changes of all types – including new features, configuration changes, bug fixes, and experiments – into production, or into the hands of users, safely and quickly, in a sustainable way."

This definition captures the key points: changes of any type should reach users quickly, safely, and in a sustainable way.

To understand this better, let's imagine a scenario. You are responsible for a product – let's say, an email client application. Users come to you with a new requirement: they want to sort emails by size. You decide that the development will take around 1 week. When can the user expect to use the feature? Usually, after the development is done, you hand over the completed feature to the Quality Assurance (QA) team and then to the operations team, which takes additional time, ranging from days to months.

Therefore, even though the development only took 1 week, the user receives it in a couple of months! The CD approach addresses this issue by automating manual tasks so that the user can receive a new feature as soon as it's implemented.

To help you understand what to automate and how, we'll start by describing the delivery process that is currently used for most software systems.

The traditional delivery process

The traditional delivery process, as its name suggests, has been in place for many years and is implemented in most IT companies. Let's define how it works and comment on its shortcomings.

Introducing the traditional delivery process

Every delivery process begins with the requirements that have been defined by a customer and ends with the product being released to production. The differences lie in what happens in between. Traditionally, this process looks as follows:

Figure 1.1 – Release cycle diagram

The release cycle starts with the requirements provided by the Product Owner, who represents the Customer (stakeholders). Then, there are three phases, during which the work is passed between different teams:

  • Development: The developers (sometimes together with business analysts) work on the product. They often use agile techniques (Scrum or Kanban) to increase the development velocity and improve communication with the client. Demo sessions are organized to obtain a customer's quick feedback. All good development techniques (such as test-driven development (TDD) or extreme programming practices) are welcome. Once the implementation is complete, the code is passed to the QA team.
  • Quality Assurance: This phase is usually called User Acceptance Testing (UAT), and it requires the code to be frozen on the trunk code base so that no new development will break the tests. The QA team performs a suite of integration testing, acceptance testing, and non-functional analysis (performance, recovery, security, and so on). Any bug that is detected goes back to the development team, so the developers usually have their hands full. After the UAT phase is completed, the QA team approves the features that have been planned for the next release.
  • Operations: The final phase, and usually the shortest one, involves passing the code to the operations team so that they can perform the release and monitor the production environment. If anything goes wrong, they contact the developers so that they can help with the production system.

The length of the release cycle depends on the system and the organization, but it usually ranges from 1 week to a few months. The longest I've heard about was 1 year. The longest release cycle I worked with was quarterly, and each part was as follows:

  • Development: 1.5 months
  • UAT: 1 month and 3 weeks
  • Release (and strict production monitoring): 1 week

The traditional delivery process is widely used in the IT industry, so this is probably not the first time you've read about such an approach. Nevertheless, it has several drawbacks. Let's look at them explicitly to understand why we need to strive for something better.

Shortcomings of the traditional delivery process

The most significant shortcomings of the traditional delivery process are as follows:

  • Slow delivery: The customer receives the product long after the requirements were specified. This results in unsatisfactory time to market and delays customer feedback.
  • Long feedback cycle: The feedback cycle is not only related to customers, but also to developers. Imagine that you accidentally created a bug and you learn about it during the UAT phase. How long does it take to fix something you worked on 2 months ago? Even dealing with minor bugs can take weeks.
  • Lack of automation: Rare releases do not encourage automation, which leads to unpredictable releases.
  • Risky hotfixes: Hotfixes cannot usually wait for the full UAT phase, so they tend to be tested differently (the UAT phase is shortened) or not tested at all.
  • Stress: Unpredictable releases are stressful for the operations team. What's more, the release cycle is usually tightly scheduled, which imposes additional stress on developers and testers.
  • Poor communication: Work that's passed from one team to another represents the waterfall approach, in which people start to care only about their part, rather than the complete product. If anything goes wrong, that usually leads to the blame game instead of cooperation.
  • Shared responsibility: No team takes responsibility for the product from A to Z:
    • For developers, Done means that the requirements have been implemented.
    • For testers, Done means that the code has been tested.
    • For operations, Done means that the code has been released.
  • Lower job satisfaction: Each phase is interesting for a different team, but other teams need to support the process. For example, the development phase is interesting for developers but, during the other two phases, they still need to fix bugs and support the release, which is usually not interesting for them at all.

These drawbacks represent just the tip of the iceberg of the challenges related to the traditional delivery process. You may already feel that there must be a better way to develop software and this better way is, obviously, the CD approach.

The benefits of CD

How long would it take your organization to deploy a change that involves just a single line of code? Do you do this on a repeatable, reliable basis? These are the famous questions from Mary and Tom Poppendieck (authors of Implementing Lean Software Development), which have been quoted many times by Jez Humble and others. The answers to these questions are the only valid measurement of the health of your delivery process.

To be able to deliver continuously, and not spend a fortune on an army of operations teams working 24/7, we need automation. That is why, in short, CD is all about changing each phase of the traditional delivery process into a sequence of scripts called the automated deployment pipeline, or the CD pipeline. Then, if no manual steps are required, we can run the process after every code change and deliver the product continuously to users.
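
To make the idea more concrete, the following is a minimal sketch of such a pipeline written as a declarative Jenkinsfile. It is not taken from this chapter; the stage names, the Gradle and Docker commands, the image name (my-registry/calculator), and the Kubernetes manifest are hypothetical placeholders that only illustrate how each phase of the traditional process can become an automated script:

    pipeline {
        agent any
        stages {
            stage('Compile') {
                steps {
                    // Build the application from source
                    sh './gradlew build'
                }
            }
            stage('Unit test') {
                steps {
                    // Fast feedback for developers: stop the pipeline on the first failing test
                    sh './gradlew test'
                }
            }
            stage('Package') {
                steps {
                    // Package the application as a Docker image and publish it to a registry
                    sh 'docker build -t my-registry/calculator:latest .'
                    sh 'docker push my-registry/calculator:latest'
                }
            }
            stage('Acceptance test') {
                steps {
                    // Automated counterpart of the manual UAT phase
                    sh './acceptance-tests.sh'
                }
            }
            stage('Release') {
                steps {
                    // Deploy to production without a manual hand-over to the operations team
                    sh 'kubectl apply -f deployment.yaml'
                }
            }
        }
    }

Each stage maps to one of the phases described earlier (development, quality assurance, operations), and a failure at any stage stops the change from reaching users.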

CD lets us get rid of the tedious release cycle and brings the following benefits:

  • Fast delivery: Time to market is significantly reduced as customers can use the product as soon as development is completed. Remember that the software delivers no revenue until it is in the hands of its users.
  • Fast feedback cycle: Imagine that you created a bug in the code, which goes into production the same day. How much time does it take to fix something you worked on the same day? Probably not much. This, together with the quick rollback strategy, is the best way to keep production stable.
  • Low-risk releases: If you release daily, the process becomes repeatable and much safer. As the saying goes, if it hurts, do it more often.
  • Flexible release options: If you need to release immediately, everything is already prepared, so there is no additional time/cost associated with the release decision.

Needless to say, we could achieve all these benefits simply by eliminating all the delivery phases and developing directly on production. However, this would result in a reduction in quality. The whole difficulty of introducing CD is the concern that quality will decrease when the manual steps are eliminated. In this book, we will show you how to approach CD safely and explain why, contrary to common belief, products that are delivered continuously contain fewer bugs and are better adjusted to the customer's needs.

Success stories

My favorite story on CD was told by Rolf Russell at one of his talks. It goes as follows. In 2005, Yahoo! acquired Flickr, and it was a clash of two cultures in the developer's world. Flickr, at that time, was a company with a start-up mindset. Yahoo!, on the other hand, was a huge corporation with strict rules and a safety-first attitude. Their release processes differed a lot. While Yahoo! used the traditional delivery process, Flickr released many times a day. Every change implemented by developers went into production the same day. They even had a footer at the bottom of their page showing the time of the last release and the avatars of the developers who made the changes.

Yahoo! deployed rarely, and each release brought a lot of changes that were well-tested and prepared. Flickr worked in very small chunks; each feature was divided into small incremental parts, and each part was deployed to production quickly. The difference is presented in the following diagram:

Figure 1.2 – Comparison of the release cycles of Yahoo! and Flickr

You can imagine what happened when the developers from the two companies met. Yahoo! treated Flickr's colleagues as irresponsible junior developers, a bunch of software cowboys who didn't know what they were doing. So, the first thing they wanted to do was add a QA team and the UAT phase to Flickr's delivery process. Before they applied the change, however, Flickr's developers had only one wish. They asked to evaluate the most reliable products throughout Yahoo! as a whole. It came as a surprise when they saw that, of all the software in Yahoo!, Flickr had the lowest downtime. The Yahoo! team didn't understand it at first, but they let Flickr stay with their current process anyway. After all, they were engineers, so the evaluation result was conclusive. Only after some time had passed did the Yahoo! developers realize that the CD process could be beneficial for all the products in Yahoo!, and they started to gradually introduce it everywhere.

The most important question of the story remains: how come Flickr was the most reliable system? The reason behind this was what we already mentioned in the previous sections. A release is less risky if the following is true:

  • The delta of code changes is small
  • The process is repeatable

That is why, even though the release itself is a difficult activity, it is much safer when it's done frequently.

The story of Yahoo! and Flickr is only one example of many successful companies where the CD process proved to be the correct choice. Nowadays, it's common for even small organizations to release frequently, and market leaders such as Amazon, Facebook, Google, and Netflix perform thousands of releases per day.

Information

You can read more about the research on the CD process and individual case studies at https://continuousdelivery.com/evidence-case-studies/.

Keep in mind that the statistics get better every day. However, even without any numbers, just imagine a world in which every line of code you implement goes safely into production. Clients can react quickly and adjust their requirements, developers are happy as they don't have to solve that many bugs, and managers are satisfied because they always know the current state of work. After all, remember that the only true measure of progress is the software that is released.
