We introduced the idea, benefits, and prerequisites of the Continuous Delivery process. In this section, we describe the tools that are used throughout the book and their place in the complete system.
Building the Continuous Delivery process
Introducing tools
First of all, the specific tool is always less important than understanding its role in the process. In other words, any tool can be replaced with another one that plays the same role. For example, Jenkins can be replaced with Atlassian Bamboo, and Chef can be used instead of Ansible. That is why each chapter begins with a general description of why such a tool is necessary and what its role is in the whole process. Then, the exact tool is described together with a comparison to its substitutes. This form gives you the flexibility to choose the right tool for your environment.
Another approach could be to describe the Continuous Delivery process at the level of ideas; however, I strongly believe that giving exact examples with code extracts, something that readers can run by themselves, results in a much better understanding of the concepts.
Let's have a quick look at the tools we will use throughout the book. This section, however, provides only a brief introduction to each technology; much more detail is presented as the book goes on.
Docker ecosystem
Docker, as the clear leader of the containerization movement, has dominated the software industry in recent years. It allows an application to be packaged in an environment-agnostic image and therefore treats servers as a farm of resources, rather than as machines that must be configured for each application. Docker was a clear choice for this book because it perfectly fits the (micro) service world and the Continuous Delivery process.
Together with Docker come additional technologies, which are as follows:
- Docker Hub: This is a registry for Docker images
- Docker Compose: This is a tool to define multicontainer Docker applications
- Docker Swarm: This is a clustering and scheduling tool
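As a quick taste of the ecosystem, the following is a minimal Docker Compose file for two cooperating web services; the service and image names are purely illustrative:

```yaml
version: '3'
services:
  application-1:
    image: example/application-1   # illustrative image name
    ports:
      - "8080:8080"                # publish the service on the host
    depends_on:
      - application-2              # start application-2 first
  application-2:
    image: example/application-2   # illustrative image name
```

Running `docker-compose up` starts both containers together as one multicontainer application.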
Jenkins
Jenkins is by far the most popular automation server on the market. It helps to create Continuous Integration and Continuous Delivery pipelines and, in general, any other automated sequence of scripts. Highly plugin-oriented, it has a great community that constantly extends it with new features. What's more, it allows you to write pipelines as code and supports distributed build environments.
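As a preview, a pipeline written as code in a declarative Jenkinsfile can be as simple as the following sketch (the stage content is purely illustrative):

```groovy
pipeline {
    agent any                  // run on any available agent node
    stages {
        stage('Build') {
            steps {
                echo 'Building the application...'
            }
        }
    }
}
```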
Ansible
Ansible is an automation tool that helps with software provisioning, configuration management, and application deployment. It is trending faster than any other configuration management engine and may soon overtake its two main competitors: Chef and Puppet. It uses an agentless architecture and integrates smoothly with Docker.
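To give a first impression, a minimal Ansible playbook could look as follows; the inventory group name is illustrative:

```yaml
---
- hosts: webservers            # illustrative inventory group
  become: yes                  # escalate privileges for package installation
  tasks:
    - name: Install Docker
      apt:
        name: docker.io
        state: present
```

No agent needs to be installed on the target machines; Ansible connects over plain SSH.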
GitHub
GitHub is definitely the number one hosted version control system. It provides a very stable service, a great web-based UI, and free hosting for public repositories. Having said that, any source control management service or tool will work with Continuous Delivery, no matter whether it's in the cloud or self-hosted, and whether it's based on Git, SVN, Mercurial, or any other tool.
Java/Spring Boot/Gradle
Java has been the most popular programming language for years. That is why it is used for most of the code examples in this book. Together with Java, most companies develop with the Spring framework, so we use it to create the simple web service needed to explain some concepts. Gradle is used as the build tool. It is still less popular than Maven, but it is trending much faster. As always, any programming language, framework, or build tool can be exchanged and the Continuous Delivery process would stay the same, so don't worry if your technology stack is different.
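As a preview, a minimal build.gradle for a Spring Boot web service could look as follows; the plugin versions are illustrative and will differ depending on when you read this:

```groovy
plugins {
    id 'org.springframework.boot' version '2.7.0'                   // illustrative version
    id 'io.spring.dependency-management' version '1.0.11.RELEASE'   // illustrative version
    id 'java'
}

repositories {
    mavenCentral()
}

dependencies {
    implementation 'org.springframework.boot:spring-boot-starter-web'
    testImplementation 'org.springframework.boot:spring-boot-starter-test'
}
```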
The other tools
Cucumber was chosen arbitrarily as the acceptance testing framework; other similar solutions are FitNesse and JBehave. For database migration, we use Flyway, but any other tool would do, for example, Liquibase.
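To give a flavor of Cucumber, acceptance tests are specified in the plain-language Gherkin syntax; the following scenario is an illustrative sketch:

```gherkin
Feature: Calculator web service

  Scenario: Adding two numbers
    Given the calculator service is running
    When the client requests the sum of 2 and 3
    Then the response is 5
```

Each step is backed by a step definition written in the project's programming language, which is how the plain-language scenario becomes an executable test.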
Creating a complete Continuous Delivery system
You can look at how this book is organized from two perspectives.
The first one is based on the steps of the automated deployment pipeline. Each chapter takes you closer to the complete Continuous Delivery process. If you look at the names of the chapters, some of them are even named after the pipeline phases:
- Continuous Integration pipeline
- Automated acceptance testing
- Configuration management with Ansible
The rest of the chapters provide an introduction, a summary, or additional information complementary to the process.
There is also a second perspective to the content of this book. Each chapter describes one piece of the environment, which is, in turn, prepared for the Continuous Delivery process. In other words, the book presents, step by step, technology by technology, how to build a complete system. To give you a feeling for what we plan to build throughout the book, let's now have a look at how the system will evolve in each chapter.
Introducing Docker
In Chapter 2, Introducing Docker, we start from the center of our system and build a working application packaged as a Docker image. The output of this chapter is presented in the following diagram:
A dockerized application (web service) runs as a container on a Docker Host and is reachable as if it were running directly on the host machine. That is possible thanks to port forwarding (port publishing in Docker's terminology).
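As a sketch of what is built in that chapter, a Dockerfile for such a web service could look as follows; the JAR file name is illustrative:

```dockerfile
# Base image providing the Java runtime
FROM openjdk:8-jre
# Copy the application JAR into the image (illustrative file name)
COPY build/libs/calculator.jar app.jar
# Start the web service when the container runs
ENTRYPOINT ["java", "-jar", "app.jar"]
```

With port publishing, a command such as `docker run -p 8080:8080 calculator` makes the service reachable on port 8080 of the host machine.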
Configuring Jenkins
In Chapter 3, Configuring Jenkins, we prepare the Jenkins environment. Thanks to its support for multiple agent (slave) nodes, it is able to handle a heavy concurrent load. The result is presented in the following diagram:
The Jenkins master accepts a build request, but the execution is started on one of the Jenkins slave (agent) machines. Such an approach provides horizontal scaling of the Jenkins environment.
Continuous Integration Pipeline
In Chapter 4, Continuous Integration Pipeline, we show how to create the first phase of the Continuous Delivery pipeline, the commit stage. The output of this chapter is the system presented in the following diagram:
The application is a simple web service written in Java with the Spring Boot framework. Gradle is used as a build tool and GitHub as the source code repository. Every commit to GitHub automatically triggers the Jenkins build, which uses Gradle to compile Java code, run unit tests, and perform additional checks (code coverage, static code analysis, and so on). After the Jenkins build is completed, a notification is sent to the developers.
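A commit stage of this kind could be sketched in a Jenkinsfile as follows; the stage names, Gradle tasks, and notification address are illustrative, and the static analysis step assumes the Checkstyle plugin is applied to the build:

```groovy
pipeline {
    agent any
    stages {
        stage('Compile') {
            steps {
                sh './gradlew compileJava'       // compile the Java code
            }
        }
        stage('Unit test') {
            steps {
                sh './gradlew test'              // run the unit test suite
            }
        }
        stage('Static analysis') {
            steps {
                sh './gradlew checkstyleMain'    // assumes the Checkstyle plugin is applied
            }
        }
    }
    post {
        failure {
            // Illustrative notification to the developers
            mail to: 'team@example.com',
                 subject: 'Commit stage failed',
                 body: "Build ${env.BUILD_NUMBER} failed."
        }
    }
}
```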
After this chapter, you will be able to create a complete Continuous Integration pipeline.
Automated acceptance testing
In Chapter 5, Automated Acceptance Testing, we finally merge the two technologies from the book title: Docker and Jenkins. It results in the system presented in the following diagram:
The additional elements in the diagram are related to the automated acceptance testing stage:
- Docker Registry: After the Continuous Integration phase, the application is packaged first into a JAR file and then as a Docker image. That image is then pushed to the Docker Registry, which acts as storage for dockerized applications.
- Docker Host: Before the acceptance test suite is performed, the application has to be started. Jenkins triggers a Docker Host machine to pull the dockerized application from the Docker Registry and start it.
- Docker Compose: If the complete application consists of more than one Docker container (for example, two web services: Application 1 using Application 2), then Docker Compose helps to run them together.
- Cucumber: After the application is started on the Docker Host, Jenkins runs a suite of acceptance tests written in the Cucumber framework.
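Putting these elements together, the acceptance testing stage could be sketched as the following Jenkinsfile fragment; the registry address, image name, and Gradle task are illustrative:

```groovy
stage('Acceptance test') {
    steps {
        // Push the image built in the commit stage (illustrative registry and name)
        sh 'docker push localhost:5000/calculator'
        // Start the application (and any dependent containers) on the Docker Host
        sh 'docker-compose up -d'
        // Run the Cucumber acceptance test suite against the running application
        sh './gradlew acceptanceTest'
    }
}
```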
Configuration management with Ansible/Continuous Delivery pipeline
In the next two chapters, that is, Chapter 6, Configuration Management with Ansible and Chapter 7, Continuous Delivery Pipeline, we complete the Continuous Delivery pipeline. The output is the environment presented in the following diagram:
Ansible takes care of the environments and enables the deployment of the same applications on multiple machines. As a result, we deploy the application to the staging environment, run the acceptance testing suite, and finally release the application to the production environment, usually in many instances (on multiple Docker Host machines).
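Such a deployment could be sketched as an Ansible playbook; the inventory group, container name, and image are illustrative:

```yaml
---
- hosts: production            # illustrative group of Docker Host machines
  tasks:
    - name: Run the application container
      docker_container:
        name: calculator                   # illustrative container name
        image: localhost:5000/calculator   # illustrative registry and image
        state: started
        published_ports:
          - "8080:8080"
```

Because the same playbook runs against any inventory group, deploying to staging and to production differs only in the list of target machines.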
Clustering with Docker Swarm/Advanced Continuous Delivery
In Chapter 8, Clustering with Docker Swarm, we replace the single hosts in each of the environments with clusters of machines. Chapter 9, Advanced Continuous Delivery, additionally adds databases to the Continuous Delivery process. The final environment created in this book is presented in the following diagram:
Staging and production environments are equipped with Docker Swarm clusters, and multiple instances of the application are therefore run on the cluster. We no longer have to think about which exact machine our applications are deployed on; all we care about is the number of instances. The same applies to Jenkins slaves: they also run on a cluster. The last improvement is the automatic management of database schemas using Flyway migrations integrated into the delivery process.
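On a Swarm cluster, the desired number of instances is simply declared in the stack (Compose v3) file; the following fragment is an illustrative sketch with hypothetical names:

```yaml
version: '3'
services:
  calculator:                        # illustrative service name
    image: localhost:5000/calculator # illustrative registry and image
    ports:
      - "8080:8080"
    deploy:
      replicas: 3                    # Swarm keeps three instances running on the cluster
```

A command such as `docker stack deploy -c docker-compose.yml calculator` schedules the replicas; which machines they land on is left entirely to Swarm.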
I hope you are already excited about what we plan to build throughout this book. We will approach it step by step, explaining every detail and all the possible options in order to help you understand the procedures and tools. After reading this book, you will be able to introduce or improve the Continuous Delivery process in your projects.