Tools and technologies
Tools and technologies play an important role in DevOps culture; however, they are not the only part that needs attention. For each stage of the application delivery pipeline, different tools, disruptive innovations, open source initiatives, community plugins, and so on are required to keep the entire pipeline running and producing effective outcomes.
Code repositories – Git
Subversion is a version control system used to track all the changes made to files and folders. With it, you can keep track of the applications being built; features added months ago can still be traced through their revisions. It is all about tracking the code. Whenever a new feature or any new code is written, it is first tested and then committed by the developer. The code is then sent to the repository to track the changes, and a new revision number is assigned to it. The developer can also add a comment so that other developers can easily understand the changes that were made. Other developers only have to update their checkout to see the changes.
Advantages
The following are some advantages of using source code repositories:
- Many developers can work simultaneously on the same code
- If a computer crashes, the code can still be recovered, as it has been committed to the server
- If a bug occurs, the code can easily be reverted to a previous version
Git is an open source distributed version control system designed to handle everything from small to very large projects with speed and efficiency. It is easy to learn and has good performance. Every working copy is a full-fledged repository with complete history and version-tracking capabilities, independent of a central server or network access. Git was designed and developed by Linus Torvalds in 2005.
Characteristics
The following are some significant characteristics of Git:
- It provides support for nonlinear development
- It is compatible with existing systems and protocols
- It ensures the cryptographic authentication of history
- It has well-designed pluggable merge strategies
- It consists of toolkit-based designs
- It supports various merging techniques, such as resolve, octopus, and recursive
Differences between SVN and Git
SVN and Git are both very popular source code repositories; however, Git has been gaining popularity in recent times. The following table describes the major differences between them:
| Subversion | Git |
| --- | --- |
| Centralized version control system | Distributed version control system |
| A snapshot of a specific version of the project is available on the developer's machine | A complete clone of the full-fledged repository is available on the developer's machine |
| Operations such as commit, merge, blame, revert, branch, and log are performed against a central repository | Operations such as commit, merge, blame, branch, and log are performed against the local repository, with pull and push operations to a remote repository when the developer needs to share work with others |
| URLs are used for trunks, branches, or tags | Branches and tags are lightweight references within the local repository |
| An SVN workflow (diagram) | A Git workflow (diagram) |
| File changes are included in the next commit | File changes have to be marked (staged) explicitly, and only then are they included in the next commit |
| Committed work is transferred directly to the central repository, so a connection to that repository must be available | Committed work goes to the local repository; to share it with other developers, it has to be pushed to a remote repository, and only then is a connection to the remote repository needed |
| Each commit gets an ascending revision number | Each commit gets a commit hash rather than an ascending revision number |
| Short learning curve | Longer learning curve |
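The commit-hash behaviour noted in the table can be seen in a few lines of Python using the GitPython library. This is a minimal sketch; it assumes GitPython is installed with pip install GitPython, and the repository path and file name are placeholders:

```python
import os
import tempfile

from git import Repo  # GitPython: install with "pip install GitPython"

# Create a local repository -- with Git this is a full repository, not just a
# working copy checked out from a central server.
work_dir = tempfile.mkdtemp(prefix="git-demo-")
repo = Repo.init(work_dir)

# Make a change, stage it explicitly, and commit it to the local repository.
with open(os.path.join(work_dir, "app.py"), "w") as source_file:
    source_file.write('print("hello DevOps")\n')
repo.index.add(["app.py"])
commit = repo.index.commit("Add sample application file")

# Git identifies the commit by a hash, not an ascending revision number.
print("Committed as", commit.hexsha[:10])
```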
Build tools – Maven
Apache Maven is a build tool licensed under the Apache License 2.0. It is used for Java projects and can be used in a cross-platform environment. It can also be used for Ruby, Scala, C#, and other languages.
One of Maven's most important features is the Project Object Model (POM): an XML file that contains information about the name of the application, owner information, how the application distribution file can be created, and how dependencies are managed.
Example pom.xml file
The pom.xml file has predefined build lifecycle phases, such as validate, generate-sources, process-sources, generate-resources, process-resources, compile, process-test-sources, process-test-resources, test-compile, test, package, install, and deploy.
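The following is a minimal, illustrative pom.xml of the kind used in a Maven project; the group, artifact, version, and JUnit dependency shown here are placeholders rather than values from a real project:

```xml
<!-- Illustrative sample only; replace groupId, artifactId, version, and
     dependencies with the values for your own project. -->
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
                             http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>

  <groupId>com.example</groupId>
  <artifactId>sample-app</artifactId>
  <version>1.0-SNAPSHOT</version>
  <packaging>jar</packaging>

  <dependencies>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>4.12</version>
      <scope>test</scope>
    </dependency>
  </dependencies>
</project>
```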
Continuous integration tools – Jenkins
Jenkins was originally an open source continuous integration tool written in Java under the MIT License. Jenkins 2, however, is an open source automation server that focuses on any kind of automation, including continuous integration and continuous delivery.
Jenkins can be used across different platforms, such as Windows, Ubuntu/Debian, Red Hat/Fedora, Mac OS X, openSUSE, and FreeBSD. Jenkins enables users to utilize continuous integration services for software development in an agile environment. It can be used to build freestyle software projects based on Apache Ant and Maven 2/Maven 3. It can also execute Windows batch commands and shell scripts.
It can be easily customized with the use of plugins. There are different kinds of plugins available for customizing Jenkins based on specific needs for setting up continuous integration. Categories of plugins include source code management (the Git, CVS, and Bazaar plugins), build triggers (the Accelerated Build Now and Build Flow plugins), build reports (the Code Scanner and Disk Usage plugins), authentication and user management (the Active Directory and GitHub OAuth plugins), and cluster management and distributed build (Amazon EC2 and Azure Slave plugins).
Note
To know more about Jenkins, please refer to Jenkins Essentials: https://www.packtpub.com/application-development/jenkins-essentials.
Jenkins accelerates the software development process through automation:
Key features and benefits
Here are some striking benefits of Jenkins:
- Easy install, upgrade, and configuration.
- Supported platforms: Windows, Ubuntu/Debian, Red Hat/Fedora/CentOS, Mac OS X, openSUSE, FreeBSD, OpenBSD, Solaris, and Gentoo.
- Manages and controls development lifecycle processes.
- Non-Java projects are also supported by Jenkins, such as .NET, Ruby, PHP, Drupal, Perl, C++, Node.js, Python, Android, and Scala.
- It supports a development methodology of daily integrations verified by automated builds.
- Every commit can trigger a build.
- Jenkins is a fully featured technology platform that enables users to implement CI and CD.
- The use of Jenkins is not limited to CI and CD. It is possible to model and orchestrate the entire pipeline with Jenkins, as it supports shell and Windows batch command execution. Jenkins 2.0 supports a delivery pipeline that uses a Domain-Specific Language (DSL) for modeling entire deployments or delivery pipelines.
- Pipeline as code provides a common language (a DSL) to help the development and operations teams collaborate effectively.
- Jenkins 2 brings a new GUI with stage view to observe the progress across the delivery pipeline.
- Jenkins 2.0 is fully backward compatible with the Jenkins 1.x series.
- Jenkins 2 now requires Servlet 3.1 to run.
- You can use embedded Winstone-Jetty or a container that supports Servlet 3.1 (such as Tomcat 8).
- GitHub, Collabnet, SVN, TFS code repositories, and so on are supported by Jenkins for collaborative development.
- Continuous integration: it automates builds, automated testing (continuous testing), packaging, and static code analysis.
- Supports common test frameworks such as HP ALM Tools, JUnit, Selenium, and MSTest.
- For continuous testing, Jenkins has plugins for both functional and load testing; Jenkins slaves can execute test suites on different platforms.
- Jenkins supports static code analysis tools such as CheckStyle and FindBugs for code verification. It also integrates with Sonar.
- Continuous delivery and continuous deployment: It automates the application deployment pipeline, integrates with popular configuration management tools, and automates environment provisioning.
- To achieve continuous delivery and deployment, Jenkins supports automatic deployment; it provides a plugin for direct integration with IBM uDeploy.
- Highly configurable: a plugin-based architecture provides support for many technologies, repositories, build tools, and test tools; it is an open source CI server with over 400 plugins available for extensibility.
- Supports distributed builds: Jenkins supports "master/slave" mode, where the workload of building projects is delegated to multiple slave nodes.
- It has a machine-consumable remote access API for retrieving information from Jenkins programmatically, triggering a new build, and so on (see the sketch after this list).
- It helps deliver better applications faster by automating the application development lifecycle.
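As a minimal sketch of the remote access API, the following Python snippet triggers a build of an existing job over HTTP. The Jenkins URL, job name, user, and API token are placeholder values, and depending on the Jenkins version and its security configuration, a CSRF crumb may also be required:

```python
import requests  # install with "pip install requests"

# Placeholder values -- replace with your own Jenkins URL, job name, and credentials.
JENKINS_URL = "http://jenkins.example.com:8080"
JOB_NAME = "sample-app-build"
USER = "admin"
API_TOKEN = "replace-with-your-api-token"

# POST to /job/<name>/build queues a new build of the job.
response = requests.post(
    "{0}/job/{1}/build".format(JENKINS_URL, JOB_NAME),
    auth=(USER, API_TOKEN),
)

# Jenkins normally answers "201 Created" when the build has been queued.
print("HTTP status:", response.status_code)
```

The same API can also be used to query job and build status for programmatic consumption.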
The Jenkins build pipeline (quality gate system) provides a build pipeline view of upstream and downstream connected jobs, as a chain of jobs, each one subjecting the build to quality-assurance steps. It has the ability to define manual triggers for jobs that require intervention prior to execution, such as an approval process outside of Jenkins. The following diagram illustrates quality gates and the orchestration of a build pipeline:
Jenkins can be used with the following tools in different categories as shown here:
| Category | Java | .NET |
| --- | --- | --- |
| Code repositories | Subversion, Git, CVS, StarTeam | Subversion, Git, CVS, StarTeam |
| Build tools | Ant, Maven | NAnt, MSBuild |
| Code analysis tools | Sonar, CheckStyle, FindBugs, NCover, Visual Studio Code Metrics, PowerTool | Sonar, CheckStyle, FindBugs, NCover, Visual Studio Code Metrics, PowerTool |
| Continuous integration | Jenkins | Jenkins |
| Continuous testing | Jenkins plugins (HP Quality Center 10.00 with the QuickTest Professional add-in, HP Unified Functional Testing 11.5x and 12.0x, HP Service Test 11.20 and 11.50, HP LoadRunner 11.52 and 12.0x, HP Performance Center 12.xx, HP QuickTest Professional 11.00, HP Application Lifecycle Management 11.00, 11.52, and 12.xx, HP ALM Lab Management 11.50, 11.52, and 12.xx, JUnit, MSTest, and VsTest) | Jenkins plugins (the same set as for Java) |
| Infrastructure provisioning | Configuration management tool: Chef | Configuration management tool: Chef |
| Virtualization/cloud service provider | VMware, AWS, Microsoft Azure (IaaS), traditional environment | VMware, AWS, Microsoft Azure (IaaS), traditional environment |
| Continuous delivery/deployment | Chef/deployment plugin/shell scripting/PowerShell scripts/Windows batch commands | Chef/deployment plugin/shell scripting/PowerShell scripts/Windows batch commands |
Configuration management tools – Chef
Software Configuration Management (SCM) is a software engineering discipline comprising the tools and techniques that an organization uses to manage changes to software components. It includes the technical aspects of the project, communication, and control of modifications to the project during development. It is also called software control management. It consists of practices for all software projects, from development and rapid prototyping to ongoing maintenance, and it enriches the reliability and quality of software.
Chef is a configuration management tool used to turn infrastructure into code. It automates the building, deploying, and managing of infrastructure. With Chef, infrastructure can be treated as code. The concept behind Chef is reusability. It uses recipes to automate the infrastructure; recipes are the instructions required for configuring databases, web servers, and load balancers. They describe every part of the infrastructure and how it should be configured, deployed, and managed. Recipes use building blocks known as resources, where a resource describes a part of the infrastructure, such as a template, a package, or a file to be installed.
These recipes and configuration data are stored on Chef servers. The Chef client is installed on each node of the network. A node can be a physical or virtual server.
As shown in the following diagram, the Chef client periodically checks the Chef server for the latest recipes and to see whether the node complies with the policy defined by those recipes. If the node is out of date, the Chef client runs the recipes on it to bring it up to date:
Features
The following are some important features of the Chef configuration management tool:
- The Chef server:
- It manages a huge number of nodes
- It maintains a blueprint of the infrastructure
- The Chef client:
- It manages various operating systems, such as Linux, Windows, Mac OS, Solaris, and FreeBSD
- It provides integration with cloud providers
- It is easy to manage the containers in a versionable, testable, and repeatable way
- Chef provides an automation platform to continuously define, build, and manage cloud infrastructure used for deployment
- It enables programmatic provisioning and configuration of resources, which helps automate provisioning and configuration within the deployment pipeline
The following three basic concepts of Chef enable organizations to quickly manage any infrastructure; a small conceptual sketch of the first one follows the list:
- Achieving the desired state
- Centralized modeling of IT infrastructure
- Resource primitives that serve as building blocks
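To make the desired-state idea concrete, here is a small conceptual Python sketch (this is not Chef's own code or API; the resources and states are invented for illustration). A configuration management tool repeatedly compares the node's current state against the declared desired state and changes only what differs, which is why repeated runs are safe:

```python
# Conceptual sketch of desired-state convergence; Chef itself implements this
# idea with recipes and resources rather than Python dictionaries.
desired_state = {"nginx": "installed", "ntp": "installed"}   # declared in "recipes"
current_state = {"nginx": "installed"}                       # what the node looks like now

def converge(current, desired):
    """Bring the node's current state in line with the desired state."""
    for resource, state in desired.items():
        if current.get(resource) != state:
            print("Converging {0} -> {1}".format(resource, state))
            current[resource] = state   # stand-in for actually installing/configuring
        else:
            print("{0} is already up to date, nothing to do".format(resource))
    return current

# Running convergence twice shows its idempotence: the second run changes nothing.
converge(current_state, desired_state)
converge(current_state, desired_state)
```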
Note
To learn more about Chef, refer to Learning Chef: https://www.packtpub.com/networking-and-servers/learning-chef.
Cloud service providers
AWS and Microsoft Azure are two of the most popular public cloud providers right now. They provide cloud services in different areas, and each has its own strengths. Based on the organization's culture and past partnerships, either can be chosen after a detailed assessment of requirements.
The following is a side-by-side comparison:
| Service category | AWS | Microsoft Azure |
| --- | --- | --- |
| Virtual machines | Amazon EC2 | Virtual Machines |
| PaaS | Elastic Beanstalk | Azure Web Apps |
| Container services | Amazon EC2 Container Service | Azure Container Service |
| RDBMS | Amazon RDS | Azure SQL Database |
| NoSQL | DynamoDB | DocumentDB |
| Big data | Amazon EMR | HDInsight |
| Networking | Amazon VPC | Virtual Network |
| Cache | Amazon ElastiCache | Azure Redis Cache |
| Import/export | AWS Import/Export | Azure Import/Export |
| Search | Amazon CloudSearch | Azure Search |
| CDN | CloudFront | Azure CDN |
| Identity and access management | AWS IAM and Directory Service | Azure Active Directory |
| Automation | AWS OpsWorks | Azure Automation |
Note
Amazon Web Services: http://aws.amazon.com/. Microsoft Azure: https://azure.microsoft.com/.
Container technology
Containers use OS-level virtualization, where the kernel is shared between isolated user spaces. Docker and OpenVZ are popular open source examples of OS-level virtualization technologies.
Docker
Docker is an open source initiative to wrap code, the runtime environment, system tools, and libraries. Docker containers share the kernel they are running on and hence start instantly and in a lightweight manner. Docker containers run on Windows as well as Linux distributions. It is important to understand how containers and virtual machines are different. Here is a comparison table of virtual machines and containers:
Note
You can download Docker by visiting https://github.com/docker/docker.
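As a small sketch of how lightweight containers are to work with, the following Python snippet uses the Docker SDK for Python (assumed to be installed with pip install docker, with a local Docker daemon running; the alpine image tag is just an example) to start a container, run a command, and remove it:

```python
import docker  # Docker SDK for Python: install with "pip install docker"

# Connect to the local Docker daemon (assumes Docker is installed and running).
client = docker.from_env()

# Because containers share the host kernel, this starts in a fraction of the
# time a full virtual machine would need to boot.
output = client.containers.run(
    "alpine:3.18",                      # example image tag
    "echo hello from a container",
    remove=True,                        # clean up the container afterwards
)
print(output.decode().strip())
```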
Monitoring tools
There are many open source tools available for monitoring resources. Zenoss and Nagios are two of the most popular open source tools and have been adopted by many organizations.
Zenoss
Zenoss is an agentless, open source management platform for applications, servers, and networks, released under the GNU General Public License (GPL) version 2 and based on the Zope application server. Zenoss Core combines the Python programming language, the object-oriented Zope web application server, network monitoring protocols, the RRDtool for graphing and logging time-series data, MySQL, and the event-driven networking engine Twisted. It provides an easy-to-use web portal to monitor alerts, performance, configuration, and inventory. The following diagram illustrates the features of Zenoss:
Note
You can visit the Zenoss Core website at http://www.zenoss.org/.
Nagios
Nagios is a cross-platform, open source monitoring tool for infrastructure and networks. It monitors network services such as FTP, HTTP, SSH, and SMTP, monitors resources, detects problems, and alerts stakeholders. Nagios empowers organizations and service providers to identify and resolve issues so that outages have minimal impact on the IT infrastructure and processes, thereby ensuring the highest adherence to SLAs. Nagios can also monitor cloud resources such as compute, storage, and network.
Note
You can get more information on the official Nagios website at https://www.nagios.org/.
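Conceptually, a service check of the kind Nagios performs boils down to probing an endpoint and mapping the result to a status. The following Python sketch imitates a simple HTTP availability check (it is not Nagios' own check_http plugin; the URL and thresholds are placeholders):

```python
import sys
import requests  # install with "pip install requests"

# Nagios-style exit codes: 0 = OK, 1 = WARNING, 2 = CRITICAL.
def check_http(url, timeout=5.0):
    """Probe a URL and return a (status_message, exit_code) pair."""
    try:
        response = requests.get(url, timeout=timeout)
        if response.status_code < 400:
            return "OK - HTTP {0}".format(response.status_code), 0
        return "WARNING - HTTP {0}".format(response.status_code), 1
    except requests.RequestException as exc:
        return "CRITICAL - {0}".format(exc), 2

if __name__ == "__main__":
    message, code = check_http("http://www.example.com/")  # placeholder URL
    print(message)
    sys.exit(code)
```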
Deployment orchestration/continuous delivery – Jenkins
The build pipeline, also called the deployment or application delivery pipeline, can be used to achieve end-to-end automation for all operations, including continuous integration, cloud provisioning, configuration management, continuous delivery, continuous deployment, and notifications. The following tools, tied together through Jenkins plugins, can be used for overall orchestration of all the activities involved in end-to-end automation:
- Continuous integration: Jenkins
- Configuration management: Chef
- Cloud service providers: AWS, Microsoft Azure
- Container technology: Docker
- Continuous delivery/deployment: ssh
- End-to-end orchestration: Jenkins plugins
Here is a sample representation of end-to-end automation using different tools:
Jenkins can be used to manage unit testing and code verification; Chef can be used to set up the runtime environment; Knife plugins can be used to create a virtual machine in AWS or Microsoft Azure; and the build pipeline or deployment pipeline plugins in Jenkins can be used to manage deployment orchestration.
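As a rough illustration of what programmatic VM provisioning amounts to at the API level, the following Python sketch launches an EC2 instance with the boto3 AWS SDK (this is not the Knife plugin itself; the AMI ID, key pair, and region are placeholders, and valid AWS credentials are assumed to be configured):

```python
import boto3  # AWS SDK for Python: install with "pip install boto3"

# Placeholder region; credentials are read from the environment or AWS config.
ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI ID
    InstanceType="t2.micro",
    KeyName="my-keypair",             # placeholder key pair name
    MinCount=1,
    MaxCount=1,
)

print("Launched instance:", response["Instances"][0]["InstanceId"])
```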
From a single pipeline dashboard, we can view the status of all the builds configured in the pipeline. Each build in the pipeline acts as a kind of quality gate: if one build fails, execution does not go any further. Additional dimensions can be added, such as notifications on compilation failures, unit test failures, or unsuccessful deployments. The final deployment can be gated on approval from a specific stakeholder. Consider a scenario that calls for a parameterized build or the promoted build concept: what should we do? All will be revealed in the chapters to follow!
The DevOps dashboard
One of the most liked components of DevOps culture is the dashboard or GUI that provides a combined status of all end-to-end activities. For automation tools, an easy-to-use web GUI is handy for managing resources. For end-to-end automation in application deployment activities, multiple open source or commercial tools are used. There is a high possibility that a single product may not be used for all activities, for example, Git or SVN as the repository, Jenkins as the CI server, and IBM UrbanCode Deploy as the deployment orchestration tool. In such a scenario, it is easier if there is a single-pane-of-glass view where we can track multiple tools for a specific application.
Hygieia is an open source DevOps dashboard that provides a way to track the status of a deployment pipeline. At present it tracks six different areas: features (Jira, VersionOne), code repositories (GitHub, Subversion), builds (Jenkins, Hudson), quality (Sonar, Cucumber/Selenium), monitoring, and deployment (IBM UrbanCode Deploy). The following is a sample image of a configured DevOps dashboard:
Note
Download Hygieia from https://github.com/capitalone/Hygieia.