Beginning DevOps with Docker: Automate the deployment of your environment with the power of the Docker toolchain


Beginning DevOps with Docker

Chapter 1. Images and Containers

This lesson covers fundamental concepts of containerization as a foundation for the images and containers we will later build. We will also see how and why Docker entered the DevOps ecosystem. Before we begin, we will look at how virtualization differs from containerization in Docker.

Lesson Objectives

By the end of this lesson, you will be able to:

  • Describe how Docker improves a DevOps workflow
  • Interpret Dockerfile syntax
  • Build images
  • Set up containers and images
  • Set up a local dynamic environment
  • Run applications in Docker containers
  • Obtain a basic overview of how Docker manages images via Docker Hub
  • Deploy a Docker image to Docker Hub

Virtualization versus Containerization

This block diagram gives an overview of a typical virtual machine setup:

Virtualization versus Containerization

In virtual machines, the physical hardware is abstracted, so that many virtual servers can run on a single physical server. A hypervisor manages this abstraction.

Virtual machines can take time to start up and are expensive in capacity (they can be gigabytes in size). Their greatest advantage over containers, however, is the ability to run a completely different operating system and kernel, for example, a CentOS virtual machine on an Ubuntu host:

Virtualization versus Containerization

In containerization, it is only the app layer (where code and dependencies are packaged) that is abstracted, making it possible for many containers to run on the same OS kernel but on separate user space.

Containers use less space and boot fast. This makes development easier, since you can delete and start up containers on the fly without considering how much server or developer working space you have.

Let's begin the lesson with a quick overview on how Docker comes into play in a DevOps workflow and the Docker environment.

How Docker Improves a DevOps Workflow

DevOps is a mindset, a culture, and a way of thinking. The ultimate goal is to continuously improve and automate processes as much as possible. In plain terms, DevOps asks you to adopt a "productively lazy" point of view: make most, if not all, processes as automatic as possible.

Docker is an open source containerization platform that improves the shipping process of the development life cycle. Note that it is neither a replacement for existing platforms nor does the organization intend it to be.

Docker abstracts away much of the complexity handled by configuration management tools such as Puppet. With this kind of setup, long provisioning shell scripts become unnecessary. Docker can be used on small or large deployments alike, from a hello world application to a full-fledged production server.

As a developer, whether beginner or expert, you may have used Docker without even realizing it. If you have set up a continuous integration pipeline to run your tests online, most CI servers use Docker to build and run them.

Docker has gained a lot of support in the tech community because of its agility, and many organizations run containers for their services. These include the following:

  • Continuous integration and continuous delivery platforms such as Circle CI, Travis CI, and Codeship
  • Cloud platforms such as Amazon Web Services (AWS) and Google Cloud Platform (GCP), which allow developers to run applications in containers
  • Cisco and the Alibaba Group, which also run some of their services in containers

Docker's place in the DevOps workflow involves, but is not limited to, the following:

Note

Examples of Docker's use cases in a development workflow.

  • Unifying requirements: using a single configuration file. Docker abstracts and limits requirements to a single Dockerfile.
  • Abstraction of the OS: one does not need to worry about building the OS, because prebuilt images already exist.
  • Velocity: one only has to define a Dockerfile and build containers to test in, or use an already-built image without writing a Dockerfile at all. Docker allows development teams to avoid investing in the steep learning curve of shell scripts because "automation tool X" is too complicated.

Recap of the Docker Environment

We walked through the fundamentals of containerization earlier. Allow me to emphasize the alternative workflow that Docker brings to us.

Normally, a working application has two pieces: the project code base and the provisioning script. The code base is the application code, managed by version control and hosted on GitHub, among other platforms.

The provisioning script could be a simple shell script to be run in a host machine, which could be anywhere from a Windows workstation to a fully dedicated server in the cloud.

Using Docker does not interfere with the project code base, but innovates on the provisioning aspect, improving the workflow and delivery velocity. This is a sample setup of how Docker implements this:

Recap of the Docker Environment

The Dockerfile takes the place of the provisioning script. The two combined (project code and Dockerfile) make a Docker image. A Docker image can be run as an application. This running application sourced from a Docker image is called a Docker container.

The Docker container allows us to run the application in a completely new, disposable environment on our computers. What does this mean?

It means that we are able to declare and run Linux, or any other operating system, on our computers and then run our application in it. It also means we can build and run the container as many times as we want without interfering with our computer's configuration.

With this, I have brought to your attention four key words: image, container, build, and run. We will get to the nitty-gritty of the Docker CLI next.

Basic Docker Terminal Commands

Open the command prompt to check that Docker is installed on your workstation. Entering the command docker in your terminal should show the following:

Basic Docker Terminal Commands

This is the list of available subcommands for Docker. To understand what each subcommand does, enter docker <subcommand> --help on the terminal:

Basic Docker Terminal Commands

Run docker info and note the following:

  • Containers
  • Images
  • Server Version
Basic Docker Terminal Commands

This command displays system-wide information. The server version number is important at times, especially when new releases introduce changes that are not backward-compatible. Docker has stable and edge releases of its Community Edition.

We will now look at a few commonly used commands.

This command searches Docker Hub for images:

docker search <term> (for example, docker search ubuntu)

Docker Hub is the default Docker registry. A Docker registry holds named Docker images. Docker Hub is basically the "GitHub for Docker images". Earlier, we looked at running an Ubuntu container without building one; this is where the Ubuntu image is stored and versioned:

Basic Docker Terminal Commands

There are also private Docker registries, and it is important that you are aware of this. Docker Hub is at hub.docker.com. Some images are hosted at store.docker.com; Docker Store also contains official images, but it mainly focuses on the commercial aspect, acting as an app store of sorts for Docker images, and provides workflows for their use.

The register page is as shown here:

Basic Docker Terminal Commands

The log in page is as shown here:

Basic Docker Terminal Commands

From the results, you can tell how users have rated an image by its number of stars, and whether the image is official, meaning it is promoted by the registry, in this case Docker Hub. New Docker users are advised to use official images, since they have good documentation, are secure, promote best practices, and are designed for most use cases. As soon as you have settled on an image, you'll need to have it locally.

Note

Ensure you are able to search for at least one image from Docker Hub. Image variety ranges from operating systems to libraries, such as Ubuntu, Node.js, and Apache.

This command allows you to search from Docker Hub:

docker search <term>

For example, docker search ubuntu.

This command pulls an image from the registry to your local machine:

docker pull <image>

For example, docker pull ubuntu.

When this command runs, you'll notice that it uses the default tag: latest. In Docker Hub, you can see the list of tags. For Ubuntu, they are listed at https://hub.docker.com/r/library/ubuntu/, along with their respective Dockerfiles:
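The defaulting behaviour can be illustrated with plain shell string handling (normalize_ref is a made-up helper name, not a Docker command):

```shell
# normalize_ref is a hypothetical helper, not part of the Docker CLI.
# It mimics the CLI's behaviour of assuming the :latest tag.
normalize_ref() {
  ref="$1"
  case "$ref" in
    *:*) printf '%s\n' "$ref" ;;        # tag already present
    *)   printf '%s\n' "$ref:latest" ;; # no tag: default to latest
  esac
}

normalize_ref ubuntu        # -> ubuntu:latest
normalize_ref python:3.6    # -> python:3.6
```

Note that this sketch ignores edge cases such as registry hosts with ports (for example, localhost:5000/ubuntu), where a colon does not indicate a tag.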

Basic Docker Terminal Commands

You can view the Ubuntu image profile on Docker Hub at https://hub.docker.com/r/library/ubuntu/.

Activity 1 — Utilizing the docker pull Command

The aim of this activity is to get you conversant with the docker pull command — to gain a firm understanding of it, not only by running the listed commands, but also by seeking help on other commands while exploring and manipulating the built containers.

  1. Is Docker up and running? Type docker on the terminal or command-line application.
  2. This command pulls an image from Docker Hub:
    docker pull <image>
    

Image variety ranges from operating systems to libraries, such as Ubuntu, Node.js, and Apache. The docker pull command allows you to pull any of these images from Docker Hub; for example, docker pull ubuntu.

This command lists the Docker images we have locally:

  • docker images

When we run the command, if we have pulled images from Docker Hub, we will be able to see a list of images:

Activity 1 — Utilizing the docker pull Command

They are listed according to repository, tag, image ID, creation date, and size. The repository is simply the image name, unless it is sourced from a different registry, in which case you'll have a URL without the http:// and the top-level domain (TLD), such as registry.heroku.com/<image-name> from the Heroku registry.
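As an illustration of how such a reference breaks down, here is a shell sketch using only string expansion (the image name my-app is hypothetical):

```shell
# Split a registry-qualified image reference into its parts.
# Purely illustrative string handling; no Docker involved.
ref="registry.heroku.com/my-app:latest"

registry="${ref%%/*}"          # text before the first slash
rest="${ref#*/}"               # repository plus tag
repository="${rest%%:*}"       # text before the colon
tag="${rest##*:}"              # text after the colon

echo "registry:   $registry"   # registry.heroku.com
echo "repository: $repository" # my-app
echo "tag:        $tag"        # latest
```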

This command will check whether the image by the name hello-world exists locally:

docker run <image>

For example, docker run hello-world:

Activity 1 — Utilizing the docker pull Command

If the image is not local, it will be pulled from the default registry, Docker Hub, and run as a container, by default.

This command lists the running containers:

docker ps

If there aren't any running containers, you should have a blank screen with the headers:

Activity 1 — Utilizing the docker pull Command

Activity 2 — Analyzing the Docker CLI

Ensure you have the Docker CLI running by typing docker on your terminal.

You have been asked to demonstrate the commands covered so far.

The aim of this activity is to get you conversant with the Docker CLI — to gain a firm understanding of it, not only by running the listed commands, but also by seeking help on other commands while exploring and manipulating the built containers. The goal is to be flexible enough with the CLI to use it in a real-world scenario, such as running an automated script.

  1. Is Docker up and running? Type docker on the terminal or command-line application.
  2. Search for the official Apache image using the CLI, using docker search apache:
    Activity 2 — Analyzing the Docker CLI
  3. Attempt to pull the image using docker pull apache.
  4. Confirm the availability of the image locally using docker images.
  5. Bonus: Run the image as a container using docker run apache.
  6. Bonus: Stop the container using docker stop <container ID>.
  7. Bonus: Delete the container and the image using docker rm <container ID>.

Dockerfile Syntax

Every Docker image starts from a Dockerfile. To create an image of an application or script, simply create a file called Dockerfile.

Note

It does not have an extension and begins with a capital letter D.

A Dockerfile is a simple text document in which all the commands that template a container are written. A Dockerfile always starts from a base image and contains the steps to create the application or run the script in mind.

Before we build, let's take a quick look at a few best practices on writing Dockerfiles.

Some best practices include, but are not limited to, the following:

  • Separation of concerns: Ensure each Dockerfile is, as far as possible, focused on one goal. This makes it much easier to reuse in multiple applications.
  • Avoid unnecessary installations: This reduces complexity and keeps the image and container compact.
  • Reuse already built images: There are several built and versioned images on Docker Hub; instead of reimplementing an existing image, it is highly advisable to reuse one by importing it.
  • Have a limited number of layers: A minimal number of layers gives a more compact, smaller build. Image size is a key factor to consider when building images and containers, because it also affects the consumers of the image.

We'll start simply, with a Python script and a JavaScript script. These languages were chosen for their popularity and ease of demonstration.

Writing Dockerfiles for Python and JavaScript examples

Note

No prior experience with the selected languages is required, as they are only meant to give a dynamic view of how any language can adopt containerization.

Python

Before we begin, create a new directory or folder; let's use this as our workspace.

Open the directory and run docker search python. We'll pick the official image: python. The official image has the value [OK] in the OFFICIAL column:

Python

Go to hub.docker.com or store.docker.com and search for python to get the correct tag, or at least to know which version the Python image with the latest tag is. We will talk more about tags in Topic D.

The image tag should be a version number with a syntax that looks like 3.x.x or 3.x.x-rc.

Create a file by the name run.py and enter the first line as follows:

print("Hello Docker - PY")

Create a new file on the same folder level and name it Dockerfile.

Note

We do not have an extension for the Dockerfile.

Add the following in the Dockerfile:

FROM python
ADD . .
RUN ls
CMD python run.py

The FROM command, as alluded to earlier, specifies the base image.

The command can also be viewed from an inheritance point of view: you do not have to include extra package installations in the Dockerfile if an image with those packages already exists.

The ADD command copies the specified files at source to the destination within the image's filesystem. This means the contents of the script will be copied to the directory specified.

In this case, because run.py and the Dockerfile are on the same level, run.py is copied to the working directory of the base image's filesystem that we are building upon.

The RUN command is executed while the image is being built. Running ls here simply lets us see the contents of the image's filesystem.

The CMD command is used when a container is run from the image we'll create using this Dockerfile; that is, at the end of the Dockerfile execution, we intend to run a container.
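The steps above can be scripted end to end; this is a sketch that only creates the workspace files, with the build step left as a comment since it requires Docker to be installed (the directory name py-docker-demo is our choice):

```shell
# Create the workspace files for the Python example.
mkdir -p py-docker-demo && cd py-docker-demo

printf 'print("Hello Docker - PY")\n' > run.py

cat > Dockerfile <<'EOF'
FROM python
ADD . .
RUN ls
CMD python run.py
EOF

# With Docker installed you would now run:
#   docker build -t python-docker .
ls .   # shows Dockerfile and run.py
```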

JavaScript

Exit the previous directory and create a new one. This one will be demonstrating a node application.

Create a file named run.js, add the following line to it, and save:

console.log("Hello Docker - JS")

Run docker search node; we'll pick the official image: node.

Remember that the official image has the value [OK] in the OFFICIAL column:

JavaScript

Note that Node.js is the JavaScript runtime based on Google's high-performance, open source JavaScript engine, V8.

Go to hub.docker.com and search for node to get the correct tag or at least know what version the node image with the latest tag is.

Create a new Dockerfile, on the same file level as the script, and add the following:

FROM node
ADD . .
RUN ls
CMD node run.js
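As with the Python example, creating the Node workspace can be sketched in a few shell lines (the directory name js-docker-demo is our choice; the build step is a comment because it requires Docker):

```shell
# Create the workspace files for the JavaScript example.
mkdir -p js-docker-demo && cd js-docker-demo

printf 'console.log("Hello Docker - JS")\n' > run.js

cat > Dockerfile <<'EOF'
FROM node
ADD . .
RUN ls
CMD node run.js
EOF

# With Docker installed: docker build -t js-docker .
```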

That is all we'll cover on Dockerfile commands for now.

Activity 3 — Building the Dockerfile

Ensure you have the Docker CLI running by typing docker on your terminal.

The aim of this activity is to get you conversant with Dockerfile syntax, and to help you understand and practice working with third-party images and containers. This gives a bigger picture of how collaboration can be achieved through containerization, and how delivery pace increases by reusing resources that already exist.

You have been asked to write a simple Dockerfile that prints hello-world.

  1. Is Docker up and running? Type docker on the terminal or command-line application.
  2. Create a new directory and create a new Dockerfile.
  3. Write a Dockerfile that includes the following steps:
    FROM ubuntu:xenial
    RUN apt-get update
    RUN apt-get install -y apt-transport-https curl software-properties-common python-software-properties
    RUN curl -fsSL https://apt.dockerproject.org/gpg | apt-key add -
    RUN echo 'deb https://apt.dockerproject.org/repo ubuntu-xenial main' > /etc/apt/sources.list.d/docker.list
    RUN apt-get update
    RUN apt-get install -y python3-pip
    RUN apt-get install -y build-essential libssl-dev libffi-dev python-dev
    

Building Images

Before we begin building images, let's establish some context. An image is a standalone, executable package that can run an application or service. Images are built from Dockerfiles, which are templates defining how images are to be built.

A container is a runtime instance of an image. It runs on your computer or host as a completely isolated environment, which makes it disposable and well suited to tasks such as testing.

With the Dockerfiles ready, let's get to the Python Dockerfile directory and build the image.

docker build

The command to build images is as follows:

docker build -t <image-name> <relative location of the Dockerfile>

-t stands for tag. The <image-name> can include a specific tag, say, latest. It is advisable to always tag the image this way.

The relative location of the Dockerfile here would be a dot (.) to mean that the Dockerfile is on the same level as the rest of the code; that is, it is at the root level of the project. Otherwise, you would enter the directory the Dockerfile is in.

If, for example, it is in a docker folder, you would have docker build -t <image-name> docker. If it is one directory above the current one, you would use two dots (..); two levels up, you would use ../.. in place of the single dot.

Note

Observe the output on the terminal and compare it to the steps written in the Dockerfile. You may want to have two or more Dockerfiles to configure different situations, say, one Dockerfile to build a production-ready app and another for testing. Whatever the reason, Docker has a solution.

The default Dockerfile is, as you would expect, Dockerfile. By convention, any additional one is named Dockerfile.<name>, say, Dockerfile.dev.

To build an image using a Dockerfile aside from the default one, run the following: docker build -f Dockerfile.<name> -t <image-name> <relative location of the Dockerfile>
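For example, a hypothetical Dockerfile.dev kept next to the default Dockerfile could be created like this (its contents are illustrative only; the -f build command is a comment because it requires Docker):

```shell
# A development variant kept alongside the default Dockerfile.
cat > Dockerfile.dev <<'EOF'
FROM python
ADD . .
CMD python run.py
EOF

# Built explicitly with -f:
#   docker build -f Dockerfile.dev -t python-docker:dev .
```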

Note

If you rebuild the image with a change to the Dockerfile, without specifying a different tag, a new image will be built and the previous image is named <none>.

The docker build command has several options that you can see for yourself by running docker build --help. Tagging images with names such as latest is also used for versioning. We will talk more on this in Topic F.

To build the image, run the following command in the Python workspace:

$ docker build -t python-docker .

Note

The trailing dot is an important part of the syntax here:

docker build


Open the JavaScript directory and build the JavaScript image as follows:

$ docker build -t js-docker .

Running the commands will outline the four steps based on the four lines of commands in the Dockerfile.

Running docker images lists the two images you have created and any other image you had pulled before.

Removing Docker Images

The docker rmi <image-id> command is used to delete an image. Let me remind you that the image ID can be found by running the docker images command.

To delete untagged images (assumed to be irrelevant), a little bash scripting comes in handy. Use the following command (note the single quotes around the awk program, so that $3 is not expanded by the shell):

docker rmi $(docker images | grep "^<none>" | awk '{print $3}')

This simply searches for images with <none> in their row of the docker images output and returns the image IDs from the third column:
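You can exercise the filtering logic safely on sample docker images output, without touching Docker at all (the rows and IDs below are made up):

```shell
# Simulated `docker images` output (made-up IDs) to exercise the filter.
sample='REPOSITORY   TAG      IMAGE ID       CREATED        SIZE
python       latest   a1b2c3d4e5f6   2 days ago     933MB
<none>       <none>   0f0f0f0f0f0f   3 days ago     933MB
node         latest   9e9e9e9e9e9e   5 days ago     676MB'

# Same grep/awk pipeline as above, fed from the sample text.
printf '%s\n' "$sample" | grep "^<none>" | awk '{print $3}'
# -> 0f0f0f0f0f0f
```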

Removing Docker Images

Activity 4 — Utilizing the Docker Image

Ensure you have the Docker CLI running by typing docker on your terminal.

To get you conversant with running containers out of images.

You have been asked to build an image from the Dockerfile written in Activity 3. Stop the running container, delete the image, and rebuild it using a different name.

  1. Is Docker up and running? Type docker on the terminal or command-line application.
  2. Open the JavaScript example directory.
  3. Run docker build -t <choose a name> . (observe the steps and take note of the result).
  4. Run docker run <the-name-you-chose>.
  5. Run docker stop <container ID>.
  6. Run docker rmi <add the image ID here>.
  7. Run docker build -t <choose new name> .
  8. Run docker images (note the result; the old image should no longer exist).

Running Containers From Images

Remember when we mentioned containers are built from images? The command docker run <image> creates a container based on that image. One can say that a container is a running instance of an image. Another reminder is that this image could either be local or in the registry.

Go ahead and run the already created images docker run python-docker and docker run js-docker:

Running Containers From Images

What do you notice? The containers run, and each outputs its respective line to the terminal. Notice that the command given to CMD in the Dockerfile is the one that runs.

Now remove the CMD line from each Dockerfile and rebuild the images under a new tag:

docker build -t python-docker:test . and docker build -t js-docker:test .

Then, run the following:

docker run python-docker:test and docker run js-docker:test

Note

You will not see any output on the terminal.

This is not because there is no CMD command to run once the container is up: both the images built from python and node inherit a CMD from their base images.

Note

Images created always inherit from the base image.

The two containers we have run contain scripts that run once and exit. Examining the results of docker ps, you'll have nothing listed from the two containers run earlier. However, running docker ps -a reveals the containers and their state as exited.

There is a COMMAND column that shows the CMD of the image the container was built from.

When running a container, you can specify the name as follows:

docker run --name <container-name> <image-name> (for example, docker run --name py-docker-container python-docker):

Running Containers From Images

We outlined earlier that you only want to have relevant Docker images and not the <none> tagged Docker images.

As for containers, be aware that you can have several containers from one image. docker rm <container-id> is the command for removing containers; it works on exited containers (those that are not running).

Note

For the containers that are still running, you would have to either:

Stop the containers before removing them (docker stop <container-id>)

Remove the containers forcefully (docker rm <container-id> -f)

No container will be listed if you run docker ps, but sure enough if we run docker ps -a, you will notice that the containers are listed and their command columns will show the inherited CMD commands: python3 and node:

Running Containers From Images

Python

The CMD in Dockerfile for Python's image is python3. This means that the python3 command is run in the container and the container exits.

Note

With this in mind, one gets to run Python without installing Python in one's machine.

Try running this: docker run -it python-docker:test (with the image we created last).

We get an interactive shell inside the container. -it instructs Docker to keep input open and allocate a pseudo-terminal (TTY) for the container. The shell runs python3, which is the CMD of the Python base image:

Python

In the command docker run -it python-docker:test python3 run.py, python3 run.py runs inside the container just as it would in your terminal. Note that run.py exists within the container, and so it runs. Running docker run -it python python3 run.py against the base image would report the absence of the run.py script:

Python

The same applies to JavaScript, showing that the concept applies across the board.

docker run -it js-docker:test (the image we created last) will have a shell running node (the CMD in the node base image):

Python

docker run -it js-docker:test node run.js will output Hello Docker - JS:

Python

That proves the inheritance factor in Docker images.

Now, return the Dockerfiles to their original state with the CMD commands on the last line.

Versioning Images and Docker Hub

Remember talking about versioning images in Topic D? We did that by using the latest tag and version numbers such as 3.x.x or 3.x.x-rc against our images.

In this topic, we'll go through using tags for versioning and look at how official images have been versioned in the past, thereby learning best practices.

The command in use here is the following:

docker build -t <image-name>:<tag> <relative location of the Dockerfile>

Say, for example, we know that Python has several versions: Python 3.6, 3.5, and so on. Node.js has several more. If you take a look at the official Node.js page on Docker Hub, you see the following at the top of the list:

9.1.0, 9.1, 9, latest (9.1/Dockerfile) (as of November 2017):

Versioning Images and Docker Hub

This versioning system is called semver: semantic versioning. The version number has the format MAJOR.MINOR.PATCH, incremented as follows:

  • MAJOR: for a backward-incompatible change
  • MINOR: for a backward-compatible feature change
  • PATCH: for backward-compatible bug fixes
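Splitting a semver string into these parts can be sketched with shell string expansion (pre-release labels such as -rc are ignored in this sketch):

```shell
# Split MAJOR.MINOR.PATCH using parameter expansion only.
version="9.1.0"

major="${version%%.*}"   # text before the first dot  -> 9
rest="${version#*.}"     # remainder after the first dot
minor="${rest%%.*}"      # text before the next dot   -> 1
patch="${rest#*.}"       # remainder                  -> 0

echo "major=$major minor=$minor patch=$patch"
```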

You'll notice labels such as rc and other prerelease and build metadata attached to the image.

When building your images, especially for release to the public or your team, using semver is the best practice.

That said, I advocate that you always do this, and adopt it as a personal mantra: semver is key. It removes ambiguity and confusion when working with your images.

Deploying a Docker Image to Docker Hub

Every time we run docker build, the image created is locally available. Normally, the Dockerfile is hosted together with the code base; therefore, on a new machine, one would need to use docker build to create the Docker image.

With Docker Hub, any developer has the opportunity to have a Docker image hosted to be pulled into any machine running Docker. This does two things:

  • Eliminates the repetitive task of running docker build
  • Adds a simple way of sharing your application, compared to sharing a link to your app's code base and a README detailing the setup process

docker login is the command to run to connect to Docker Hub via the CLI. You need to have an account in hub.docker.com and enter the username and password through the terminal.

docker push <docker-hub-username/image-name[:tag]> is the command to send the image to the registry, Docker Hub:
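The reference you push must be prefixed with your Docker Hub username; assembling it can be sketched as follows (jane, python-docker, and 1.0.0 are placeholder values, and the Docker commands are comments because they require Docker and an account):

```shell
# Assemble the fully qualified reference for docker push.
username="jane"
image="python-docker"
tag="1.0.0"

ref="${username}/${image}:${tag}"
echo "$ref"   # jane/python-docker:1.0.0

# With Docker installed, the sequence would be:
#   docker login
#   docker tag python-docker "$ref"
#   docker push "$ref"
```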

Deploying a Docker Image to Docker Hub

A simple search of your image on hub.docker.com will give the output to your Docker image.

In a new machine, a simple docker pull <docker-hub-username/your-image-name> command will produce a copy of your image locally.

Summary

In this lesson, we have done the following:

  • Reviewed the DevOps workflow and a few use cases for Docker
  • Walked through Dockerfile syntax
  • Gained a high-level understanding of building images for applications and running containers
  • Constructed a number of images, versioned them, and pushed them to Docker Hub

Key benefits

  • Learn how to structure your own Docker containers
  • Create and manage multiple configuration images
  • Understand how to scale and deploy bespoke environments

Description

Making sure that your application runs across different systems as intended is quickly becoming a standard development requirement. With Docker, you can ensure that what you build will behave the way you expect it to, regardless of where it's deployed. By guiding you through Docker from start to finish (from installation, to the Docker Registry, all the way through to working with Docker Swarms), we’ll equip you with the skills you need to migrate your workflow to Docker with complete confidence.

Who is this book for?

This book is ideal for developers, system architects and site reliability engineers (SREs) who wish to adopt a Docker-based workflow for consistency, speed and isolation of system resources within their applications. You’ll need to be comfortable working with the command line.

What you will learn

  • Learn to design and build containers for different kinds of applications
  • Create a testing environment to identify issues that may cause production deployments to fail
  • Discover how you can correctly structure and manage a multi-tier environment
  • Run, debug, and experiment with example applications in Docker containers

Product Details

Publication date : May 29, 2018
Length: 96 pages
Edition : 1st
Language : English
ISBN-13 : 9781789532401



Table of Contents

4 Chapters
1. Images and Containers
2. Application Container Management
3. Orchestration and Delivery
Index

Customer reviews

Rating distribution: 3.9 out of 5 (7 Ratings)
5 star: 71.4%
4 star: 0%
3 star: 0%
2 star: 0%
1 star: 28.6%

Rohith, Sep 21, 2023 (5 stars)
Subscriber review, Packt

Gary Waltman, Jul 07, 2019 (5 stars)
Easy read
Amazon Verified review, Amazon

Amazon Customer, Jun 14, 2018 (5 stars)
Excellent book.
Amazon Verified review, Amazon

Dominic Motuka, Jul 09, 2018 (5 stars)
Joseph has a solid grasp of the subject and understands that DevOps is more than just a series of scripts written and maintained by the (mostly) operations staff that tries to run with DevOps. Docker has been introduced to us (the readers) in a simple way. Well written and nicely illustrated with color images. This is an excellent introduction to Docker.
Amazon Verified review, Amazon

NdagiStanley, Aug 22, 2018 (5 stars)
Beginning DevOps with Docker is a must-read for individuals who expect proper code and written analogies that will enhance their understanding of the overall use of Docker in DevOps. The Docker commands are well explained, everything to get you from beginner to guru.
Amazon Verified review, Amazon

FAQs

What is included in a Packt subscription?

A subscription provides you with full access to view all Packt and licensed content online, including exclusive access to Early Access titles. Depending on the tier chosen, you can also earn credits and discounts to use towards owning content.

How can I cancel my subscription?

To cancel your subscription, go to the account page, found in the top right of the page or at https://subscription.packtpub.com/my-account/subscription. From there, you will see the ‘cancel subscription’ button in the grey box containing your subscription information.

What are credits?

Credits can be earned by reading 40 sections of any title within the payment cycle, a month starting from the day of subscription payment. You also earn a credit every month if you subscribe to our annual or 18-month plans. Credits can be used to buy DRM-free books, the same way that you would pay for a book. Your credits can be found on the subscription homepage, subscription.packtpub.com, by clicking the ‘My Library’ dropdown and selecting ‘credits’.

What happens if an Early Access Course is cancelled?

Projects are rarely cancelled, but sometimes it's unavoidable. If an Early Access course is cancelled or excessively delayed, you can exchange your purchase for another course. For further details, please contact us here.

Where can I send feedback about an Early Access title?

If you have any feedback about the product you're reading, or Early Access in general, then please fill out a contact form here and we'll make sure the feedback gets to the right team. 

Can I download the code files for Early Access titles?

We try to ensure that all books in Early Access have code available to use, download, and fork on GitHub. This helps us be more agile in the development of the book, and helps keep the often changing code base of new versions and new technologies as up to date as possible. Unfortunately, however, there will be rare cases when it is not possible for us to have downloadable code samples available until publication.

When we publish the book, the code files will also be available to download from the Packt website.

How accurate is the publication date?

The publication date is as accurate as we can be at any point in the project. Unfortunately, delays can happen. Often those delays are out of our control, such as changes to the technology code base or delays in the tech release. We do our best to give you an accurate estimate of the publication date at any given time, and as more chapters are delivered, the delivery date becomes more accurate.

How will I know when new chapters are ready?

We'll let you know every time there has been an update to a course that you've bought in Early Access. You'll get an email to let you know there has been a new chapter, or a change to a previous chapter. The new chapters are automatically added to your account, so you can also check back there any time you're ready and download or read them online.

I am a Packt subscriber, do I get Early Access?

Yes, all Early Access content is fully available through your subscription. You will need to have a paid or active trial subscription in order to access all titles.

How is Early Access delivered?

Early Access is currently only available as a PDF or through our online reader. As we make changes or add new chapters, the files in your Packt account will be updated so you can download them again or view them online immediately.

How do I buy Early Access content?

Early Access is a way of us getting our content to you quicker, but the method of buying the Early Access course is still the same. Just find the course you want to buy, go through the check-out steps, and you’ll get a confirmation email from us with information and a link to the relevant Early Access courses.

What is Early Access?

Keeping up to date with the latest technology is difficult; new versions, new frameworks, new techniques. This feature gives you a head start on our content as it's being created. With Early Access you'll receive each chapter as it's written, and get regular updates throughout the product's development, as well as the final course as soon as it's ready.

We created Early Access as a means of giving you the information you need, as soon as it's available. As we go through the process of developing a course, 99% of it can be ready, but we can't publish until that last 1% falls into place. Early Access helps to unlock the potential of our content early, to help you start your learning when you need it most. You not only get access to every chapter as it's delivered, edited, and updated, but you'll also get the finalized, DRM-free product to download in any format you want when it's published. As a member of Packt, you'll also be eligible for our exclusive offers, including a free course every day, and discounts on new and popular titles.