Docker Cookbook: Over 100 practical and insightful recipes to build distributed applications with Docker, Second Edition


Docker Cookbook

Introduction and Installation

In this chapter, we will cover the following recipes:

  • Verifying the requirements for Docker installation
  • Installing Docker on Ubuntu
  • Installing Docker on CentOS
  • Installing Docker on Linux with an automated script
  • Installing Docker for Windows
  • Installing Docker for Mac
  • Pulling an image and running a container
  • Adding a nonroot user to administer Docker
  • Finding help with the Docker command line

Introduction

At the very start of the IT revolution, most applications were deployed directly on physical hardware, on top of the host OS. Because there was only a single user space, the runtime was shared between applications. Deployments were stable and hardware-centric, and had long maintenance cycles. They were mostly managed by an IT department, which gave developers much less flexibility. In such setups, the hardware resources were underutilized most of the time. The following diagram depicts such a setup:

Traditional application deployment

For more flexible deployments, and to better utilize the resources of the host system, virtualization was invented. With hypervisors such as KVM, XEN, ESX, and Hyper-V, we emulate hardware for virtual machines (VMs) and deploy a guest OS on each VM. VMs can run a different OS than their host, which means that we are responsible for managing patches, security, and performance for each VM. With virtualization, applications are isolated at the VM level and are defined by the life cycle of the VM. This gives us a better return on investment and higher flexibility, at the cost of increased complexity and redundancy. The following diagram depicts a typical virtualized environment:

Application deployment in a virtualized environment

Since virtualization was developed, we have been moving towards more application-centric IT. We remove the hypervisor layer to reduce hardware emulation and complexity. Applications are packaged with their runtime environment and deployed using containers. OpenVZ, Solaris Zones, and LXC are a few examples of container technology. Containers are less flexible than VMs; for example, as of writing, we cannot run Microsoft Windows inside a container on a Linux host. Containers are also considered less secure than VMs, because with containers everything runs on the host OS kernel: if a container gets compromised, it might be possible to gain full access to the host. Containers can also be complex to set up, manage, and automate. These are a few of the reasons why we did not see mass adoption of containers in the years before Docker, even though the technology existed. The following diagram shows how an application is deployed using containers:

Application deployment with containers

With Docker, containers suddenly became first-class citizens. All big corporations, such as Google, Microsoft, Red Hat, IBM, and others, are now working to make containers mainstream.

Docker was started as an internal project by dotCloud founder Solomon Hykes. It was released as open source in March 2013 under the Apache 2.0 license. With their platform-as-a-service experience at dotCloud, the founders and engineers of Docker were well aware of the challenges of running containers, so with Docker they developed a standard way to manage them.

Docker uses the operating system's underlying kernel features, which enable containerization. The following diagram depicts the Docker platform and the kernel features used by Docker. Let's look at some of the major kernel features that Docker uses:

Docker platform and the kernel features used by Docker

Namespaces

Namespaces are the building blocks of a container. There are different types of namespace, and each one of them isolates applications from the others. They are created using the clone system call. You can also attach to existing namespaces. Some of the namespaces used by Docker will be explained in the following sections.
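If you want to experiment with namespaces outside of Docker, the util-linux tools wrap the same kernel calls. The following is a minimal sketch, assuming unshare, lsns, and nsenter are available (recent Ubuntu releases ship all three); the target PID used with nsenter is illustrative only:

    $ lsns -t pid                                     # list the PID namespaces currently on the host
    $ sudo unshare --fork --pid --mount-proc ps aux   # run ps in a brand new PID namespace
    $ sudo nsenter --target 29778 --net ip addr       # attach to the net namespace of an existing process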

The PID namespace

The PID namespace allows each container to have its own process numbering. Each PID namespace forms its own process hierarchy. A parent namespace can see and affect its child namespaces, but a child can neither see nor affect the parent namespace.

If there are two levels of hierarchy, then from the top level we can see the process running inside the child namespace, but with a different PID. A process running in a child namespace therefore has two PIDs: one in the child namespace and one in the parent namespace. For example, if we start a container from a script called container.sh, we can see the corresponding process on the host as well: inside the container, the sh container.sh process might have a PID of 8, while on the host the same process shows up with a much higher PID, such as 29778.
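You can reproduce this yourself with any long-running command. The following is a rough sketch; the container name pid-demo is arbitrary, and the exact PIDs you see will differ:

    $ docker container run -d --name pid-demo alpine sleep 1000
    $ docker container exec pid-demo ps       # inside the container, sleep has a low PID such as 1
    $ ps -ef | grep 'sleep 1000'              # on the host, the same process appears with a much higher PID
    $ docker container rm -f pid-demo         # clean up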

The net namespace

With the PID namespace, we can run the same program multiple times in different isolated environments; for example, we can run different instances of Apache in different containers. But without the net namespace, we would not be able to listen on port 80 in each one of them. The net namespace allows us to have different network interfaces in each container, which solves the problem mentioned earlier. Loopback interfaces are different in each container as well.

To enable networking in containers, we create a pair of special interfaces in two different net namespaces and allow them to talk to each other. One end of the pair resides inside the container and the other resides on the host system. Generally, the interface inside the container is called eth0, and on the host system it is given a random name, such as veth516cc56. These special interfaces are then linked through a bridge (docker0) on the host to enable communication between containers and to route packets.

Inside the container, you will see something like the following:

$ docker container run -it alpine ash
# ip a

On the host, it would look like the following:

$ ip a

Also, each net namespace has its own routing table and firewall rules.
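A quick way to see this isolation, sketched here with the alpine image (whose BusyBox userland provides the ip applet), is to compare the container's view of the network with the host's:

    $ docker container run --rm alpine ip route       # the container's own routing table
    $ docker container run --rm alpine ip addr show eth0
    $ ip route                                        # the host's routing table, for comparison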

The IPC namespace

The inter-process communication (IPC) namespace provides semaphores, message queues, and shared memory segments. It is not widely used these days, but some programs still depend on it.

If the IPC resource created by one container is consumed by another container, then the application running on the first container could fail. With the IPC namespace, processes running in one namespace cannot access resources from another namespace.

The mnt namespace

A chroot only changes what a process sees as its root directory; paths are simply re-resolved relative to the chrooted directory. The mnt namespace takes this idea to the next level: with it, a container can have its own set of mounted filesystems and its own root directory. Processes in one mnt namespace cannot see the mounted filesystems of another mnt namespace.
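To see the difference in practice, compare the mount table inside a container with the host's. A minimal sketch using the alpine image (head is only there to keep the output short):

    $ docker container run --rm alpine mount | head -n 5   # mounts visible inside the container
    $ mount | head -n 5                                    # mounts visible on the host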

The UTS namespace

With the UTS namespace, we can have different hostnames for each container.
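The effect is easy to demonstrate; the hostname recipe-demo below is arbitrary:

    $ docker container run --rm --hostname recipe-demo alpine hostname
    recipe-demo
    $ hostname                                  # the host keeps its own name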

The user namespace

With user namespace support, we can have users with a nonzero ID on the host that have an ID of zero inside the container. This is because the user namespace allows user and group IDs to be mapped per namespace.
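In Docker, this mapping is opt-in: you enable it with the daemon's userns-remap option, and the mapping ranges come from /etc/subuid and /etc/subgid. The commands below are only a sketch of how you would inspect this; enabling remapping requires a daemon restart and affects all containers:

    $ docker container run --rm alpine id        # root (uid 0) inside the container
    $ cat /etc/subuid                            # subordinate uid ranges available for remapping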

There are ways to share namespaces between the host and container, and other containers as well. We'll see how to do this in subsequent chapters.
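As a preview, the run command already exposes flags for this; the container name target below is arbitrary:

    $ docker container run -d --name target alpine sleep 1000
    $ docker container run --rm --net container:target alpine ip addr   # share target's net namespace
    $ docker container run --rm --pid host alpine ps                    # share the host's PID namespace
    $ docker container rm -f target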

Cgroups

Control groups (cgroups) provide resource limitations and accounting for containers. The following quote is from the Linux Kernel documentation:

"Control Groups provide a mechanism for aggregating/partitioning sets of tasks, and all their future children, into hierarchical groups with specialized behaviour."

In simple terms, they can be compared to the ulimit shell command or the setrlimit system call. Instead of setting a resource limit on a single process, cgroups allow you to limit resources for a group of processes.
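Docker exposes cgroups through flags on docker container run. The following sketch assumes a cgroup v1 host with the default cgroupfs driver (the usual setup for the Docker and Ubuntu versions used in this book); the container name limited is arbitrary:

    $ docker container run -d --name limited -m 256m --cpus 0.5 alpine sleep 1000
    $ cat /sys/fs/cgroup/memory/docker/$(docker container inspect -f '{{.Id}}' limited)/memory.limit_in_bytes
    268435456
    $ docker container rm -f limited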

Control groups are split into different subsystems, such as CPU, CPU sets, memory, block I/O, and so on. Each subsystem can be used independently, or can be grouped with others. The features that cgroups provide are as follows:

  • Resource limiting: For example, one cgroup can be bound to specific CPUs, so that all processes in that group would run on given CPUs only
  • Prioritization: Some groups may get a larger share of CPUs
  • Accounting: You can measure the resource usage of different subsystems for billing
  • Control: You can freeze and restart groups

Some of the subsystems that can be managed by cgroups are as follows:

  • blkio: Sets limits on I/O access to and from block devices, such as disks, SSDs, and so on
  • cpu: Limits access to the CPU
  • cpuacct: Generates reports on CPU resource usage
  • cpuset: Assigns CPUs on a multicore system to tasks in a cgroup
  • devices: Controls access to devices for a set of tasks in a cgroup
  • freezer: Suspends or resumes tasks in a cgroup
  • memory: Sets limits on memory use by tasks in a cgroup

There are multiple ways to work with cgroups. Two of the most popular are accessing the cgroup virtual filesystem manually and using the libcgroup tools. To use them on Linux, run the following command to install the required packages on Ubuntu or Debian:

$ sudo apt-get install cgroup-tools

To install the required packages on CentOS, Fedora, or Red Hat, use the following code:

$ sudo yum install libcgroup libcgroup-tools
These steps are not possible on Docker for Mac and Windows, because you can't install the required packages on those versions of Docker.

Once installed, you can get the list of subsystems and their mount point in the pseudo filesystem with the following command:

$ lssubsys -M

Although we haven't looked at the actual commands yet, let's assume that we are running a few containers and want to get the cgroup entries for one of them. To do that, we first need to get the container ID and then use the lscgroup command to list the container's cgroup entries, as sketched next.
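A minimal sketch, assuming a running container named demo (as created in the Pulling an image and running a container recipe); the grep works because Docker names the cgroup directories after the full container ID:

    $ CONTAINER_ID=$(docker container inspect -f '{{.Id}}' demo)
    $ lscgroup | grep "$CONTAINER_ID"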

The union filesystem

The union filesystem allows the files and directories of separate filesystems, known as layers, to be transparently overlaid to create a new virtual filesystem. When starting a container, Docker overlays all the layers attached to an image and creates a read-only filesystem. On top of that, Docker creates a read/write layer that is used by the container's runtime environment. Read the Pulling an image and running a container recipe of this chapter for more details. Docker can use several union filesystem variants and related storage drivers, including AUFS, Btrfs, ZFS, overlay, overlay2, and DeviceMapper.

Docker also has a virtual file system (VFS) storage driver. A VFS doesn't support copy-on-write (COW) and is not a union filesystem. This means that each layer is a directory on the disk, and each time a new layer is created, it requires a deep copy of its parent layer. For these reasons, it has lower performance and requires more disk space, but it is a robust and stable option that works in every environment.
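You can check which storage driver your installation is using; the output shown below is just the typical default on a recent Ubuntu install, and the /var/lib/docker path assumes the default data root:

    $ docker info --format '{{.Driver}}'
    overlay2
    $ sudo ls /var/lib/docker/overlay2 | head -n 3   # roughly one directory per layer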

The container format

Docker Engine combines namespaces, control groups, and UnionFS into a wrapper called a container format. In 2015, Docker donated its container format and runtime to an organization called the Open Container Initiative (OCI). The OCI is a lightweight, open-governance structure formed under the Linux Foundation by Docker and other industry leaders. The purpose of the OCI is to create open industry standards around container formats and runtimes. There are currently two specifications: the Runtime Specification and the Image Specification.

The Runtime Specification outlines how to run an OCI runtime filesystem bundle. Docker donated runC (https://github.com/opencontainers/runc), its OCI-compliant runtime, to the OCI to serve as the reference implementation.

The OCI image format contains the information needed to launch the application on the target platform. The specification defines how to create the OCI image, and what the desired output would look like. The output would consist of an image manifest, a filesystem (layer) serialization, and the image configuration. Docker donated its Docker V2 image format to the OCI to form the basis of the OCI image specification.

There are currently two container engines that support the OCI Runtime and Image Specifications: Docker and rkt.

Verifying requirements for Docker installation

Docker is supported on many Linux platforms, such as RHEL, Ubuntu, Fedora, CentOS, Debian, and Arch Linux, among others. It is also supported on many cloud platforms, such as Amazon Web Services, DigitalOcean, Microsoft Azure, and Google Cloud. Docker has also released desktop applications for Microsoft Windows and macOS that allow you to easily get Docker up and running directly on your local machine.

In this recipe, we will verify the requirements for Docker installation. We will look at a system with an Ubuntu 18.04 LTS installation, though the same steps should work on other Linux flavors as well.

Getting ready

Log in as a root user on the system that has Ubuntu 18.04 installed.

How to do it...

Perform the following steps:

  1. Docker is not supported on 32-bit architectures. To check the architecture of your system, run the following command:

        $ uname -i
        x86_64

  2. Docker is supported on kernel 3.8 or later. It has also been backported to some 2.6 kernels, such as the one shipped with RHEL 6.5 and above. To check your kernel version, run the following command:

        $ uname -r
        4.15.0-29-generic

  3. The running kernel should support an appropriate storage backend. Some of the options for such a backend are VFS, DeviceMapper, AUFS, Btrfs, ZFS, and OverlayFS.

For Ubuntu 18.04, the default storage driver is overlay2. Another popular one is DeviceMapper, which uses the device-mapper thin provisioning module to implement layers; it should be installed by default on the majority of Linux platforms. To check for device-mapper, you can run the following command:

        $ grep device-mapper /proc/devices
        253 device-mapper

On most distributions, AUFS requires a modified kernel.
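If you want to confirm that the overlay filesystem itself is available on your kernel, one simple check is to look for it in /proc/filesystems and, if necessary, load the module:

    $ grep overlay /proc/filesystems        # prints 'nodev   overlay' when the filesystem is available
    $ sudo modprobe overlay                 # load the module first if the grep printed nothing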

  4. Support for cgroups and namespaces has been in the kernel for some time, and should be enabled by default. To check for their presence, you can look at the corresponding config file of the kernel you are running. For example, on Ubuntu, you can do something like the following:

        $ grep -i namespaces /boot/config-4.15.0-29-generic
        CONFIG_NAMESPACES=y

        $ grep -i cgroups /boot/config-4.15.0-29-generic
        CONFIG_CGROUPS=y

The name of the config file usually depends on your kernel version, so your system might have a different filename; if so, change the command accordingly.

How it works...

Docker requires that the host system meets a basic set of requirements in order for it to run correctly. By running the preceding commands, we were able to confirm that our system meets those requirements.

See also

Installing Docker on Ubuntu

There are a few different versions of Ubuntu that are available. In this recipe, we will be installing Docker on Ubuntu 18.04, which is the latest LTS version as of writing. These same steps should also work with Ubuntu 16.04.

Getting ready

Check for the prerequisites mentioned in the previous recipe.

Uninstall any older versions of Docker. Previous versions of the Docker package are called docker, docker.io, or docker-engine. If these are installed, then we need to uninstall them, or else they might cause problems:

    $ sudo apt-get remove docker docker-engine docker.io

How to do it...

Go through the following steps:

  1. Update the apt package index:

        $ sudo apt-get update

  2. Install the packages that allow apt to use a repository over HTTPS:

        $ sudo apt-get install \
            apt-transport-https \
            ca-certificates \
            curl \
            software-properties-common

  3. Add Docker's official GPG key:

        $ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
        OK

Verify that we have the correct key installed:

        $ sudo apt-key fingerprint 0EBFCD88
        pub   rsa4096 2017-02-22 [SCEA]
              9DC8 5822 9FC7 DD38 854A E2D8 8D81 803C 0EBF CD88
        uid           [ unknown] Docker Release (CE deb) <docker@docker.com>
        sub   rsa4096 2017-02-22 [S]

  4. Add the Docker apt repository, using the stable channel:

        $ sudo add-apt-repository \
            "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
            $(lsb_release -cs) \
            stable"

If you want more frequent updates and you don't mind a few bugs, you can use the test channel instead. To do so, change stable to test in the preceding command.

  5. Update the apt package index again so that it includes the Docker repository we just added:

        $ sudo apt-get update

  6. Install the latest version of Docker CE:

        $ sudo apt-get install docker-ce

  7. Verify that the installation worked:

        $ sudo docker container run hello-world

How it works...

The preceding commands install Docker on Ubuntu along with all the packages it requires.

There's more...

The Docker daemon reads its default configuration from the /etc/docker directory (most commonly from /etc/docker/daemon.json) when it starts. Here are some basic operations; a minimal daemon.json sketch follows the list:

  • To start the service, enter the following:
    $ sudo systemctl start docker
  • To verify the installation, enter the following:
    $ docker info
  • To upgrade the package, enter the following:
    $ sudo apt-get update && sudo apt-get install --only-upgrade docker-ce
  • To enable the start of the service at boot time, enter the following:
    $ sudo systemctl enable docker
  • To stop the service, enter the following:
    $ sudo systemctl stop docker
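The daemon.json file does not exist by default; if you create one, the daemon picks it up on the next restart. The following is only an illustrative sketch (the logging options shown are optional and the values are arbitrary), not a required configuration:

    $ cat /etc/docker/daemon.json
    {
      "log-driver": "json-file",
      "log-opts": {
        "max-size": "10m",
        "max-file": "3"
      }
    }
    $ sudo systemctl restart docker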

See also

Installing Docker on CentOS

Another popular Linux distribution is CentOS, which is a free, enterprise-class distribution that is compatible with Red Hat Enterprise Linux (RHEL). Go through the following easy recipe to install Docker on CentOS 7.x.

Getting ready

The centos-extras repository must be enabled. It usually is by default, but if you have disabled it, please enable it again.

Previously, the Docker package had a different name: it was called docker or docker-engine; it is now called docker-ce. We need to remove any previous Docker versions in order to prevent conflicts:

$ sudo yum remove docker \
docker-client \
docker-client-latest \
docker-common \
docker-latest \
docker-latest-logrotate \
docker-logrotate \
docker-selinux \
docker-engine-selinux \
docker-engine
It is OK if yum reports that none of these packages are installed.

How to do it...

Go through the following steps:

  1. Install the required packages:

        $ sudo yum install -y yum-utils \
            device-mapper-persistent-data \
            lvm2

  2. Set up the Docker yum repository, using the stable channel:

        $ sudo yum-config-manager \
            --add-repo \
            https://download.docker.com/linux/centos/docker-ce.repo

  3. Optional: Enable the test channel for access to the nightly builds:

        $ sudo yum-config-manager --enable docker-ce-test

  4. Install the latest version of docker-ce:

        $ sudo yum install docker-ce

  5. If prompted to accept the GPG key, verify that it matches 060A 61C5 1B55 8A7F 742B 77AA C52F EB6B 621E 9F35. If it does, then accept it:

        Retrieving key from https://download.docker.com/linux/centos/gpg
        Importing GPG key 0x621E9F35:
         Userid     : "Docker Release (CE rpm) <docker@docker.com>"
         Fingerprint: 060a 61c5 1b55 8a7f 742b 77aa c52f eb6b 621e 9f35
         From       : https://download.docker.com/linux/centos/gpg
        Is this ok [y/N]: y

  6. Start the Docker daemon:

        $ sudo systemctl start docker

  7. Verify that the installation worked:

        $ docker container run hello-world

How it works...

The preceding recipe installs Docker on CentOS and all the packages required by it.

There's more...

The Docker daemon reads its default configuration from the /etc/docker directory (most commonly from /etc/docker/daemon.json) when it starts; see the daemon.json sketch in the Installing Docker on Ubuntu recipe. Here are some basic operations:

  • To start the service, enter the following:
    $ sudo systemctl start docker
  • To verify the installation, enter the following:
    $ docker info
  • To update the package, enter the following:
    $ sudo yum -y upgrade
  • To enable the service start at boot time, enter the following:
    $ sudo systemctl enable docker
  • To uninstall Docker, enter the following:
        $ sudo yum remove docker-ce
  • To stop the service, enter the following:
        $ sudo systemctl stop docker

See also

Installing Docker on Linux with an automated script

In the previous two recipes, we went through the different steps required to install Docker on Ubuntu and CentOS. Those steps are fine when you are only installing it on a host or two, but what if you need to install it on a hundred? In that case, you would want something a little more automated to speed up the process. This recipe shows you how to install Docker on different Linux flavors using an install script that is provided by Docker.

Getting ready

Like all scripts that you download off the internet, the first thing you should do is examine the script and make sure you know what it is doing before you use it. To do this, go through the following steps:

  1. Visit https://get.docker.com in your favorite web browser to review the script, and make sure you are comfortable with what it is doing. If in doubt, don't use it.
  2. The script needs to be run as root or with sudo privileges.
  3. If Docker has already been installed on the host, it needs to be removed before running the script.

The script currently works with the following flavors of Linux: CentOS, Fedora, Debian, Ubuntu, and Raspbian.

How to do it...

To use the script, go through the following steps:

  1. Download the script to the host system:

        $ curl -fsSL get.docker.com -o get-docker.sh

  2. Run the script:

        $ sudo sh get-docker.sh

How it works...

The preceding recipe used an automated script to install Docker on Linux.

There's more...

In order to upgrade Docker, you will need to use the package manager on your host. Rerunning the script can cause issues if it attempts to re-add repositories that were already added. See the previous recipes to learn how to upgrade Docker on CentOS and Ubuntu using their respective package managers.

Installing Docker for Windows

Docker for Windows is a native application that is deeply integrated with Hyper-V virtualization and with Windows networking and filesystems. It is a full-featured development environment that can be used for building, debugging, and testing Docker apps on a Windows PC. It also works well with VPNs and proxies, which makes it easier to use in a corporate environment.

Docker for Windows supports both Windows and Linux containers out of the box, and it is easy to switch between the two to build your multiplatform applications. It comes with the Docker CLI client, Docker Compose, Docker Machine, and Docker Notary.

Recent releases have also added Kubernetes support so that you can easily create a full Kubernetes environment on your machine with just the click of a button.

Getting ready

Docker for Windows has the following system requirements:

  • 64-bit Windows 10 Pro, Enterprise, or Education (1607 Anniversary Update, build 14393 or later)
  • Virtualization must be enabled in the BIOS, and the CPU must support second-level address translation (SLAT)
  • At least 4 GB of RAM
If your system does not satisfy these requirements, fear not—all is not lost. You can install Docker Toolbox (https://docs.docker.com/toolbox/overview/), which uses Oracle VirtualBox instead of Hyper-V. It isn't as good, but it is better than nothing.

How to do it...

To install Docker for Windows, go through the following steps:

  1. Download Docker for Windows from the Docker Store at https://store.docker.com/editions/community/docker-ce-desktop-windows. You will need to log in in order to download the installer. If you do not have a Docker account, you can create one at https://store.docker.com/signup.
  2. Double-click the installation file that you downloaded from the store. It should be called something like Docker for Windows Installer.exe.

Once the installation is complete, Docker will start automatically. You will notice a little whale icon in the notification area of your task bar. If you need to change any settings, right-click on the icon and select Settings.

  3. Open up a command-line terminal and check to make sure that the installation is working:

        $ docker container run hello-world

How it works...

The preceding recipe installs a Docker development environment on your Windows machine.

There's more...

Now that you have Docker for Windows installed, check out the following tips to get the most out of your installation:

  • Docker for Windows supports both Windows and Linux containers. If you want to switch, you just need to right-click on the whale icon, select Switch to Windows containers..., and then click the Switch button:

To switch back, do the same thing, except this time, select Switch to Linux containers....

  • Docker for Windows will automatically check for new updates and let you know when a new version is available to install. If you agree to upgrade, it will download the new version and install it for you.
  • Kubernetes doesn't run by default. If you want to turn it on, you will need to right-click on the Docker whale icon in your task bar, then select Settings. Inside the Settings menu, there is a Kubernetes tab. Click on the tab, and then click the Enable Kubernetes option and hit the Apply button:

See also

Installing Docker for Mac

Docker for Mac is the fastest and most reliable way to run Docker on a Mac. It installs all of the tools required to set up a complete Docker development environment on your Mac. It includes the Docker command line, Docker Compose, and Docker Notary. It also works well with VPNs and proxies to make it easier when used in a corporate environment.

Recent releases have also added Kubernetes support so that you can easily create a full Kubernetes environment on your machine with just the click of a button.

Getting ready

Docker for Mac has the following system requirements:

  • macOS El Capitan 10.11, or a newer macOS release
  • At least 4 GB of RAM
  • The Mac hardware must be a 2010 or newer model, with Intel's hardware support for Memory Management Unit (MMU) virtualization, including Extended Page Tables (EPT) and unrestricted mode. To see whether your machine supports this, run the following command in a terminal:
        $ sysctl kern.hv_support
kern.hv_support: 1
If your system does not satisfy these requirements, fear not—all is not lost. You can install Docker Toolbox (https://docs.docker.com/toolbox/overview/), which uses Oracle VirtualBox instead of HyperKit. It isn't as good, but it is better than nothing.

How to do it...

To install Docker for Mac, go through the following steps:

  1. Download Docker for Mac from the Docker Store at https://store.docker.com/editions/community/docker-ce-desktop-mac. You will need to log in in order to download the installer. If you do not have a Docker account, you can create one at https://store.docker.com/signup.
  2. Open the installation file that you downloaded from the store. It should be called something like Docker.dmg.
  3. Drag and drop the whale icon into the Applications folder:
  4. Double-click the Docker.app icon in the Applications folder to start Docker, as shown in the following screenshot:
  5. You will be prompted to authorize Docker.app with your system password. This is normal: Docker.app needs privileged access to install some of its components. Click OK and enter your password so it can finish installing:
  6. When Docker has finished installing, a little whale icon will show up in the status menu at the top right of your screen, as shown in the following screenshot:
  7. If you click on the whale, you can access the application preferences and other options.
  8. Select the About Docker option to verify that you have the latest version.
  9. Check to make sure that it is installed and working. Open up a terminal window and type the following:

        $ docker container run hello-world

How it works...

The preceding recipe will download and install a Docker development environment on your Mac.

There's more...

Now that you have Docker for Mac installed, here are a few more tips for getting started:

  • Docker for Mac will automatically check for new updates and let you know when a new version is available for you to install. If you agree to upgrade, it will do all the work, downloading the new version and installing it for you.
  • Kubernetes isn't running by default. If you want to turn it on, you will need to click on the Docker whale icon in your status menu, then select Preferences. Inside Preferences, there is a Kubernetes tab. Click on the tab, then select the Enable Kubernetes option, and hit the Apply button:

See also

Pulling an image and running a container

I am borrowing the following recipe from the next chapter to introduce some concepts. Don't worry if the recipe doesn't explain everything; we'll cover the topics in detail later in this chapter, or in the next few chapters. For now, let's pull an image and run it. We'll also get familiar with Docker architecture and its components in this recipe.

Getting ready

First, gain access to a system that has Docker installed.

How to do it...

To pull an image and run a container, go through the following steps:

  1. Pull an image by running the following command:

        $ docker image pull alpine

  2. List the existing images by using the following command:

        $ docker image ls

  3. Create a container from the pulled image, and then list the running containers (a short follow-up sketch for working with this container appears after this list):

        $ docker container run -id --name demo alpine ash
        $ docker container ls
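As a convenience, once the demo container is running you can get a shell inside it and clean it up afterwards; these commands go slightly beyond the recipe itself:

    $ docker container exec -it demo ash     # open an interactive shell inside the running container
    $ docker container stop demo             # stop it when you are done
    $ docker container rm demo               # and remove it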

How it works...

Docker has a client-server architecture. The Docker client and the daemon can reside on the same host, and the client communicates with either a local or a remote Docker daemon over a socket or a RESTful API. The Docker daemon builds, runs, and distributes containers. As shown in the following diagram, the Docker client sends a command to the Docker daemon running on the host machine, and the daemon connects to a public or local registry to get the images requested by the client:

So in our case, the Docker client sends a request to the daemon running on the local system, which then connects to the public Docker registry and downloads the image. Once it is downloaded, we can run it.
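You can see both halves of this architecture from the command line. A minimal check (the exact version numbers will of course differ on your machine):

    $ docker version          # prints a Client section and a Server (daemon) section
    $ docker system info      # details about the daemon, storage driver, and registry configuration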

There's more...

Let's explore some keywords that we encountered earlier in this recipe:

  • Images: Docker images are read-only templates from which containers are created at runtime. They are built on the idea of a base image with layers resting on top of it. For example, we can have a base image of Alpine or Ubuntu, and then install packages or make modifications over the base image to create a new layer; the base image plus the new layer can be treated as a new image. For example, in the following figure, Debian is the base image, and Emacs and Apache are the two layers added on top of it. Images are highly portable and can be shared easily (a sketch of how to inspect an image's layers appears after this list):

Layers are transparently laid on top of the base image to create a single coherent filesystem.

  • Registries: A registry holds Docker images. It can be public or private, depending on the location from which you can download or upload images. The public Docker registry is called Docker Hub, which we will cover later.
  • Index: An index manages user accounts, permissions, searches, tagging, and all that nice stuff that's in the public web interface of the Docker registry.
  • Containers: Containers are created from images by combining the base image and the layers on top of it. They contain everything that is needed to run an application. As shown in the preceding diagram, a temporary read/write layer is also added when the container starts; it is discarded when the container is stopped and deleted, unless it is committed, in which case it becomes another layer.
  • Repository: A repository is a collection of related images tracked by GUIDs; different versions of an image can be managed with multiple tags, which are saved with different GUIDs.
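To look at the layers that make up an image, one option is the image history command; the alpine image is used here only because it was pulled earlier in this recipe:

    $ docker image history alpine     # one row per layer, newest first
    $ docker image inspect alpine     # full metadata, including the layer digests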

See also

Adding a nonroot user to administer Docker

For ease of use, we can allow a nonroot user to administer Docker by adding them to the docker group. This is not required when using Docker for Mac or Docker for Windows.

Getting ready

To prepare to add a nonroot user to administer Docker, go through the following steps:

  1. Create the Docker group, if it is not there already:

    $ sudo groupadd docker

  2. Create the user to whom you want to give permissions to administer Docker:

    $ sudo useradd dockertest

How to do it...

Run the following command to add the newly created user to administer Docker:

    $ sudo usermod -aG docker dockertest

How it works...

The preceding command adds the user to the docker group, so that user will be able to perform all Docker operations. Keep in mind that membership of the docker group grants privileges equivalent to root, because the user can start containers that have full access to the host.
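To confirm that it works, switch to the new user and run a container; dockertest is the account created above, and the fresh login shell is needed so that the new group membership takes effect:

    $ sudo su - dockertest
    $ docker container run hello-world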

Finding help with the Docker command line

Docker commands are well documented, and can be referred to whenever needed. Lots of documentation is available online as well, but it might differ from the documentation for the Docker version you are running.

Getting ready

First, install Docker on your system.

How to do it...

  1. On a Linux-based system, you can use the man command to find help, as follows:

    $ man docker

  2. Subcommand-specific help can also be found with either of the following commands (the CLI's built-in help is sketched after this list):

    $ man docker ps
    $ man docker-ps
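The CLI itself also has built-in help, which works on every platform regardless of whether man pages are installed; a couple of representative examples:

    $ docker --help                     # top-level commands and global options
    $ docker container run --help       # options for a specific subcommand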

How it works...

The man command uses the man pages installed by the Docker package to provide help.

See also


Key benefits

  • Learn to manage containers efficiently with the help of real-world examples
  • Integrate orchestration tools such as Kubernetes for controlled deployments
  • Implement best practices for improving container efficiency and security

Description

Docker is an open source tool used for creating, deploying, and running applications using containers. With more than 100 self-contained tutorials, this book examines common pain points and best practices for developers building distributed applications with Docker. Each recipe in this book addresses a specific problem and offers a proven, best practice solution with insights into how it works, so that you can modify the code and configuration files to suit your needs. The Docker Cookbook begins by guiding you in setting up Docker in different environments and explains how to work with its containers and images. You’ll understand Docker orchestration, networking, security, and hosting platforms for effective collaboration and efficient deployment. The book also covers tips and tricks and new Docker features that support a range of other cloud offerings. By the end of this book, you’ll be able to package and deploy end-to-end distributed applications with Docker and be well-versed with best practice solutions for common development problems.

Who is this book for?

If you’re a developer, system administrator, or DevOps engineer looking to learn effective ways to build and manage distributed applications with Docker, this book is for you. You’ll need a basic understanding of Linux/Unix to understand the recipes covered.

What you will learn

  • Uncover the latest features of Docker 18.xx
  • Work with Docker images and containers
  • Explore container networking and data sharing
  • Get to grips with Docker APIs and language bindings
  • Understand the different PaaS solutions for Docker
  • Implement container orchestration using Docker Swarm and Kubernetes
  • Explore a variety of methods to debug and secure your Docker container
Product Details

Publication date : Aug 31, 2018
Length : 352 pages
Edition : 2nd
Language : English
ISBN-13 : 9781788626866
Vendor : Docker


Table of Contents

12 Chapters
1. Introduction and Installation
2. Working with Docker Containers
3. Working with Docker Images
4. Network and Data Management for Containers
5. Docker Use Cases
6. Docker APIs and SDKs
7. Docker Performance
8. Docker Orchestration and Hosting a Platform
9. Docker Security
10. Getting Help and Tips and Tricks
11. Docker on the Cloud
12. Other Books You May Enjoy

