Installing Docker
The hands-on exercises in this book will require that you have a working Docker host. To install Docker, we have included a script in this book’s GitHub repository, in the chapter1 directory, called install-docker.sh.
Today, you can install Docker on just about every hardware platform out there. Docker looks and behaves the same on every platform, which makes development easier for anyone building cross-platform applications. Because the commands and functionality are identical across platforms, developers do not need to learn a different container runtime for each target.
The following tables show Docker’s available platforms. As you can see, there are installations for multiple OSs, as well as multiple architectures:

| Desktop Platform | x86_64/amd64 | arm64 (Apple Silicon) |
| --- | --- | --- |
| Docker Desktop (Linux) | ✓ | |
| Docker Desktop (macOS) | ✓ | ✓ |
| Docker Desktop (Windows) | ✓ | |

| Server Platform | x86_64/amd64 | arm64/aarch64 | arm (32-bit) | ppc64le | s390x |
| --- | --- | --- | --- | --- | --- |
| CentOS | ✓ | ✓ | | ✓ | |
| Debian | ✓ | ✓ | ✓ | ✓ | |
| Fedora | ✓ | ✓ | | ✓ | |
| Raspberry Pi OS | | | ✓ | | |
| RHEL (s390x) | | | | | ✓ |
| SLES | | | | | ✓ |
| Ubuntu | ✓ | ✓ | ✓ | ✓ | ✓ |
Table 1.2: Available Docker platforms
Images that are created using one architecture cannot run on a different architecture. This means that you cannot create an image based on x86 hardware and expect that same image to run on your Raspberry Pi running an ARM processor. It’s also important to note that while you can run a Linux container on a Windows machine, you cannot run a Windows container on a Linux machine.
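If you are unsure which architecture a local image was built for, you can check it with docker image inspect; for example:

# Print the OS and CPU architecture the image was built for
docker image inspect --format '{{.Os}}/{{.Architecture}}' hello-world
# Example output: linux/amd64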
While images, by default, are not cross-architecture compatible, there are newer tools to create what’s known as a multi-platform image. A multi-platform image bundles variants for different architectures or processors under a single image name, rather than requiring separate images, such as one for NGINX on x86, another for ARM, and another for PowerPC. This helps simplify the management and deployment of containerized applications. Since a multi-platform image contains a variant for each architecture you include, the correct variant must be selected when the image is deployed. Luckily, the container runtime handles this for you, automatically selecting the correct architecture from the image manifest.
The use of multi-platform images provides portability, flexibility, and scalability for your containers across cloud platforms, edge deployments, and hybrid infrastructure. With the use of ARM-based servers growing in the industry and the heavy use of Raspberry Pi by people learning Kubernetes, cross-platform images will help make consuming containers quicker and easier.
For example, in 2020, Apple released the M1 chip, ending the era of Intel processors in Apple machines in favor of ARM processors. We’re not going to get into the details of the differences, only that the architectures are different and that this creates important challenges for container developers and users. Docker Desktop for Mac lets you use the same workflows you would use with a Docker installation on Linux, Windows, or x86 macOS. Docker will try to match the architecture of the underlying host when pulling or building images. On ARM-based systems, if you attempt to pull an image that does not have an ARM version, Docker will throw an error due to the architecture incompatibility. If you build an image on an ARM-based Mac, it will build an ARM version by default, which cannot run on x86 machines.
Multi-platform images can be complex to create. If you want additional details on creating multi-platform images, visit the Multi-platform images page on Docker’s website: https://docs.docker.com/build/building/multi-platform/.
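As a brief illustration of what that documentation describes, a multi-platform build with the buildx plugin might look like the following (the builder name, image tag, and platform list here are just examples):

# Create a builder instance that supports multi-platform builds and switch to it
docker buildx create --name multiarch --use
# Build the image for both amd64 and arm64, then push the multi-platform manifest to a registry
docker buildx build --platform linux/amd64,linux/arm64 -t myrepo/myapp:latest --push .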
The installation procedures that are used to install Docker vary between platforms. Luckily, Docker has documented many of them on their website: https://docs.docker.com/install/.
In this chapter, we will install Docker on an Ubuntu 22.04 system. If you do not have an Ubuntu machine to install on, you can still read about the installation steps, as each step will be explained and does not require that you have a running system to understand the process. If you have a different Linux installation, you can use the installation procedures outlined on Docker’s site at https://docs.docker.com/. Steps are provided for CentOS, Debian, Fedora, and Ubuntu, and there are generic steps for other Linux distributions.
Preparing to install Docker
Now that we have introduced Docker, the next step is to select an installation method. Docker’s installation steps vary not only between Linux distributions but also between versions of the same distribution. Our script is based on an Ubuntu 22.04 server, so it may not work on other versions of Ubuntu. You can install Docker using one of two methods:
- Add the Docker repositories to your host system
- Install using Docker scripts
The first option is considered the best since it allows for easy installation of, and updates to, the Docker engine. The second option is designed for installing Docker in testing and development environments and is not recommended for production deployments.
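For reference, the second method uses Docker’s convenience script, which boils the installation down to two commands (again, only recommended for test and development systems):

curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh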
Since the preferred method is to add Docker’s repository to our host, we will use that option.
Installing Docker on Ubuntu
Now that we have chosen the repository-based installation method, the next step is to install Docker.
We have provided a script in the chapter1 folder of the Git repository called install-docker.sh. When you execute the script, it will automatically install all of the necessary binaries required for Docker to run.
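Assuming you have already cloned the book’s repository, executing the script looks something like this:

cd chapter1
chmod +x install-docker.sh
./install-docker.sh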
To provide a brief summary of the script, it begins by modifying a specific value in the /etc/needrestart/needrestart.conf file. Ubuntu 22.04 changed how daemons are restarted after package updates: users may be prompted to manually select which system daemons to restart. To simplify the exercises in this book, we alter the restart value in the needrestart.conf file to “automatic” instead of prompting for each changed service.
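The value in question is the $nrconf{restart} setting in that file. A sketch of how a script can make this change (the exact command in install-docker.sh may differ):

# Change needrestart from interactive ('i') prompts to automatic ('a') restarts
sudo sed -i "s/^#\?\$nrconf{restart} = .*/\$nrconf{restart} = 'a';/" /etc/needrestart/needrestart.conf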
Next, we install a few utilities: vim, ca-certificates, curl, and GnuPG. The first three utilities are fairly common, but the last one, GnuPG, may be new to some readers and deserves some explanation. GnuPG, an acronym for GNU Privacy Guard, provides Ubuntu with a range of cryptographic capabilities, such as encryption, decryption, digital signatures, and key management.
In our Docker deployment, we need to add Docker’s GPG public key, which is one half of a cryptographic key pair that secures communication and maintains data integrity. GPG keys use asymmetric encryption, which involves two different, but related, keys known as a public key and a private key. These keys are generated together as a pair, but they serve different functions. The private key, which remains confidential, is used to generate the digital signatures on the downloaded files. The public key is publicly available and is used to verify digital signatures created by the private key.
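These steps closely mirror Docker’s documented installation instructions; the equivalent commands look like the following (the exact lines in install-docker.sh may differ slightly):

# Install the prerequisite utilities
sudo apt-get update
sudo apt-get install -y vim ca-certificates curl gnupg
# Create the keyring directory and download Docker's GPG public key into it
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg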
Next, we need to add Docker’s repository to our local repository list. When we add the repository to the list, we need to include the Docker certificate. The docker.gpg certificate was downloaded by the script from Docker’s site and stored on the local server under /etc/apt/keyrings/docker.gpg. When we add the repository to the repository list, we reference the key using the signed-by option in the /etc/apt/sources.list.d/docker.list file. The full repository entry is shown here:
deb [arch=amd64 signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu jammy stable
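One way to create that file, similar to what Docker’s own documentation suggests (this version hardcodes the amd64 architecture and the jammy release used by Ubuntu 22.04):

echo "deb [arch=amd64 signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu jammy stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null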
By including the Docker repository in our local apt repository list, we gain the ability to install the Docker binaries effortlessly. This process entails using a straightforward apt-get install command, which will install the five essential Docker packages: docker-ce, docker-ce-cli, containerd.io, docker-buildx-plugin, and docker-compose-plugin. As previously stated, all of these files are signed with Docker’s GPG key. Thanks to the inclusion of Docker’s key on our server, we can be confident that the files are safe and originate from a reliable source.
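The installation step itself is a single apt-get command:

sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin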
Once Docker is successfully installed, the next step involves enabling and configuring the Docker daemon to start automatically during system boot using the systemctl command. This follows the standard procedure applied to most system daemons installed on Linux servers.
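With systemd, this is a single command that starts the daemon immediately and enables it at boot:

sudo systemctl enable --now docker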
Rather than going over each line of code in each script, we have included comments in the scripts to help you understand what each command and step is doing. Where it may help with certain topics, we will include some sections of code in the chapters for reference.
After installing Docker, let’s get some configuration out of the way. First, you will rarely execute commands as root in the real world, so we need to grant your user permission to use Docker.
Granting Docker permissions
In a default installation, Docker requires root access, so you will need to run all Docker commands as root. Rather than using sudo with every Docker command, you can add your user account to a group on the server that provides Docker access without requiring sudo for every command.
If you are logged on as a standard user and try to run a Docker command, you will receive an error:
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get http://%2Fvar%2Frun%2Fdocker.sock/v1.40/images/json: dial unix /var/run/docker.sock: connect: permission denied
To allow your user, or any other user you may want to add, to execute Docker commands, you need to add the users to a new group called docker that was created during the installation of Docker. The following is an example command you can use to add the currently logged-on user to the group:
sudo usermod -aG docker $USER
To apply the new group membership, you can either log off and log back into the Docker host, or activate the group change using the newgrp command:
newgrp docker
Now, let’s test that Docker is working by running the standard hello-world image (note that we do not require sudo to run the Docker command):
docker run hello-world
You should see the following output, which verifies that your user has access to Docker:
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
2db29710123e: Pull complete
Digest: sha256:37a0b92b08d4919615c3ee023f7ddb068d12b8387475d64c622ac30f45c29c51
Status: Downloaded newer image for hello-world:latest
Hello from Docker!
This message shows that your installation is working correctly – congratulations!
To generate this message, Docker took the following steps:
- The Docker client contacted the Docker daemon.
- The Docker daemon pulled the hello-world image from Docker Hub (amd64).
- The Docker daemon created a new container from the image that runs the executable that produces the output you are currently reading.
- The Docker daemon streamed that output to the Docker client, which sent it to your terminal.
To try something more ambitious, you can run an Ubuntu container with the following:
$ docker run -it ubuntu bash
For more examples and ideas, visit https://docs.docker.com/get-started/.
Now that we’ve granted our user permission to use Docker, we can start unlocking the most common Docker commands by learning how to use the Docker CLI.