Mastering Ubuntu Server, Fourth Edition
Explore the versatile, powerful Linux Server distribution Ubuntu 22.04 with this comprehensive guide

Author: Jay LaCroix
Publisher: Packt | Published: Sep 2022 | ISBN-13: 9781803234243 | Paperback, 584 pages


Managing Docker containers

Now that Docker is installed and running, let’s take it for a test drive. Installing Docker gives us the docker command, which has various sub-commands for performing different functions with containers. First, let’s try out docker search:

docker search ubuntu

With Docker, containers are created from images. There are many pre-existing container images we can use, or we can build our own. The docker search command allows us to search for a container image that already exists and has been made available to us. Once we’ve chosen an image, we can download it locally and create container instances from it.

The ability of administrators to search for (and download) an existing container is just one of many great features Docker offers us. Although we can definitely build our own container images (and we will do so, right here in this chapter), sometimes it might make sense to use a pre-existing container image, rather than create a new one from scratch.

For example, you can install an NGINX container image, simply named nginx. This is an official container image, so it should be trustworthy. You can tell that an image is official by the label DOCKER OFFICIAL IMAGE shown when you look it up on the Docker Hub website at https://hub.docker.com. If we wanted to deploy a container running NGINX, using the official image would save us a lot of time compared to creating one from scratch. After all, why reinvent the wheel if you don’t have to?

However, even if a container image comes from a trustworthy source, you should still audit it. With the NGINX example, we can be fairly confident that the image is safe and doesn’t contain anything unwanted, such as malware. However, there’s no such thing as 100% trustworthy when it comes to security, so we should audit images anyway.

But how does this work? The docker search command will search Docker Hub, which is an online repository that hosts containers for others to download and utilize. You could search for containers based on other applications, or even other distributions such as Fedora or AlmaLinux, if you wanted to experiment. The command will return a list of Docker images available that meet your search criteria.
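As a quick sketch (assuming network access to Docker Hub), the search can also be narrowed down with options such as --filter and --limit:

```shell
# Search Docker Hub for Ubuntu images, showing only official images.
# Output columns include NAME, DESCRIPTION, STARS, and OFFICIAL.
docker search --filter is-official=true ubuntu

# Cap the number of results returned (25 is the default).
docker search --limit 5 ubuntu
```

Filtering on is-official=true is a handy way to surface the trustworthy images mentioned earlier without scrolling through community results.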

So what do we do with these images? An image in Docker is its closest equivalent to a VM or hardware image. It’s a snapshot that contains the filesystem of a particular operating system or Linux distribution, along with some changes the author included to make it perform a specific task. This image can then be downloaded and customized to suit your purposes. You can choose to upload your customized image back to Docker Hub if you would like to contribute upstream. Every image you download will be stored on your machine so that you won’t have to re-download it every time you wish to create a new container.

To pull down a Docker image for our use, we can use the docker pull command, along with one of the image names we saw in the output of our search command:

docker pull ubuntu

With the preceding command, we’re pulling down the latest Ubuntu container image available on Docker Hub. The image will now be stored locally, and we’ll be able to create new containers from it. The process will look similar to the following screenshot:

Figure 17.1: Downloading an Ubuntu container image
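An image name can include an explicit tag after a colon; omitting it defaults to a tag named latest. As a minimal sketch, pinning a specific version tag makes your deployments more reproducible:

```shell
# Pull a specific Ubuntu release rather than whatever "latest" currently points to.
docker pull ubuntu:22.04
```

You can then reference ubuntu:22.04 anywhere you'd otherwise use ubuntu, and the image won't silently change underneath you when a new release comes out.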

If you’re curious as to which images you have saved locally, you can execute docker images to get a list of the Docker container images you have stored on your server:

docker images

The output will look similar to this:

Figure 17.2: Listing installed Docker images

Notice the IMAGE ID in the output. If for some reason you want to remove an image, you can do so with the docker rmi command, passing the ID as an argument to tell the command what to delete. The syntax would look similar to this if I were removing the image with the ID shown in the screenshot:

docker rmi d2e4e1f51132

Once you have a container image downloaded to your server, you can create a new container from it by running the docker run command, followed by the name of your image and an application within the image to run. An application run from within a Docker container is known as an ENTRYPOINT, which is just a fancy term to describe an application a particular container is configured to run. You’re not limited to the ENTRYPOINT though, and not all containers actually have an ENTRYPOINT. You can use any command in the container that you would normally be able to run in that distribution. In the case of the Ubuntu container image we downloaded earlier, we can run bash with the following command so that we can get a prompt and enter any command(s) we wish:

docker run -it ubuntu /bin/bash

Once you run that command, you’re now interacting with a shell prompt from within your container. From here, you can run commands you would normally run within a real Ubuntu machine, such as installing new packages, changing configuration files, and more. Go ahead and play around with the container, and then we’ll continue with a bit more theory on how this is actually working.

There are some potentially confusing aspects of Docker we should get out of the way before we continue with additional examples. The thing that’s most likely to confuse newcomers is how containers are created and destroyed. When you execute the docker run command against an image you’ve downloaded, you’re actually creating a container. The image you downloaded with the docker pull command wasn’t a container itself; it becomes a container when you run an instance of it. When the command being run inside the container finishes, the container stops. Therefore, if you were to run /bin/bash in a container and install a bunch of packages, those changes exist only in that container instance; they are not written back to the image, so a fresh container created from the same image won’t have them.
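To see this lifecycle for yourself, the following sketch runs a short-lived command in a container and then inspects what's left behind:

```shell
# Create a container that runs a single command; it stops as soon as echo exits.
docker run ubuntu /bin/echo "hello from a container"

# The container no longer shows up as running...
docker ps

# ...but it still exists in the stopped state until you remove it.
docker ps -a

# Alternatively, --rm removes the container automatically when its command finishes.
docker run --rm ubuntu /bin/echo "this container cleans up after itself"
```

The --rm option is convenient for throwaway experiments, since otherwise every docker run leaves a stopped container behind.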

You can think of a Docker image as a “blueprint” for a container that can be used to create running containers. Every container you run has a container ID that differentiates it from others. If you want to remove a persistent container, for example, you would need to reference this ID with the docker rm command. This is very similar to the docker rmi command that’s used to remove container images.

To see the container ID for yourself, you’ll first need to exit the container if you’re currently attached to one. There are two ways of doing so: you can press Ctrl + d, or type exit and press Enter. When you exit the container this way, the process it was running ends and the container stops, so when you run the docker ps command (which is the command you’ll use any time you want a list of containers on your system), you won’t see it listed. Instead, add the -a option to see all containers, even those that have been stopped.

You’re probably wondering, then, how to exit a container and not have it go away. To do so, while you’re attached to a container, press Ctrl + p and then press q (don’t let go of the Ctrl key while you press these two letters). This will drop you out of the container, and when you run the docker ps command (even without the -a option), you’ll see that it’s still running.

The docker ps command deserves some attention. The output will give you some very useful information about the containers on your server, including the CONTAINER ID that was mentioned earlier. In addition, the output will contain the IMAGE it was created from, the COMMAND being run when the container was CREATED, and its STATUS, as well as any PORTS you may have forwarded. The output will also display randomly generated names for each container, which are usually quite comical. As I was going through the process of creating containers while writing this section, the code names for my containers were tender_cori, serene_mcnulty, and high_goldwasser. This is just one of the many quirks of Docker, and some of these can be quite humorous.

The important output of the docker ps -a command is the CONTAINER ID, the COMMAND, and the STATUS. The ID, which we already discussed, allows you to reference a specific container to enable you to run commands against it. COMMAND lets you know what command was being run. In our example, we executed /bin/bash when we started our containers.

If we have any containers that were stopped, we can resume a container with the docker start command, giving it a container ID as an argument. Your command will end up looking similar to this:

docker start d2e4e1f51132

The output will simply return the ID of the container, and then drop you back to your shell prompt—not the shell prompt of your container, but that of your server. You might be wondering at this point, how do I get back to the shell prompt for the container? We can use docker attach for that:

docker attach d2e4e1f51132

The docker attach command is useful because it allows you to attach your shell to a container that is already running. Most of the time, containers are started automatically instead of starting with /bin/bash as we have done. If something were to go wrong, we may want to use something like docker attach to browse through the running container to look for error messages. It’s very useful.

Speaking of useful, another great command is docker info. This command will give you information about your implementation of Docker, such as letting you know how many containers you have on your system, which should be the number of times you’ve run the docker run command unless you cleaned up the previously run containers with docker rm. Feel free to take a look at its output and see what you can learn from it.
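If your experiments have left a pile of stopped containers behind, a couple of commands can help tidy up. A sketch (docker container prune asks for confirmation before deleting anything):

```shell
# Summarize the Docker installation: container and image counts, storage driver, etc.
docker info

# Remove all stopped containers in one step (prompts for confirmation).
docker container prune

# Or remove a single stopped container by ID (set CONTAINER_ID from docker ps -a).
docker rm "$CONTAINER_ID"
```

Running docker info before and after a prune is a quick way to confirm how many containers were cleaned up.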

Getting deeper into the subject of containers, it’s important to understand what a Docker container is and what it isn’t. A container is not a service running in the background, at least not inherently. A container is a collection of namespaces, such as a namespace for its filesystem or users. As discussed earlier in this chapter, containers are isolated from the rest of the server by utilizing technology within the Linux kernel. When you disconnect without a process running within the container, there’s no reason for it to run, since its namespace is empty. Thus, it stops. If you’d like to run a container in a way that is similar to a service (it keeps running in the background), you would want to run the container in detached mode. Basically, this is a way of telling your container to run this process and to not stop running it until you tell it to. Here’s an example of creating a container and running it in detached mode:

docker run -dit ubuntu /bin/bash

After running the previous command, Docker will print a container ID, and then drop back to your command prompt. You can then see that the container is running with the docker ps command, so use docker attach along with the container ID to connect to it and run commands.

Normally, we use the -it options to create a container. This is what we used a few examples ago. The -i option triggers interactive mode, while the -t option gives us a pseudo-TTY. At the end of the command, we tell the container to run the Bash shell. The -d option runs the container in the background.
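The same options also have long-form equivalents, which can make scripts easier to read. A minimal sketch:

```shell
# Short form: detached, interactive, pseudo-TTY.
docker run -dit ubuntu /bin/bash

# Long form of the exact same command.
docker run --detach --interactive --tty ubuntu /bin/bash
```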

It may seem relatively useless to have another Bash shell running in the background that isn’t actually performing a task. But these are just simple examples to help you get the hang of Docker. A more common use case may be to run a specific application. In fact, you can even serve a website from a Docker container by installing and configuring Apache within the container, including a virtual host. The question then becomes: how do you access the container’s instance of Apache within a web browser? The answer is port redirection, which Docker also supports. Let’s give this a try.

First, let’s create a new container in detached mode. Let’s also redirect port 80 within the container to port 8080 on the host:

docker run -dit -p 8080:80 ubuntu /bin/bash

The command will output the new container’s ID. This ID will be much longer than you’re accustomed to seeing, because docker ps -a only shows shortened container IDs. You don’t need to use the entire container ID when you attach; a partial ID is enough, as long as it’s unique among the containers on your system:

docker attach dfb3e

Here, I’ve attached to a container with an ID that begins with dfb3e. This will connect my shell to a Bash shell within the container.

Let’s install Apache. We’ve done this before, but there are a few differences that you’ll see. First, if you simply run the following command to install the apache2 package as we would normally do, it may fail for one or two reasons:

sudo apt install apache2

There are two problems here. First, sudo isn’t included by default in the Ubuntu container image, so the shell won’t even recognize the sudo part of the command. When you run docker attach, you’re attached to the container as the root user anyway, so the lack of sudo isn’t actually an issue. Second, the repository index in the container may be out of date, if it’s present at all, which means apt within the container won’t be able to find the apache2 package. To solve this, we’ll first update the repository index:

apt update

Then, install apache2 using the following command:

apt install apache2

You may be asked to set your time zone or geographic location during the installation of packages. If so, go ahead and enter each prompt accordingly.

Now we have Apache installed in our container. We don’t need to worry about configuring the default sample web page or making it look nice. We just want to verify that it works. Let’s start the service:

/etc/init.d/apache2 start

After running that command, Apache should be running within the container.

The previous command is definitely not our normal way of starting services. Typically, we’d use a command like systemctl start apache2, but there’s no actual init system inside a container, so running systemctl commands will not work as they normally would. Always refer to any documentation that may exist for a container you’re attempting to run, regarding how to start an application it may contain.

Apache should be running within the container. Now, press Ctrl + p and then press q (don’t let go of the Ctrl key while you press these two letters) to exit the container, but allow it to keep running in the background. You should be able to visit the sample Apache web page for the container by navigating to localhost:8080 in your web browser. You should see the default It works! page of Apache.
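If you’re working on a server without a graphical browser, you can verify the same thing from the command line (assuming curl is installed on the host):

```shell
# Request the default page through the forwarded port on the host.
curl http://localhost:8080

# Or just check the HTTP status code; 200 means Apache answered.
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8080
```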

Congratulations, you’re officially running an application within a container:

Figure 17.3: The default Apache start page, running from within a container

As your Docker knowledge grows, you’ll want to look deeper into the concept of an ENTRYPOINT. An ENTRYPOINT is a preferred way of starting applications in a Docker container. In our examples so far, we’ve used an ENTRYPOINT of /bin/bash. While that’s perfectly valid, an ENTRYPOINT is generally a Bash script that is configured to run the desired application and is launched by the container.

Our Apache container is running happily in the background, responding to HTTP requests on port 8080 of the host. But what should we do with it at this point? We can create our own image from it, which will simplify deploying it later. To be fair, we’ve only installed Apache inside the container, so it’s not saving us that much work. In a real production environment, you may have a container that required quite a few commands to set up. With an image, all of that work is baked in, so we won’t have to repeat the setup commands each time we want to create a container. To create a container image, grab the container ID of a running container from the docker ps command. Once we have that, we can create a new image of the container with the docker commit command:

docker commit <Container ID> ubuntu/apache-server:1.0

That command will return us the ID of our new image. To view all the Docker images available on our machine, we can run the docker images command to have Docker return a list. You should see the original Ubuntu image we downloaded, along with the one we just created. We’ll first see a column for the repository the image came from; in our case, it is Ubuntu. Next, we see the tag. Our original Ubuntu image (the one we used docker pull to download) has a tag of latest. We didn’t specify that when we first downloaded it; it just defaulted to latest. In addition, we see an image ID for both, as well as the size.
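docker commit also accepts options for recording metadata with the new image, and docker tag can add additional names to an image after the fact. A sketch (the author string and tag names here are illustrative; set CONTAINER_ID from docker ps):

```shell
# Record an author and a message alongside the committed image.
docker commit -a "Your Name" -m "Ubuntu with Apache installed" "$CONTAINER_ID" ubuntu/apache-server:1.0

# Add another tag pointing at the same image, e.g. to mark it as the latest build.
docker tag ubuntu/apache-server:1.0 ubuntu/apache-server:latest
```

Recording a message with -m works much like a commit message in version control: it helps you remember later why a particular image exists.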

To create a new container from our new image, we just need to use docker run, but specify the tag and name of our new image. Note that we may already have a container listening on port 8080, so this command may fail if that container hasn’t been stopped:

docker run -dit -p 8080:80 ubuntu/apache-server:1.0 /bin/bash

Speaking of stopping a container, I should probably show you how to do that as well. As you can probably guess, the command is docker stop followed by a container ID:

docker stop <Container ID>

This will send the SIGTERM signal to the container, followed by SIGKILL if it doesn’t stop on its own after a delay.
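The delay before SIGKILL defaults to 10 seconds; you can adjust it with the -t option if your application needs longer to shut down cleanly (set CONTAINER_ID from docker ps):

```shell
# Give the container up to 30 seconds to exit gracefully before it is killed.
docker stop -t 30 "$CONTAINER_ID"
```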

Admittedly, the Apache container example was fairly simplistic, but it does the job as far as showing you a working example of a container that is actually somewhat useful. Before continuing on, think for a moment of all the use cases you can use Docker for in your organization. It may seem like a very simple concept (and it is), but it allows you to do some very powerful things. Perhaps you’ll want to try to containerize your organization’s intranet page or some sort of application. The concept of Docker sure is simple, but it can go a long way with the right imagination.

Before I close out this section, I’ll give you a personal example of how I implemented a container at a previous job. At this organization, I worked with some Embedded Linux software engineers who each had their own personal favorite Linux distribution. Some preferred Ubuntu, others preferred Debian, and a few even ran Gentoo. This in and of itself wasn’t necessarily an issue—sometimes it’s fun to try out other distributions. But for developers, a platform change can introduce inconsistency, and that’s not good for a software project. The included build tools are different in each distribution of Linux because they all ship different versions of all the development packages and libraries. The application this particular organization developed was only known to compile properly in Debian, and newer versions of the compiler posed a problem for the application. My solution was to provide each developer with a Docker container based on Debian, with all the build tools that they needed to perform their job baked in. At this point, it no longer mattered which distribution they ran on their workstations. The container was the same no matter what they were running. Regardless of what their underlying operating system was, they all had the same tools. This gave each developer the freedom to run their preferred distribution of Linux (or even macOS), and it didn’t impact their ability to do their job. I’m sure there are some clever use cases you can come up with for implementing containerization.

Now that we understand the basics of Docker, let’s take a look at automating the process of building containers.
