Running single container applications
Before we get into the nuts and bolts of Docker orchestration, let's run through the basics of running single applications in Docker. Seeing as this is a tech book, the first example is always some variant of Hello World, and this one is no different.
Note
By default, docker must be run as the root user or with sudo. Instead, you could add your user to the docker group and run containers without root.
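As a minimal sketch, on most Linux distributions that looks something like the following (the exact steps can vary; you will need to log out and back in, or use newgrp, for the new group to take effect):
$ sudo usermod -aG docker $USER   # add the current user to the docker group
$ newgrp docker                   # start a shell with the new group applied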
$ docker run --rm ubuntu echo "Hello World"
This example is really simple. It downloads the ubuntu Docker image and uses that image to run the echo "Hello World" command. Simple, right? There is actually a lot going on here that you need to understand before you get into orchestration.
First of all, notice the word ubuntu in that command. That tells Docker that you want to use the ubuntu image. By default, Docker downloads images from Docker Hub. There are a large number of images, most uploaded by the community, but there are also a number of official images for various projects, of which ubuntu is one. These form a great base for almost any application.
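If you want to browse what is available without leaving the terminal, docker search queries Docker Hub for matching images; the OFFICIAL column marks the curated images mentioned above:
$ docker search ubuntu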
Second, take special note of the --rm flag. When docker runs a container, it adds a writable layer on top of the base image that holds any changes made inside the container. Those changes persist as long as the container exists, even if the container is stopped. The --rm flag tells docker to remove the container, and that layer with it, as soon as it stops running. When you start automating containers with orchestration tools, you will often want to remove containers when they stop. I'll explain more in the next section.
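To see the difference yourself, run the same command with and without --rm, then list stopped containers with docker ps -a (introduced later in this section); only the run without --rm leaves a container behind:
$ docker run ubuntu echo "Hello World"        # the exited container sticks around
$ docker run --rm ubuntu echo "Hello World"   # the container is removed on exit
$ docker ps -a                                # the --rm container does not appear here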
Lastly, take a look at the echo command. Yes, it is an echo all right, and it outputs Hello World just like one would expect. There are two important points here. First, the command can be anything in the image, and second, it must be in the image. For example, if you try to run nginx in that command, Docker throws an error similar to the following:
$ sudo docker run --rm ubuntu nginx
exec: "nginx": executable file not found in $PATH
Error response from daemon: Cannot start container 821fcd4e8ae76668d8c508190b338e166247dc46cb6bc2582731566e7f2c705a: [8] System error: exec: "nginx": executable file not found in $PATH
The "Hello World"
examples are all good but what if you want to do something actually useful? To quote old iPhone ads;Â There's an app for that. There are many official applications available on the Docker Hub. Let's continue with nginx
and start a container running nginx
to serve a simple website:
$ docker run --rm -p 80:80 --name nginx nginx
This command starts a new container based on the nginx image, downloading it if needed, names it nginx, and tells docker to forward TCP port 80 on the host to port 80 in the container. Now you can go to http://localhost and see a most welcoming website.
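If you prefer to verify from the terminal, fetching the page with curl (assuming curl is installed on the host) shows the same default page:
$ curl http://localhost    # prints the HTML of the nginx welcome page
Note that the first value passed to -p is the host port and the second is the container port, so -p 8080:80 would serve the same site on http://localhost:8080 instead.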
Welcoming people to Nginx is all well and good, but obviously, you will want to do more than that. That will be covered in more detail in Chapter 2, Building Multi-Container Applications with Docker Compose. For now, the default will be sufficient.
If you run the preceding example, you will notice that the console appears to hang. That's because docker starts processes in the foreground. What you are seeing is nginx waiting for a request. If you go to http://localhost, you should see messages from the nginx access log printed to the console. Another option is to add -d to your run command, which detaches the process from the console:
$ docker run -d -p 80:80 --name nginx nginx
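A detached container no longer prints to your terminal, but its output is still captured and can be read with docker logs; adding -f follows the log much like tail -f:
$ docker logs nginx       # print the log output so far
$ docker logs -f nginx    # keep following new output; press Ctrl + C to stop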
Note
The -d and --rm options are mutually exclusive.
There are multiple ways to stop a container. The first way is to end the process running in the container. This happens automatically for short-running processes. When a container is started in the foreground, as nginx was earlier, pressing Ctrl + C in that session will stop nginx and the container. The other way is to use docker stop. It takes the ID or name of the container. For example, to stop the container that was started earlier, you would run docker stop nginx.
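For instance, stopping the detached nginx container started above looks like this; docker stop prints the name of each container it stops:
$ docker stop nginx
nginx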
Let's take a moment and look at how Docker deals with remote images. Remember, when you first ran docker run with the ubuntu or nginx images, docker had to first download the images from Docker Hub. When you run them again, Docker will use the downloaded images. You can see the images Docker knows about with the docker images command:
$ docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
ubuntu              latest              ae81bbda2b6c        5 hours ago         126.6 MB
nginx               latest              bfdd4ced794e        3 days ago          183.4 MB
Unwanted images can be deleted with the docker rmi command:
$ docker rmi ubuntu
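Images can also be removed by the IMAGE ID shown by docker images, and a unique prefix of the ID is enough:
$ docker rmi ae81bbda2b6c    # remove the ubuntu image using the ID from the listing above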
Ack! What do you do if you deleted an image but you still need it? You have two options. First, you can run a container that uses the image. That works, but it can be cumbersome if running a container changes data or conflicts with something that is already running. Fortunately, there is the docker pull command:
$ docker pull ubuntu
This command will pull the default version of the ubuntu image, the latest tag, from the repository on Docker Hub. Specific versions can be pulled by adding a tag to the command:
$ docker pull ubuntu:trusty
The docker pull command is also used to update a previously downloaded image. For example, the ubuntu image is regularly updated with security fixes and other patches. If you do not pull the updates, docker on your host will continue to use the old image. Simply run the docker pull command again and any updates to the image will be downloaded.
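Pulling again is cheap when nothing has changed; Docker compares layers and only downloads the ones that differ from what is already on the host:
$ docker pull nginx    # downloads only the layers that have changed since the last pull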
Let's take a quick diversion and consider what this means for your hosts when you begin to orchestrate Docker. Unless you or your tools update the images on your hosts, you will find that some hosts are running old images while others are running the new, shiny image. This can open your systems up to intermittent failures or security holes. Most modern tools will take care of that for you or, at least, have an option to force a pull before deployment. Others may not, so keep that in mind as you look at orchestration tools and strategies.
What is running?
At some point, you will want to see which containers are running on a specific host. Your orchestration tools will help with that, but there will be times when you need to go straight to the source to troubleshoot a problem. For that, there is the docker ps command. To demonstrate, start up a few containers:
$ for i in {1..4}; do docker run -d --name nginx$i nginx ; done
Now run docker ps:
$ sudo docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED              STATUS              PORTS               NAMES
e5b302217aeb        nginx               "nginx -g 'daemon off"   About a minute ago   Up About a minute   80/tcp, 443/tcp     nginx4
dc9d9e1e1228        nginx               "nginx -g 'daemon off"   About a minute ago   Up About a minute   80/tcp, 443/tcp     nginx3
6009967479fc        nginx               "nginx -g 'daemon off"   About a minute ago   Up About a minute   80/tcp, 443/tcp     nginx2
67ac8125983c        nginx               "nginx -g 'daemon off"   About a minute ago   Up About a minute   80/tcp, 443/tcp     nginx1
You should see the containers that were just started as well as any others that you may have running. If you stop the containers, they will disappear from docker ps:
$ for i in {1..4}; do docker stop nginx$i ; done
nginx1
nginx2
nginx3
nginx4
As you can see if you run docker ps, the containers are gone:
$ docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
However, since the --rm flag was not used, docker still knows about them and could restart them:
$ docker ps -a
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS                          PORTS               NAMES
e5b302217aeb        nginx               "nginx -g 'daemon off"   3 minutes ago       Exited (0) About a minute ago                       nginx4
dc9d9e1e1228        nginx               "nginx -g 'daemon off"   3 minutes ago       Exited (0) About a minute ago                       nginx3
6009967479fc        nginx               "nginx -g 'daemon off"   3 minutes ago       Exited (0) About a minute ago                       nginx2
67ac8125983c        nginx               "nginx -g 'daemon off"   3 minutes ago       Exited (0) About a minute ago                       nginx1
These are all the stopped nginx containers. The docker rm command will remove the containers:
$ for i in {1..4}; do docker rm nginx$i ; done
nginx1
nginx2
nginx3
nginx4
Until a container is removed, all of its data is still available. You can restart the container, and it will chug along quite happily with whatever data existed when it was stopped. Once the container is removed, all the data within it is removed right along with it. In many cases you might not care, but in others, that data might be important. How you deal with that data will be an important part of planning out your orchestration system. In Chapter 3, Cluster Building Blocks - Registry, Overlay Networks, and Shared Storage, I will show you how to move your data into shared storage to keep it safe.
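To make the restart point concrete: before the docker rm step above, any of the stopped containers could have been brought back with docker start, keeping whatever was in its writable layer:
$ docker start nginx1    # restart a stopped (but not yet removed) container by name
$ docker ps              # nginx1 shows up as running again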