Creating Docker images
In Chapter 2, Getting to Know Node.js and MongoDB, we learned that in the Docker platform, we use Docker images to create containers, which can then run services. We have already learned how to use the existing mongo image to create a container for our database service. In this section, we are going to learn how to create our own image to instantiate a container from. To do so, we first need to create a Dockerfile, which contains all the instructions needed to build the Docker image. First, we will create a Docker image for our backend service and run a container from it. Then, we will do the same for our frontend. Finally, we will create a Docker Compose file to start our database and backend services together with our frontend.
Creating the backend Dockerfile
A Dockerfile tells Docker step by step how to build the image. Each line in the file is an instruction telling Docker what to do. The format of a Dockerfile is as follows:
# Comment
INSTRUCTION arguments
Every Dockerfile must begin with a FROM instruction, which specifies which image the newly created image should be based on. You can extend your image from existing images, such as ubuntu or node.
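For example, a minimal (purely illustrative) Dockerfile following this format, consisting of one comment and one instruction, could look like this:

# Use the official Ubuntu image as the base image
FROM ubuntu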
Let’s get started by creating the Dockerfile for our backend service:
- Copy the ch4 folder to a new ch5 folder, as follows:
$ cp -R ch4 ch5
- Create a new backend/Dockerfile file inside the ch5 folder.
- In this file, we first define a base image for our image, which will be version 20 of the node image:
FROM node:20
This image is provided by Docker Hub, similar to the ubuntu and mongo images we created containers from before.
Note
Be careful to only use official images and images created by trusted authors. The node image, for example, is officially maintained by the Node.js team.
- Then, we set the working directory, which is where all files of our service will be placed inside the image:
WORKDIR /app
The WORKDIR instruction is similar to using cd in the terminal. It changes the working directory so that we do not have to prefix all the following commands with the full path. Docker creates the folder for us if it does not exist yet.
- Next, we copy the package.json and package-lock.json files from our project to the working directory:
COPY package.json package-lock.json ./
The COPY instruction copies files from your local file system into the Docker image (relative to the local working directory). Multiple files can be specified, and the last argument to the instruction is the destination (in this case, the current working directory of the image). The package-lock.json file is needed to ensure that the Docker image contains the same versions of the npm packages as our local build.
- Now, we run npm install to install all dependencies in the image:
RUN npm install
The RUN instruction executes a command in the working directory of the image.
- Then, we copy the rest of our application from the local file system to the Docker image:
COPY . .
Note
Are you wondering why we initially just copied package.json and package-lock.json? Docker images are built layer by layer. Each instruction forms a layer of the image. If something changes, only the layers following the change are rebuilt. So, in our case, if any of the code changes, only this last COPY instruction is re-executed when rebuilding the Docker image. Only if dependencies change are the other COPY instruction and npm install re-executed. Using this order of instructions reduces the time required to rebuild the image immensely.
- Finally, we run our application:
CMD ["npm", "start"]
The CMD instruction is not executed while building the image. Instead, it stores information in the metadata of the image, telling Docker which command to run when a container is instantiated from the image. In our case, the container is going to run npm start when using our image.
Note
You may have noticed that we passed a JSON array to the CMD instruction instead of simply writing CMD npm start. The JSON array version is called exec form and, if the first argument is an executable, will run the command directly without invoking a shell. The form without the JSON array is called shell form and will execute the command with a shell, prefixing it with /bin/sh -c. Running a command without a shell has the advantage of allowing the application to properly receive signals, such as a SIGTERM or SIGKILL signal when the application is terminated. Alternatively, the ENTRYPOINT instruction can be used to specify which executable should be used to run a certain command (it defaults to /bin/sh -c). In some cases, you may even want to run the script directly using CMD ["node", "src/index.js"], so that the script can properly receive all signals. However, this would require us to handle the SIGINT signal in our backend server to allow closing the container via Ctrl + C, so, to keep things simple, we just use npm start instead.
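If you do decide at some point to run the script directly with CMD ["node", "src/index.js"], a minimal sketch of such signal handling could look like the following (this is not part of our current setup; it assumes that server holds the object returned by app.listen() in src/index.js):

// Shut down gracefully when the container is asked to stop via Ctrl + C
process.on('SIGINT', () => {
  // Stop accepting new connections, then exit once open ones have closed
  server.close(() => process.exit(0))
})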
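Putting all of the instructions together, the complete backend Dockerfile looks like this:

FROM node:20
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm install
COPY . .
CMD ["npm", "start"]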
After creating our Dockerfile, we should also create a .dockerignore file to make sure unnecessary files are not copied into our image.
Creating a .dockerignore file
The COPY instruction, where we copy all files, would also copy the node_modules folder and other files, such as the .env file, which we do not want to end up in our image. To prevent certain files from being copied into our Docker image, we need to create a .dockerignore file. Let’s do that now:
- Create a new backend/.dockerignore file.
- Open it and enter the following contents to ignore the node_modules folder and all .env files:
node_modules
.env*
Now that we have defined a .dockerignore file, the COPY instructions will ignore these folders and files. Let’s build the Docker image now.
Building the Docker image
After successfully creating the backend Dockerfile and a .dockerignore file to prevent certain files and folders from being added to our Docker image, we can now get started building our Docker image:
- Open a Terminal.
- Run the following command to build the Docker image:
$ docker image build -t blog-backend backend/
We specified blog-backend as the name of our image and backend/ as the build context, which is the directory containing the Dockerfile.
After running the command, Docker will start by reading the Dockerfile and .dockerignore file. Then, it will download the node image and run our instructions one by one. Finally, it will export all layers and metadata into our Docker image.
The following screenshot shows the output of creating a Docker image:
Figure 5.1 – The output when creating a Docker image
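Incidentally, if you run the same build command again without changing any files, the build should finish almost instantly, as Docker resolves the unchanged layers from its cache (as described in the note earlier):

$ docker image build -t blog-backend backend/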
Now that we have successfully created our own image, let’s create and run a container based on it!
Creating and running a container from our image
We have already created Docker containers based on the ubuntu and mongo images in Chapter 2, Getting to Know Node.js and MongoDB. Now, we are going to create and run a container from our own image. Let’s get started doing that now:
- Run the following command to list all available images:
$ docker images
This command should return the blog-backend image that we just created, and the mongo and ubuntu images that we previously used.
- Make sure the dbserver container with our database is already running.
- Then, start a new container, as follows:
$ docker run -it -e PORT=3001 -e DATABASE_URL=mongodb://host.docker.internal:27017/blog -p 3001:3001 blog-backend
Let’s break down the arguments to the docker run command:
- -it runs the container in interactive mode (-t to allocate a pseudo-terminal and -i to keep the input stream open).
- -e PORT=3001 sets the PORT environment variable inside the container to 3001.
- -e DATABASE_URL=mongodb://host.docker.internal:27017/blog sets the DATABASE_URL environment variable. Here, we replaced localhost with host.docker.internal, as the MongoDB service runs in a different container on the Docker host (our machine).
- -p 3001:3001 forwards port 3001 from inside the container to port 3001 on the host (our machine).
- blog-backend is the name of our image.
- The blog-backend container is now running, which looks very similar to running the backend directly on our host in the Terminal. Go to http://localhost:3001/api/v1/posts to verify that it is running properly like before and returning all posts.
- Keep the container running for now.
We have successfully packaged our backend as a Docker image and started a container from it! Now, let’s do the same for our frontend.
Creating the frontend Dockerfile
After creating a Docker image for the backend service, we are now going to repeat the same process to create an image for the frontend. We will do so by first creating a Dockerfile, then the .dockerignore file, building the image, and then running a container. Now, we will start with creating the frontend Dockerfile.
In the Dockerfile for our frontend, we are going to use two images:
- A build image to build our project using Vite (which will be discarded, with only the build output kept)
- A final image, which will serve our static site using nginx
Let’s make the Dockerfile now:
- Create a new Dockerfile in the root of our project.
- In this newly created file, first, use the node image again, but this time we tag it AS build. Doing so enables multi-stage builds in Docker, which means that we can use another base image later for our final image:
FROM node:20 AS build
- During build time, we also set the VITE_BACKEND_URL environment variable. In Docker, we can use the ARG instruction to define environment variables that are only relevant when the image is being built:
ARG VITE_BACKEND_URL=http://localhost:3001/api/v1
Note
While the ARG instruction defines an environment variable that can be changed at build time using the --build-arg flag, the ENV instruction sets the environment variable to a fixed value, which will persist when a container is run from the resulting image. So, if we want to customize environment variables during build time, we should use the ARG instruction. However, if we want to customize environment variables during runtime, ENV is better suited.
- We set the working directory to /build for the build stage, and then repeat the same instructions that we defined for the backend to install all necessary dependencies and copy over the necessary files:
WORKDIR /build
COPY package.json .
COPY package-lock.json .
RUN npm install
COPY . .
- Additionally, we execute npm run build to create a static build of our Vite app:
RUN npm run build
- Now, our build stage is completed. We use the FROM instruction again to create the final stage. This time, we base it off the nginx image, which runs an nginx web server:
FROM nginx AS final
- We set the working directory for this stage to /usr/share/nginx/html, which is the folder that nginx serves static files from:
WORKDIR /usr/share/nginx/html
- Lastly, we copy everything from the /build/dist folder (which is where Vite puts the built static files) from the build stage into the final stage:
COPY --from=build /build/dist .
A CMD instruction is not needed in this case, as the nginx image already contains one to run the web server properly.
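Putting both stages together, the complete frontend Dockerfile looks like this:

FROM node:20 AS build
ARG VITE_BACKEND_URL=http://localhost:3001/api/v1
WORKDIR /build
COPY package.json .
COPY package-lock.json .
RUN npm install
COPY . .
RUN npm run build

FROM nginx AS final
WORKDIR /usr/share/nginx/html
COPY --from=build /build/dist .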
We successfully created a multi-stage Dockerfile for our frontend! Now, let’s move on to creating the .dockerignore file.
Creating the .dockerignore file for the frontend
We also need to create a .dockerignore file for the frontend. Here, in addition to the node_modules/ folder and .env files, we also exclude the backend/ folder containing our backend service and the .vscode, .git, and .husky folders. Let’s create the .dockerignore file now:
- Create a new .dockerignore file in the root of our project.
- Inside this newly created file, enter the following contents:
node_modules
.env*
backend
.vscode
.git
.husky
.commitlintrc.json
Now that we have ignored the files not necessary for the Docker image, let’s build it!
Building the frontend Docker image
Just like before, we execute the docker build command to build the image, giving it the name blog-frontend and specifying the root directory as the path:
$ docker build -t blog-frontend .
Docker will now use the node image to build our frontend in the build stage. Then, it will switch to the final stage, use the nginx image, and copy over the built static files from the build stage.
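Because VITE_BACKEND_URL is defined via the ARG instruction, it can also be overridden at build time using the --build-arg flag. For example (the URL shown here is just a placeholder for wherever your backend is hosted):

$ docker build -t blog-frontend --build-arg VITE_BACKEND_URL=https://example.com/api/v1 .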
Now, let’s create and run the frontend container.
Creating and running the frontend container
Similarly to what we did for the backend container, we can also create and run a container from the blog-frontend image by executing the following command:
$ docker run -it -p 3000:80 blog-frontend
The nginx image runs the web server on port 80, so, if we want to use port 3000 on our host, we need to map container port 80 to host port 3000 by passing -p 3000:80.
After running this command and navigating to http://localhost:3000 in your browser, you should see the frontend being served properly and showing blog posts from the backend.
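As before, you can optionally verify this from the terminal as well (assuming curl is installed; the -I flag requests only the response headers from nginx):

$ curl -I http://localhost:3000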
Now that we have created images and containers for the backend and frontend, we are going to learn about a way to manage multiple images more easily.
Managing multiple images using Docker Compose
Docker Compose is a tool that allows us to define and run multi-container applications with Docker. Instead of manually building and running the backend, frontend, and database containers, we can use Compose to build and run them all together. To get started using Compose, we need to create a compose.yaml file in the root of our project, as follows:
- Create a new compose.yaml file in the root of our project.
- Open the newly created file and start by defining the version of the Docker Compose file specification:
version: '3.9'
- Now, define a services object, in which we are going to define all the services that we want to use:
services:
- First, we have blog-database, which uses the mongo image and forwards port 27017:
  blog-database:
    image: mongo
    ports:
      - '27017:27017'
Note
In YAML files, the indentation of lines is very important to distinguish where properties are nested, so please be careful to put in the correct amount of spaces before each line.
- Next, we have blog-backend, which uses the Dockerfile defined in the backend/ folder, defines the environment variables for PORT and DATABASE_URL, forwards the port to the host, and depends on blog-database:
  blog-backend:
    build: backend/
    environment:
      - PORT=3001
      - DATABASE_URL=mongodb://host.docker.internal:27017/blog
    ports:
      - '3001:3001'
    depends_on:
      - blog-database
- Lastly, we have blog-frontend, which uses the Dockerfile defined in the root, defines the VITE_BACKEND_URL build argument, forwards the port to the host, and depends on blog-backend:
  blog-frontend:
    build:
      context: .
      args:
        VITE_BACKEND_URL: http://localhost:3001/api/v1
    ports:
      - '3000:80'
    depends_on:
      - blog-backend
- After defining the services, save the file (the complete compose.yaml is shown after this list).
- Then, stop the backend and frontend containers running in the terminal by using the Ctrl + C key combination.
- Also, stop the already running dbserver container, as follows:
$ docker stop dbserver
- Finally, run the following command in the Terminal to start all services using Docker Compose:
$ docker compose up
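For reference, the complete compose.yaml file assembled from the preceding steps looks like this:

version: '3.9'
services:
  blog-database:
    image: mongo
    ports:
      - '27017:27017'
  blog-backend:
    build: backend/
    environment:
      - PORT=3001
      - DATABASE_URL=mongodb://host.docker.internal:27017/blog
    ports:
      - '3001:3001'
    depends_on:
      - blog-database
  blog-frontend:
    build:
      context: .
      args:
        VITE_BACKEND_URL: http://localhost:3001/api/v1
    ports:
      - '3000:80'
    depends_on:
      - blog-backend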
Docker Compose will now create containers for the database, backend, and frontend and start all of them. You will start seeing logs being printed from the different services. If you go to http://localhost:3000, you can see that the frontend is running. Create a new post to verify that the connection to the backend and database works as well.
The following screenshot shows the output of docker compose up creating and starting all containers:
Figure 5.2 – Creating and running multiple containers with Docker Compose
The output in the screenshot is then followed by log messages from the various services, including the MongoDB database service and our backend and frontend services.
Just like always, you can press Ctrl + C to stop all Docker Compose containers.
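If you want to not only stop the containers but also remove them (along with the default network that Compose created), you can run the following command instead:

$ docker compose down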
Now that we have set up Docker Compose, it’s very easy to start all services at once and manage them all in one place. If you look at your Docker containers, you may notice that there are lots of stale containers still left over from previously building the blog-backend and blog-frontend containers. Let’s now learn how to clean up those.
Cleaning up unused containers
After experimenting with Docker for a while, there will be lots of images and containers that are not in use anymore. Docker generally does not remove objects unless you explicitly ask it to, causing it to use a lot of disk space. If you want to remove objects, you can either remove them one by one or use one of the prune commands provided by Docker:
- docker container prune: This removes all stopped containers
- docker image prune: This removes all dangling images (images not tagged and not referenced by any container)
- docker image prune -a: This removes all images not used by any containers
- docker volume prune: This removes all volumes not used by any containers
- docker network prune: This cleans up networks not used by any containers
- docker system prune: This prunes everything except volumes
- docker system prune --volumes: This prunes everything
So, if you want to, for example, remove all unused containers, you should first make sure that all of the containers that you still want to use are running. Then, execute docker container prune in the terminal.
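Each prune command will ask for confirmation before deleting anything; if you want to skip the prompt (for example, in a script), you can pass the -f (force) flag:

$ docker container prune -f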
Now that we have learned how to use Docker locally to package our services as images and run them in containers, let’s move on to deploying our full-stack application to the cloud.