Creating Docker images

In Chapter 2, Getting to Know Node.js and MongoDB, we learned that on the Docker platform, we use Docker images to create containers, which can then run services. We have already learned how to use the existing mongo image to create a container for our database service. In this section, we are going to learn how to create our own image and instantiate a container from it. To do so, we first need to create a Dockerfile, which contains all the instructions needed to build the Docker image. First, we will create a Docker image for our backend service and run a container from it. Then, we will do the same for our frontend. Finally, we will create a Docker Compose file to start our database and backend services together with our frontend.

Creating the backend Dockerfile

A Dockerfile tells Docker step by step how to build the image. Each line in the file is an instruction telling Docker what to do. The format of a Dockerfile is as follows:

# comment
INSTRUCTION arguments

Every Dockerfile must begin with a FROM instruction, which specifies which image the newly created image should be based on. You can extend your image from existing images, such as ubuntu or node.

Let’s get started by creating the Dockerfile for our backend service:

  1. Copy the ch4 folder to a new ch5 folder, as follows:
    $ cp -R ch4 ch5
  2. Create a new backend/Dockerfile file inside the ch5 folder.
  3. In this file, we first define a base image for our image, which will be version 20 of the node image:
    FROM node:20

    This image is provided by Docker Hub, similar to the ubuntu and mongo images we created containers from before.

Note

Be careful to only use official images and images created by trusted authors. The node image, for example, is officially maintained by the Node.js team.

  4. Then, we set the working directory, which is where all files of our service will be placed inside the image:
    WORKDIR /app

    The WORKDIR instruction is similar to using cd in the terminal. It changes the working directory so that we do not have to prefix all the following commands with the full path. Docker creates the folder for us if it does not exist yet.

  5. Next, we copy the package.json and package-lock.json files from our project to the working directory:
    COPY package.json package-lock.json ./

    The COPY instruction copies files from your local file system into the Docker image (relative to the local working directory). Multiple files can be specified, and the last argument to the instruction is the destination (in this case, the current working directory of the image).

    The package-lock.json file is needed to ensure that the Docker image contains the same versions of the npm packages as our local build.

  6. Now, we run npm install to install all dependencies in the image:
    RUN npm install

    The RUN instruction executes a command in the working directory of the image.

  7. Then, we copy the rest of our application from the local file system to the Docker image:
    COPY . .

Note

Are you wondering why we initially just copied package.json and package-lock.json? Docker images are built layer by layer. Each instruction forms a layer of the image. If something changes, only the layers following the change are rebuilt. So, in our case, if any of the code changes, only this last COPY instruction is re-executed when rebuilding the Docker image. Only if dependencies change are the other COPY instruction and npm install re-executed. Using this order of instructions reduces the time required to rebuild the image immensely.
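
To make this more concrete, here is the part of our Dockerfile written so far, annotated with when each layer is rebuilt (the comments are illustrative and not part of the file itself):

FROM node:20                             # rebuilt only if the base image changes
WORKDIR /app
COPY package.json package-lock.json ./   # invalidated only when these two files change
RUN npm install                          # re-run only if the layer above was invalidated
COPY . .                                 # invalidated whenever any source file changes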

  8. Finally, we define the command that will run our application:
    CMD ["npm", "start"]

    The CMD instruction is not executed while building the image. Instead, it stores information in the metadata of the image, telling Docker which command to run when a container is instantiated from the image. In our case, the container is going to run npm start when using our image.

Note

You may have noticed that we passed a JSON array to the CMD instruction instead of simply writing CMD npm start. The JSON array version is called exec form and, if the first argument is an executable, will run the command directly without invoking a shell. The form without the JSON array is called shell form and will execute the command with a shell, prefixing it with /bin/sh -c. Running a command without a shell has the advantage of allowing the application to properly receive signals, such as a SIGTERM or SIGKILL signal when the application is terminated. Alternatively, the ENTRYPOINT instruction can be used to specify which executable should be used to run a certain command (it defaults to /bin/sh -c). In some cases, you may even want to run the script directly using CMD ["node", "src/index.js"], so that the script can properly receive all signals. However, this would require us to handle the SIGINT signal in our backend server to allow closing the container via Ctrl + C, so, to keep things simple, we just use npm start instead.
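
To illustrate what that would involve, here is a minimal, hypothetical sketch of signal handling in a Node.js entry point (the file layout and variable names are assumptions for this example, not the book's actual code):

// Hypothetical sketch: graceful shutdown handling, which would be needed if
// the container ran the server directly via CMD ["node", "src/index.js"]
import express from 'express'

const app = express()
const server = app.listen(process.env.PORT ?? 3001)

function shutdown() {
  // Stop accepting new connections, then exit once pending requests have finished
  server.close(() => process.exit(0))
}

process.on('SIGINT', shutdown)  // sent when pressing Ctrl + C in an interactive container
process.on('SIGTERM', shutdown) // sent by docker stop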

After creating our Dockerfile, we should also create a .dockerignore file to make sure unnecessary files are not copied into our image.

Creating a .dockerignore file

The COPY . . instruction, which copies all remaining files, would also copy the node_modules folder and other files, such as the .env file, which we do not want to end up in our image. To prevent certain files from being copied into our Docker image, we need to create a .dockerignore file. Let’s do that now:

  1. Create a new backend/.dockerignore file.
  2. Open it and enter the following contents to ignore the node_modules folder and all .env files:
    node_modules
    .env*

Now that we have defined a .dockerignore file, the COPY instructions will ignore these folders and files. Let’s build the Docker image now.

Building the Docker image

After successfully creating the backend Dockerfile and a .dockerignore file to prevent certain files and folders from being added to our Docker image, we can now get started building our Docker image:

  1. Open a Terminal.
  2. Run the following command to build the Docker image:
    $ docker image build -t blog-backend backend/

    We specified blog-backend as the name of our image and backend/ as the build context, which is the folder containing our Dockerfile and the files to be copied into the image.

After running the command, Docker will start by reading the Dockerfile and .dockerignore file. Then, it will download the node image and run our instructions one by one. Finally, it will export all layers and metadata into our Docker image.
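
If you are curious about the individual layers that make up the resulting image, you can optionally inspect them with the docker history command:

$ docker history blog-backend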

The following screenshot shows the output of creating a Docker image:

Figure 5.1 – The output when creating a Docker image

Now that we have successfully created our own image, let’s create and run a container based on it!

Creating and running a container from our image

We have already created Docker containers based on the ubuntu and mongo images in Chapter 2, Getting to Know Node.js and MongoDB. Now, we are going to create and run a container from our own image. Let’s do that now:

  1. Run the following command to list all available images:
    $ docker images

    This command should return the blog-backend image that we just created, and the mongo and ubuntu images that we previously used.

  2. Make sure the dbserver container with our database is already running.
  3. Then, start a new container, as follows:
    $ docker run -it -e PORT=3001 -e DATABASE_URL=mongodb://host.docker.internal:27017/blog -p 3001:3001 blog-backend

    Let’s break down the arguments to the docker run command:

    • -it runs the container in interactive mode (-t to allocate a pseudo Terminal and -i to keep the input stream open).
    • -e PORT=3001 sets the PORT environment variable inside the container to 3001.
    • -e DATABASE_URL=mongodb://host.docker.internal:27017/blog sets the DATABASE_URL environment variable. Here, we replaced localhost with host.docker.internal, as the MongoDB service runs in a different container on the Docker host (our machine).
    • -p 3001:3001 forwards port 3001 from inside the container to port 3001 on the host (our machine).
    • blog-backend is the name of our image.
  4. The blog-backend container is now running, which looks very similar to running the backend directly on our host in the Terminal. Go to http://localhost:3001/api/v1/posts to verify that it is running properly like before and returning all posts (a terminal-based check is also shown right after this list).
  5. Keep the container running for now.
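
If you prefer verifying from the terminal instead of the browser, you can also query the endpoint with curl (assuming curl is available on your machine):

$ curl http://localhost:3001/api/v1/posts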

We have successfully packaged our backend as a Docker image and started a container from it! Now, let’s do the same for our frontend.

Creating the frontend Dockerfile

After creating a Docker image for the backend service, we are now going to repeat the same process to create an image for the frontend. We will do so by first creating a Dockerfile and a .dockerignore file, then building the image and running a container from it. Let’s start with the frontend Dockerfile.

In the Dockerfile for our frontend, we are going to use two images:

  • A build image to build our project using Vite (which will be discarded, with only the build output kept)
  • A final image, which will serve our static site using nginx

Let’s make the Dockerfile now:

  1. Create a new Dockerfile in the root of our project.
  2. In this newly created file, we first use the node image again, but this time we name the stage by adding AS build. Doing so enables multi-stage builds in Docker, which means that we can use a different base image for our final image later:
    FROM node:20 AS build
  3. During build time, we also set the VITE_BACKEND_URL environment variable. In Docker, we can use the ARG instruction to define environment variables that are only relevant when the image is being built:
    ARG VITE_BACKEND_URL=http://localhost:3001/api/v1

Note

While the ARG instruction defines an environment variable that can be changed at build time using the --build-arg flag, the ENV instruction sets the environment variable to a fixed value, which will persist when a container is run from the resulting image. So, if we want to customize environment variables during build time, we should use the ARG instruction. However, if we want to customize environment variables during runtime, ENV is better suited.
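
For example, if you later wanted to build the frontend against a backend hosted somewhere else, you could override the build argument when building the image (the URL below is just a placeholder):

$ docker build -t blog-frontend --build-arg VITE_BACKEND_URL=https://api.example.com/api/v1 .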

  4. We set the working directory to /build for the build stage, and then repeat the same instructions that we defined for the backend to install all necessary dependencies and copy over the necessary files:
    WORKDIR /build
    COPY package.json .
    COPY package-lock.json .
    RUN npm install
    COPY . .
  5. Additionally, we execute npm run build to create a static build of our Vite app:
    RUN npm run build
  6. Now, our build stage is complete. We use the FROM instruction again to create the final stage. This time, we base it on the nginx image, which runs an nginx web server:
    FROM nginx AS final
  7. We set the working directory for this stage to /usr/share/nginx/html, which is the folder that nginx serves static files from:
    WORKDIR /usr/share/nginx/html
  8. Lastly, we copy everything from the /build/dist folder (which is where Vite puts the built static files) from the build stage into the final stage:
    COPY --from=build /build/dist .

    A CMD instruction is not needed in this case, as the nginx image already contains one to run the web server properly.
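
For reference, the complete frontend Dockerfile assembled from the steps above should look like this:

FROM node:20 AS build
ARG VITE_BACKEND_URL=http://localhost:3001/api/v1
WORKDIR /build
COPY package.json .
COPY package-lock.json .
RUN npm install
COPY . .
RUN npm run build

FROM nginx AS final
WORKDIR /usr/share/nginx/html
COPY --from=build /build/dist .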

We successfully created a multi-stage Dockerfile for our frontend! Now, let’s move on to creating the .dockerignore file.

Creating the .dockerignore file for the frontend

We also need to create a .dockerignore file for the frontend. In addition to the node_modules/ folder and .env files, we also exclude the backend/ folder containing our backend service, as well as the .vscode, .git, and .husky folders and the .commitlintrc.json file. Let’s create the .dockerignore file now:

  1. Create a new .dockerignore file in the root of our project.
  2. Inside this newly created file, enter the following contents:
    node_modules
    .env*
    backend
    .vscode
    .git
    .husky
    .commitlintrc.json

Now that we have ignored the files not necessary for the Docker image, let’s build it!

Building the frontend Docker image

Just like before, we execute the docker build command to build the image, giving it the name blog-frontend and specifying the root of our project as the build context:

$ docker build -t blog-frontend .

Docker will now use the node image to build our frontend in the build stage. Then, it will switch to the final stage, use the nginx image, and copy over the built static files from the build stage.

Now, let’s create and run the frontend container.

Creating and running the frontend container

Similarly to what we did for the backend container, we can also create and run a container from the blog-frontend image by executing the following command:

$ docker run -it -p 3000:80 blog-frontend

The nginx image runs the web server on port 80, so, if we want to use port 3000 on our host, we need to forward port 80 from inside the container to port 3000 on the host by passing -p 3000:80.

After running this command and navigating to http://localhost:3000 in your browser, you should see the frontend being served properly and showing blog posts from the backend.

Now that we have created images and containers for the backend and frontend, we are going to learn about a way to manage multiple images more easily.

Managing multiple images using Docker Compose

Docker Compose is a tool that allows us to define and run multi-container applications with Docker. Instead of manually building and running the backend, frontend, and database containers, we can use Compose to build and run them all together. To get started using Compose, we need to create a compose.yaml file in the root of our project, as follows:

  1. Create a new compose.yaml file in the root of our project.
  2. Open the newly created file and start by defining the version of the Docker Compose file specification:
    version: '3.9'
  3. Now, define a services object, in which we are going to define all the services that we want to use:
    services:
  4. First, we have blog-database, which uses the mongo image and forwards port 27017:
      blog-database:
        image: mongo
        ports:
          - '27017:27017'

Note

In YAML files, the indentation of lines is very important to distinguish where properties are nested, so please be careful to put in the correct amount of spaces before each line.

  5. Next, we have blog-backend, which uses the Dockerfile defined in the backend/ folder, defines the environment variables for PORT and DATABASE_URL, forwards the port to the host, and depends on blog-database:
      blog-backend:
        build: backend/
        environment:
          - PORT=3001
          - DATABASE_URL=mongodb://host.docker.internal:27017/blog
        ports:
          - '3001:3001'
        depends_on:
          - blog-database
  6. Lastly, we have blog-frontend, which uses the Dockerfile defined in the root, defines the VITE_BACKEND_URL build argument, forwards the port to the host, and depends on blog-backend:
      blog-frontend:
        build:
          context: .
          args:
            VITE_BACKEND_URL: http://localhost:3001/api/v1
        ports:
          - '3000:80'
        depends_on:
          - blog-backend
  7. After defining the services, save the file (the complete compose.yaml is shown after this list for reference).
  8. Then, stop the backend and frontend containers running in the terminal by using the Ctrl + C key combination.
  9. Also, stop the already running dbserver container, as follows:
    $ docker stop dbserver
  10. Finally, run the following command in the Terminal to start all services using Docker Compose:
    $ docker compose up
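
For reference, the complete compose.yaml assembled from the steps above should look like this:

version: '3.9'
services:
  blog-database:
    image: mongo
    ports:
      - '27017:27017'
  blog-backend:
    build: backend/
    environment:
      - PORT=3001
      - DATABASE_URL=mongodb://host.docker.internal:27017/blog
    ports:
      - '3001:3001'
    depends_on:
      - blog-database
  blog-frontend:
    build:
      context: .
      args:
        VITE_BACKEND_URL: http://localhost:3001/api/v1
    ports:
      - '3000:80'
    depends_on:
      - blog-backend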

Docker Compose will now create containers for the database, backend, and frontend and start all of them. You will start seeing logs being printed from the different services. If you go to http://localhost:3000, you can see that the frontend is running. Create a new post to verify that the connection to the backend and database works as well.
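
If you want to see the state of all the containers that Compose manages, you can run the following command from another terminal in the project root:

$ docker compose ps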

The following screenshot shows the output of docker compose up creating and starting all containers:

Figure 5.2 – Creating and running multiple containers with Docker Compose

The output in the screenshot is then followed by log messages from the various services, including the MongoDB database service and our backend and frontend services.

Just like always, you can press Ctrl + C to stop all Docker Compose containers.

Now that we have set up Docker Compose, it’s very easy to start all services at once and manage them all in one place. If you look at your Docker containers, you may notice that there are lots of stale containers still left over from previously building and running the blog-backend and blog-frontend images. Let’s now learn how to clean those up.

Cleaning up unused containers

After experimenting with Docker for a while, there will be lots of images and containers that are not in use anymore. Docker generally does not remove objects unless you explicitly ask it to, causing it to use a lot of disk space. If you want to remove objects, you can either remove them one by one or use one of the prune commands provided by Docker:

  • docker container prune: This removes all stopped containers
  • docker image prune: This removes all dangling images (images not tagged and not referenced by any container)
  • docker image prune -a: This removes all images not used by any containers
  • docker volume prune: This removes all volumes not used by any containers
  • docker network prune: This cleans up networks not used by any containers
  • docker system prune: This prunes everything except volumes
  • docker system prune --volumes: This prunes everything

So, if you want to, for example, remove all unused containers, you should first make sure that all of the containers that you still want to use are running. Then, execute docker container prune in the terminal.

Now that we have learned how to use Docker locally to package our services as images and run them in containers, let’s move on to deploying our full-stack application to the cloud.
