Docker

I assume that Docker and the concept of containers need no in-depth introduction. Docker popularized containers as a lightweight alternative to virtual machines back in 2013. A container is actually a process in a Linux host that uses Linux namespaces to isolate containers from each other in terms of their use of global system resources such as users, processes, filesystems, and networking. Linux control groups (also known as cgroups) are used to limit the amount of CPU and memory that a container is allowed to consume.
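
To make this concrete, here is a minimal illustration (not taken from the book's source code) of how Docker exposes these namespace and cgroup mechanisms on the command line; the image name and limit values are arbitrary examples:

# Start a container that is limited, via cgroups, to half a CPU core and 256 MB of memory
docker run -it --cpus="0.5" --memory="256m" ubuntu:22.04 bash

# Inside the container, the process namespace only shows the container's own processes
ps -ef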

Compared to a virtual machine, which uses a hypervisor to run a complete copy of an operating system, the overhead of a container is only a fraction of that of a traditional virtual machine. This leads to much faster startup times and significantly lower CPU and memory usage.

The isolation provided for a container is, however, not considered to be as secure as the isolation provided for a virtual machine. With the release of Windows Server 2016, Microsoft also supports the use of Docker on Windows servers.

During the last few years, a lightweight form of virtual machine has evolved that mixes the best of traditional virtual machines and containers: virtual machines with a footprint and startup time similar to containers, and with the same level of secure isolation as traditional virtual machines. Some examples are Amazon Firecracker and Microsoft Windows Subsystem for Linux v2 (WSL 2). For more information, see https://firecracker-microvm.github.io and https://docs.microsoft.com/en-us/windows/wsl/.

Containers are very useful during both development and testing. Being able to start up a complete system landscape of cooperating microservices and resource managers (for example, database servers and message brokers) with a single command for testing is simply amazing.

For example, we can write scripts to automate end-to-end tests of our microservice landscape. A test script can start up the microservice landscape, run tests using the exposed APIs, and tear down the landscape. This type of automated test script is very useful, both for running locally on a developer's PC before pushing code to a source code repository and for execution as a step in a delivery pipeline. A build server can run these types of tests in its continuous integration and deployment process whenever a developer pushes code to the source repository.
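
As a rough sketch (and not the book's actual test script), such an end-to-end test could look something like the following; the URL, port, and product ID are placeholders:

#!/usr/bin/env bash
set -e

# Start the complete microservice landscape in the background
docker-compose up -d

# Wait until the composite service responds (placeholder endpoint)
until curl -s http://localhost:8080/product-composite/1 > /dev/null; do
  sleep 1
done

# Run a simple API-level check: expect an HTTP 200 response
httpCode=$(curl -s -o /dev/null -w "%{http_code}" http://localhost:8080/product-composite/1)
if [ "$httpCode" -ne 200 ]; then
  echo "Test failed, HTTP status: $httpCode"
  docker-compose down
  exit 1
fi
echo "End-to-end test OK"

# Tear down the landscape
docker-compose down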

For production usage, we need a container orchestrator such as Kubernetes. We will come back to container orchestrators and Kubernetes later in this book.

For most of the microservices we will look at in this book, a Dockerfile such as the following is all that is required to run the microservice as a Docker container:

# Base the image on a Java 17 runtime
FROM openjdk:17
MAINTAINER Magnus Larsson <magnus.larsson.ml@gmail.com>
# Document that the microservice listens on port 8080
EXPOSE 8080
# Copy the fat JAR from the build output (./build/libs) into the image
ADD ./build/libs/*.jar app.jar
# Start the microservice when the container is started
ENTRYPOINT ["java","-jar","/app.jar"]
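
To try this out on its own (outside of the book's build scripts), the image can be built and started with standard Docker commands; the image name product-service is just an example:

# Build an image from the Dockerfile in the current directory
docker build -t product-service .

# Start a container and map its port 8080 to port 8080 on the host
docker run -d -p 8080:8080 --name product-service product-service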

If we want to start and stop many containers with one command, Docker Compose is the perfect tool. Docker Compose uses a YAML file to describe the containers to be managed.

For our microservices, it might look something like the following:

product:
  build: microservices/product-service
recommendation:
  build: microservices/recommendation-service
review:
  build: microservices/review-service
composite:
  build: microservices/product-composite-service
  ports:
    - "8080:8080"

Let me explain the preceding source code a little:

  • The build directive is used to specify which Dockerfile to use for each microservice. Docker Compose will use it to build a Docker image and then launch a Docker container based on that Docker image.
  • The ports directive for the composite service is used to expose port 8080 on the server where Docker runs. On a developer’s machine, this means that the port of the composite service can be reached simply by using localhost:8080!

All the containers in the YAML file can be managed with simple commands such as the following:

  • docker-compose up -d: Starts all containers. -d means that the containers run in the background, not locking the terminal from where the command was executed.
  • docker-compose down: Stops and removes all containers.
  • docker-compose logs -f --tail=0: Prints out log messages from all containers. -f means that the command will not complete, and instead waits for new log messages. --tail=0 means that we don’t want to see any previous log messages, only new ones.
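
Put together, a typical local workflow could look like the following; the API path used with curl is just a placeholder for one of the composite service's endpoints:

# Start the landscape in the background
docker-compose up -d

# Call the composite service through the port exposed on localhost (placeholder path)
curl http://localhost:8080/product-composite/1

# Follow new log output from all containers (press Ctrl+C to stop)
docker-compose logs -f --tail=0

# Stop and remove all containers
docker-compose down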

For a full list of Docker Compose commands, see https://docs.docker.com/compose/reference/.

This was a brief introduction to Docker. We will go into more detail about Docker starting with Chapter 4, Deploying Our Microservices Using Docker.
