Docker
I assume that Docker and the concept of containers need no in-depth introduction. Docker made the concept of containers as a lightweight alternative to virtual machines very popular in 2013. A container is actually a process in a Linux host that uses Linux namespaces to provide isolation between different containers, in terms of their use of global system resources such as users, processes, filesystems, and networking. Linux control groups (also known as cgroups) are used to limit the amount of CPU and memory that a container is allowed to consume.
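As a small illustration of the cgroup-based limits mentioned above, the docker run command accepts flags for restricting CPU and memory; the image and command used here are only examples:
# Run a container limited to half a CPU core and 256 MB of memory (enforced by cgroups on the Docker host)
docker run --rm --cpus="0.5" --memory="256m" openjdk:17 java -version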
Compared to a virtual machine, which uses a hypervisor to run a complete copy of an operating system, a container incurs only a fraction of the overhead.
This leads to much faster startup times and significantly lower CPU and memory usage.
The isolation provided for a container is, however, not considered to be as secure as the isolation provided for a virtual machine. With the release of Windows Server 2016, Microsoft also supports the use of Docker on Windows servers.
Over the last few years, a lightweight form of virtual machine has evolved that mixes the best of traditional virtual machines and containers: virtual machines with a footprint and startup time similar to containers, but with the same level of secure isolation as traditional virtual machines. Some examples are Amazon Firecracker and Microsoft Windows Subsystem for Linux v2 (WSL2). For more information, see https://firecracker-microvm.github.io and https://docs.microsoft.com/en-us/windows/wsl/.
Containers are very useful during both development and testing. Being able to start up a complete system landscape of cooperating microservices and resource managers (for example, database servers, messaging brokers, and so on) with a single command for testing is simply amazing.
For example, we can write scripts to automate end-to-end tests of our microservice landscape. A test script can start up the microservice landscape, run tests using the exposed APIs, and tear down the landscape. This type of automated test script is very useful, both for running locally on a developer PC before pushing code to a source code repository, and for execution as a step in a delivery pipeline. A build server can run these types of tests in its continuous integration and deployment process whenever a developer pushes code to the source repository.
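A minimal sketch of such a test script, assuming a Bash shell and hypothetical endpoint paths (they are not taken from the book’s source code), could look like this:
#!/usr/bin/env bash
# Start the microservice landscape in the background
docker-compose up -d

# Wait until the composite service responds (the health endpoint is an assumption)
until curl -s http://localhost:8080/actuator/health | grep -q '"UP"'; do
  sleep 1
done

# Run a test against the exposed API (the path and the expected field are examples)
curl -s http://localhost:8080/product-composite/1 | grep -q '"productId":1' && echo "Test OK"

# Tear down the landscape
docker-compose down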
For production usage, we need a container orchestrator such as Kubernetes. We will come back to container orchestrators and Kubernetes later in this book.
For most of the microservices we will look at in this book, a Dockerfile such as the following is all that is required to run the microservice as a Docker container:
# Use OpenJDK 17 as the base image
FROM openjdk:17
MAINTAINER Magnus Larsson <magnus.larsson.ml@gmail.com>
# Document that the microservice listens on port 8080
EXPOSE 8080
# Add the fat jar from the build output to the image as app.jar
ADD ./build/libs/*.jar app.jar
# Start the microservice by running the jar file
ENTRYPOINT ["java","-jar","/app.jar"]
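With a Dockerfile like this in place, the image can be built and a container started using standard Docker commands; the image name product-service is just an example:
# Build the image from the Dockerfile in the current directory
docker build -t product-service .
# Run a container in the background, mapping container port 8080 to the host
docker run -d -p 8080:8080 --name product-service product-service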
If we want to start and stop many containers with one command, Docker Compose is the perfect tool. Docker Compose uses a YAML file to describe the containers to be managed.
For our microservices, it might look something like the following:
product:
  build: microservices/product-service

recommendation:
  build: microservices/recommendation-service

review:
  build: microservices/review-service

composite:
  build: microservices/product-composite-service
  ports:
    - "8080:8080"
Let me explain the preceding source code a little:
- The build directive is used to specify which Dockerfile to use for each microservice. Docker Compose will use it to build a Docker image and then launch a Docker container based on that image.
- The ports directive for the composite service is used to expose port 8080 on the server where Docker runs. On a developer’s machine, this means that the port of the composite service can be reached simply by using localhost:8080, as shown in the example below.
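For example, once the containers are started, the composite service can be called directly from the developer’s machine; the endpoint path used here is just an assumption for illustration:
curl http://localhost:8080/product-composite/1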
All the containers in the YAML file can be managed with simple commands such as the following:
- docker-compose up -d: Starts all containers. -d means that the containers run in the background, not locking the terminal from where the command was executed.
- docker-compose down: Stops and removes all containers.
- docker-compose logs -f --tail=0: Prints out log messages from all containers. -f means that the command will not complete, and instead waits for new log messages. --tail=0 means that we don’t want to see any previous log messages, only new ones.
For a full list of Docker Compose commands, see https://docs.docker.com/compose/reference/.
This was a brief introduction to Docker. We will go into more detail about Docker starting with Chapter 4, Deploying Our Microservices Using Docker.