Docker provides the tools for creating images (templates for containers, remember), distributing those images to systems other than the one used to build them, and finally, running containers based on those images.
Docker Engine participates in every workflow step, and we can use just one host or many during these processes, including our developers' laptops.
Let's provide a quick review of the usual workflow processes.
Building
Building applications using containers is easy. Here are the standard steps:
- The developer usually codes an application on their own computer.
- When the code is ready (a new release, new functionality, or simply a bug fix), it is committed to the code repository.
- If our code has to be compiled, we can do it at this stage. If we are using an interpreted language, the code simply moves on to the next step as-is.
- Either manually or using continuous integration orchestration, we create a Docker image that integrates the compiled binary or interpreted code with the required runtime and all its dependencies. Images are our new component artifacts.
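As a minimal sketch of this stage (the base image, binary name, and image tag are only examples, not taken from the text), building the image artifact could look like this:

```bash
# Describe the image: a small base providing the runtime, plus our
# hypothetical pre-compiled binary "myapp" from the current directory.
cat > Dockerfile <<'EOF'
FROM alpine:3.18
COPY myapp /usr/local/bin/myapp
ENTRYPOINT ["/usr/local/bin/myapp"]
EOF

# Build the image artifact; the repository and tag are illustrative.
docker build -t myregistry.example.com/myapp:1.0 .
```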
We have passed the building stage, and the built image, with everything included, must be deployed to production. But first, we need to ensure its functionality and health (will it work? how does it perform?). We can run all of these tests in different environments using the image artifact we created.
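A quick functional check of that artifact could be as simple as the following sketch, assuming the hypothetical image exposes a health endpoint on port 8080:

```bash
# Run a throwaway container from the image we just built and verify
# that it starts and responds; names, ports, and paths are examples.
docker run --rm -d --name myapp-test -p 8080:8080 myregistry.example.com/myapp:1.0
sleep 5                                  # give the service a moment to start
curl -f http://localhost:8080/health     # hypothetical health endpoint
docker stop myapp-test                   # --rm removes the container on stop
```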
Shipping
Sharing the created artifacts is easier with containers. Here are the typical steps:
- The created image is on our build host system (or even on our laptop). We will push this artifact to an image registry to ensure that it is available for the next workflow processes.
- Docker Enterprise provides integrations with Docker Trusted Registry to manage each of these steps: the first push, image scanning to look for vulnerabilities, and the image pulls performed from different environments during the continuous integration stages.
- All pushes and pulls are managed by Docker Engine and triggered by Docker clients.
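A minimal sketch of the push and pull steps from this list, reusing the example image built earlier and a hypothetical registry address, could look like this:

```bash
# On the build host: authenticate against the registry and push the artifact.
docker login myregistry.example.com
docker push myregistry.example.com/myapp:1.0

# On any other Docker host (staging, production, and so on): pull the same artifact.
docker pull myregistry.example.com/myapp:1.0
```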
Now that the image has been shipped to the different environments, we launch containers during the integration and performance tests using the environment variables or configurations appropriate to each stage.
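For example, the same image could be launched with stage-specific settings like this (the variable names, values, and env file are purely illustrative):

```bash
# Launch the example image for the staging environment, passing its
# configuration as environment variables.
docker run -d --name myapp-staging \
  -e APP_ENV=staging \
  -e DB_HOST=db.staging.internal \
  myregistry.example.com/myapp:1.0

# Alternatively, keep each stage's settings in an env file.
docker run -d --name myapp-integration \
  --env-file ./integration.env \
  myregistry.example.com/myapp:1.0
```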
Running
So, we have new artifacts that are easy to share between different environments, but we need to execute them in production. Here are some of the benefits of containers for our applications:
- All environments use Docker Engine to execute our containers (processes), and that's all. We don't really need any other piece of software to run the image correctly (naturally, this is a simplification, because in many cases we will also need volumes and external resources).
- If our image passed all the tests defined in the workflow, it is ready for production, and this step is as simple as deploying the same image we built and tested in the previous environments, using all the required arguments, environment variables, or configurations for production.
- If our environments are managed by an orchestrator such as Swarm or Kubernetes, all of these steps run securely and with resilience, using internal load balancers and the required number of replicas, among other properties that these platforms provide.
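As a minimal sketch of that last point, assuming a Swarm cluster is already initialized and reusing the example image, deploying a replicated service could look like this:

```bash
# Deploy the example image as a replicated Swarm service; Swarm's
# routing mesh load-balances requests across the replicas.
docker service create \
  --name myapp \
  --replicas 3 \
  --publish published=80,target=8080 \
  -e APP_ENV=production \
  myregistry.example.com/myapp:1.0

# Check that the desired number of replicas is running.
docker service ls
```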
In summary, keep in mind that Docker Engine provides all the actions required for building, shipping, and running container-based applications.