The Ultimate Docker Container Book: Build, test, ship, and run containers with Docker and Kubernetes, Third Edition

Dr. Gabriel N. Schenker

eBook | Aug 2023 | 626 pages | 3rd Edition | 4.5 (11 ratings)

What Are Containers and Why Should I Use Them?

This first chapter will introduce you to the world of containers and their orchestration. This book starts from the very beginning, in that it assumes that you have limited prior knowledge of containers, and will give you a very practical introduction to the topic.

In this chapter, we will focus on the software supply chain and the friction within it. Then, we’ll present containers, which are used to reduce this friction and add enterprise-grade security on top of it. We’ll also look into how containers and the ecosystem around them are assembled. We’ll specifically point out the distinction between the upstream Open Source Software (OSS) components, united under the code name Moby, and the downstream products of Docker and other vendors that are built from them.

The chapter covers the following topics:

  • What are containers?
  • Why are containers important?
  • What’s the benefit of using containers for me or for my company?
  • The Moby project
  • Docker products
  • Container architecture

After completing this chapter, you will be able to do the following:

  • Explain what containers are, using an analogy such as physical containers, in a few simple sentences to an interested layperson
  • Justify why containers are so important using an analogy such as physical containers versus traditional shipping, or apartment homes versus single-family homes, and so on, to an interested layperson
  • Name at least four upstream open source components that are used by Docker products, such as Docker Desktop
  • Draw a high-level sketch of the Docker container architecture

Let’s get started!

What are containers?

A software container is a pretty abstract thing, so it might help to start with an analogy that should be pretty familiar to most of you. The analogy is a shipping container in the transportation industry. Throughout history, people have transported goods from one location to another by various means. Before the invention of the wheel, goods would most probably have been transported in bags, baskets, or chests on the shoulders of humans themselves, or they might have used animals such as donkeys, camels, or elephants to transport them. With the invention of the wheel, transportation became a bit more efficient as humans built roads that they could move their carts along. Many more goods could be transported at a time. When the first steam-driven machines, and later gasoline-driven engines, were introduced, transportation became even more powerful. We now transport huge amounts of goods on planes, trains, ships, and trucks. At the same time, the types of goods became more and more diverse, and sometimes complex to handle. In all these thousands of years, one thing hasn’t changed, and that is the necessity to unload goods at a target location and maybe load them onto another means of transportation. Take, for example, a farmer bringing a cart full of apples to a central train station where the apples are then loaded onto a train, together with all the apples from many other farmers. Or think of a winemaker bringing their barrels of wine on a truck to the port where they are unloaded, and then transferred to a ship that will transport those barrels overseas.

This unloading from one means of transportation and loading onto another means of transportation was a really complex and tedious process. Every type of product was packaged in its own way and thus had to be handled in its own particular way. Also, loose goods faced the risk of being stolen by unethical workers or damaged in the process of being handled.

Figure 1.1 – Sailors unloading goods from a ship

Then, containers came along, and they totally revolutionized the transportation industry. A container is just a metallic box with standardized dimensions. The length, width, and height of each container are the same. This is a very important point. Without the world agreeing on a standard size, the whole container thing would not have been as successful as it is now. Now, with standardized containers, companies who want to have their goods transported from A to B package those goods into these containers. Then, they call a shipper, who uses a standardized means of transportation. This can be a truck that can load a container, or a train whose wagons can each transport one or several containers. Finally, we have ships that are specialized in transporting huge numbers of containers. Shippers never need to unpack and repackage goods. For a shipper, a container is just a black box, and they are not interested in what is in it, nor should they care in most cases. It is just a big iron box with standard dimensions. Packaging goods into containers is now fully delegated to the parties who want to have their goods shipped, and they should know how to handle and package those goods. Since all containers have the same agreed-upon shape and dimensions, shippers can use standardized tools to handle containers; that is, cranes that unload containers, say from a train or a truck, and load them onto a ship and vice versa. One type of crane is enough to handle all the containers that come along over time. Also, the means of transportation can be standardized, such as container ships, trucks, and trains. Because of all this standardization, all the processes in and around shipping goods could also be standardized and thus made much more efficient than they were before the introduction of containers.

Figure 1.2 – Container ship being loaded in a port

Now, you should have a good understanding of why shipping containers are so important and why they revolutionized the whole transportation industry. I chose this analogy purposefully since the software containers that we are going to introduce here fulfill the exact same role in the so-called software supply chain as shipping containers do in the supply chain of physical goods.

Let’s then have a look at what this whole thing means when translated to the IT industry and software development, shall we? In the old days, developers would develop new applications. Once an application was completed in their eyes, they would hand that application over to the operations engineers, who were then supposed to install it on the production servers and get it running. If the operations engineers were lucky, they even got a somewhat accurate document with installation instructions from the developers. So far, so good, and life was easy.

But things got a bit out of hand when, in an enterprise, there were many teams of developers that created quite different types of applications, yet all of them needed to be installed on the same production servers and kept running there. Usually, each application had some external dependencies, such as the framework it was built on, the libraries it used, and so on. Sometimes, two applications used the same framework, but in different versions that might or might not be compatible with each other. Our operations engineers’ lives became much harder over time. They had to become really creative with how they loaded their ships, that is, their servers, with different applications without breaking something. Installing a new version of a certain application was now a complex project on its own, and often needed months of planning and testing beforehand. In other words, there was a lot of friction in the software supply chain.

But these days, companies rely more and more on software, and release cycles need to become shorter and shorter. Companies cannot afford to just release application updates once or twice a year anymore. Applications need to be updated in a matter of weeks or days, or sometimes even multiple times per day. Companies that cannot keep up risk going out of business due to a lack of agility.

So, what’s the solution? One of the first approaches was to use virtual machines (VMs). Instead of running multiple applications all on the same server, companies would package and run a single application on each VM. With this, all the compatibility problems were gone, and life seemed to be good again. Unfortunately, that happiness didn’t last long. VMs are pretty heavy beasts on their own since they all contain a full-blown operating system such as Linux or Windows Server, and all that for just a single application. In the transportation industry, this would be like using a whole ship just to transport a single truckload of bananas. What a waste! That would never be profitable. The ultimate solution to this problem was to provide something much more lightweight than a VM that was still able to perfectly encapsulate the goods it needed to transport. Here, the goods are the actual application written by our developers, plus – and this is important – all the external dependencies of the application, such as its framework, libraries, configurations, and more. This holy grail of a software packaging mechanism is the Docker container.

Developers package their applications, frameworks, and libraries into Docker containers, and then they ship those containers to the testers or operations engineers. For testers and operations engineers, a container is just a black box. It is a standardized black box, though. All containers, no matter what application runs inside them, can be treated equally. The engineers know that if one container runs on their servers, then any other container should run there too. And this is actually true, apart from some edge cases, which always exist. Thus, Docker containers are a means to package applications and their dependencies in a standardized way. Docker then coined the phrase Build, ship, and run anywhere.
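
To make this packaging step concrete, here is a minimal, purely illustrative sketch of how a developer might wrap a tiny application and its runtime into an image; the file names, the python base image tag, and the my-app:1.0 tag are example assumptions, not something prescribed by this chapter:

    # Create a trivial application to package (a stand-in for a real app).
    echo 'print("Hello from inside a container!")' > app.py

    # Write a minimal Dockerfile: base image, the app itself, and the start command.
    printf 'FROM python:3.11-slim\nCOPY app.py /app/app.py\nCMD ["python", "/app/app.py"]\n' > Dockerfile

    # Build the image and run a container from it.
    docker build -t my-app:1.0 .
    docker run --rm my-app:1.0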

Why are containers important?

These days, the time between new releases of an application becomes shorter and shorter, yet the software itself does not become any simpler. On the contrary, software projects increase in complexity. Thus, we need a way to tame the beast and simplify the software supply chain. Also, every day, we hear that cyber-attacks are on the rise. Many well-known companies are and have been affected by security breaches. Highly sensitive customer data, such as social security numbers, credit card information, and health-related information, gets stolen during such events. But not only is customer data compromised – sensitive company secrets are stolen too.

Containers can help in many ways. In a published report, Gartner found that applications running in a container are more secure than their counterparts not running in a container. Containers use Linux security primitives such as Linux kernel namespaces to sandbox different applications running on the same computer, and control groups (cgroups) to avoid the noisy-neighbor problem, where one bad application uses all the available resources of a server and starves all other applications. Since container images are immutable, as we will learn later, it is easy to have them scanned for common vulnerabilities and exposures (CVEs), and in doing so, increase the overall security of our applications. Another way to make our software supply chain more secure is to have our containers use content trust. Content trust ensures that the author of a container image is who they say they are and that the consumer of the container image has a guarantee that the image has not been tampered with in transit. The latter is known as a man-in-the-middle (MITM) attack.
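
As a small, hedged illustration of two of these mechanisms, the Docker CLI exposes cgroup resource limits as plain flags, and content trust can be switched on with an environment variable; the nginx:alpine image, the container name, and the limit values are arbitrary examples:

    # Cap a container at 256 MiB of RAM and half a CPU core via cgroups,
    # so it cannot starve its neighbors on the same host.
    docker run -d --name limited-web --memory=256m --cpus=0.5 nginx:alpine

    # Opt in to Docker Content Trust: image tags must be signed to be pulled or run.
    export DOCKER_CONTENT_TRUST=1
    docker pull nginx:alpine   # succeeds only if the tag carries a valid signature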

Everything I have just said is, of course, technically also possible without using containers, but since containers introduce a globally accepted standard, they make it so much easier to implement these best practices and enforce them.

OK, but security is not the only reason containers are important. There are other reasons too. One is the fact that containers make it easy to simulate a production-like environment, even on a developer’s laptop. If we can containerize any application, then we can also containerize, say, a database such as Oracle, PostgreSQL, or MS SQL Server. Now, everyone who has ever had to install an Oracle database on a computer knows that this is not the easiest thing to do, and it takes up a lot of precious space on your computer. You would not want to do that to your development laptop just to test whether the application you developed really works end to end. With containers to hand, we can run a full-blown relational database in a container as easily as saying 1, 2, 3. And when we are done with testing, we can just stop and delete the container and the database will be gone, without leaving a single trace on our computer. Since containers are very lean compared to VMs, it is common to have many containers running at the same time on a developer’s laptop without overwhelming it.

A third reason containers are important is that operators can finally concentrate on what they are good at – provisioning the infrastructure and running and monitoring applications in production. When the applications they must run on a production system are all containerized, operators can start to standardize their infrastructure. Every server becomes just another Docker host. No special libraries or frameworks need to be installed on those servers – just an OS and a container runtime such as Docker. Furthermore, operators no longer need intimate knowledge of the internals of applications, since those applications run self-contained in containers that ought to look like black boxes to them, just as shipping containers look to personnel in the transportation industry.
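
Coming back to the throwaway-database idea, here is a minimal sketch of that workflow; the container name, password, and postgres:16 tag are made-up example values:

    # Start a disposable PostgreSQL server in the background.
    docker run -d --name test-db -e POSTGRES_PASSWORD=example -p 5432:5432 postgres:16

    # ... run your end-to-end tests against localhost:5432 ...

    # When finished, stop and delete it; nothing is left behind on the host.
    docker stop test-db
    docker rm test-db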

What is the benefit of using containers for me or for my company?

Somebody once said “...today every company of a certain size has to acknowledge that they need to be a software company...” In this sense, a modern bank is a software company that happens to specialize in the business of finance. Software runs all businesses, period. As every company becomes a software company, there is a need to establish a software supply chain. For the company to remain competitive, its software supply chain must be secure and efficient. Efficiency can be achieved through thorough automation and standardization. But in all three areas – security, automation, and standardization – containers have been shown to shine. Large and well-known enterprises have reported that when containerizing existing legacy applications (many call them traditional applications) and establishing a fully automated software supply chain based on containers, they can reduce the cost of maintaining those mission-critical applications by 50% to 60% and reduce the time between new releases of these traditional applications by up to 90%. In other words, the adoption of container technologies saves these companies a lot of money, and at the same time, it speeds up the development process and reduces the time to market.

The Moby project

Originally, when Docker (the company) introduced Docker containers, everything was open source. Docker did not have any commercial products then. Docker Engine, which the company developed, was a monolithic piece of software. It contained many logical parts, such as the container runtime, a network library, a RESTful API, a command-line interface, and much more. Other vendors or projects such as Red Hat or Kubernetes used Docker Engine in their own products, but most of the time, they were only using part of its functionality. For example, Kubernetes did not use the network library of Docker Engine but provided its own way of networking. Red Hat, in turn, did not update Docker Engine frequently and preferred to apply unofficial patches to older versions of Docker Engine, yet they still called it Docker Engine.

For all these reasons, and many more, the idea emerged that Docker had to do something to clearly separate Docker’s open source part from Docker’s commercial part. Furthermore, the company wanted to prevent competitors from using and abusing the name Docker for their own gains. This was the main reason the Moby project was born. It serves as an umbrella for most of the open source components Docker developed and continues to develop. These open source projects do not carry the name Docker anymore. The Moby project provides components used for image management, secret management, configuration management, and networking and provisioning. Also, part of the Moby project are special Moby tools that are, for example, used to assemble components into runnable artifacts. Some components that technically belong to the Moby project have been donated by Docker to the Cloud Native Computing Foundation (CNCF) and thus do not appear in the list of components anymore. The most prominent ones are notary, containerd, and runc, where the first is used for content trust and the latter two form the container runtime.
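
You can spot two of these donated components on any standard Docker installation; the exact version numbers reported will differ from machine to machine:

    # The Server section of this output lists the containerd and runc components
    # that the locally installed Docker Engine was shipped with.
    docker version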

In the words of Docker, “... Moby is an open framework created by Docker to assemble specialized container systems without reinventing the wheel. It provides a “Lego set” of dozens of standard components and a framework for assembling them into custom platforms....”

Docker products

Up until 2019, Docker separated its product lines into two segments. There was the Community Edition (CE), which was closed source yet completely free, and there was the Enterprise Edition (EE), which was also closed source and needed to be licensed yearly. These enterprise products were backed by 24/7 support and maintained with bug fixes.

In 2019, Docker felt that what they had were two very distinct and different businesses. Consequently, they split away the EE and sold it to Mirantis. Docker itself wanted to refocus on developers and provide them with the optimal tools and support to build containerized applications.

Docker Desktop

The Docker offering includes products such as Docker Toolbox and Docker Desktop, with editions for Mac, Windows, and Linux. All these products are mainly targeted at developers. Docker Desktop is an easy-to-install desktop application that can be used to build, debug, and test dockerized applications or services on a macOS, Windows, or Linux machine. Docker Desktop is a complete development environment that is deeply integrated with the hypervisor framework, network, and filesystem of the respective underlying operating system. These tools are the fastest and most reliable ways to run Docker on a Mac, Windows, or Linux machine.
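
Once Docker Desktop is installed, a quick sanity check that the engine is up and running is the classic hello-world container:

    # Pull a tiny test image from Docker Hub and run it once; it prints a
    # confirmation message and exits.
    docker run hello-world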

Note

Docker Toolbox has been deprecated and is no longer in active development. Docker recommends using Docker Desktop instead.

Docker Hub

Docker Hub is the most popular service for finding and sharing container images. It is possible to create individual, user-specific accounts and organizational accounts under which Docker images can be uploaded and shared inside a team, an organization, or with the wider public. Public accounts are free while private accounts require one of several commercial licenses. Later in this book, we will use Docker Hub to download existing Docker images and upload and share our own custom Docker images.
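
As a quick preview of that workflow, and purely as a sketch (your-username and the my-app image are placeholders, not real accounts or images):

    # Download a public image from Docker Hub.
    docker pull nginx:alpine

    # Log in, then tag and upload an image of your own under your account.
    docker login
    docker tag my-app:1.0 your-username/my-app:1.0
    docker push your-username/my-app:1.0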

Docker Enterprise Edition

Docker EE – now owned by Mirantis – consists of the Universal Control Plane (UCP) and the Docker Trusted Registry (DTR), both of which run on top of Docker Swarm as Swarm applications. Docker EE builds on the upstream components of the Moby project and adds enterprise-grade features such as role-based access control (RBAC), multi-tenancy, mixed clusters of Docker Swarm and Kubernetes, a web-based UI, content trust, and image scanning.

Docker Swarm

Docker Swarm provides a powerful and flexible platform for deploying and managing containers in a production environment. It provides the tools and features you need to build, deploy, and manage your applications with ease and confidence.

Container architecture

Now, let us discuss how a system that can run Docker containers is designed at a high level. The following diagram illustrates what a computer with Docker installed on it looks like. Note that such a computer is often called a Docker host because it can run or host Docker containers:

Figure 1.3 – High-level architecture diagram of Docker Engine

In the preceding diagram, we can see three essential parts:

  • At the bottom, we have the Linux Operating System
  • In the middle, we have the Container Runtime
  • At the top, we have Docker Engine

Containers are only possible because the Linux OS supplies some primitives, such as namespaces, control groups, layer capabilities, and more, all of which are used in a specific way by the container runtime and Docker Engine. Linux kernel namespaces, such as process ID (pid) namespaces or network (net) namespaces, allow Docker to encapsulate or sandbox processes that run inside the container. Control groups make sure that containers do not suffer from noisy-neighbor syndrome, where a single application running in a container can consume most or all the available resources of the whole Docker host. Control groups allow Docker to limit the resources, such as CPU time or the amount of RAM, that each container is allocated.

The container runtime on a Docker host consists of containerd and runc. runc provides the low-level functionality of the container runtime, such as creating and running containers, while containerd, which builds on runc, provides higher-level functionality such as image management, networking capabilities, and extensibility via plugins. Both are open source and have been donated by Docker to the CNCF. The container runtime is responsible for the whole life cycle of a container. It pulls a container image (which is the template for a container) from a registry, if necessary, creates a container from that image, initializes and runs the container, and eventually stops and removes the container from the system when asked.

Docker Engine provides additional functionality on top of the container runtime, such as network libraries or support for plugins. It also provides a REST interface over which all container operations can be automated. The Docker command-line interface that we will use often in this book is one of the consumers of this REST interface.
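
As a brief, hedged illustration of this layering on a typical Linux Docker host, the CLI and a raw HTTP call against the Engine's Unix socket return the same information, and the engine reports which low-level OCI runtime it delegates to; the API version in the URL may differ on your installation:

    # List running containers via the CLI ...
    docker ps

    # ... and via the underlying REST API on the Docker Unix socket.
    curl --unix-socket /var/run/docker.sock http://localhost/v1.43/containers/json

    # Show which low-level OCI runtime (typically runc) Docker Engine uses.
    docker info --format '{{.DefaultRuntime}}'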

Summary

In this chapter, we looked at how containers can massively reduce friction in the software supply chain and, on top of that, make the supply chain much more secure. In the next chapter, we will familiarize ourselves with containers. We will learn how to run, stop, and remove containers and otherwise manipulate them. We will also get a pretty good overview of the anatomy of containers. For the first time, we are really going to get our hands dirty and play with these containers. So, stay tuned!

Further reading

The following is a list of links that lead to more detailed information regarding the topics we discussed in this chapter:

Questions

Please answer the following questions to assess your learning progress:

  1. Which statements are correct (multiple answers are possible)?
    1. A container is kind of a lightweight VM
    2. A container only runs on a Linux host
    3. A container can only run one process
    4. The main process in a container always has PID 1
    5. A container is one or more processes encapsulated by Linux namespaces and restricted by cgroups
  2. In your own words, using analogies, explain what a container is.
  3. Why are containers considered to be a game-changer in IT? Name three or four reasons.
  4. What does it mean when we claim, if a container runs on a given platform, then it runs anywhere? Name two to three reasons why this is true.
  5. Is the following claim true or false: Docker containers are only useful for modern greenfield applications based on microservices? Please justify your answer.
  6. How much does a typical enterprise save when containerizing its legacy applications?
    1. 20%
    2. 33%
    3. 50%
    4. 75%
  7. Which two core concepts of Linux are containers based on?
  8. On which operating systems is Docker Desktop available?

Answers

  1. The correct answers are D and E.
  2. A Docker container is to IT what a shipping container is to the transportation industry. It defines a standard on how to package goods. In this case, goods are the application(s) developers write. The suppliers (in this case, the developers) are responsible for packaging the goods into the container and making sure everything fits as expected. Once the goods are packaged into a container, it can be shipped. Since it is a standard container, the shippers can standardize their means of transportation, such as lorries, trains, or ships. The shipper does not really care what is in the container. Also, the loading and unloading process from one means of transportation to another (for example, train to ship) can be highly standardized. This massively increases the efficiency of transportation. Analogous to this is an operations engineer in IT, who can take a software container built by a developer and ship it to a production system and run it there in a highly standardized way, without worrying about what is in the container. It will just work.
  3. Some of the reasons why containers are game-changers are as follows:
    • Containers are self-contained and thus if they run on one system, they run anywhere that a Docker container can run.
    • Containers run on-premises and in the cloud, as well as in hybrid environments. This is important for today’s typical enterprises since it allows a smooth transition from on-premises to the cloud.
    • Container images are built or packaged by the people who know best – the developers.
    • Container images are immutable, which is important for good release management.
    • Containers are enablers of a secure software supply chain based on encapsulation (using Linux namespaces and cgroups), secrets, content trust, and image vulnerability scanning.
  4. A container runs on any system that can host containers. This is possible for the following reasons:
    • Containers are self-contained black boxes. They encapsulate not only an application but also all its dependencies, such as libraries and frameworks, configuration data, certificates, and so on.
    • Containers are based on widely accepted standards such as OCI.
  5. The answer is false. Containers are useful for modern applications as well as for containerizing traditional applications. The benefits for an enterprise when doing the latter are huge. Cost savings in the maintenance of legacy apps of 50% or more have been reported. The time between new releases of such legacy applications could be reduced by up to 90%. These numbers have been publicly reported by real enterprise customers.
  6. 50% or more.
  7. Containers are based on Linux namespaces (network, process, user, and so on) and cgroups. The former help isolate processes running on the same machine, while the latter are used to limit the resources a given process can access, such as memory or network bandwidth.
  8. Docker Desktop is available for macOS, Windows, and Linux.

Key benefits

  • Master Docker container setup, operation, and debugging
  • Use Docker Compose for managing multi-service applications
  • Navigate orchestrators like Kubernetes and Docker SwarmKit
  • Purchase of the print or Kindle book includes a free PDF eBook

Description

The Ultimate Docker Container Book, 3rd edition, enables you to leverage Docker containers for streamlined software development. You’ll uncover Docker fundamentals and how containers improve software supply chain efficiency and enhance security. You’ll start by learning practical skills such as setting up Docker environments, handling stateful components, running and testing code within containers, and managing Docker images. You’ll also explore how to adapt legacy applications for containerization and understand distributed application architecture. Next, you’ll delve into Docker’s networking model, software-defined networks for secure applications, and Docker Compose for managing multi-service applications, along with tools for log analysis and metrics. You’ll further deepen your understanding of popular orchestrators like Kubernetes and Docker SwarmKit, exploring their key concepts and deployment strategies for resilient applications. In the final sections, you’ll gain insights into deploying containerized applications on major cloud platforms, including Azure, AWS, and GCE, and discover techniques for production monitoring and troubleshooting. By the end of this book, you’ll be well-equipped to manage and scale containerized applications effectively.

Who is this book for?

This book is for Linux professionals, system administrators, operations engineers, DevOps engineers, software architects, and developers looking to work with Docker and Kubernetes from scratch. A basic understanding of Docker containers is recommended, but no prior knowledge of Kubernetes is required. Familiarity with scripting tools such as Bash or PowerShell will be advantageous.

What you will learn

  • Understand the benefits of using containers
  • Manage Docker containers effectively
  • Create and manage Docker images
  • Explore data volumes and environment variables
  • Master distributed application architecture
  • Deep dive into Docker networking
  • Use Docker Compose for multi-service apps
  • Deploy apps on major cloud platforms

Product Details

Publication date: Aug 31, 2023
Length: 626 pages
Edition: 3rd
Language: English
ISBN-13: 9781804613184




Table of Contents

Part 1: Introduction
Chapter 1: What Are Containers and Why Should I Use Them?
Chapter 2: Setting Up a Working Environment
Part 2: Containerization Fundamentals
Chapter 3: Mastering Containers
Chapter 4: Creating and Managing Container Images
Chapter 5: Data Volumes and Configuration
Chapter 6: Debugging Code Running in Containers
Chapter 7: Testing Applications Running in Containers
Chapter 8: Increasing Productivity with Docker Tips and Tricks
Part 3: Orchestration Fundamentals
Chapter 9: Learning about Distributed Application Architecture
Chapter 10: Using Single-Host Networking
Chapter 11: Managing Containers with Docker Compose
Chapter 12: Shipping Logs and Monitoring Containers
Chapter 13: Introducing Container Orchestration
Chapter 14: Introducing Docker Swarm
Chapter 15: Deploying and Running a Distributed Application on Docker Swarm
Part 4: Docker, Kubernetes, and the Cloud
Chapter 16: Introducing Kubernetes
Chapter 17: Deploying, Updating, and Securing an Application with Kubernetes
Chapter 18: Running a Containerized Application in the Cloud
Chapter 19: Monitoring and Troubleshooting an Application Running in Production
Index
Other Books You May Enjoy

Customer reviews

Rating: 4.5 out of 5 (11 ratings): 72.7% 5 star, 18.2% 4 star, 0% 3 star, 0% 2 star, 9.1% 1 star

  • N/A, Nov 26, 2023, 5 stars (Feefo verified review): "It is an insightful read, to the point and not leaning too much to Kubernetes."
  • N/A, Feb 11, 2024, 5 stars (Feefo verified review)
  • N/A, Feb 08, 2024, 5 stars (Feefo verified review): "Very thorough explanation and easy to follow labs by the author."
  • Mike Schwartz, Oct 14, 2023, 5 stars (Amazon verified review): "The book goes into detail about how to use Docker and Kubernetes to develop and test containerized applications and to deploy them to the cloud. It is a great addition to any developer’s library. If you want to learn how to use containers, this book is for you!"
  • Tomica Kaniski, Oct 06, 2023, 5 stars (Amazon verified review): "If you need a book on Docker that contains everything about Docker, this may be the resource for you. It goes through all the topics needed to learn about Docker, containers, and container orchestration, either on-premises or in cloud(s). With 600+ pages, it's a long read, but if you know some of the topics already, you may even skip some chapters. I would use it to learn Docker from scratch."
