Docker on Amazon Web Services

Container and Docker Fundamentals

Docker and Amazon Web Services (AWS) are two of the hottest and most popular technologies available today. Docker is the most popular container platform on the planet, while AWS is the number one public cloud provider. Organizations both large and small are adopting containers en masse, and the public cloud is no longer the playground of start-ups, with large enterprises and organizations migrating to the cloud in droves. The good news is that this book will give you practical, real-world insights and knowledge of how to use Docker and AWS together to help you test, build, publish, and deploy your applications faster and more efficiently than ever before.

In this chapter, we will briefly discuss the history of Docker, why Docker is so revolutionary, and the high-level architecture of Docker. We will describe the various services that support running Docker in AWS, and discuss why you might choose one service over another based upon the requirements of your organization.

We will then focus on getting your local environment up and running with Docker, and install the various software prerequisites required to run the sample application for this book. The sample application is a simple web application written in Python that stores data in a MySQL database, and this book will use it to help you solve real-world challenges such as testing, building, and publishing Docker images, as well as deploying and running Docker applications on a variety of container management platforms in AWS. Before you can package the sample application as a Docker image, you need to understand the application's external dependencies and the key tasks required to test, build, deploy, and run it. You will learn how to install application dependencies, run unit tests, start the application locally, and orchestrate key operational tasks such as establishing the initial database schema and tables required for the sample application to run.

The following topics will be covered in this chapter:

  • Introduction to containers and Docker
  • Why containers are revolutionary
  • Docker architecture
  • Docker in AWS
  • Setting up a local Docker environment
  • Installing the sample application

Technical requirements

Introduction to containers and Docker

In recent times, containers have become a lingua franca of the technology world, and it's difficult to remember that just a few years ago only a small portion of the technology community had even heard of them.

To trace the origins of containers, you need to rewind way back to 1979, when Unix V7 introduced the chroot system call.  The chroot system call provided the ability to change the root directory of a running process to a different location in the file system, and was the first mechanism available to provide some form of process isolation. chroot was added to the Berkeley Software Distribution (BSD) in 1982 (this is an ancestor of the modern macOS operating system), and not much more happened in terms of containerization and isolation for a number of years, until a feature called FreeBSD Jails was released in 2000, which provided separate environments called "jails" that could each be assigned their own IP address and communicate independently on the network.

Later, in 2004, Sun launched the first public beta of Solaris Containers (which eventually became known as Solaris Zones), which provided system resource separation by creating zones. This was a technology I remember using back in 2007 to help overcome a lack of expensive physical Sun SPARC infrastructure and run multiple versions of an application on a single SPARC server.

In the mid-2000s, a lot more progress in the march toward containers occurred, with Open Virtuozzo (OpenVZ) being released in 2005, which patched the Linux kernel to provide operating system-level virtualization and isolation. In 2006, Google launched a feature called process containers (eventually renamed control groups, or cgroups) that provided the ability to restrict CPU, memory, network, and disk usage for a set of processes. In 2008, a feature called Linux namespaces, which provided the ability to isolate different types of resources from each other, was combined with cgroups to create Linux Containers (LXC), forming the initial foundation of modern containers as we know them today.

In 2010, as cloud computing was starting to gain popularity, a number of Platform-as-a-Service (PaaS) start-ups appeared, which provided fully managed runtime environments for specific application frameworks such as Java Tomcat or Ruby on Rails. One start-up called dotCloud was quite different, in that it was the first "polyglot" PaaS provider, meaning that you could run virtually any application environment you wanted using their service. The technology underpinning this was Linux Containers, and dotCloud added a number of proprietary features to provide a fully managed container platform for their customers. By 2013, the PaaS market had well and truly entered the trough of disillusionment of the Gartner hype cycle (https://en.wikipedia.org/wiki/Hype_cycle), and dotCloud was on the brink of financial collapse. One of the co-founders of the company, Solomon Hykes, pitched an idea to the board to open source their container management technology, sensing that there was huge potential. The board disagreed, however Solomon and his technical team proceeded regardless, and the rest, as they say, is history.

After Docker was announced as a new open source container management platform in 2013, it quickly rose to prominence, becoming the darling of the open source world and vendor community alike, and is likely one of the fastest-growing technologies in history. By the end of 2014, by which time Docker 1.0 had been released, over 100 million Docker containers had been downloaded; fast forward to March 2018, and that number sat at 37 billion downloads. At the end of 2017, container usage amongst Fortune 100 companies sat at 71%, indicating that Docker and containers have become universally accepted by start-ups and enterprises alike. Today, if you are building modern, distributed applications based upon microservice architectures, chances are that your technology stack will be underpinned by Docker and containers.

Why containers are revolutionary

The brief and successful history of containers speaks for itself, which leads to the question: why are containers so popular? The following are some of the more important answers to this question:

  • Lightweight: Containers are often compared to virtual machines, and in this context, containers are much more lightweight than virtual machines. A container can start up an isolated and secure runtime environment for your application in seconds, compared with the handful of minutes a typical virtual machine takes to start. Container images are also much smaller than their virtual machine counterparts.
  • Speed: Containers are fast: they can be downloaded and started within seconds, and within a few minutes you can test, build, and publish your Docker image for immediate download. This allows organizations to innovate faster, which is critical in today's increasingly competitive landscape.
  • Portable: Docker makes it easier than ever to run your applications on your local machine, in your data center, and in the public cloud. Because Docker images are complete runtime environments for your application, complete with operating system dependencies and third-party packages, your container hosts don't require any special prior setup or configuration specific to each individual application; all of these specific dependencies and requirements are self-contained within the Docker image, making comments like "But it worked on my machine!" relics of the past.
  • Security: There has been a lot of debate about the security of containers, but in my opinion, if implemented correctly, containers actually offer greater security than non-container alternatives. The main reason for this is that containers express security context very well; applying security controls at the container level typically represents the right level of context for those controls. A lot of these security controls are provided by default; for example, namespaces are inherently a security mechanism in that they provide isolation. A more explicit example is that you can apply SELinux or AppArmor profiles on a per-container basis, making it very easy to define different profiles depending on the specific security requirements of each container.
  • Automation: Organizations are adopting software delivery practices such as continuous delivery, where automation is a fundamental requirement. Docker natively supports automation at its core: a Dockerfile is an automation specification of sorts that allows the Docker client to automatically build your containers, and other Docker tools such as Docker Compose allow you to express connected multi-container environments that you can automatically create and tear down in seconds (a minimal sketch follows this list).
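To make the automation point concrete, here is a minimal, hypothetical sketch of a Dockerfile and a Docker Compose file; the base image, file names, ports, and service definitions are illustrative assumptions rather than anything taken from the sample application used later in this book:

# Dockerfile - a simple automated build specification
FROM python:3.7-alpine
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD ["python3", "app.py"]

# docker-compose.yml - a connected multi-container environment
version: '3'
services:
  web:
    build: .
    ports:
      - "8000:8000"
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: password

With these two files in place, docker build produces an image from the Dockerfile, while docker-compose up and docker-compose down create and tear down the complete multi-container environment respectively.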

Docker architecture

As discussed in the preface of this book, I assume that you have at least a basic working knowledge of Docker. If you are new to Docker, then I recommend that you supplement your learning in this chapter by reading the Docker overview at https://docs.docker.com/engine/docker-overview/, and running through some of the Docker tutorials at https://docs.docker.com/get-started/.

The Docker architecture includes several core components, as follows:

  • Docker Engine: This provides several server components for running your container workloads, including an API server for communicating with Docker clients, and the Docker daemon that provides the core runtime of Docker. The daemon is responsible for the complete life cycle of your containers and other resources, and also ships with built-in clustering support that allows you to build clusters, or swarms, of Docker Engines.
  • Docker client: This provides a client for building Docker images, running Docker containers, and managing other resources such as Docker volumes and Docker networks. The Docker client is the primary tool you will work with when using Docker, and interacts with both the Docker Engine and Docker registry components.
  • Docker registry: This is responsible for storing and distributing Docker images for your application.  Docker supports both public and private registries, and the ability to package and distribute your applications via a Docker registry is one of the major reasons for Docker's success.  In this book, you will download third-party images from Docker Hub, and you will store your own application images in the private AWS registry service called Elastic Container Registry (ECR).
  • Docker Swarm: A swarm is a collection of Docker Engines that form a self-managing and self-healing cluster, allowing you to horizontally scale your container workloads and provide resiliency in the event of Docker Engine failures. A Docker Swarm cluster includes a number of master nodes that form the cluster control plane, and a number of worker nodes that actually run your container workloads.

When you work with the preceding components, you interact with a number of different types of objects in the Docker architecture (a brief command-line sketch follows this list):

  • Images: An image is built using a Dockerfile, which includes a number of instructions on how to build the runtime environment for your containers.  The result of executing each of these build instructions is stored as a set of layers and is distributed as a downloadable and installable image, and the Docker Engine reads the instructions in each layer to construct a runtime environment for all containers based on a given image.
  • Containers: A container is the runtime manifestation of a Docker image. Under the hood, a container is comprised of a collection of Linux namespaces, control groups, and storage that collectively create an isolated runtime environment from which you can run a given application process.
  • Volumes: By default, the underlying storage mechanism for containers is based upon the union file system, which allows a virtual file system to be constructed from the various layers in a Docker image. This approach is very efficient in that you can share layers and build up multiple containers from these shared layers; however, it does have a performance penalty and does not support persistence. Docker volumes provide access to a dedicated pluggable storage medium, which your containers can use for IO-intensive applications and to persist data.
  • Networks: By default, Docker containers each operate in their own network namespace, which provides isolation between containers; however, containers still need network connectivity to other containers and the outside world. Docker supports a variety of network plugins that provide connectivity between containers, which can even extend across Docker Swarm clusters.
  • Services: A service provides an abstraction that allows you to scale your applications by spinning up multiple containers or replicas of your service that can be load balanced across multiple Docker Engines in a Docker Swarm cluster.
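As a brief illustration of how these objects fit together on the command line, the following uses the public alpine image with arbitrary example names for the network, volume, and container; none of these names are used elsewhere in this book:

# Create a user-defined network and a named volume
> docker network create example-net
> docker volume create example-data

# Run a container from the alpine image, attached to the network and volume
> docker run -d --name example --network example-net -v example-data:/data alpine sleep 300

# List running containers, then clean everything up
> docker ps
> docker rm -f example
> docker volume rm example-data
> docker network rm example-net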

Running Docker in AWS

Along with Docker, the other major technology platform we will target in this book is AWS.   

AWS is the world's leading public cloud provider, and as such offers a variety of ways to run your Docker applications:

  • Elastic Container Service (ECS): In 2014, AWS launched ECS, which was the first dedicated public cloud offering to support Docker. ECS provides a hybrid managed service of sorts, where ECS is responsible for orchestrating and deploying your container applications (that is, the control plane of a container management platform), and you are responsible for providing the Docker Engines (referred to as ECS container instances) that your containers will actually run on. ECS is free to use (you only pay for the ECS container instances that run your containers), and removes much of the complexity of managing container orchestration and ensuring your applications are always up and running. However, it does require you to manage the EC2 infrastructure that runs your ECS container instances. ECS is considered Amazon's flagship Docker service, and as such will be the primary service that we focus on in this book.
  • Fargate: Fargate was launched in late 2017 and provides a fully managed container platform that manages both the ECS control plane and ECS container instances for you. With Fargate, your container applications are deployed onto shared ECS container instance infrastructure that you have no visibility of and that AWS manages, allowing you to focus on building, testing, and deploying your container applications without having to worry about any underlying infrastructure. Fargate is a fairly new service that, at the time of writing, has limited regional availability, and has some constraints that mean it is not suitable for all use cases. We will cover the Fargate service in Chapter 14, Fargate and ECS Service Discovery.
  • Elastic Kubernetes Service (EKS): EKS launched in June 2018 and supports the popular open source Kubernetes container management platform. EKS is similar to ECS in that it is a hybrid managed service where Amazon provides fully managed Kubernetes master nodes (the Kubernetes control plane), and you provide Kubernetes worker nodes in the form of EC2 auto scaling groups that run your container workloads. Unlike ECS, EKS is not free and, at the time of writing, costs USD 0.20 per hour, plus any EC2 infrastructure costs associated with your worker nodes. Given the ever-growing popularity of Kubernetes as a cloud/infrastructure-agnostic container management platform, along with its open source community, EKS is sure to become very popular, and we will provide an introduction to Kubernetes and EKS in Chapter 17, Elastic Kubernetes Service.
  • Elastic Beanstalk: Elastic Beanstalk is a popular Platform as a Service (PaaS) offering from AWS that provides a complete and fully managed environment targeting different types of popular programming languages and application frameworks such as Java, Python, Ruby, and Node.js. Elastic Beanstalk also supports Docker applications, allowing you to run a wide variety of applications written in the programming language of your choice. You will learn how to deploy a multi-container Docker application in Chapter 15, Elastic Beanstalk.
  • Docker Swarm in AWS: Docker Swarm is the native container management and clustering platform built into Docker, which leverages the native Docker and Docker Compose tool chain to manage and deploy your container applications. At the time of writing, AWS does not provide a managed offering for Docker Swarm; however, Docker provides a CloudFormation template (CloudFormation is a free Infrastructure as Code automation and management service provided by AWS) that allows you to quickly deploy a Docker Swarm cluster in AWS that integrates with native AWS offerings, including the Elastic Load Balancing (ELB) and Elastic Block Store (EBS) services. We will cover all of this and more in the chapter Docker Swarm in AWS.

  • CodeBuild: AWS CodeBuild is a fully managed build service that supports continuous delivery use cases by providing a container-based build agent that you can use to test, build, and deploy your applications without having to manage any of the infrastructure traditionally associated with continuous delivery systems.  CodeBuild uses Docker as its container platform for spinning up build agents on demand, and you will be introduced to CodeBuild along with other continuous delivery tools such as CodePipeline in the chapter Continuously Delivering ECS Applications.
  • Batch: AWS Batch provides a fully managed service based upon ECS that allows you to run container-based batch workloads without needing to worry about managing or maintaining any supporting infrastructure.  We will not be covering AWS Batch in this book, however you can learn more about this service at https://aws.amazon.com/batch/.

With such a wide variety of options to run your Docker applications on AWS, it is important to be able to choose the right solution based upon the requirements of your organization or specific use cases.

If you are a small to medium-sized organization that wants to get up and running quickly with Docker on AWS, and you don't want to manage any supporting infrastructure, then Fargate or Elastic Beanstalk are options that you may prefer. Fargate supports native integration with key AWS services, and is a building-block component that doesn't dictate how you build, deploy, or operate your applications. At the time of writing, Fargate is not available in all regions, is comparatively expensive when compared to other solutions, and has some limitations, such as not being able to support persistent storage. Elastic Beanstalk provides a comprehensive end-to-end solution for managing your Docker applications, providing a variety of integrations out of the box, and includes operational tooling to manage the complete life cycle of your applications. However, Elastic Beanstalk does require you to buy into a very opinionated framework and methodology of how to build, deploy, and run your applications, and can be difficult to customize to meet your needs.

If you are a larger organization that has specific requirements around security and compliance, or just wants greater flexibility and control over the infrastructure that runs your container workloads, then you should consider ECS, EKS, and Docker Swarm. ECS is the native flagship container management platform of choice for AWS, and as such has a large customer base that has been running containers at scale for a number of years. As you will learn in this book, ECS is integrated with CloudFormation, which allows you to define all of your clusters, application services, and container definitions using an Infrastructure as Code approach that can be combined with other AWS resources, giving you the ability to deploy complete, complex environments with the click of a button. That said, the main criticism of ECS is that it is a proprietary solution specific to AWS, meaning that you can't use it in other cloud environments or run it on your own infrastructure. Increasingly, larger organizations are looking to infrastructure- and cloud-agnostic container management platforms, and this is where you should consider EKS or Docker Swarm if these are your goals. Kubernetes has taken the container orchestration world by storm, and is now one of the largest and most popular open source projects. AWS now offers a managed Kubernetes service in the form of EKS, which makes it very easy to get Kubernetes up and running in AWS and leverage core integrations with CloudFormation, and the Elastic Load Balancing (ELB) and Elastic Block Store (EBS) services. Docker Swarm is a competitor to Kubernetes, and although it seems to have lost the battle for container orchestration supremacy to Kubernetes, it does have the advantage of being a native, out-of-the-box feature integrated with Docker that is very easy to get up and running using familiar Docker tools. Docker does currently publish CloudFormation templates and supports key integrations with AWS services that make it very easy to get up and running in AWS. However, there are concerns around the longevity of such solutions, given that Docker Inc. is a commercial entity and the ever-growing popularity and dominance of Kubernetes may force Docker Inc. to focus solely on its paid Docker Enterprise Edition and other commercial offerings in the future.

As you can see, there are many considerations when it comes to choosing a solution that is right for you, and the great thing about this book is that you will learn how to use each of these approaches to deploy and run your Docker applications in AWS. Regardless of which solution sounds more suited to you right now, I encourage you to read through and complete all of the chapters in this book, as much of the content you will learn for one specific solution can be applied to the others, and you will be in a much better position to tailor and build a comprehensive container management solution based upon your desired outcomes.

Setting up a local Docker environment

With introductions out of the way, it is time to get started and set up a local Docker environment that you will use to test, build, and deploy a Docker image for the sample application used in this book. For now, we will focus on getting Docker up and running; however, note that later on we will also use your local environment to interact with the various container management platforms discussed in this book, and to manage all of your AWS resources using the AWS console, the AWS command-line interface, and the AWS CloudFormation service.

Although this book is titled Docker on Amazon Web Services, it is important to note that Docker containers come in two flavors:

  • Linux containers
  • Windows containers

This book is exclusively focused on Linux containers, which are designed to run on a Linux-based kernel with the Docker Engine installed. To build, test, and run Linux containers locally, you must have access to a local Linux-based Docker Engine. If you are running a Linux-based system such as Ubuntu, you can install the Docker Engine natively in your operating system. However, if you are using Windows or macOS, you need to set up a local virtual machine that runs the Docker Engine, and install a Docker client for your operating system.

Luckily, Docker provides great packaging and tooling that makes this process very simple on Windows and macOS, and we will now discuss how to set up a local Docker environment for macOS, Windows 10, and Linux, along with other tools that will be used in this book such as Docker Compose and GNU Make. For Windows 10 environments, I will also cover how to set up the Windows Subsystem for Linux to interact with your local Docker installation, which will provide you with access to an environment where you can run the other Linux-based tools used throughout this book.

Before we continue, it's also important to note that from a licensing perspective, Docker is currently available in two different editions, which you can learn more about at https://docs.docker.com/install/overview/:

  • Community edition (CE)
  • Enterprise edition (EE)

We will be working exclusively with the free community edition (Docker CE), which includes the core Docker Engine.  Docker CE is suitable for use with all of the technologies and services we will cover in this book, including Elastic Container Service (ECS), Fargate, Docker Swarm, Elastic Kubernetes Service (EKS), and Elastic Beanstalk.  

Along with Docker, we also need a few other tools to help automate a number of build, test, and deployment tasks that we will be performing throughout this book:

  • Docker Compose: This allows you to orchestrate and run multi-container environments both locally and on Docker Swarm clusters
  • Git: This is required to fork and clone the sample application from GitHub and create your own Git repositories for the various applications and environments you will create in this book
  • GNU Make 3.82 or higher: This provides task automation, allowing you to run simple commands (for example, make test) to execute a given task (see the hypothetical Makefile sketch after this list)
  • jq: A command-line utility for parsing JSON
  • curl: A command-line HTTP client
  • tree: A command-line client for displaying folder structures in the shell
  • Python interpreter: This is required for Docker Compose and the AWS Command-Line Interface (CLI) tool that we will install in a later chapter
  • pip: A Python package manager for installing Python applications such as the AWS CLI
Some of the tools used in this book are representative only, meaning that you can replace them with alternatives if you desire.  For example, you could replace GNU Make with another tool to provide task automation.
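As an example of the kind of task automation GNU Make provides, the following is a hypothetical Makefile sketch; the target names and the commands they run are illustrative assumptions, not the actual workflow that will be built for the sample application:

# Makefile - simple task automation targets (illustrative only)
.PHONY: test build clean

test:
	docker-compose run --rm app python3 manage.py test

build:
	docker build -t example-app .

clean:
	docker-compose down -v

With a file like this in place, running make test or make build executes the corresponding commands, so you don't need to remember the full command lines for routine tasks.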

One other important tool that you will need is a decent text editor; Visual Studio Code (https://code.visualstudio.com/) and Sublime Text (https://www.sublimetext.com/) are excellent choices that are available on Windows, macOS, and Linux.

Now, let's discuss how to install and configure your local Docker environment for the following operating systems:

  • macOS
  • Windows 10
  • Linux

Setting up a macOS environment

If you are running macOS, the quickest way to get Docker up and running is to install Docker for Mac, which you can read more about at https://docs.docker.com/docker-for-mac/install/ and download from https://store.docker.com/editions/community/docker-ce-desktop-mac.  Under the hood, Docker for Mac leverages the native macOS  hypervisor framework, creating a Linux virtual machine to run the Docker Engine and installing a Docker client in your local macOS environment.

You will first need to create a free Docker Hub account in order to proceed, and once you have completed registration and logged in, click the Get Docker button to download the latest version of Docker:

Downloading Docker for Mac

Once you have completed the download, open the download file, drag the Docker icon to the Applications folder, and then run Docker:

Installing Docker

Proceed through the Docker installation wizard and once complete, you should see a Docker icon on your macOS toolbar:

Docker icon on macOS toolbar

If you click on this icon and select Preferences, a Docker Preferences dialog will be displayed, which allows you to configure various Docker settings.  One setting you may want to immediately change is the memory allocated to the Docker Engine, which in my case I have increased from the default of 2 GB to 8 GB:

Increasing memory

At this point, you should be able to start up a Terminal and run the docker info command:

> docker info
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 0
Server Version: 18.06.0-ce
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
...
...

You can also start a new container using the docker run command:

> docker run -it alpine echo "Hello World"
Unable to find image 'alpine:latest' locally
latest: Pulling from library/alpine
ff3a5c916c92: Pull complete
Digest: sha256:e1871801d30885a610511c867de0d6baca7ed4e6a2573d506bbec7fd3b03873f
Status: Downloaded newer image for alpine:latest
Hello World
> docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS
a251bd2c53dd alpine "echo 'Hello World'" 3 seconds ago Exited (0) 2 seconds ago
> docker rm a251bd2c53dd
a251bd2c53dd

In the preceding example, you run the alpine image, which is based on the lightweight Alpine Linux distribution, with the echo "Hello World" command. The -it flags specify that the container should run in an interactive terminal environment, which allows you to see standard output and also interact with the container via a console.

Once the container exits, you can use the docker ps command to show running containers, and append the -a flag to show both running and stopped containers.  Finally, you can use the docker rm command to remove a stopped container.

Installing other tools

As discussed earlier in this section, we also require a number of other tools to help automate various build, test, and deployment tasks. On macOS, some of these tools are already included, as outlined here:

  • Docker Compose: This is already included when you install Docker for Mac.
  • Git: When you install the Homebrew package manager (we will discuss Homebrew shortly), XCode command-line utilities are installed, which include Git.  If you use another package manager, you may need to install Git using your package manager.
  • GNU Make 3.82 or higher: macOS includes Make 3.81, which doesn't quite meet the requirements of version 3.82, therefore you need to install GNU Make using a third-party package manager such as Homebrew.
  • curl: This is included by default with macOS, and therefore requires no installation.
  • jq and tree: These are not included by default in macOS, and therefore they need to be installed via a third-party package manager such as Homebrew.
  • Python interpreter: macOS includes a system installation of Python that you can use to run Python applications; however, I recommend leaving the system Python installation alone and instead installing Python using the Homebrew package manager (https://docs.brew.sh/Homebrew-and-Python).
  • pip: The system install of Python does not include the popular PIP Python package manager, hence you must install this separately if using the system Python interpreter.  If you choose to install Python using Homebrew, this will include PIP.

The easiest way to install the preceding tools on macOS is to first install a third-party package manager called Homebrew.  You can install Homebrew by simply browsing to the Homebrew homepage at https://brew.sh/:

Installing Homebrew

Simply copy and paste the highlighted command into your terminal prompt, which will automatically install the Homebrew package manager.  Once complete, you will be able to install each of the previously listed utilities using the brew command:

> brew install make --with-default-names
==> Downloading https://ftp.gnu.org/gnu/make/make-4.2.1.tar.bz2
Already downloaded: /Users/jmenga/Library/Caches/Homebrew/make-4.2.1.tar.bz2
==> ./configure --prefix=/usr/local/Cellar/make/4.2.1_1
==> make install
/usr/local/Cellar/make/4.2.1_1: 13 files, 959.5KB, built in 29 seconds
> brew install jq tree
==> Downloading https://homebrew.bintray.com/bottles/jq-1.5_3.high_sierra.bottle.tar.gz
Already downloaded: /Users/jmenga/Library/Caches/Homebrew/jq-1.5_3.high_sierra.bottle.tar.gz
==> Downloading https://homebrew.bintray.com/bottles/tree-1.7.0.high_sierra.bottle.1.tar.gz
Already downloaded: /Users/jmenga/Library/Caches/Homebrew/tree-1.7.0.high_sierra.bottle.1.tar.gz
==> Pouring jq-1.5_3.high_sierra.bottle.tar.gz
/usr/local/Cellar/jq/1.5_3: 19 files, 946.6KB
==> Pouring tree-1.7.0.high_sierra.bottle.1.tar.gz
/usr/local/Cellar/tree/1.7.0: 8 files, 114.3KB

In the preceding example, GNU Make is installed with the --with-default-names flag, which replaces the system version of Make that ships with macOS. If you prefer to omit this flag, the GNU version of make will be available via the gmake command, and the existing system version of make will not be affected.

Finally, to install Python using Homebrew, you can run the brew install python command, which will install Python 3 and also install the PIP package manager.  Note that when you use brew to install Python 3, the Python interpreter is accessed via the python3 command, while the PIP package manager is accessed via the pip3 command rather than the pip command:

> brew install python
==> Installing dependencies for python: gdbm, openssl, readline, sqlite, xz
...
...
==> Caveats
Python has been installed as
/usr/local/bin/python3

Unversioned symlinks `python`, `python-config`, `pip` etc. pointing to
`python3`, `python3-config`, `pip3` etc., respectively, have been installed into
/usr/local/opt/python/libexec/bin

If you need Homebrew's Python 2.7 run
brew install python@2

Pip, setuptools, and wheel have been installed. To update them run
pip3 install --upgrade pip setuptools wheel

You can install Python packages with
pip3 install <package>
They will install into the site-package directory
/usr/local/lib/python3.7/site-packages

See: https://docs.brew.sh/Homebrew-and-Python
==> Summary
/usr/local/Cellar/python/3.7.0: 4,788 files, 102.2MB

On macOS, if you use a Python that has been installed via brew or another package manager, you should also add the site module USER_BASE/bin folder to your local path, as this is where PIP will install any applications or libraries that you install with the --user flag (the AWS CLI is an example of such an application, which you will install this way later in this book):

> python3 -m site --user-base
/Users/jmenga/Library/Python/3.7
> echo 'export PATH=/Users/jmenga/Library/Python/3.7/bin:$PATH' >> ~/.bash_profile
> source ~/.bash_profile
Ensure that you use single quotes in the preceding example, which ensures the reference to $PATH is not expanded in your shell session and is instead written as a literal value to the .bash_profile file.

In the preceding example, you call the site module with the --user-base flag, which tells you where user binaries will be installed. You can then add the bin subfolder of this path to your PATH variable and append this to the .bash_profile file in your home directory, which is executed whenever you spawn a new shell, ensuring that you will always be able to execute Python applications that have been installed with the --user flag.  Note that you can use the source command to process the .bash_profile file immediately without having to log out and log back in.

Setting up a Windows 10 environment

Just like for macOS, if you are running Windows 10, the quickest way to get Docker up and running is to install Docker for Windows, which you can read more about at https://docs.docker.com/docker-for-windows/ and download from https://store.docker.com/editions/community/docker-ce-desktop-windows.  Under the hood, Docker for Windows leverages the native Windows hypervisor called Hyper-V, creating a virtual machine to run the Docker Engine and installing a Docker client for Windows.

You will first need to create a free Docker Hub account in order to proceed, and once you have completed registration and logged in, click the Get Docker button to download the latest version of Docker for Windows.

Once you have completed the download, start the installation and ensure that the Use Windows containers option is NOT selected:

Using Linux containers

The installation will continue and you will be asked to log out of Windows to complete the installation. After logging back into Windows, you will be prompted to enable Windows Hyper-V and Containers features:

Enabling Hyper-V

Your computer will now enable the required Windows features and reboot.  Once you have logged back in, open the Docker for Windows application and ensure that you select the Expose daemon on tcp://localhost:2375 without TLS option:

Enabling legacy client access to Docker

This setting must be enabled in order to allow the Windows subsystem for Linux to access the Docker Engine.

Installing the Windows subsystem for Linux

Now that you have installed Docker for Windows, you next need to install the Windows subsystem for Linux, which provides a Linux environment where you can install the Docker client, Docker Compose, and the other tools we will use throughout this book.

If you are using Windows, then throughout this book I am assuming that you are using the Windows subsystem for Linux as your shell environment.

To enable the Windows subsystem for Linux, you need to run PowerShell as an Administrator (right-click the PowerShell program and select Run as Administrator) and then run the following command:

PS > Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Windows-Subsystem-Linux

After enabling this feature, you will be prompted to reboot your machine. Once your machine has rebooted, you need to install a Linux distribution. You can find links to the various distributions in the article at https://docs.microsoft.com/en-us/windows/wsl/install-win10 (see step 1 of Install your Linux Distribution of Choice).

For example, the link for Ubuntu is https://www.microsoft.com/p/ubuntu/9nblggh4msv6, and if you click on Get the app, you will be directed to the Microsoft Store app on your local machine, where you can download the application for free:

Ubuntu distribution for Windows

Once the download is complete, click on the Launch button, which will run the Ubuntu installer and install Ubuntu on the Windows subsystem for Linux.  You will be prompted to enter a username and password, and assuming you are using the Ubuntu distribution, you can run the lsb_release -a command to show the specific version of Ubuntu that was installed:

Installing the Ubuntu distribution for Windows
The information that has been provided is for recent versions of Windows 10.  For older versions of Windows 10, you may need to follow the instructions at https://docs.microsoft.com/en-us/windows/wsl/install-win10#for-anniversary-update-and-creators-update-install-using-lxrun.

Note that the Windows file system is mounted into the Windows Subsystem for Linux under /mnt/c (where c corresponds to the Windows C: drive), so in order to use a text editor installed on Windows to modify files that you can access in the Linux subsystem, you may want to change your home directory to your Windows home folder under /mnt/c/Users/<user name>, as follows:

> exec sudo usermod -d /mnt/c/Users/jmenga jmenga
[sudo] password for jmenga:

Note that the Linux subsystem will exit immediately after entering the preceding command.  If you open the Linux subsystem again (click on the Start button and type Ubuntu), your home directory should now be your Windows home directory:

> pwd
/mnt/c/Users/jmenga
> echo $HOME
/mnt/c/Users/jmenga

Installing Docker in the Windows subsystem for Linux

Now that you have the Windows subsystem for Linux installed, you need to install the Docker client, Docker Compose, and other supporting tools in your distribution. In this section, I will assume that you are using the Ubuntu Xenial (16.04) distribution.

To install Docker, follow the instructions at https://docs.docker.com/install/linux/docker-ce/ubuntu/#install-docker-ce to install Docker:

> sudo apt-get update
Get:1 http://security.ubuntu.com/ubuntu xenial-security InRelease [107 kB]
Hit:2 http://archive.ubuntu.com/ubuntu xenial InRelease
Get:3 http://archive.ubuntu.com/ubuntu xenial-updates InRelease [109 kB]
...
...
> sudo apt-get install \
apt-transport-https \
ca-certificates \
curl \
software-properties-common
...
...
> curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
OK
> sudo add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) stable"
> sudo apt-get update
...
...
> sudo apt-get install docker-ce
...
...
> docker --version
Docker version 18.06.0-ce, build 0ffa825
> docker info
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

In the preceding example, you follow the various instructions to add the Docker CE repository to Ubuntu. After the installation is complete, you execute the docker --version command to check the installed version, and then the docker info command to connect to the Docker Engine. Notice that this fails, as the Windows Subsystem for Linux is a user-space component that does not include the necessary kernel components required to run a Docker Engine.

The Windows Subsystem for Linux is not a virtual machine technology; instead, it relies on kernel emulation features provided by the Windows kernel that make the underlying Windows kernel appear like a Linux kernel. This kernel emulation mode of operation does not support the various system calls required by containers, and hence cannot run the Docker Engine.

To enable the Windows subsystem for Linux to connect to the Docker Engine that was installed by Docker for Windows, you need to set the DOCKER_HOST environment variable to localhost:2375, which will configure the Docker client to connect to TCP port 2375 rather than attempt to connect to the default /var/run/docker.sock socket file:

> export DOCKER_HOST=localhost:2375
> docker info
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 0
Server Version: 18.06.0-ce
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
...
...
> echo "export DOCKER_HOST=localhost:2375" >> ~/.bash_profile

Because you enabled the Expose daemon on tcp://localhost:2375 without TLS option earlier when you installed Docker for Windows, the Docker client in the Windows Subsystem for Linux can now communicate with the Docker Engine running in the separate Hyper-V virtual machine that Docker for Windows created. You also add the export DOCKER_HOST command to the .bash_profile file in the home directory of your user, which is executed every time you spawn a new shell. This ensures that your Docker client will always attempt to connect to the correct Docker Engine.

Installing other tools in the Windows subsystem for Linux

At this point, you need to install the following supporting tools that we will be using throughout this book in the Windows Subsystem for Linux:

  • Python
  • pip package manager
  • Docker Compose
  • Git
  • GNU Make
  • jq
  • Build essentials and Python development libraries (required to build dependencies of the sample application)

You just need to follow the normal Linux distribution procedures for installing each of the preceding components.  The Ubuntu 16.04 Windows subsystem for Linux distribution already includes Python 3, so you can run the following commands to install the pip package manager, and also set up your environment to be able to locate Python packages that you can install as user packages with the --user flag:

> curl -O https://bootstrap.pypa.io/get-pip.py
> python3 get-pip.py --user
Collecting pip
...
...
Installing collected packages: pip, setuptools, wheel
Successfully installed pip-10.0.1 setuptools-39.2.0 wheel-0.31.1
> rm get-pip.py
> python3 -m site --user-base
/mnt/c/Users/jmenga/.local
> echo 'export PATH=/mnt/c/Users/jmenga/.local/bin:$PATH' >> ~/.bash_profile
> source ~/.bash_profile

Now, you can install Docker Compose by using the pip install docker-compose --user command:

> pip install docker-compose --user
Collecting docker-compose
...
...
Successfully installed cached-property-1.4.3 docker-3.4.1 docker-compose-1.22.0 docker-pycreds-0.3.0 dockerpty-0.4.1 docopt-0.6.2 jsonschema-2.6.0 texttable-0.9.1 websocket-client-0.48.0
> docker-compose --version
docker-compose version 1.22.0, build f46880f

Finally, you can install Git, GNU Make, jq, tree, build essentials, and Python3 development libraries using the apt-get install command:

> sudo apt-get install git make jq tree build-essential python3-dev
Reading package lists... Done
Building dependency tree
...
...
Setting up jq (1.5+dfsg-1) ...
Setting up make (4.1-6) ...
Processing triggers for libc-bin (2.23-0ubuntu10) ...
> git --version
git version 2.7.4
> make --version
GNU Make 4.1
Built for x86_64-pc-linux-gnu
Copyright (C) 1988-2014 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
> jq --version
jq-1.5-1-a5b5cbe

Setting up a Linux environment

Docker is natively supported on Linux, meaning that you can install and run the Docker Engine in your local operating system without needing to set up a virtual machine. Docker officially supports a number of Linux distributions for installing and running Docker CE, including CentOS, Debian, Fedora, and Ubuntu; see https://docs.docker.com/install/ for the full list and distribution-specific instructions.
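As a rough sketch, installing Docker CE on Ubuntu follows the same repository setup steps shown earlier for the Windows Subsystem for Linux, followed by an optional post-installation step so that your user can run docker without sudo:

# Install Docker CE (after adding the Docker repository as shown earlier)
> sudo apt-get update
> sudo apt-get install docker-ce

# Optional: allow your user to run docker commands without sudo
> sudo usermod -aG docker $USER

# Log out and back in, then verify that the Docker Engine is running
> docker info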

Once you have installed Docker, you can install the various tools required to complete this book as follows:

  • Docker Compose: See the Linux tab at https://docs.docker.com/compose/install/. Alternatively, as you require Python to install the AWS CLI tool, you can use the pip Python package manager to install Docker Compose, as demonstrated earlier for Windows, by running pip install docker-compose.
  • Python, pip, Git, GNU Make, jq, tree, build essentials, and Python 3 development libraries: Use your Linux distribution's package manager (for example, yum or apt) to install these tools. See the preceding example for a demonstration of this when using Ubuntu Xenial.

Installing the sample application

Now that you have set up your local environment to support Docker and the various tools required to complete this book, it's time to install the sample application.

The sample application is a simple Todo items web service called todobackend that provides a REST API that allows you to create, read, update, and delete Todo items (for example, Wash the car or Walk the dog). The application is a Python application based on Django, a popular framework for creating web applications, which you can read more about at https://www.djangoproject.com/. Don't worry if you are not familiar with Python; the sample application is already created for you, and all you need to do as you read through this book is build and test the application, package and publish it as a Docker image, and then deploy it using the various container management platforms discussed in this book.

Forking the sample application

To install the sample application, you will need to fork the application from GitHub (we will discuss what this means shortly), which requires you to have an active GitHub account. If you already have a GitHub account, you can skip this step; however, if you don't have an account, you can sign up for a free account at https://github.com:

Signing up for GitHub

Once you have an active GitHub account, you can access the sample application repository at https://github.com/docker-in-aws/todobackend.  Rather than clone the repository, a better approach is to fork the repository, which means that a new repository will be created in your own GitHub account that is linked to the original todobackend repository (hence the term fork).  Forking is a popular pattern in the open source community, and allows you to make your own independent changes to the forked repository.  This is particularly useful for this book, as you will be making your own changes to the todobackend repository, adding a local Docker workflow to build, test, and publish the sample application as a Docker image, and other changes as you progress throughout this book.

To fork the repository, click on the Fork button located in the top right-hand corner:

Forking the todobackend repository

A few seconds after clicking the Fork button, a new repository should be created with the name <your-github-username>/todobackend. At this point, you can clone your fork of the repository by clicking on the Clone or download button. If you have just set up a new account, choose the Clone with HTTPS option and copy the URL that's presented:

Getting the Git URL for the todobackend repository

Open a new terminal and run the git clone <repository-url> command, where <repository-url> is the URL you copied in the preceding example, and then go into the newly created todobackend folder:

> git clone https://github.com/<your-username>/todobackend.git
Cloning into 'todobackend'...
remote: Counting objects: 231, done.
remote: Total 231 (delta 0), reused 0 (delta 0), pack-reused 231
Receiving objects: 100% (231/231), 31.75 KiB | 184.00 KiB/s, done.
Resolving deltas: 100% (89/89), done.
> cd todobackend
todobackend>

As you work through this chapter, I encourage you to commit any changes you make frequently, along with descriptive messages that clearly identify the changes you make. 

The sample repository includes a branch called final, which represents the final state of the repository after completing all chapters in this book. You can use this as a reference point if you run into any issues by running the command git checkout final. You can switch back to the master branch by running git checkout master.

If you are unfamiliar with Git, you can refer to any of the numerous tutorials online (for example, https://www.atlassian.com/git/tutorials), however in general you will need to perform the following commands when committing a change:

> git pull
Already up to date.
> git diff
diff --git a/Dockerfile b/Dockerfile
index e56b47f..4a73ce3 100644
--- a/Dockerfile
+++ b/Dockerfile
-COPY --from=build /build /build
-COPY --from=build /app /app
-WORKDIR /app
+# Create app user
+RUN addgroup -g 1000 app && \
+ adduser -u 1000 -G app -D app

+# Copy and install application source and pre-built dependencies
> git status
On branch master
Your branch is up to date with 'origin/master'.

Changes not staged for commit:
(use "git add <file>..." to update what will be committed)
(use "git checkout -- <file>..." to discard changes in working directory)

modified: src/todobackend/settings.py
modified: src/todobackend/wsgi.py

Untracked files:
(use "git add <file>..." to include in what will be committed)

docker-compose.yml
src/acceptance.bats
> git add -A
> git commit -a -m "Some commit message"
> git push -u origin master
> git push

You should always check that you have the most up-to-date version of the repository by running the git pull command frequently, as this avoids messy automatic merges and push failures, particularly when you are working with other people who may be collaborating on your project. Next, you can use the git diff command to show, at a content level, any changes you have made to existing files, while the git status command shows, at a file level, changes to existing files and also identifies any new files that you may have added to the repository. The git add -A command adds all new files to the repository, and the git commit -a -m "<message>" command commits all changes (including any files you have added with git add -A) with the specified message. Finally, you can push your changes using the git push command. The first time you push, you must specify the remote branch at the origin using the git push -u origin <branch> command, after which you can just use the shorter git push variant to push your changes.

A common mistake is to forget to add new files to your Git repository, which may not be apparent until you clone the repository to a different machine.  Always ensure that you run the git status command to identify any new files that are not currently being tracked before committing your changes.

Running the sample application locally

Now that you have downloaded the source code for the sample application, you can build and run the application locally. When you are packaging an application into a Docker image, you need to understand at a detailed level how to build and run your application, so running the application locally is the first step in the journey of being able to build a container for your application.

Installing application dependencies

To run the application, you need to first install any dependencies that the application requires. The sample application includes a file called requirements.txt in the src folder, which lists all required Python packages that must be installed for the application to run:

Django==2.0
django-cors-headers==2.1.0
djangorestframework==3.7.3
mysql-connector-python==8.0.11
pytz==2017.3
uwsgi==2.0.17

To install these requirements, ensure that you have changed into the src folder, and configure the PIP package manager to read the requirements file using the -r flag. Note that the best practice for day-to-day development is to install your application dependencies in a virtual environment (see https://packaging.python.org/guides/installing-using-pip-and-virtualenv/); however, given that we are installing the application mainly for demonstration purposes, I won't be taking that approach here:

todobackend> cd src
src> pip3 install -r requirements.txt --user
Collecting Django==2.0 (from -r requirements.txt (line 1))
...
...
Successfully installed Django-2.0 django-cors-headers-2.1.0 djangorestframework-3.7.3 mysql-connector-python-8.0.11 pytz-2017.3 uwsgi-2.0.17
Over time, the specific versions of each dependency may change to ensure that the sample application continues to work as expected.

Running database migrations

With the application dependencies installed, you can run the python3 manage.py command to perform various Django management functions, such as running tests, generating static web content, running database migrations, and running a local instance of your web application.

In a local development context, you first need to run database migrations, which means your local database will be initialized with an appropriate database schema, as configured by your application. By default, Django uses the lightweight SQLite database that's included with Python, which is suitable for development purposes and requires no setup to get up and running. Therefore, you simply run the python3 manage.py migrate command, which will run all database migrations that are configured in the application automatically for you:

src> python3 manage.py migrate
Operations to perform:
Apply all migrations: admin, auth, contenttypes, sessions, todo
Running migrations:
Applying contenttypes.0001_initial... OK
Applying auth.0001_initial... OK
Applying admin.0001_initial... OK
Applying admin.0002_logentry_remove_auto_add... OK
Applying contenttypes.0002_remove_content_type_name... OK
Applying auth.0002_alter_permission_name_max_length... OK
Applying auth.0003_alter_user_email_max_length... OK
Applying auth.0004_alter_user_username_opts... OK
Applying auth.0005_alter_user_last_login_null... OK
Applying auth.0006_require_contenttypes_0002... OK
Applying auth.0007_alter_validators_add_error_messages... OK
Applying auth.0008_alter_user_username_max_length... OK
Applying auth.0009_alter_user_last_name_max_length... OK
Applying sessions.0001_initial... OK
Applying todo.0001_initial... OK

When you run Django migrations, Django will automatically detect if an existing schema is in place, and create a new schema if one does not exist (this is the case in the preceding example). If you run the migrations again, notice that Django detects that an up-to-date schema is already in place, and therefore nothing is applied:

src> python3 manage.py migrate
Operations to perform:
Apply all migrations: admin, auth, contenttypes, sessions, todo
Running migrations:
No migrations to apply.
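If you simply want to check which migrations have been applied without running them, Django also provides the showmigrations management command, which lists each application's migrations with a marker indicating whether they have been applied (the output below is abbreviated):

src> python3 manage.py showmigrations
admin
 [X] 0001_initial
 [X] 0002_logentry_remove_auto_add
...
todo
 [X] 0001_initial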

Running the local development web server

With the local SQLite database now in place, you can run your application by executing the python3 manage.py runserver command, which starts a local development web server on port 8000:

src> python3 manage.py runserver
Performing system checks...

System check identified no issues (0 silenced).
July 02, 2018 - 07:23:49
Django version 2.0, using settings 'todobackend.settings'
Starting development server at http://127.0.0.1:8000/
Quit the server with CONTROL-C.
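By default, the development server listens only on the loopback interface on port 8000. If that port is already in use, or you want to reach the application from another device on your network, you can pass an alternative address and port to the runserver command; this is standard Django behavior rather than anything specific to the sample application. The rest of this walkthrough assumes the default address of http://127.0.0.1:8000/:

src> python3 manage.py runserver 0.0.0.0:8080
...
Starting development server at http://0.0.0.0:8080/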

If you open a browser to http://localhost:8000/, you should see a web page titled Django REST framework:

The todobackend application

This page is the root of the application, and you can see that the Django REST framework provides a graphical interface for navigating the API when you use a browser.  If you use the curl command instead of a browser, notice that the Django REST framework detects a non-browser client through content negotiation and returns a plain JSON response:

src> curl localhost:8000
{"todos":"http://localhost:8000/todos"}

If you click on the hypermedia link for the todos item (http://localhost:8000/todos), you will be presented with a list of Todo items, which is currently empty:

Todo Item List

Notice that you can create a new Todo item with a title and order using the web interface, which will populate the list of Todo items once you click on the POST button:

Creating a Todo Item

Of course, you also can use the command line and the curl command to create new Todo items, list all Todo items, and update Todo items:

> curl -X POST -H "Content-Type: application/json" localhost:8000/todos \
-d '{"title": "Wash the car", "order": 2}'
{"url":"http://localhost:8000/todos/2","title":"Wash the car","completed":false,"order":2}

> curl -s localhost:8000/todos | jq
[
{
"url": "http://localhost:8000/todos/1",
"title": "Walk the dog",
"completed": false,
"order": 1
},
{
"url": "http://localhost:8000/todos/2",
"title": "Wash the car",
"completed": false,
"order": 2
}
]

> curl -X PATCH -H "Content-Type: application/json" localhost:8000/todos/2 \
-d '{"completed": true}'
{"url":"http://localhost:8000/todos/2","title":"Wash the car","completed":true,"order":1}

In the preceding example, you first create a new Todo item using the HTTP POST method, and then verify that the Todos list now contains two Todo items, piping the output of the curl command to the jq utility you installed previously to format the returned items.  Finally, you use the HTTP PATCH method to make a partial update to the Todo item, marking the item as completed.
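The API also supports deleting items, as exercised by the application's test suite. For example, assuming the same URL patterns shown above, you can delete an individual Todo item using the HTTP DELETE method, which should return an empty 204 No Content response:

> curl -i -X DELETE localhost:8000/todos/2
HTTP/1.1 204 No Content
...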

All of the Todo items you create, modify, or delete are persisted in the application database, which in this case is a SQLite database file on your development machine.
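If you are curious where this data lives, Django's default SQLite configuration stores the entire database in a single file within the project directory, typically named db.sqlite3 (the exact name and location depend on the DATABASES setting in src/todobackend/settings.py), which you can list directly:

src> ls db.sqlite3
db.sqlite3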

Testing the sample application locally

Now that you have had a walkthrough of the sample application, let's take a look at how you can run tests locally to verify that the application is functioning as expected.  The todobackend application includes a small set of tests for Todo items that are located in the src/todo/tests.py file.  Understanding how these tests are written is outside the scope of this book; however, knowing how to run them is critical to being able to test, build, and ultimately package the application into a Docker image.

When testing your application, it is very common to have additional dependencies that are specific to application testing and are not required if you are building your application to run in production.  The sample application defines its test dependencies in a file called src/requirements_test.txt, which imports all of the core application dependencies in src/requirements.txt and adds test-specific dependencies:

-r requirements.txt
colorama==0.3.9
coverage==4.4.2
django-nose==1.4.5
nose==1.3.7
pinocchio==0.4.2

To install these requirements, you need to run the pip package manager, referencing the requirements_test.txt file:

src> pip3 install -r requirements_test.txt --user
Requirement already satisfied: Django==2.0 in /usr/local/lib/python3.7/site-packages (from -r requirements.txt (line 1)) (2.0)
Requirement already satisfied: django-cors-headers==2.1.0 in /usr/local/lib/python3.7/site-packages (from -r requirements.txt (line 2)) (2.1.0)
...
...
Installing collected packages: django-coverage, nose, django-nose, pinocchio
Successfully installed django-nose-1.4.5 pinocchio-0.4.2

You can now run tests for the sample application by running the python3 manage.py test command, passing in the --settings flag, which allows you to specify a custom settings configuration. For the sample application, additional test settings are defined in the src/todobackend/settings_test.py file, which extends the default settings included in src/todobackend/settings.py and adds testing enhancements such as spec-style output formatting and code coverage statistics:

src> python3 manage.py test --settings todobackend.settings_test
Creating test database for alias 'default'...

Ensure we can create a new todo item
- item has correct title
- item was created
- received 201 created status code
- received location header hyperlink

Ensure we can delete all todo items
- all items were deleted
- received 204 no content status code

Ensure we can delete a todo item
- received 204 no content status code
- the item was deleted

Ensure we can update an existing todo item using PATCH
- item was updated
- received 200 ok status code

Ensure we can update an existing todo item using PUT
- item was updated
- received 200 created status code

----------------------------------------------------------------------
XML: /Users/jmenga/todobackend/src/unittests.xml
Name Stmts Miss Cover
-----------------------------------------------------
todo/__init__.py 0 0 100%
todo/admin.py 1 1 0%
todo/migrations/0001_initial.py 5 0 100%
todo/migrations/__init__.py 0 0 100%
todo/models.py 6 6 0%
todo/serializers.py 7 0 100%
todo/urls.py 6 0 100%
todo/views.py 17 0 100%
-----------------------------------------------------
TOTAL 42 7 83%
----------------------------------------------------------------------
Ran 12 tests in 0.281s

OK

Destroying test database for alias 'default'...

Notice that the Django test runner scans the various folders in the repository for tests, creates a test database, and then runs each test.  After all tests are complete, the test runner automatically destroys the test database, so you don't have to perform any manual setup or cleanup tasks.
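You also don't have to run the entire suite every time. The Django test command accepts one or more application labels or test module paths, so, for example, you can limit a run to the todo application's tests (output omitted for brevity):

src> python3 manage.py test todo --settings todobackend.settings_test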

Summary

In this chapter, you were introduced to Docker and containers, and learned about the history of containers and how Docker has risen to become one of the most popular solutions for testing, building, deploying, and running your container workloads.  You learned about the basic architecture of Docker, which includes the Docker client, Docker Engine, and Docker registry, and we introduced the various types of objects and resources that you will work with when using Docker, which include Docker images, volumes, networks, services, and, of course, Docker containers.

We also discussed the wide array of options you have to run your Docker applications in AWS, which include the Elastic Container Service, Fargate, Elastic Kubernetes Service, Elastic Beanstalk, and running your own Docker platforms, such as Docker Swarm. 

You then installed Docker in your local environment, which is supported natively on Linux and requires a virtual machine on macOS and Windows platforms.  Docker for Mac and Docker for Windows automatically install and configure a virtual machine for you, making it easier than ever to get up and running with Docker on these platforms.  You also learned how to integrate the Windows Subsystem for Linux with Docker for Windows, which will allow you to support the *nix-based tooling that we will use throughout this book.

Finally, you set up a GitHub account, forked the sample application repository to your account, and cloned the repository to your local environment.  You then learned how to install the sample application dependencies, run database migrations to ensure that the application database schema and tables are in place, run a local development server, and run unit tests to verify that the application is functioning as expected.  All of these tasks are important to understand before you can expect to test, build, and publish your applications as Docker images, which will be the focus of the next chapter, where you will create a complete local Docker workflow to automate the process of creating production-ready Docker images for your application.

Questions

  1. True/false: The Docker client communicates with the Docker Engine using named pipes.
  2. True/false: The Docker Engine runs natively on macOS.
  3. True/false: Docker images are published to the Docker store for download.
  4. You install the Windows Subsystem for Linux and install a Docker client.  Your Docker client cannot communicate with your Docker for Windows installation.  How can you resolve this?
  5. True/false: Volumes, networks, containers, images, and services are all entities that you can work with using Docker.
  6. You install Docker Compose by running the pip install docker-compose --user command; however, you receive a message stating docker-compose: not found when attempting to run the program. How can you resolve this?

Further reading

You can check the following links for more information about the topics covered in this chapter:

Key benefits

  • Configure Docker for the ECS environment
  • Integrate Docker with different AWS tools
  • Implement container networking and deployment at scale

Description

Over the last few years, Docker has been the gold standard for building and distributing container applications. Amazon Web Services (AWS) is a leader in public cloud computing, and was the first to offer a managed container platform in the form of the Elastic Container Service (ECS). Docker on Amazon Web Services starts with the basics of containers, Docker, and AWS, before teaching you how to install Docker on your local machine and establish access to your AWS account. You'll then dig deeper into the ECS, a native container management platform provided by AWS that simplifies management and operation of your Docker clusters and applications for no additional cost. Once you have got to grips with the basics, you'll solve key operational challenges, including secrets management and auto-scaling your infrastructure and applications. You'll explore alternative strategies for deploying and running your Docker applications on AWS, including Fargate and ECS Service Discovery, Elastic Beanstalk, Docker Swarm and Elastic Kubernetes Service (EKS). In addition to this, there will be a strong focus on adopting an Infrastructure as Code (IaC) approach using AWS CloudFormation. By the end of this book, you'll not only understand how to run Docker on AWS, but also be able to build real-world, secure, and scalable container platforms in the cloud.

Who is this book for?

Docker on Amazon Web Services is for you if you want to build, deploy, and operate applications using the power of containers, Docker, and Amazon Web Services. Basic understanding of containers and Amazon Web Services or any other cloud provider will be helpful, although no previous experience of working with these is required.

What you will learn

  • Build, deploy, and operate Docker applications using AWS
  • Solve key operational challenges, such as secrets management
  • Exploit the powerful capabilities and tight integration of other AWS services
  • Design and operate Docker applications running on ECS
  • Deploy Docker applications quickly, consistently, and reliably using IaC
  • Manage and operate Docker clusters and applications for no additional cost
Product Details

Publication date: Aug 30, 2018
Length: 822 pages
Edition: 1st
Language: English
ISBN-13: 9781788626507
Vendor: Docker

Table of Contents

19 Chapters
  1. Container and Docker Fundamentals
  2. Building Applications Using Docker
  3. Getting Started with AWS
  4. Introduction to ECS
  5. Publishing Docker Images Using ECR
  6. Building Custom ECS Container Instances
  7. Creating ECS Clusters
  8. Deploying Applications Using ECS
  9. Managing Secrets
  10. Isolating Network Access
  11. Managing ECS Infrastructure Life Cycle
  12. ECS Auto Scaling
  13. Continuously Delivering ECS Applications
  14. Fargate and ECS Service Discovery
  15. Elastic Beanstalk
  16. Docker Swarm in AWS
  17. Elastic Kubernetes Service
  18. Assessments
  19. Other Books You May Enjoy

Customer reviews

Rating: 4.2 out of 5 (5 ratings)
5 star: 80%, 4 star: 0%, 3 star: 0%, 2 star: 0%, 1 star: 20%

R. Langham, Jan 24, 2021 (5 stars): I routinely read development books, and this is one of the best that I have read. It is very complete and goes into a lot of detail. It went into areas of ECS that I had not expected, such as being able to create your own custom EC2 image for the ECS Docker EC2 instances. The first few chapters are a review of general AWS and Docker. I skipped them at first, but then went back to them after completing most of the other chapters, and learned a few new things there. One of the nice things the book does initially is cover how to do the ECS operations in the AWS console, command line, and CloudFormation. Eventually, as you progress through the chapters, it ends up being mostly CloudFormation. This was a big plus for me, as at my work we deploy all infrastructure using CloudFormation. I recommended the book to a couple of others at work. (Amazon verified review)

Nick from Chicago, Jul 26, 2020 (1 star): This seems like a comprehensive book; however, the code is outdated and does not work. Chapter one had an issue that I was able to work around; now I'm on chapter 2 and again running into code that's not working. There is a GitHub repo with the code from the book, however it appears that hasn't been updated in the past couple of years and contains the same out-of-date code from the book. And the problem is each chapter has a prerequisite that the previous chapter was completed, so if the code for one particular chapter isn't working, you can't continue on to any subsequent chapters. (Amazon verified review)

Daniel Aboyewa, Dec 07, 2018 (5 stars): Very informative. (Amazon verified review)

Amazon Customer, Sep 25, 2018 (5 stars): This book is a well-structured and well-written guide to deploying dockerized apps on AWS. It gives a good overview of the tools available and also detailed guidance on how to use each; it's well written and very comprehensive (700+ pages). There is an almost bewildering variety of tools and techniques available when creating infrastructure as code, and this book does a good job of giving an organized view of them and suggesting best practices. (Amazon verified review)

Constantine, Sep 24, 2018 (5 stars): This book is a must if you want to learn how to deploy scalable applications using AWS container services. It is a step-by-step guide to everything from creating your IAM policies to setting up auto scaling. The amount of information in this book is pretty ridiculous. (Amazon verified review)