
Getting Started with Kubernetes, Second Edition

Introduction to Kubernetes

In this book, we will help you learn to build and manage Kubernetes clusters. You will be introduced to basic container concepts and their operational context wherever possible. Throughout the book, you'll work through examples that you can apply as you progress. By the end of the book, you should have a solid foundation and even dabble in some of the more advanced topics, such as federation and security.

This chapter will give a brief overview of containers, how they work, and why management and orchestration are important to your business and/or project team. The chapter will also give a brief overview of how Kubernetes orchestration can enhance our container management strategy and how we can get a basic Kubernetes cluster up, running, and ready for container deployments.

This chapter will include the following topics:

  • Introducing container operations and management
  • Why is container management important?
  • The advantages of Kubernetes
  • Downloading the latest Kubernetes
  • Installing and starting up a new Kubernetes cluster
  • The components of a Kubernetes cluster

A brief overview of containers

Over the past three years, interest in containers has spread like wildfire. You would be hard-pressed to attend an IT conference without finding popular sessions on Docker or containers in general.

Docker lies at the heart of the mass adoption and the excitement in the container space. Just as Malcom McLean revolutionized the physical shipping world in the 1950s by creating a standardized shipping container, which is used today for everything from ice cube trays to automobiles (refer to point 1 in the References section at the end of the chapter), Linux containers are revolutionizing the software development world by making application environments portable and consistent across the infrastructure landscape. As an organization, Docker has taken the existing container technology to a new level by making it easy to implement and replicate across environments and providers.

What is a container?

At the core of container technology are control groups (cgroups) and namespaces. Additionally, Docker uses union filesystems for added benefits to the container development process.

Cgroups work by allowing the host to share and also limit the resources each process or container can consume. This is important for both resource utilization and security, as it prevents denial-of-service attacks on the host's hardware resources. Several containers can share CPU and memory while staying within the predefined constraints.
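
As a quick sketch of cgroups in action (assuming Docker is installed; the limit values here are arbitrary), we can ask Docker to apply these constraints when starting a container:

# Cap the container at 256 MB of RAM and give it half the default CPU weight.
# Docker translates these flags into cgroup settings on the host.
$ docker run -d --memory=256m --cpu-shares=512 nginx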

Namespaces offer another form of isolation for process interaction within operating systems. Namespaces limit the visibility a process has of other processes, networking, filesystems, and user ID components. Container processes are limited to seeing only what is in the same namespace; processes in other containers and processes on the host are not directly visible from within a container's process. Additionally, Docker gives each container its own networking stack that protects the sockets and interfaces in a similar fashion.
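
We can see this isolation with a small, illustrative test (assuming Docker and the public ubuntu image): a ps run inside a fresh container reports only its own process tree, not the host's:

# Inside the container's PID namespace, ps sees little more than itself,
# rather than the hundreds of processes running on the host.
$ docker run --rm ubuntu:16.04 ps -ef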

Composition of a container

Union filesystems are also a key advantage of using Docker containers. Containers run from an image. Much like an image in the VM or cloud world, it represents state at a particular point in time. Container images snapshot the filesystem, but tend to be much smaller than a VM. The container shares the host kernel and generally runs a much smaller set of processes, so the filesystem and bootstrap period tend to be much smaller, though these constraints are not strictly enforced. Second, the union filesystem allows for efficient storage, download, and execution of these images.

The easiest way to understand union filesystems is to think of them like a layer cake with each layer baked independently. The Linux kernel is our base layer; then, we might add an OS such as Red Hat Linux or Ubuntu. Next, we might add an application such as Nginx or Apache. Every change creates a new layer. Finally, as you make changes and new layers are added, you'll always have a top layer (think frosting) that is a writable layer.

Layered filesystem

What makes this truly efficient is that Docker caches the layers the first time we build them. So, let's say that we have an image with Ubuntu and then add Apache and build the image. Next, we build MySQL with Ubuntu as the base. The second build will be much faster because the Ubuntu layer is already cached. Essentially, our chocolate and vanilla layers, from the preceding Layered filesystem figure, are already baked. We simply need to bake the pistachio (MySQL) layer, assemble, and add the icing (the writable layer).
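
We can inspect these cached layers directly. As an illustrative sketch (assuming Docker is installed and using the public ubuntu image), docker history lists each layer of an image along with the instruction that created it:

# Each row is one layer; rebuilding an image that shares these layers
# will reuse them from the cache instead of rebuilding them.
$ docker history ubuntu:16.04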

Why are containers so cool?

Containers on their own are not a new technology and have in fact been around for many years. What truly sets Docker apart is the tooling and ease of use it has brought to the community. Modern development practices promote the use of Continuous Integration and Continuous Deployment. These techniques, when done right, can have a profound impact on your software product quality.

The advantages of Continuous Integration/Continuous Deployment

ThoughtWorks defines Continuous Integration as a development practice that requires developers to integrate code into a shared repository several times a day. By having a continuous process of building and deploying code, organizations are able to instill quality control and testing as part of the everyday work cycle. The result is that updates and bug fixes happen much faster and the overall quality improves.

However, there has always been a challenge in creating development environments that match those of testing and production. Often, inconsistencies in these environments make it difficult to gain the full advantage of continuous delivery.

Using Docker, developers are now able to have truly portable deployments. Containers that are deployed on a developer's laptop are easily deployed on an in-house staging server. They are then easily transferred to the production server running in the cloud. This is because Docker builds containers using build files that specify parent layers. One advantage of this is that it becomes very easy to ensure that OS, package, and application versions are the same across development, staging, and production environments.
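
A minimal, hypothetical build file illustrates this; each instruction becomes a layer on top of the named parent image (the image, package, and file names here are purely illustrative):

# Parent layer: a specific, pinned OS image
FROM ubuntu:16.04
# New layer: install the application package
RUN apt-get update && apt-get install -y nginx
# New layer: add our application content
COPY index.html /var/www/html/

Building this file on a laptop, a staging server, or a production host yields the same layered image, which is what makes the deployments portable.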

Because all the dependencies are packaged into the layer, the same host server can have multiple containers running a variety of OS or package versions. Further, we can have various languages and frameworks on the same host server without the typical dependency clashes we would get in a virtual machine (VM) with a single operating system.

Resource utilization

The well-defined isolation and layer filesystem also make containers ideal for running systems with a very small footprint and domain-specific purposes. A streamlined deployment and release process means we can deploy quickly and often. As such, many companies have reduced their deployment time from weeks or months to days and hours in some cases. This development life cycle lends itself extremely well to small, targeted teams working on small chunks of a larger application.

Microservices and orchestration

As we break down an application into very specific domains, we need a uniform way to communicate between all the various pieces and domains. Web services have served this purpose for years, but the added isolation and granular focus that containers bring have paved a way for microservices.

The definition of microservices can be a bit nebulous, but a definition from Martin Fowler, a respected author and speaker on software development, says this (refer to point 2 in the References section at the end of the chapter):

In short, the microservice architectural style is an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms, often an HTTP resource API. These services are built around business capabilities and independently deployable by fully automated deployment machinery. There is a bare minimum of centralized management of these services, which may be written in different programming languages and use different data storage technologies.

As organizations pivot to containerization and microservices evolve within them, they will soon need a strategy to maintain many containers and microservices. Some organizations will have hundreds or even thousands of containers running in the years ahead.

Future challenges

Life cycle processes alone are an important piece of operations and management. How will we automatically recover when a container fails? Which upstream services are affected by such an outage? How will we patch our applications with minimal downtime? How will we scale up our containers and services as our traffic grows?

Networking and processing are also important concerns. Some processes are part of the same service and may benefit from proximity on the network. Databases, for example, may send large amounts of data to a particular microservice for processing. How will we place containers near each other in our cluster? Is there common data that needs to be accessed? How will new services be discovered and made available to other systems?

Resource utilization is also key. The small footprint of containers means that we can optimize our infrastructure for greater utilization. Extending the savings started in the elastic cloud will take us even further toward minimizing wasted hardware. How will we schedule workloads most efficiently? How will we ensure that our important applications always have the right resources? How can we run less important workloads on spare capacity?

Finally, portability is a key factor in moving many organizations to containerization. Docker makes it very easy to deploy a standard container across various operating systems, cloud providers, and on-premise hardware or even developer laptops. However, we still need tooling to move containers around. How will we move containers between different nodes on our cluster? How will we roll out updates with minimal disruption? What process do we use to perform blue-green deployments or canary releases?

Whether you are starting to build out individual microservices and separate concerns into isolated containers, or you simply want to take full advantage of the portability and immutability in your application development, the need for management and orchestration becomes clear. This is where orchestration tools such as Kubernetes offer the biggest value.

The birth of Kubernetes

Kubernetes (K8s) is an open source project that was released by Google in June 2014. Google released the project as part of an effort to share their own infrastructure and technology advantage with the community at large.

Google launches 2 billion containers a week in their infrastructure and has been using container technology for over a decade. Originally, they built a system named Borg, followed later by Omega, to schedule their vast quantities of workloads across their ever-expanding data center footprint. They took many of the lessons they learned over the years and rewrote their existing data center management tooling for wide adoption by the rest of the world. The result was the Kubernetes open-source project (refer to point 3 in the References section at the end of the chapter).

Since its initial release in 2014, K8s has undergone rapid development with contributions from across the open-source community, including Red Hat, VMware, and Canonical. The 1.0 release of Kubernetes went live in July 2015. Since then, it's been a fast-paced evolution of the project, with wide support from one of the largest open-source communities on GitHub today. We'll be covering version 1.5 throughout the book. K8s gives organizations a tool to deal with some of the major operations and management concerns. We will explore how Kubernetes helps deal with resource utilization, high availability, updates, patching, networking, service discovery, monitoring, and logging.

Our first cluster

Kubernetes is supported on a variety of platforms and OSes. For the examples in this book, I used an Ubuntu 16.04 VirtualBox VM as my client and Google Compute Engine (GCE) with Debian for the cluster itself. We will also take a brief look at a cluster running on Amazon Web Services (AWS) with Ubuntu.

To save some money, note that both GCP and AWS offer free tiers and trial offers for their cloud infrastructure. It's worth using these free trials for your Kubernetes learning, if possible.
Most of the concepts and examples in this book should work on any installation of a Kubernetes cluster. To get more information on other platform setups, refer to the Kubernetes getting-started guides at the following link:
http://kubernetes.io/docs/getting-started-guides/

First, let's make sure that our environment is properly set up before we install Kubernetes. Start by updating packages:

$ sudo apt-get update

Install Python and curl if they are not present:

$ sudo apt-get install python
$ sudo apt-get install curl

Install the gcloud SDK:

$ curl https://sdk.cloud.google.com | bash
We will need to start a new shell before gcloud is on our path.
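
One convenient way to do this without closing the terminal (optional, and shell-dependent) is to replace the current shell with a fresh login shell:

$ exec -l $SHELL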

Configure your Google Cloud Platform (GCP) account information. This should automatically open a browser from where we can log in to our Google Cloud account and authorize the SDK:

$ gcloud auth login
If you have problems with login or want to use another browser, you can optionally use the --no-launch-browser flag. Copy and paste the URL into the machine and/or browser of your choice. Log in with your Google Cloud credentials and click Allow on the permissions page. Finally, you should receive an authorization code that you can copy and paste back into the shell where the prompt is waiting.

A default project should be set, but we can verify this with the following command:

$ gcloud config list project

We can modify this and set a new default project with the following command. Make sure to use the project ID and not the project name:

$ gcloud config set project <PROJECT ID>
We can find our project ID in the console at the following URL:
https://console.developers.google.com/project
Alternatively, we can list active projects:
$ gcloud alpha projects list
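
For example, for a hypothetical project whose display name is My K8s Project, the command would use its generated ID instead:

$ gcloud config set project my-k8s-project-123456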

Now that we have our environment set up, installing the latest Kubernetes version is done in a single step, as follows:

$ curl -sS https://get.k8s.io | bash

It may take a minute or two to download Kubernetes, depending on your connection speed. Earlier versions would automatically call the kube-up.sh script and start building our cluster. In version 1.5, we will need to call the kube-up.sh script ourselves to launch the cluster. By default, it will use Google Cloud and GCE:

$ kubernetes/cluster/kube-up.sh

After we run the kube-up.sh script, we will see quite a few lines roll past. Let's take a look at them one section at a time:

GCE prerequisite check
If your gcloud components are not up to date, you may be prompted to update them.

The preceding image, GCE prerequisite check, shows the checks for prerequisites. This is specific to each provider. In the case of GCE, it will verify that the SDK is installed and that all components are up to date. If not, you will see a prompt at this point to install or update:

Upload cluster packages

Now the script is turning up the cluster. Again, this is specific to the provider. For GCE, it first checks to make sure that the SDK is configured for a default project and zone. If they are set, you'll see those in the output.

Next, it uploads the server binaries to Google Cloud storage, as seen in the Creating gs:... lines:

Master creation

It then checks for any pieces of a cluster already running. Then, we finally start creating the cluster. In the output in the preceding figure Master creation, we see it creating the master server, IP address, and appropriate firewall configurations for the cluster:

Minion creation

Finally, it creates the minions or nodes for our cluster. This is where our container workloads will actually run. It will continually loop and wait while all the minions start up. By default, the cluster will have four nodes (minions), but K8s supports having more than 1,000 (and soon beyond). We will come back to scaling the nodes later on in the book.
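
If you want a different node count, the kube-up scripts read it from an environment variable before launch. As a sketch (the variable name is taken from the GCE config-default.sh of this era; adjust to your version):

# Request a two-node cluster instead of the default four
$ export NUM_NODES=2
$ kubernetes/cluster/kube-up.sh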

Cluster completion

Now that everything is created, the cluster is initialized and started. Assuming that everything goes well, we will get an IP address for the master. Also, note that the configuration, along with the cluster management credentials, is stored in /home/<Username>/.kube/config:

Cluster validation

Then, the script will validate the cluster. At this point, we are no longer running provider-specific code. The validation script will query the cluster via the kubectl.sh script. This is the central script for managing our cluster. In this case, it checks the number of minions found, registered, and in a ready state. It loops, giving the cluster up to 10 minutes to finish initialization.

After a successful startup, a summary of the minions and the cluster component health is printed on the screen:

Cluster summary

Finally, a kubectl cluster-info command is run, which outputs the URL for the master services including DNS, UI, and monitoring. Let's take a look at some of these components.

Kubernetes UI

Open a browser and navigate to the following URL:

https://<your master ip>/ui/

The certificate is self-signed by default, so you'll need to ignore the warnings in your browser before proceeding. After this, we will see a login dialog. This is where we use the credentials listed during the K8s installation. We can find them at any time by simply using the config command:

$ kubectl config view
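
The output is a kubeconfig document; the part we need looks roughly like the following (values redacted here, and the exact layout varies by version):

users:
- name: kubernetes
  user:
    username: admin
    password: <generated password>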

Use these credentials to log in, and we should see a dashboard like the one in the following image:

Kubernetes UI dashboard

The main dashboard takes us to a page with little to display at first. There is a link to deploy a containerized app that will take you to a GUI for deployment. This GUI can be a very easy way to get started deploying apps without worrying about the YAML syntax for Kubernetes. However, as your use of containers matures, it's good practice to use YAML definitions that are checked in to source control.

If you click on the Nodes link on the left-hand side menu, you will see some metrics on the current cluster nodes:



Kubernetes Node Dashboard

At the top, we see an aggregate of the CPU and memory usage, followed by a listing of our cluster nodes. Clicking on one of the nodes will take us to a page with detailed information about that node, its health, and various metrics.

The Kubernetes UI has a lot of other views that will become more useful as we start launching real applications and adding configurations to the cluster.

Grafana

Another service installed by default is Grafana. This tool will give us a dashboard to view metrics on the cluster nodes. We can access it in a browser at the following URL:

https://<your master ip>/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana

Kubernetes Grafana dashboard

From the main page, click on the Home dropdown and select Cluster. Here, Kubernetes is actually running a number of services. Heapster is used to collect the resource usage on the pods and nodes and stores the information in InfluxDB. The results, such as CPU and memory usage, are what we see in the Grafana UI. We will explore this in depth in Chapter 8, Monitoring and Logging.

Command line

The kubectl script has commands to explore our cluster and the workloads running on it. You can find it in the /kubernetes/client/bin folder. We will be using this command throughout the book, so let's take a second to set up our environment. We can do so by putting the binaries folder on our PATH, in the following manner:

$ export PATH=$PATH:/<Path where you downloaded K8s>/kubernetes/client/bin
$ chmod +x /<Path where you downloaded K8s>/kubernetes/client/bin/*
You may choose to download the kubernetes folder outside your home folder, so modify the preceding command as appropriate.
It is also a good idea to make the changes permanent by adding the export command to the end of your .bashrc file in your home directory.
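
For example, assuming you downloaded the kubernetes folder to your home directory, the following appends the export to .bashrc:

$ echo 'export PATH=$PATH:~/kubernetes/client/bin' >> ~/.bashrc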

Now that we have kubectl on our path, we can start working with it. It has quite a few commands. Since we have not spun up any applications yet, most of these commands will not be very interesting. However, we can explore with two commands right away.

First, we have already seen the cluster-info command during initialization, but we can run it again at any time with the following command:

$ kubectl cluster-info

Another useful command is get. It can be used to see currently running services, pods, replication controllers, and a lot more. Here are three examples that are useful right out of the gate:

  • List the nodes in our cluster:
    $ kubectl get nodes
  • List cluster events:
    $ kubectl get events
  • Finally, we can see any services that are running in the cluster, as follows:
    $ kubectl get services

To start with, we will only see one service, named kubernetes. This service is the core API server for the cluster.
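
On a fresh cluster, the output should look something like the following (the cluster IP and age will differ on your cluster):

NAME         CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   10.0.0.1     <none>        443/TCP   5m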

Services running on the master

Let's dig a little bit deeper into our new cluster and its core services. By default, machines are named with the kubernetes- prefix. We can modify this by setting $KUBE_GCE_INSTANCE_PREFIX before a cluster is spun up. For the cluster we just started, the master should be named kubernetes-master. We can use the gcloud command-line utility to SSH into the machine. The following command will start an SSH session with the master node. Be sure to substitute your project ID and zone to match your environment:

$ gcloud compute ssh --zone "<your gce zone>" "kubernetes-master"
If you have trouble with SSH via the Google Cloud CLI, you can use the Console which has a built-in SSH client. Simply go to the VM instances page and you'll see an SSH option as a column in the kubernetes-master listing. Alternatively, the VM instance details page has the SSH option at the top.

Once we are logged in, we should get a standard shell prompt. Let's run a docker ps command that shows only the Image and Status columns:

$ sudo docker ps --format 'table {{.Image}}\t{{.Status}}'
Master container listing

Even though we have not deployed any applications on Kubernetes yet, we note that there are several containers already running. The following is a brief description of each container:

  • fluentd-gcp: This container collects and sends the cluster logs file to the Google Cloud Logging service.
  • node-problem-detector: This container is a daemon that runs on every node and currently detects issues at the hardware and kernel layer.
  • rescheduler: This is another add-on container that makes sure critical components are always running. In cases of low resources availability, it may even remove less critical pods to make room.
  • glbc: This is another Kubernetes add-on container that provides Google Cloud Layer 7 load balancing using the new Ingress capability.
  • kube-addon-manager: This component is core to the extension of Kubernetes through various add-ons. It also periodically applies any changes in the /etc/kubernetes/addons directory.
  • etcd-empty-dir-cleanup: A utility to clean up empty keys in etcd.
  • kube-controller-manager: This is a controller manager that controls a variety of cluster functions. Ensuring accurate and up-to-date replication is one of its vital roles. Additionally, it monitors, manages, and discovers new nodes. Finally, it manages and updates service endpoints.
  • kube-apiserver: This container runs the API server. As we can explore through its Swagger interface, this RESTful API allows us to create, query, update, and remove various components of our Kubernetes cluster.
  • kube-scheduler: This scheduler takes unscheduled pods and binds them to nodes based on the current scheduling algorithm.
  • etcd: This runs the etcd software built by CoreOS, and it is a distributed and consistent key-value store. This is where the Kubernetes cluster state is stored, updated, and retrieved by various components of K8s.
  • pause: This container is often referred to as the pod infrastructure container and is used to set up and hold the networking namespace and resource limits for each pod.
I omitted the amd64 for many of these names to make this more generic. The purpose of the pods remains the same.

To exit the SSH session, simply type exit at the prompt.

In the next chapter, we will also show how a few of these services work together in the first image, Kubernetes core architecture.

Services running on the minions

We could SSH to one of the minions, but since Kubernetes schedules workloads across the cluster, we would not see all the containers on a single minion. However, we can look at the pods running on all the minions using the kubectl command:

$ kubectl get pods

Since we have not started any applications on the cluster yet, we don't see any pods. However, there are actually several system pods running pieces of the Kubernetes infrastructure. We can see these pods by specifying the kube-system namespace. We will explore namespaces and their significance later, but for now, the --namespace=kube-system command can be used to look at these K8s system resources, as follows:

$ kubectl get pods --namespace=kube-system

We should see something similar to the following:

etcd-empty-dir-cleanup-kubernetes-master 
etcd-server-events-kubernetes-master
etcd-server-kubernetes-master
fluentd-cloud-logging-kubernetes-master
fluentd-cloud-logging-kubernetes-minion-group-xxxx
heapster-v1.2.0-xxxx
kube-addon-manager-kubernetes-master
kube-apiserver-kubernetes-master
kube-controller-manager-kubernetes-master
kube-dns-xxxx
kube-dns-autoscaler-xxxx
kube-proxy-kubernetes-minion-group-xxxx
kube-scheduler-kubernetes-master
kubernetes-dashboard-xxxx
l7-default-backend-xxxx
l7-lb-controller-v0.8.0-kubernetes-master
monitoring-influxdb-grafana-xxxx
node-problem-detector-v0.1-xxxx
rescheduler-v0.2.1-kubernetes-master

The first six lines should look familiar. Some of these are the services we saw running on the master, and we will see pieces of them on the nodes. There are a few additional services we have not seen yet. The kube-dns option provides the DNS and service discovery plumbing, kubernetes-dashboard-xxxx is the user interface for Kubernetes, l7-default-backend-xxxx provides the default load balancing backend for the new Layer-7 load balancing capability, and heapster-v1.2.0-xxxx and monitoring-influxdb-grafana-xxxx provide the Heapster database and user interface to monitor resource usage across the cluster. Finally, kube-proxy-kubernetes-minion-group-xxxx is the proxy, which directs traffic to the proper backing services and pods running on our cluster.

If we SSH into a random minion, we will see several containers that run across a few of these pods. A sample might look like the following image:

Minion container listing

Again, we see a similar lineup of services to the one on the master. The services we did not see on the master include the following:

  • kubedns: This container monitors the service and endpoint resources in Kubernetes and synchronizes any changes to DNS lookups.
  • kube-dnsmasq: This is another container that provides DNS caching.
  • dnsmasq-metrics: This provides metric reporting for DNS services in the cluster.
  • l7-defaultbackend: This is the default backend for handling the GCE L7 load balancer and Ingress.
  • kube-proxy: This is the network and service proxy for your cluster. This component makes sure service traffic is directed to wherever your workloads are running on the cluster. We will explore this in more depth later in the book.
  • heapster: This container is for monitoring and analytics.
  • addon-resizer: This cluster utility is for scaling containers.
  • heapster_grafana: This does resource usage and monitoring.
  • heapster_influxdb: This time-series database is for Heapster data.
  • cluster-proportional-autoscaler: This cluster utility is for scaling containers in proportion to the cluster size.
  • exechealthz: This performs health checks on the pods.
Again, I have omitted the amd64 for many of these names to make this more generic. The purpose of the pods remains the same.

Tearing down the cluster

Alright, this is our first cluster on GCE, but let's explore some other providers. To keep things simple, we need to remove the one we just created on GCE. We can tear down the cluster with one simple command:

$ kubernetes/cluster/kube-down.sh

Working with other providers

By default, Kubernetes uses the GCE provider for Google Cloud. We can override this default by setting the KUBERNETES_PROVIDER environment variable. The following providers are supported with values listed in this table:

Provider                               KUBERNETES_PROVIDER value   Type
Google Compute Engine                  gce                         Public cloud
Google Container Engine                gke                         Public cloud
Amazon Web Services                    aws                         Public cloud
Microsoft Azure                        azure                       Public cloud
Hashicorp Vagrant                      vagrant                     Virtual development environment
VMware vSphere                         vsphere                     Private cloud/on-premise virtualization
Libvirt running CoreOS                 libvirt-coreos              Virtualization management tool
Canonical Juju (folks behind Ubuntu)   juju                        OS service orchestration tool

Kubernetes providers

Let's try setting up the cluster on AWS. As a prerequisite, we need to have the AWS Command Line Interface (CLI) installed and configured for our account. Installation and configuration instructions can be found in the AWS CLI documentation.

Then, it is a simple environment variable setting, as follows:

$ export KUBERNETES_PROVIDER=aws

Again, we can use the kube-up.sh command to spin up the cluster, as follows:

$ kubernetes/cluster/kube-up.sh

As with GCE, the setup activity will take a few minutes. It will stage files in S3 and create the appropriate instances, Virtual Private Cloud (VPC), security groups, and so on in our AWS account. Then, the Kubernetes cluster will be set up and started. Once everything is finished, we should see the cluster validation at the end of the output:

AWS cluster validation

Note that the region where the cluster is spun up is determined by the KUBE_AWS_ZONE environment variable. By default, this is set to us-west-2a (the region is derived from this Availability Zone). Even if you have a region set in your AWS CLI, it will use the region defined in KUBE_AWS_ZONE.
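
For example, to build the cluster in Ireland instead, export the zone before running kube-up.sh (the zone name here is illustrative):

$ export KUBE_AWS_ZONE=eu-west-1c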

Once again, we will SSH into the master. This time, we can use the native SSH client. We'll find the key files in /home/<username>/.ssh:

$ ssh -v -i /home/<username>/.ssh/kube_aws_rsa ubuntu@<Your master IP>

We'll use sudo docker ps --format 'table {{.Image}}\t{{.Status}}' to explore the running containers. We should see something like the following:

Master container listing (AWS)

We see some of the same containers as our GCE cluster had. However, there are several missing. We see the core Kubernetes components, but the fluentd-gcp service is missing, as are some of the newer utilities such as node-problem-detector, rescheduler, glbc, kube-addon-manager, and etcd-empty-dir-cleanup. This reflects some of the subtle differences in the kube-up script between the various public cloud providers. This is ultimately decided by the efforts of the large Kubernetes open-source community, but GCP often has many of the latest features first.

On the AWS provider, Elasticsearch and Kibana are set up for us. We can find the Kibana UI at the following URL:

https://<your master ip>/api/v1/proxy/namespaces/kube-system/services/kibana-logging

As in the case of the UI, you will be prompted for admin credentials, which can be obtained using the config command, as shown here:

$ kubectl config view

On the first visit, you'll need to set up your index. You can leave the defaults and choose @timestamp for the Time-field name. Then, click on Create and you'll be taken to the index settings page. From there, click on the Discover tab at the top and you can explore the log dashboards:

Kubernetes Kibana dashboard

Resetting the cluster

You just had a little taste of running the cluster on AWS. For the remainder of the book, I will be basing my examples on a GCE cluster. For the best experience following along, you can get back to a GCE cluster easily.

Simply tear down the AWS cluster, as follows:

$ kubernetes/cluster/kube-down.sh

Then, create a GCE cluster again using the following:

$ export KUBERNETES_PROVIDER=gce
$ kubernetes/cluster/kube-up.sh

Modifying kube-up parameters

It's worth getting to know the parameters used by the kube-up.sh script. Each provider under the kubernetes/cluster/ folder has its own subfolder, which contains a config-default.sh script.

For example, kubernetes/cluster/aws/config-default.sh has the default settings for using kube-up.sh with AWS. At the start of this script, you will see many of these values defined, as well as the environment variables that can be used to override the defaults.

In the following example, the ZONE variable is set for the script using the value of the environment variable named KUBE_AWS_ZONE. If this variable is not set, the default us-west-2a is used:

ZONE=${KUBE_AWS_ZONE:-us-west-2a}
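
For example, we could shrink a test cluster by exporting a few overrides before running kube-up.sh (these variable names appear in the AWS config-default.sh; the values are illustrative):

$ export KUBE_AWS_ZONE=us-west-2a
$ export NUM_NODES=2
$ export NODE_SIZE=t2.small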

Understanding these parameters will help you get a lot more mileage out of your kube-up.sh script.

Alternatives to kube-up.sh

The kube-up.sh script is still a pretty handy way to get started using Kubernetes on your platform of choice. However, it's not without flaws and can sometimes run aground when conditions are not just so.

Luckily, since the inception of K8s, a number of alternative methods for creating clusters have emerged. Two such GitHub projects are kops and kube-aws. While the latter is tied to AWS, they both provide an alternative method for easily spinning up your new cluster.

Additionally, a number of managed services have arisen, including Google Container Engine (GKE) and Microsoft Azure Container Service (ACS), which provide an automated install and some managed cluster operations. We will look at a brief demo of these in Chapter 12, Towards Production Ready.

Starting from scratch

Finally, there is the option to start from scratch. Luckily, starting in 1.4, the Kubernetes team has put a major focus on easing the cluster setup process. To that end, they have introduced kubeadm for Ubuntu 16.04, CentOS 7, and HypriotOS v1.0.1+.

Let's take a quick look at spinning up a cluster on AWS from scratch using the kubeadm tool.

Cluster setup

We will need to provision our cluster master and nodes beforehand. For the moment, we are limited to the operating systems and versions listed earlier. Additionally, it is recommended that each machine has at least 1 GB of RAM, and all the nodes must have network connectivity to one another.

For this walk-through, we will need one t2.medium (master node) and three t2.micro (nodes) sized instances on AWS. These instances have burstable CPU and come with the minimum 1 GB of RAM that is needed. We will need to create one master and three worker nodes.

We will also need to create some security groups for the cluster. The following ports are needed for the master:

Type          Protocol   Port range   Source
All Traffic   All        All          {This SG ID (Master SG)}
All Traffic   All        All          {Node SG ID}
SSH           TCP        22           {Your Local Machine's IP}
HTTPS         TCP        443          {Range allowed to access K8s API and UI}

Master Security Group Rules

The next table shows the rules for the node security group:

Type          Protocol   Port range   Source
All Traffic   All        All          {Master SG ID}
All Traffic   All        All          {This SG ID (Node SG)}
SSH           TCP        22           {Your Local Machine's IP}

Node Security Group Rules

Once you have these SGs, go ahead and spin up four instances (one t2.medium and three t2.micros) using Ubuntu 16.04. If you are new to AWS, refer to the documentation on spinning up EC2 instances at the following URL:

http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/LaunchingAndUsingInstances.html

Be sure to identify the t2.medium instance as the master and associate the master security group. Name the other three as nodes and associate the node security group with those.

These steps are adapted from the walk-through in the manual. For more information, or to work with an alternative to Ubuntu, refer to https://kubernetes.io/docs/getting-started-guides/kubeadm/.

Installing Kubernetes components (kubelet and kubeadm)

Next, we will need to SSH into all four of the instances and install the Kubernetes components.

As root, perform the following steps on all four instances:

1. Update packages and install the apt-transport-https package so we can download from sources that use HTTPS:

   $ apt-get update
   $ apt-get install -y apt-transport-https

2. Install the Google Cloud public key:

   $ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -

3. Next, create a source list for the Kubernetes package downloads with your favorite editor:

   $ vi /etc/apt/sources.list.d/kubernetes.list

4. Use the following as contents for this file and save:

   deb http://apt.kubernetes.io/ kubernetes-xenial main

Listing 1-1. /etc/apt/sources.list.d/kubernetes.list

5. Update your sources once more:

   $ apt-get update

6. Install Docker and the core Kubernetes components:

   $ apt-get install -y docker.io
   $ apt-get install -y kubelet kubeadm kubectl kubernetes-cni

Setting up a Master

On the instance you have previously chosen as the master, we will run the master initialization. Again, as root, run the following command:

$ kubeadm init

Note that initialization can only be run once, so if you run into problems, you'll need to run kubeadm reset and start over.

Joining nodes

After a successful initialization, you will get a join command that can be used by the nodes. Copy this down for the join process later on. It should look similar to this:

$ kubeadm join --token=<some token> <master ip address>

The token is used to authenticate cluster nodes, so make sure to store it somewhere securely for future use.

Networking

Our cluster will need a networking layer for the pods to communicate on. Note that kubeadm requires a CNI-compatible network fabric. The list of plugins currently available can be found here:

http://kubernetes.io/docs/admin/addons/

For our example, we will use Calico. We will need to create the Calico components on our cluster using the following YAML. For convenience, you can download it here:

http://docs.projectcalico.org/v1.6/getting-started/kubernetes/installation/hosted/kubeadm/calico.yaml

Once you have this file on your master, create the components with the following command:

$ kubectl apply -f calico.yaml

Give this a minute to run its setup, and then list the kube-system pods to check:

$ kubectl get pods --namespace=kube-system

You should get a listing similar to the following one, with three new Calico pods and one completed job (not shown):

Calico setup

Joining the cluster

Now we need to run the join command we copied earlier, on each of our node instances:

$ kubeadm join --token=<some token> <master ip address>

Once you've finished that, you should be able to see all nodes from the master by running this command:

$ kubectl get nodes

If all went well, this will show three nodes and one master, as shown here:

Calico setup

Summary

We took a very brief look at how containers work and how they lend themselves to the new architecture patterns in microservices. You should now have a better understanding of how these two forces will require a variety of operations and management tasks and how Kubernetes offers strong features to address these challenges. We created two different clusters on both GCE and AWS and explored the startup script as well as some of the built-in features of Kubernetes. Finally, we looked at the alternatives to the kube-up script and tried the new kubeadm tool on AWS with Ubuntu 16.04.

In the next chapter, we will explore the core concepts and abstractions that K8s provides to manage containers and full application stacks. We will also look at basic scheduling, service discovery, and health checking.

References


Key benefits

  • Get well-versed with the fundamentals of Kubernetes and get it production-ready for deployments
  • Confidently manage your container clusters and networks using Kubernetes
  • This practical guide will show you container application examples throughout to illustrate the concepts and features of Kubernetes

Description

Kubernetes has continued to grow and achieve broad adoption across various industries, helping you to orchestrate and automate container deployments on a massive scale. This book will give you a complete understanding of Kubernetes and how to get a cluster up and running. You will develop an understanding of the installation and configuration process. The book will then focus on the core Kubernetes constructs such as pods, services, replica sets, replication controllers, and labels. You will also understand how cluster level networking is done in Kubernetes. The book will also show you how to manage deployments and perform updates with minimal downtime. Additionally, you will learn about operational aspects of Kubernetes such as monitoring and logging. Advanced concepts such as container security and cluster federation will also be covered. Finally, you will learn about the wider Kubernetes ecosystem with OCP, CoreOS, and Tectonic and explore the third-party extensions and tools that can be used with Kubernetes. By the end of the book, you will have a complete understanding of the Kubernetes platform and will start deploying applications on it.

Who is this book for?

This book is for developers, sys admins, and DevOps engineers who want to automate the deployment process and scale their applications. You do not need any prior knowledge of Kubernetes.

What you will learn

  • Download, install, and configure the Kubernetes codebase
  • Understand the core concepts of a Kubernetes cluster
  • Be able to set up and access monitoring and logging for Kubernetes clusters
  • Set up external access to applications running in the cluster
  • Understand how CoreOS and Kubernetes can help you achieve greater performance and container implementation agility
  • Run multiple clusters and manage from a single control plane
  • Explore container security as well as securing Kubernetes clusters
  • Work with third-party extensions and tools

Product Details

Publication date: May 31, 2017
Length: 286 pages
Edition: 2nd
Language: English
ISBN-13: 9781787283367

Table of Contents

12 Chapters:

1. Introduction to Kubernetes
2. Pods, Services, Replication Controllers, and Labels
3. Networking, Load Balancers, and Ingress
4. Updates, Gradual Rollouts, and Autoscaling
5. Deployments, Jobs, and DaemonSets
6. Storage and Running Stateful Applications
7. Continuous Delivery
8. Monitoring and Logging
9. Cluster Federation
10. Container Security
11. Extending Kubernetes with OCP, CoreOS, and Tectonic
12. Towards Production Ready
