Kubernetes Cookbook

Building Your Own Kubernetes Cluster

In this chapter, we will cover the following recipes:

  • Exploring the Kubernetes architecture
  • Setting up a Kubernetes cluster on macOS by minikube
  • Setting up a Kubernetes cluster on Windows by minikube
  • Setting up a Kubernetes cluster on Linux by kubeadm
  • Setting up a Kubernetes cluster on Linux by Ansible (kubespray)
  • Running your first container in Kubernetes

Introduction

Welcome to your journey into Kubernetes! In this very first section, you will learn how to build your own Kubernetes cluster. Along with understanding each component and connecting them together, you will learn how to run your first container on Kubernetes. Having a Kubernetes cluster will help you continue your studies in the chapters ahead.

Exploring the Kubernetes architecture

Kubernetes is an open source container management tool. It is a lightweight and portable application written in Go (https://golang.org). You can set up a Kubernetes cluster on a Linux-based OS to deploy, manage, and scale Docker container applications across multiple hosts.

Getting ready

Kubernetes is made up of the following components:

  • Kubernetes master
  • Kubernetes nodes
  • etcd
  • Kubernetes network

These components are connected via a network, as shown in the following diagram:

The preceding diagram can be summarized as follows:

  • Kubernetes master: It connects to etcd via HTTP or HTTPS to store its data
  • Kubernetes nodes: They connect to the Kubernetes master via HTTP or HTTPS to receive commands and report their status
  • Kubernetes network: An L2, L3, or overlay network that connects the container applications

How to do it...

In this section, we are going to explain how to use the Kubernetes master and nodes to realize the main functions of the Kubernetes system.

Kubernetes master

The Kubernetes master is the main component of the Kubernetes cluster. It serves several functionalities, such as the following:

  • Authorization and authentication
  • RESTful API entry point
  • Container deployment scheduler to Kubernetes nodes
  • Scaling and replicating controllers
  • Reading the configuration to set up a cluster

The following diagram shows how master daemons work together to fulfill the aforementioned functionalities:

There are several daemon processes that form the Kubernetes master's functionality, such as kube-apiserver, kube-scheduler and kube-controller-manager. hyperkube, the wrapper binary, can launch all these daemons.

In addition, the Kubernetes command-line interface, kubectl, can control the Kubernetes master functionality.

API server (kube-apiserver)

The API server provides an HTTP- or HTTPS-based RESTful API, which is the hub between Kubernetes components, such as kubectl, the scheduler, the replication controller, the etcd data store, and the kubelet and kube-proxy daemons that run on the Kubernetes nodes.
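To get a quick look at the API server your kubectl client is configured against, and to query the RESTful API through kubectl, you can run commands such as the following (a minimal sketch; the output will differ in your environment):

//show the API server endpoint that kubectl is configured to use
$ kubectl cluster-info

//query the RESTful API directly through kubectl (here, the /version endpoint)
$ kubectl get --raw /version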

Scheduler (kube-scheduler)

The scheduler helps to choose which node each container runs on. It applies a simple algorithm that defines the priority for dispatching and binding containers to nodes, based on factors such as the following:

  • CPU
  • Memory
  • How many containers are running?
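As a quick way to see the scheduler's decision for a particular pod, you can inspect the pod's events or its node assignment; the pod name my-nginx below is only a hypothetical example:

//the Events section shows which node the scheduler bound the pod to
$ kubectl describe pod my-nginx

//or list pods together with their assigned nodes
$ kubectl get pods -o wide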

Controller manager (kube-controller-manager)

The controller manager performs cluster operations. For example:

  • Manages Kubernetes nodes
  • Creates and updates the Kubernetes internal information
  • Attempts to change the current status to the desired status

Command-line interface (kubectl)

After you install the Kubernetes master, you can use the Kubernetes command-line interface, kubectl, to control the Kubernetes cluster. For example, kubectl get cs returns the status of each component. Also, kubectl get nodes returns a list of Kubernetes nodes:

//see the Component Statuses
# kubectl get cs
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok nil
scheduler Healthy ok nil
etcd-0 Healthy {"health": "true"} nil


//see the nodes
# kubectl get nodes
NAME LABELS STATUS AGE
kub-node1 kubernetes.io/hostname=kub-node1 Ready 26d
kub-node2 kubernetes.io/hostname=kub-node2 Ready 26d

Kubernetes node

The Kubernetes node is a slave node in the Kubernetes cluster. It is controlled by the Kubernetes master to run container applications using Docker (http://docker.com) or rkt (http://coreos.com/rkt/docs/latest/). In this book, we will use the Docker container runtime as the default engine.

Node or slave?

The term slave is used in the computer industry to represent the cluster worker node; however, it is also associated with discrimination. The Kubernetes project used minion in early versions and uses node in the current version.

The following diagram displays the role and tasks of daemon processes in the node:

The node also has two daemon processes, named kubelet and kube-proxy, to support its functionalities.

kubelet

kubelet is the main process on the Kubernetes node that communicates with the Kubernetes master to handle the following operations:

  • Periodically accesses the API server to check and report status
  • Performs container operations
  • Runs the HTTP server to provide simple APIs
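For example, the simple HTTP server mentioned above exposes a local healthz endpoint on port 10248 (the port also appears in the port list later in this chapter); a minimal check, run directly on a node, might look like this:

//run on a Kubernetes node; kubelet answers ok when it is healthy
$ curl http://localhost:10248/healthz
ok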

Proxy (kube-proxy)

The proxy handles the network proxy and load balancer for each container. It changes Linux iptables rules (nat table) to control TCP and UDP packets across the containers.

Once the kube-proxy daemon has started, it configures iptables rules; you can use iptables -t nat -L or iptables -t nat -S to check the nat table rules, as follows:

//the result will vary and is dynamically changed by kube-proxy
# sudo iptables -t nat -S
-P PREROUTING ACCEPT
-P INPUT ACCEPT
-P OUTPUT ACCEPT
-P POSTROUTING ACCEPT
-N DOCKER
-N FLANNEL
-N KUBE-NODEPORT-CONTAINER
-N KUBE-NODEPORT-HOST
-N KUBE-PORTALS-CONTAINER
-N KUBE-PORTALS-HOST
-A PREROUTING -m comment --comment "handle ClusterIPs; NOTE: this must be before the NodePort rules" -j KUBE-PORTALS-CONTAINER
-A PREROUTING -m addrtype --dst-type LOCAL -m comment --comment "handle service NodePorts; NOTE: this must be the last rule in the chain" -j KUBE-NODEPORT-CONTAINER
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT -m comment --comment "handle ClusterIPs; NOTE: this must be before the NodePort rules" -j KUBE-PORTALS-HOST
-A OUTPUT -m addrtype --dst-type LOCAL -m comment --comment "handle service NodePorts; NOTE: this must be the last rule in the chain" -j KUBE-NODEPORT-HOST
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -s 192.168.90.0/24 ! -o docker0 -j MASQUERADE
-A POSTROUTING -s 192.168.0.0/16 -j FLANNEL
-A FLANNEL -d 192.168.0.0/16 -j ACCEPT
-A FLANNEL ! -d 224.0.0.0/4 -j MASQUERADE

How it works...

There are two more components that complement the Kubernetes node's functionality: the etcd data store and the inter-container network. You can learn how they support the Kubernetes system in the following subsections.

etcd

etcd (https://coreos.com/etcd/) is a distributed key-value data store. It can be accessed via a RESTful API to perform CRUD operations over the network. Kubernetes uses etcd as its main data store.

You can explore the Kubernetes configuration and status in etcd (/registry) using the curl command, as follows:

//example: etcd server is localhost and default port is 4001
# curl -L http://127.0.0.1:4001/v2/keys/registry
{"action":"get","node":{"key":"/registry","dir":true,"nodes":[{"key":"/registry/namespaces","dir":true,"modifiedIndex":6,"createdIndex":6},{"key":"/registry/pods","dir":true,"modifiedIndex":187,"createdIndex":187},{"key":"/registry/clusterroles","dir":true,"modifiedIndex":196,"createdIndex":196},{"key":"/registry/replicasets","dir":true,"modifiedIndex":178,"createdIndex":178},{"key":"/registry/limitranges","dir":true,"modifiedIndex":202,"createdIndex":202},{"key":"/registry/storageclasses","dir":true,"modifiedIndex":215,"createdIndex":215},{"key":"/registry/apiregistration.k8s.io","dir":true,"modifiedIndex":7,"createdIndex":7},{"key":"/registry/serviceaccounts","dir":true,"modifiedIndex":70,"createdIndex":70},{"key":"/registry/secrets","dir":true,"modifiedIndex":71,"createdIndex":71},{"key":"/registry/deployments","dir":true,"modifiedIndex":177,"createdIndex":177},{"key":"/registry/services","dir":true,"modifiedIndex":13,"createdIndex":13},{"key":"/registry/configmaps","dir":true,"modifiedIndex":52,"createdIndex":52},{"key":"/registry/ranges","dir":true,"modifiedIndex":4,"createdIndex":4},{"key":"/registry/minions","dir":true,"modifiedIndex":58,"createdIndex":58},{"key":"/registry/clusterrolebindings","dir":true,"modifiedIndex":171,"createdIndex":171}],"modifiedIndex":4,"createdIndex":4}}

Kubernetes network

Network communication between containers is the most difficult part. Because Kubernetes manages multiple nodes (hosts) running several containers, those containers on different nodes may need to communicate with each other.

If the containers' network communication is confined to a single node, you can use Docker networking or Docker Compose to discover peers. However, across multiple nodes, Kubernetes uses an overlay network or the Container Network Interface (CNI) to achieve communication between containers.
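Once such a cluster network is in place, every pod gets its own IP address that is reachable from the other nodes. A quick way to observe this is to list pods together with their IPs and node assignments (a sketch; the output depends on what is running in your cluster):

//the IP column shows the pod address assigned by the overlay/CNI network
$ kubectl get pods -o wide --all-namespaces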

See also

This recipe describes the basic architecture and methodology of Kubernetes and the related components. Understanding Kubernetes is not easy, but a step-by-step learning process on how to set up, configure, and manage Kubernetes is really fun.

Setting up the Kubernetes cluster on macOS by minikube

Kubernetes is a combination of multiple open source components. These are developed by different parties, making it difficult to find and download all the related packages and to install, configure, and make them work from scratch.

Fortunately, several solutions and tools have been developed to set up Kubernetes clusters effortlessly. Therefore, it is highly recommended that you use such a tool to set up Kubernetes in your environment.

The following tools are categorized by the type of solution used to build your own Kubernetes cluster:

A self-managed solution is suitable if we just want to build a development environment or do a proof of concept quickly.

By using minikube (https://github.com/kubernetes/minikube) and kubeadm (https://kubernetes.io/docs/admin/kubeadm/), we can easily build the desired environment on our machine locally; however, it is not practical if we want to build a production environment.

By using kubespray (https://github.com/kubernetes-incubator/kubespray) and kops (https://github.com/kubernetes/kops), we can also build a production-grade environment quickly from scratch.

An enterprise solution or cloud-hosted solution is the easiest starting point if we want to create a production environment. In particular, Google Kubernetes Engine (GKE), which has been used by Google for many years, comes with comprehensive management, meaning that users don't need to care much about installation and settings. Also, Amazon EKS is a new service, introduced at AWS re:Invent 2017, that provides managed Kubernetes on AWS.

Kubernetes can also run on different clouds and on-premises VMs with custom solutions. To get started, we will build Kubernetes using minikube on a macOS desktop machine in this chapter.

Getting ready

minikube runs Kubernetes in a Linux VM on macOS. It relies on a hypervisor (virtualization technology), such as VirtualBox (https://www.virtualbox.org), VMware Fusion (https://www.vmware.com/products/fusion.html), or hyperkit (https://github.com/moby/hyperkit). In addition, we will need the Kubernetes command-line interface (CLI), kubectl, which connects over the network to the VM to control Kubernetes.

With minikube, you can run the entire Kubernetes stack on your macOS machine, including the Kubernetes master, node, and CLI. It is recommended that your macOS machine has enough memory to run Kubernetes. By default, minikube uses VirtualBox as the hypervisor.

In this chapter, however, we will demonstrate how to use hyperkit, which is the most lightweight solution. As the Linux VM consumes 2 GB of memory, at least 4 GB of memory is recommended. Note that hyperkit is built on top of the Hypervisor framework (https://developer.apple.com/documentation/hypervisor) on macOS; therefore, macOS 10.10 Yosemite or later is required.
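If you want to be explicit about the resources minikube allocates to the Linux VM, you can pass the VM memory size (in MB) when starting it. This is just a sketch of the --memory flag, ahead of the full walk-through below:

//example only: start minikube with hyperkit and a 4 GB VM
$ minikube start --vm-driver=hyperkit --memory 4096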

The following diagram shows the relationship between kubectl, the hypervisor, minikube, and macOS:

How to do it...

macOS doesn't have an official package management tool, such as yum and apt-get on Linux. But there are some useful tools available for macOS. Homebrew (https://brew.sh) is the most popular package management tool and manages many open source tools, including minikube.

In order to install Homebrew on macOS, perform the following steps:

  1. Open the Terminal and then type the following command:
$ /usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
  2. Once the installation is complete, you can type /usr/local/bin/brew help to see the available command options.

If you have just installed or upgraded Xcode on your macOS machine, the Homebrew installation may stop. In that case, open Xcode to accept the license agreement, or type sudo xcodebuild -license beforehand.

  3. Next, install the hyperkit driver for minikube. At the time of writing (February 2018), Homebrew does not support hyperkit; therefore, type the following command to install it:
$ curl -LO https://storage.googleapis.com/minikube/releases/latest/docker-machine-driver-hyperkit \
&& chmod +x docker-machine-driver-hyperkit \
&& sudo mv docker-machine-driver-hyperkit /usr/local/bin/ \
&& sudo chown root:wheel /usr/local/bin/docker-machine-driver-hyperkit \
&& sudo chmod u+s /usr/local/bin/docker-machine-driver-hyperkit
  4. Next, let's install the Kubernetes CLI. Use Homebrew with the following command to install kubectl on your macOS machine:
//install kubectl command by "kubernetes-cli" package
$ brew install kubernetes-cli

Finally, you can install minikube. It is not managed by Homebrew; however, Homebrew has an extension called homebrew-cask (https://github.com/caskroom/homebrew-cask) that supports minikube.

  5. In order to install minikube by homebrew-cask, simply type the following command:
//add "cask" option
$ brew cask install minikube
  6. If you have never installed Docker for Mac on your machine, you need to install it via homebrew-cask as well:
//only if you don't have a Docker for Mac
$ brew cask install docker

//start Docker
$ open -a Docker.app
  7. Now you are all set! The following commands show whether the required packages have been installed on your macOS machine or not:
//check installed package by homebrew
$ brew list
kubernetes-cli


//check installed package by homebrew-cask
$ brew cask list
minikube

How it works...

minikube sets up Kubernetes on your macOS machine with the following command, which downloads and starts a Kubernetes VM, and then writes the kubectl configuration (~/.kube/config):

//use --vm-driver=hyperkit to specify to use hyperkit
$ /usr/local/bin/minikube start --vm-driver=hyperkit
Starting local Kubernetes v1.10.0 cluster...
Starting VM...
Downloading Minikube ISO
150.53 MB / 150.53 MB [============================================] 100.00% 0s
Getting VM IP address...
Moving files into cluster...
Downloading kubeadm v1.10.0
Downloading kubelet v1.10.0
Finished Downloading kubelet v1.10.0
Finished Downloading kubeadm v1.10.0
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
Kubectl is now configured to use the cluster.
Loading cached images from config file.


//check whether .kube/config is configured or not
$ cat ~/.kube/config
apiVersion: v1
clusters:
- cluster:
certificate-authority: /Users/saito/.minikube/ca.crt
server: https://192.168.64.26:8443
name: minikube
contexts:
- context:
cluster: minikube
user: minikube
name: minikube
current-context: minikube
kind: Config
preferences: {}
users:
- name: minikube
user:
as-user-extra: {}
client-certificate: /Users/saito/.minikube/client.crt
client-key: /Users/saito/.minikube/client.key

After getting all the necessary packages, perform the following steps:

  1. Wait for a few minutes for the Kubernetes cluster setup to complete.
  2. Use kubectl version to check the Kubernetes master version and kubectl get cs to see the component status.
  3. Also, use the kubectl get nodes command to check whether the Kubernetes node is ready or not:
//it shows kubectl (Client) is 1.10.1, and Kubernetes master (Server) is 1.10.0
$ /usr/local/bin/kubectl version --short
Client Version: v1.10.1
Server Version: v1.10.0


//get cs shows the Component Statuses
$ kubectl get cs
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-0 Healthy {"health": "true"}


//Kubernetes node (minikube) is ready
$ /usr/local/bin/kubectl get nodes
NAME STATUS ROLES AGE VERSION
minikube Ready master 2m v1.10.0
  4. Now you can start to use Kubernetes on your machine. The following sections describe how to use the kubectl command to manipulate Docker containers.
  5. Note that, in some cases, you may need to maintain the Kubernetes cluster, such as starting/stopping the VM or completely deleting it. The following commands maintain the minikube environment:

  • minikube start --vm-driver=hyperkit: Starts the Kubernetes VM using the hyperkit driver
  • minikube stop: Stops the Kubernetes VM
  • minikube delete: Deletes the Kubernetes VM image
  • minikube ssh: Opens an ssh session to the Kubernetes VM guest
  • minikube ip: Shows the Kubernetes VM (node) IP address
  • minikube update-context: Checks and updates ~/.kube/config if the VM IP address has changed
  • minikube dashboard: Opens the web browser to connect to the Kubernetes UI

For example, minikube starts a dashboard (the Kubernetes UI) by default. If you want to access the dashboard, type minikube dashboard; it then opens your default browser and connects to the Kubernetes UI, as illustrated in the following screenshot:

See also

This recipe describes how to set up a Kubernetes cluster on your macOS using minikube. It is the easiest way to start using Kubernetes. We also learned how to use kubectl, the Kubernetes command-line interface tool, which is the entry point to control our Kubernetes cluster!

Setting up the Kubernetes cluster on Windows by minikube

By nature, Docker and Kubernetes run on a Linux-based OS. Although it is not ideal to use the Windows OS to explore Kubernetes, many people use Windows as their desktop or laptop machine. Luckily, there are plenty of ways to run a Linux OS on Windows using virtualization technologies, which makes running a Kubernetes cluster on a Windows machine possible. Then, we can build a development environment or do a proof of concept on our local Windows machine.

You can run a Linux VM using any hypervisor on Windows and set up Kubernetes from scratch, but using minikube (https://github.com/kubernetes/minikube) is the fastest way to build a Kubernetes cluster on Windows. Note that this recipe is not ideal for a production environment because it sets up Kubernetes on a Linux VM on Windows.

Getting ready

Setting up minikube on Windows requires a hypervisor, either VirtualBox (https://www.virtualbox.org) or Hyper-V, because, again, minikube uses a Linux VM on Windows. This means that you cannot use a Windows virtual machine (for example, a Windows VM running on macOS via Parallels).

However, kubectl, the Kubernetes CLI, supports a Windows native binary that can connect to Kubernetes over the network. So, you can set up a portable suite of the Kubernetes stack on your Windows machine.

The following diagram shows the relationship between kubectl, Hypervisor, minikube, and Windows:

Hyper-V requires Windows 8 Pro or later. Because many users still use Windows 7, we will use VirtualBox as the minikube hypervisor in this recipe.

How to do it...

First of all, VirtualBox for Windows is required:

  1. Go to the VirtualBox website (https://www.virtualbox.org/wiki/Downloads) to download the Windows installer.
  2. Installation is straightforward, so we can just choose the default options and click Next:
  3. Next, create the Kubernetes folder, which is used to store the minikube and kubectl binaries. Let's create the k8s folder directly under the C: drive, as shown in the following screenshot:
  4. This folder must be in the command search path, so open System Properties, then move to the Advanced tab.
  5. Click the Environment Variables... button, then choose Path, and then click the Edit... button, as shown in the following screenshot:
  6. Then, append c:\k8s, as follows:
  7. After clicking the OK button, log off and log on to Windows again (or reboot) to apply this change.
  8. Next, download minikube for Windows. It is a single binary, so use any web browser to download https://github.com/kubernetes/minikube/releases/download/v0.26.1/minikube-windows-amd64 and then copy it to the c:\k8s folder, but change the filename to minikube.exe.

  9. Next, download kubectl for Windows, which can communicate with Kubernetes. It is also a single binary, like minikube. So, download https://storage.googleapis.com/kubernetes-release/release/v1.10.2/bin/windows/amd64/kubectl.exe and then copy it to the c:\k8s folder as well.
  10. Eventually, you will see the two binaries in the c:\k8s folder, as shown in the following screenshot:
If you are running anti-virus software, it may prevent you from running kubectl.exe and minikube.exe. If so, please update your anti-virus software settings to allow running these two binaries.

How it works...

Let's get started!

  1. Open Command Prompt and then type minikube start, as shown in the following screenshot:
  2. minikube downloads the Linux VM image and then sets up Kubernetes on the Linux VM; now, if you open VirtualBox, you can see that the minikube guest has been registered, as illustrated in the following screenshot:
  3. Wait for a few minutes for the setup of the Kubernetes cluster to complete.
  4. As per the following screenshot, type kubectl version to check the Kubernetes master version.
  5. Use the kubectl get nodes command to check whether the Kubernetes node is ready or not:
  6. Now you can start to use Kubernetes on your machine! Again, Kubernetes is running on the Linux VM, as shown in the next screenshot.
  7. Using minikube ssh allows you to access the Linux VM that runs Kubernetes:

Therefore, any Linux-based Docker image is capable of running on your Windows machine.

  8. Type minikube ip to verify which IP address the Linux VM uses, and also minikube dashboard to open your default web browser and navigate to the Kubernetes UI, as shown in the following screenshot:
  9. If you don't need to use Kubernetes anymore, type minikube stop or open VirtualBox to stop the Linux guest and release the resources, as shown in the following screenshot:

See also

This recipe describes how to set up a Kubernetes cluster on your Windows OS using minikube. It is the easiest way to start using Kubernetes. It also describes kubectl, the Kubernetes command-line interface tool, which is the entry point from which to control your Kubernetes cluster.

Setting up the Kubernetes cluster on Linux via kubeadm

In this recipe, we are going to show how to create a Kubernetes cluster with kubeadm (https://github.com/kubernetes/kubeadm) on Linux servers. kubeadm is a command-line tool that simplifies the procedure of creating and managing a Kubernetes cluster. kubeadm leverages the fast deployment feature of Docker, running the system services of the Kubernetes master and the etcd server as containers. When triggered by the kubeadm command, the container services contact kubelet on the Kubernetes node directly; kubeadm also checks whether every component is healthy. Through the kubeadm setup steps, you avoid the long list of installation and configuration commands needed when you build everything from scratch.

Getting ready

We will provide instructions for two types of OS:

  • Ubuntu Xenial 16.04 (LTS)
  • CentOS 7.4

Make sure the OS version matches before continuing. Furthermore, the software dependencies and network settings should also be verified before you proceed to the next step. Check the following items to prepare the environment:

  • Every node has a unique MAC address and product UUID: Some plugins use the MAC address or product UUID as a unique machine ID to identify nodes (for example, kube-dns). If they are duplicated in the cluster, kubeadm may not work while starting the plugin:
// check MAC address of your NIC
$ ifconfig -a
// check the product UUID on your host
$ sudo cat /sys/class/dmi/id/product_uuid
  • Every node has a different hostname: If the hostname is duplicated, the Kubernetes system may collect logs or statuses from multiple nodes into the same one.
  • Docker is installed: As mentioned previously, the Kubernetes master will run its daemons as containers, and every node in the cluster should have Docker installed. For how to perform the Docker installation, you can follow the steps on the official website (Ubuntu: https://docs.docker.com/engine/installation/linux/docker-ce/ubuntu/, CentOS: https://docs.docker.com/engine/installation/linux/docker-ce/centos/). Here we have Docker CE 17.06 installed on our machines; however, only Docker versions 1.11.2 to 1.13.1 and 17.03.x are verified with Kubernetes version 1.10.
  • Network ports are available: The Kubernetes system services need network ports for communication. The following ports should not be occupied, according to the role of the node:
Master:
  • 6443: Kubernetes API server
  • 10248/10250/10255: kubelet local healthz endpoint/Kubelet API/Heapster (read-only)
  • 10251: kube-scheduler
  • 10252: kube-controller-manager
  • 10249/10256: kube-proxy
  • 2379/2380: etcd client/etcd server communication
Node:
  • 10250/10255: Kubelet API/Heapster (read-only)
  • 30000~32767: port range reserved for exposing container services to the outside world
  • The Linux command, netstat, can help to check if the port is in use or not:

// list every listening port
$ sudo netstat -tulpn | grep LISTEN
  • Network tool packages are installed: ethtool and ebtables are two required utilities for kubeadm. They can be downloaded and installed by the apt-get or yum package management tools, as sketched below.
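A minimal sketch of installing both utilities (the package names ethtool and ebtables are assumed to be the same on both distributions):

// on Ubuntu
$ sudo apt-get install -y ethtool ebtables
// on CentOS
$ sudo yum install -y ethtool ebtables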

How to do it...

The installation procedures for two Linux OSes, Ubuntu and CentOS, are going to be introduced separately in this recipe as they have different setups.

Package installation

Let's get the Kubernetes packages first! The repository for downloading needs to be set in the source list of the package management system. Then, we are able to get them installed easily through the command-line.

Ubuntu

To install the Kubernetes packages in Ubuntu, perform the following steps:

  1. Some repository URLs use HTTPS. The apt-transport-https package must be installed to access the HTTPS endpoints:
$ sudo apt-get update && sudo apt-get install -y apt-transport-https
  2. Download the public key for accessing packages on Google Cloud, and add it as follows:
$ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
OK
  3. Next, add a new source list for the Kubernetes packages:
$ sudo bash -c 'echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" > /etc/apt/sources.list.d/kubernetes.list'
  4. Finally, install the Kubernetes packages:
// on Kubernetes master
$ sudo apt-get update && sudo apt-get install -y kubelet kubeadm kubectl
// on Kubernetes node

$ sudo apt-get update && sudo apt-get install -y kubelet

CentOS

To install the Kubernetes packages in CentOS, perform the following steps:

  1. As with Ubuntu, new repository information needs to be added:
$ sudo vim /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
  2. Now, we are ready to pull the packages from the Kubernetes source base via the yum command:
// on Kubernetes master
$ sudo yum install -y kubelet kubeadm kubectl
// on Kubernetes node
$ sudo yum install -y kubelet
  3. No matter which OS it is, check the version of the package you get!
// take it easy! the server connection fails because there is no server running yet
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.2", GitCommit:"81753b10df112992bf51bbc2c2f85208aad78335", GitTreeState:"clean", BuildDate:"2018-04-27T09:22:21Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server 192.168.122.101:6443 was refused - did you specify the right host or port?

System configuration prerequisites

Before bringing up the whole system with kubeadm, please check that Docker is running on your machine for Kubernetes. Moreover, in order to avoid critical errors while executing kubeadm, we will show the necessary service configuration on both the system and kubelet. As well as on the master, please set the following configurations on the Kubernetes nodes so that kubelet works properly with kubeadm.

CentOS system settings

There are additional settings in CentOS to make Kubernetes behave correctly. Be aware that, even if you are not using kubeadm to manage the Kubernetes cluster, the following setup should be considered while running kubelet:

  1. Disable SELinux, since kubelet does not support SELinux completely:
// check the state of SELinux, if it has already been disabled, bypass below commands
$ sestatus

We can disable SELinux through the following command, or by modifying the configuration file:

// disable SELinux through command
$ sudo setenforce 0
// or modify the configuration file
$ sudo sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux

Then we'll need to reboot the machine:

// reboot is required
$ sudo reboot
  2. Enable the usage of iptables. To prevent routing errors, add the following runtime parameters:
// enable the parameters by setting them to 1
$ sudo bash -c 'echo "net.bridge.bridge-nf-call-ip6tables = 1" > /etc/sysctl.d/k8s.conf'
$ sudo bash -c 'echo "net.bridge.bridge-nf-call-iptables = 1" >> /etc/sysctl.d/k8s.conf'
// reload the configuration
$ sudo sysctl --system

Booting up the service

Now we can start the service. First enable and then start kubelet on your Kubernetes master machine:

$ sudo systemctl enable kubelet && sudo systemctl start kubelet

While checking the status of kubelet, you may be worried to see the status displaying activating (auto-restart), and you may be further frustrated to see the following detail in the logs shown by the journalctl command:

error: unable to load client CA file /etc/kubernetes/pki/ca.crt: open /etc/kubernetes/pki/ca.crt: no such file or directory
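If you want to inspect those logs yourself, a typical invocation looks like the following (the -u flag filters for the kubelet unit; the exact output will differ on your machine):

// follow the kubelet service logs
$ sudo journalctl -u kubelet -f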

Don't worry. kubeadm takes care of creating the certificate authority file. It is defined in the service configuration file, /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, by the argument KUBELET_AUTHZ_ARGS. The kubelet service won't be healthy without this file, so it keeps trying to restart the daemon by itself.

Go ahead and start all the master daemons via kubeadm. It is worth noting that kubeadm requires root permission to achieve service-level privileges. For any sudoer, each kubeadm command should follow the sudo command:

$ sudo kubeadm init
Did you find a preflight check error while firing the kubeadm init command? Use the following command to ignore the swap check, as described in the error message:

$ sudo kubeadm init --ignore-preflight-errors=Swap

And you will see the sentence Your Kubernetes master has initialized successfully! showing on the screen. Congratulations! You are almost done! Just follow the information about the user environment setup below the greeting message:

$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config

The preceding commands ensure that every Kubernetes instruction fired by your account executes with the proper credentials and connects to the correct API server:

// Your kubectl command works great now
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.2", GitCommit:"81753b10df112992bf51bbc2c2f85208aad78335", GitTreeState:"clean", BuildDate:"2018-04-27T09:22:21Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.2", GitCommit:"81753b10df112992bf51bbc2c2f85208aad78335", GitTreeState:"clean", BuildDate:"2018-04-27T09:10:24Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}

What's more, kubelet is now in a healthy state:

// check the status of kubelet
$ sudo systemctl status kubelet
...
Active: active (running) Mon 2018-04-30 18:46:58 EDT;
2min 43s ago
...

Network configurations for containers

After the master of the cluster is ready to handle jobs and the services are running, we need to set up the network for container communication so that containers can reach each other across hosts. This is even more important when building a Kubernetes cluster with kubeadm, since the master daemons all run as containers. kubeadm supports the CNI (https://github.com/containernetworking/cni). We are going to attach a CNI plugin via a Kubernetes network add-on.

There are many third-party CNI solutions that supply secure and reliable container network environments. Calico (https://www.projectcalico.org) is one CNI provider of stable container networking. Calico is light and simple, yet still implements the CNI standard well and is integrated with Kubernetes:

$ kubectl apply -f https://docs.projectcalico.org/v2.6/getting-started/kubernetes/installation/hosted/kubeadm/1.6/calico.yaml

Whatever your host OS is, the kubectl command can fire any subcommand for utilizing resources and managing the system. Here, we use kubectl to apply the Calico configuration to our newborn Kubernetes cluster.

More advanced management of networking and Kubernetes add-ons will be discussed in Chapter 7, Building Kubernetes on GCP.

Getting a node involved

Let's log in to your Kubernetes node to join the group controlled by kubeadm:

  1. First, enable and start the service, kubelet. Every Kubernetes machine should have kubelet running on it:
$ sudo systemctl enable kubelet && sudo systemctl start kubelet
  2. After that, fire the kubeadm join command with the token flag and the IP address of the master, to tell the master that this is a secure and authorized node. You can get the token on the master node via the kubeadm command:
// on master node, list the token you have in the cluster
$ sudo kubeadm token list
TOKEN TTL EXPIRES USAGES DESCRIPTION EXTRA GROUPS
da3a90.9a119695a933a867 6h 2018-05-01T18:47:10-04:00 authentication,signing The default bootstrap token generated by 'kubeadm init'. system:bootstrappers:kubeadm:default-node-token
  3. In the preceding output, if kubeadm init succeeds, the default token will be generated. Copy the token and paste it onto the node, and then compose the following command:
// The master IP is 192.168.122.101, token is da3a90.9a119695a933a867, 6443 is the port of api server.
$ sudo kubeadm join --token da3a90.9a119695a933a867 192.168.122.101:6443 --discovery-token-unsafe-skip-ca-verification
What if you call kubeadm token list to list the tokens and see they are all expired? You can create a new one manually with this command: kubeadm token create.
  4. Please make sure that the master's firewall doesn't block any traffic to port 6443, which is used for API server communication. Once you see the words Successfully established connection showing on the screen, it is time to check with the master whether the group got the new member:
// fire kubectl subcommand on master
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
ubuntu01 Ready master 11h v1.10.2
ubuntu02 Ready <none> 26s v1.10.2

Well done! No matter whether your OS is Ubuntu or CentOS, kubeadm is installed and kubelet is running. You can easily go through the preceding steps to build your Kubernetes cluster.

You may be wondering about the discovery-token-unsafe-skip-ca-verification flag used while joining the cluster. Remember the kubelet log that said the certificate file could not be found? That's the reason: our Kubernetes node was brand new and clean, and had never connected to the master before, so there was no certificate file to use for verification. But now, because the node has shaken hands with the master, the file exists. We may instead join in this way (for example, in situations that require rejoining the same cluster):

kubeadm join --token $TOKEN $MASTER_IPADDR:6443 --discovery-token-ca-cert-hash sha256:$HASH

The hash value can be obtained by the openssl command:

// rejoining the same cluster
$ HASH=$(openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //')
$ sudo kubeadm join --token da3a90.9a119695a933a867 192.168.122.101:6443 --discovery-token-ca-cert-hash sha256:$HASH

How it works...

When kubeadm init sets up the master, there are six stages:

  1. Generating certificate files and keys for services: Certificate files and keys are used for security management during cross-node communications. They are located in the /etc/kubernetes/pki directory. Take kubelet, for example. It cannot access the Kubernetes API server without passing the identity verification.
  2. Writing kubeconfig files: The kubeconfig files define permissions, authentication, and configurations for kubectl actions. In this case, the Kubernetes controller manager and scheduler have related kubeconfig files to fulfill any API requests.

  3. Creating service daemon YAML files: The service daemons under kubeadm's control are just like computing components running on the master. As with setting deployment configurations on disk, kubelet will make sure each daemon is active.
  4. Waiting for kubelet to be alive, running the daemons as pods: When kubelet is alive, it will boot up the service pods described in the files under the /etc/kubernetes/manifests directory. Moreover, kubelet guarantees to keep them activated, restarting the pod automatically if it crashes.
  5. Setting post-configuration for the cluster: Some cluster configurations still need to be set, such as configuring role-based access control (RBAC) rules, creating a namespace, and tagging the resources.
  6. Applying add-ons: DNS and proxy services can be added along with the kubeadm system.

When the user runs kubeadm join on a Kubernetes node, kubeadm completes the first two stages, just like on the master.

If you have faced a heavy and complicated setup procedure in earlier versions of Kubernetes, it is quite a relief to set up a Kubernetes cluster with kubeadm. kubeadm reduces the overhead of configuring each daemon and starting them one by one. Users can still customize kubelet and the master services by modifying the familiar file 10-kubeadm.conf and the YAML files under /etc/kubernetes/manifests. kubeadm not only helps to establish the cluster but also enhances security and availability, saving you time.
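You can check the artifacts of these stages on the master once kubeadm init has finished; the directories below are the ones named above, and the exact file list may differ between versions:

// certificates and keys generated in stage 1
$ ls /etc/kubernetes/pki

// kubeconfig files written in stage 2
$ ls /etc/kubernetes/*.conf

// static pod manifests that kubelet keeps running (stages 3 and 4)
$ ls /etc/kubernetes/manifests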

See also

We talked about how to build a Kubernetes cluster. If you're ready to run your first application on it, check the last recipe in this chapter and run the container! And for advanced management of your cluster, you can also look at Chapter 8, Advanced Cluster Administration, of this book:

  • Advanced settings in kubeconfig, in Chapter 8, Advanced Cluster Administration

Setting up the Kubernetes cluster on Linux via Ansible (kubespray)

If you are familiar with configuration management tools, such as Puppet, Chef, and Ansible, kubespray (https://github.com/kubernetes-incubator/kubespray) is the best choice for setting up a Kubernetes cluster from scratch. It provides an Ansible playbook that supports the majority of Linux distributions and public clouds, such as AWS and GCP.

Ansible (https://www.ansible.com) is a Python-based SSH automation tool that can configure Linux machines to your desired state based on a configuration file called a playbook. This recipe describes how to use kubespray to set up Kubernetes on Linux.

Getting ready

As of May 2018, the latest version of kubespray is 2.5.0, which supports the following operating systems for installing Kubernetes:

  • RHEL/CentOS 7
  • Ubuntu 16.04 LTS

According to the kubespray documentation, it also supports CoreOS and Debian distributions. However, those distributions may need some additional steps or have technical difficulties. This cookbook uses CentOS 7 and Ubuntu 16.04 LTS.

In addition, you need to install Ansible on your machine. Ansible works on Python 2.6, 2.7, and 3.5 or higher. macOS and Linux might be the best choices for installing Ansible because Python is preinstalled on macOS and most Linux distributions by default. In order to check which version of Python you have, open a Terminal and type the following command:

//Use capital V
$ python -V
Python 2.7.5

Overall, you need at least three machines, as follows:

  • Ansible: macOS or any Linux that has Python 2.6, 2.7, or 3.5
  • Kubernetes master: RHEL/CentOS 7 or Ubuntu 16.04 LTS
  • Kubernetes node: RHEL/CentOS 7 or Ubuntu 16.04 LTS

These machines need to communicate with each other over the network, so you need to open at least the following network ports (for example, via an AWS Security Group or a GCP firewall rule):

  • TCP/22 (ssh): Ansible to Kubernetes master/node host
  • TCP/6443 (Kubernetes API server): Kubernetes node to master
  • Protocol 4 (IP encapsulated in IP): Kubernetes master and node to each other by Calico
For Protocol 4 (IP encapsulated in IP), if you are using AWS, set an ingress rule with aws ec2 authorize-security-group-ingress --group-id <your SG ID> --cidr <network CIDR> --protocol 4. In addition, if you are using GCP, set a firewall rule with gcloud compute firewall-rules create allow-calico --allow 4 --network <your network name> --source-ranges <network CIDR>.

Installing pip

The easiest way to install Ansible is to use pip, the Python package manager. Some newer versions of Python already include pip (Python 2.7.9 or later and Python 3.4 or later):

  1. To confirm whether pip is installed or not, similar to the Python command, use -V:
//use capital V
$ pip -V
pip 9.0.1 from /Library/Python/2.7/site-packages (python 2.7)
  2. On the other hand, if you see the following result, you need to install pip:
//this result shows you don't have pip yet
$ pip -V
-bash: pip: command not found
  3. In order to install pip, download get-pip.py and install it by using the following commands:
//download pip install script
$ curl -LO https://bootstrap.pypa.io/get-pip.py


//run get-pip.py by privileged user (sudo)
$ sudo python get-pip.py
Collecting pip
Downloading pip-9.0.1-py2.py3-none-any.whl (1.3MB)
100% |################################| 1.3MB 779kB/s
Collecting wheel
Downloading wheel-0.30.0-py2.py3-none-any.whl (49kB)
100% |################################| 51kB 1.5MB/s
Installing collected packages: pip, wheel
Successfully installed pip-9.0.1 wheel-0.30.0


//now you have pip command
$ pip -V
pip 9.0.1 from /usr/lib/python2.7/site-packages (python 2.7)

Installing Ansible

Perform the following steps to install Ansible:

  1. Once you have installed pip, you can install Ansible with the following command:
//run by privileged user (sudo)
$ sudo pip install ansible
pip scans your Python and installs the necessary libraries for Ansible, so it may take a few minutes to complete.
  2. Once you have successfully installed Ansible by pip, you can verify it with the following commands and see output like this:
$ which ansible
/usr/bin/ansible

$ ansible --version
ansible 2.4.1.0

Installing python-netaddr
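
The kubespray playbook also relies on the python-netaddr library for handling IP addresses and network ranges. A minimal sketch of installing it with pip, assuming pip has been set up as described above:

//run by privileged user (sudo)
$ sudo pip install netaddr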

Setting up ssh public key authentication

One more thing: as mentioned previously, Ansible is essentially an ssh automation tool. To log on to hosts via ssh, you must have appropriate credentials (user/password or an ssh public key) for the target machines. In this case, the target machines are the Kubernetes master and nodes.

For security reasons, especially in the public cloud, use only ssh public key authentication instead of ID/password authentication.

To follow the best practice, let's copy the ssh public key from your Ansible machine to the Kubernetes master/node machines:

If you've already set up ssh public key authentication between the Ansible machine and the Kubernetes candidate machines, you can skip this step.

  1. In order to create an ssh public/private key pair from your Ansible machine, type the following command:
//the -q option means quiet output
$ ssh-keygen -q
  1. It will ask you to set a passphrase. You may set or skip (empty) this, but you have to remember it.
  2. Once you have successfully created a key pair, you can see the private key as ~/.ssh/id_rsa and public key as ~/.ssh/id_rsa.pub. You need to append the public key to the target machine under ~/.ssh/authorized_keys, as shown in the following screenshot:
  4. You need to copy and paste your public key to all Kubernetes master and node candidate machines (an alternative using ssh-copy-id is sketched after the following example).
  5. To make sure your ssh public key authentication works, just ssh from the Ansible machine to the target host; it won't ask for your logon password, as shown here:
//use ssh-agent to remember your private key and passphrase (if you set)
ansible_machine$ ssh-agent bash
ansible_machine$ ssh-add
Enter passphrase for /home/saito/.ssh/id_rsa: Identity added: /home/saito/.ssh/id_rsa (/home/saito/.ssh/id_rsa)


//logon from ansible machine to k8s machine which you copied public key
ansible_machine$ ssh 10.128.0.2
Last login: Sun Nov 5 17:05:32 2017 from 133.172.188.35.bc.googleusercontent.com
k8s-master-1$
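As an alternative to manually appending the public key in step 4, the ssh-copy-id utility can copy it for you; a hedged sketch, assuming the same target host and your own logon user:

//copy ~/.ssh/id_rsa.pub into the target host's ~/.ssh/authorized_keys
ansible_machine$ ssh-copy-id <your user>@10.128.0.2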

Now you are all set! Let's set up Kubernetes using kubespray (Ansible) from scratch.

How to do it...

kubespray is provided through the GitHub repository (https://github.com/kubernetes-incubator/kubespray/tags), as shown in the following screenshot:

Because kubespray is an Ansible playbook, not a binary, you can download the latest version (as of May 2018, version 2.5.0 is the latest) of the zip or tar.gz to your Ansible machine directly and unarchive it with the following command:

//download tar.gz format
ansible_machine$ curl -LO https://github.com/kubernetes-incubator/kubespray/archive/v2.5.0.tar.gz


//untar
ansible_machine$ tar zxvf v2.5.0.tar.gz


//it unarchives under kubespray-2.5.0 directory
ansible_machine$ ls -F
get-pip.py kubespray-2.5.0/ v2.5.0.tar.gz


//change to kubespray-2.5.0 directory
ansible_machine$ cd kubespray-2.5.0/

Maintaining the Ansible inventory

In order to run the Ansible playbook, you need to maintain your own inventory file, which contains the target machine IP addresses:

  1. There is a sample inventory file under the inventory directory, so you can copy it by using the following:
//copy sample to mycluster
ansible_machine$ cp -rfp inventory/sample inventory/mycluster

//edit hosts.ini
ansible_machine$ vi inventory/mycluster/hosts.ini
  2. In this cookbook, we are using target machines that have the following IP addresses:
    • Kubernetes master: 10.128.0.2
    • Kubernetes node: 10.128.0.4
  3. In this case, hosts.ini should be in the following format (see the sketch after this list):
  4. Please change the IP addresses to match your environment.

Note that the hostnames (my-master-1 and my-node-1) will be set by the kubespray playbook based on this hosts.ini, so feel free to assign meaningful hostnames.
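A minimal sketch of what hosts.ini might look like for the two machines above; the group names follow the sample inventory shipped with kubespray, and you should adjust them to your own layout:

my-master-1 ansible_ssh_host=10.128.0.2
my-node-1 ansible_ssh_host=10.128.0.4

[kube-master]
my-master-1

[etcd]
my-master-1

[kube-node]
my-node-1

[k8s-cluster:children]
kube-master
kube-node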

Running the Ansible ad hoc command to test your environment

Before running the kubespray playbook, let's check whether hosts.ini and Ansible itself work properly or not:

  1. To do that, use the Ansible ad hoc command with the ping module, as shown in the following screenshot (a sample invocation is also sketched after this list):
  2. This result indicates SUCCESS. But if you see the following error, the IP address is probably wrong or the target machine is down, so please check the target machine first:
  3. Next, check whether you can escalate privileges on the target machine; in other words, whether you can run sudo. This is because you will need to install Kubernetes, Docker, and some related binaries and configurations, which need root privileges. To confirm this, add the -b (become) option, as shown in the following screenshot:
  4. With the -b option, it actually tries to perform sudo on the target machine. If you see SUCCESS, you are all set! Go to the How it works… section to run kubespray.
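The ad hoc commands referenced above look roughly like the following; the inventory path matches the one created earlier, and each host should report SUCCESS:

//ping all hosts in the inventory
$ ansible -i inventory/mycluster/hosts.ini all -m ping

//same check, but escalating privileges with -b (become)
$ ansible -b -i inventory/mycluster/hosts.ini all -m ping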

If you're unfortunate enough to see some errors, please refer to the following section to solve Ansible issues.

Ansible troubleshooting

The ideal situation would be to use the same Linux distribution, version, settings, and logon user. However, the environment will be different based on policy, compatibility, and other reasons. Ansible is flexible and can support many use cases to run ssh and sudo.

Need to specify a sudo password

Based on your Linux machine setting, you may see the following error when adding the -b option. In this case, you need to type your password while running the sudo command:

In this case, add -K (ask for the sudo password) and run again. It will ask for your sudo password when running the Ansible command, as shown in the following screenshot:
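A sketch of the same check with the sudo password prompt enabled:

//-K asks for the sudo password interactively
$ ansible -b -K -i inventory/mycluster/hosts.ini all -m ping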

If your Linux uses the su command instead of sudo, adding --become-method=su to the Ansible command could help. Please read the Ansible documentation for more details: http://docs.ansible.com/ansible/latest/become.html

Need to specify different ssh logon user

Sometimes you may need to ssh to target machines using a different logon user. In this case, you can append the ansible_user parameter to an individual host in hosts.ini. For example:

  • Use the username kirito to ssh to my-master-1
  • Use the username asuna to ssh to my-node-1

In this case, change hosts.ini, as shown in the following code:

my-master-1 ansible_ssh_host=10.128.0.2 ansible_user=kirito
my-node-1 ansible_ssh_host=10.128.0.4 ansible_user=asuna

Need to change ssh port

Another scenario is where you may need to run the ssh daemon on a specific port number rather than the default port 22. Ansible also supports this scenario, using the ansible_port parameter for the individual host in hosts.ini, as shown in the following code (in this example, the ssh daemon is running on port 10022 on my-node-1):

my-master-1 ansible_ssh_host=10.128.0.2
my-node-1 ansible_ssh_host=10.128.0.4 ansible_port=10022

Common Ansible issues

Ansible is flexible enough to support any other situations. If you need any specific parameters to customize the ssh logon for the target host, read the Ansible inventory documentation to find a specific parameter: http://docs.ansible.com/ansible/latest/intro_inventory.html

In addition, Ansible has a configuration file, ansible.cfg, on top of the kubespray directory. It defines common settings for Ansible. For example, if you are using a very long username that usually causes an Ansible error, change ansible.cfg to set control_path to solve the issue, as shown in the following code:

[ssh_connection]
control_path = %(directory)s/%%h-%%r

If you plan to set up more than 10 nodes, you may need to increase the number of simultaneous ssh sessions. In this case, add the forks parameter; this also requires increasing the ssh timeout from 10 seconds to 30 seconds by adding the timeout parameter, as shown in the following code:

[ssh_connection]
forks = 50
timeout = 30

The following screenshot contains all of the preceding configurations in ansible.cfg:

For more details, please visit the Ansible configuration documentation at http://docs.ansible.com/ansible/latest/intro_configuration.html

How it works...

Now you can start to run the kubespray playbook:

  1. You've already created an inventory file as inventory/mycluster/hosts.ini. Other than hosts.ini, you need to check and update the global variable configuration file at inventory/mycluster/group_vars/all.yml.
  2. There are a lot of variables defined, but at least one variable, bootstrap_os, needs to be changed from none to your target Linux distribution. If you are using RHEL/CentOS 7, set bootstrap_os to centos. If you are using Ubuntu 16.04 LTS, set bootstrap_os to ubuntu, as shown in the following screenshot:

You can also update other variables, such as kube_version, to change or install a Kubernetes version. For more details, read the documentation at https://github.com/kubernetes-incubator/kubespray/blob/master/docs/vars.md.
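For reference, the relevant line in inventory/mycluster/group_vars/all.yml would look roughly like the following (the value shown is an example; pick the one matching your target OS):

# inventory/mycluster/group_vars/all.yml
bootstrap_os: ubuntu    # or: centos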

  3. Finally, you can execute the playbook. Use the ansible-playbook command instead of the ansible command. ansible-playbook runs multiple Ansible modules based on the tasks and roles that are defined in the playbook.
  4. To run the kubespray playbook, type the ansible-playbook command with the following parameters:
//use -b (become), -i (inventory) and specify cluster.yml as playbook
$ ansible-playbook -b -i inventory/mycluster/hosts.ini cluster.yml

The ansible-playbook argument parameter is the same as the Ansible command. So, if you need to use -K (ask for the sudo password) or --become-method=su, you need to specify for ansible-playbook as well.

  5. It takes around 5 to 10 minutes to complete, based on the machine spec and network bandwidth. Eventually you will see PLAY RECAP, as shown in the following screenshot, which tells you whether it has succeeded or not:
  6. If you see failed=0, as in the preceding screenshot, you have been successful in setting up the Kubernetes cluster. You can ssh to the Kubernetes master machine and run the /usr/local/bin/kubectl command to see the status, as shown in the following screenshot:
  7. The preceding screenshot shows that you have been successful in setting up the Kubernetes version 1.10.2 master and node. You can continue to use the kubectl command to configure your Kubernetes cluster in the following chapters.
  8. Unfortunately, if you see a failed count of more than 0, the Kubernetes cluster has probably not been set up correctly. Because failure can be caused by many reasons, there is no single solution. It is recommended that you append the verbose option -v to see more detailed output from Ansible, as shown in the following code:
//use -b (become), -i (inventory) and -v (verbose)
$ ansible-playbook -v -b -i inventory/mycluster/hosts.ini cluster.yml
  9. If the failure is a timeout, simply retrying the ansible-playbook command may solve it. Because Ansible is designed to be idempotent, re-running the ansible-playbook command twice or more still configures the cluster correctly.

  10. If the failure is due to a changed target IP address after you ran ansible-playbook (for example, reusing the Ansible machine to set up another Kubernetes cluster), you need to clean up the fact cache files. They are located under the /tmp directory, so just delete them, as shown in the following screenshot:
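The fact cache files are typically named after the inventory hostnames (an assumption; check the fact_caching_connection setting in kubespray's ansible.cfg for the exact location on your machine):

//remove cached facts for the hosts defined earlier
$ rm -f /tmp/my-master-1 /tmp/my-node-1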

See also

This recipe describes how to set up a Kubernetes cluster on the Linux OS using kubespray, an Ansible playbook that supports the major Linux distributions. Ansible is simple, but because it supports many situations and environments, you need to take care with some different use cases. Especially with ssh- and sudo-related configurations, you need to understand Ansible more deeply to fit it to your environment.

Running your first container in Kubernetes

Congratulations! You've built your own Kubernetes cluster in the previous recipes. Now, let's get on with running your very first container, nginx (http://nginx.org/), an open source reverse proxy server, load balancer, and web server. In this recipe, you will create a simple nginx application and expose it to the outside world.

Getting ready

Before you start running your first container in Kubernetes, it's better to check that your cluster is healthy. Going through the following checklist will keep your kubectl subcommands stable and successful, without unknown errors caused by background services:

  1. Check the master daemons and confirm that the Kubernetes components are running:
// get the components status
$ kubectl get cs
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-0 Healthy {"health": "true"}
  2. Check the status of the Kubernetes master:
// check if the master is running
$ kubectl cluster-info
Kubernetes master is running at https://192.168.122.101:6443
KubeDNS is running at https://192.168.122.101:6443/api/v1/namespaces/kube-system/services/kube-dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
  3. Check whether all the nodes are ready:
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
ubuntu01 Ready master 20m v1.10.2
ubuntu02 Ready <none> 2m v1.10.2

The ideal results should look like the preceding outputs: the kubectl commands succeed and return responses without errors. If any of the checked items fails to meet the expectation, check the settings in the previous recipes based on the management tool you used.

  4. Check the access permission of the Docker registry, as we will use an official free image as an example. If you want to run your own application, be sure to dockerize it first! For your custom application, you need to write a Dockerfile (https://docs.docker.com/engine/reference/builder/), and build and push it to a public or private Docker registry.
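As a brief, hypothetical sketch of that dockerizing step (the Dockerfile contents, image name, and tag below are placeholders, not part of this recipe):

// a minimal Dockerfile that serves static content on top of the official nginx image
$ cat Dockerfile
FROM nginx
COPY ./html /usr/share/nginx/html
// build and push to your own Docker Hub account (replace <your-account>)
$ docker build -t <your-account>/my-app:1.0 .
$ docker push <your-account>/my-app:1.0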

Test your node connectivity with the public/private Docker registry

On your node, try the docker pull nginx command to test whether you can pull the image from Docker Hub. If you're behind a proxy, add HTTP_PROXY to your Docker configuration file (https://docs.docker.com/engine/admin/systemd/#httphttps-proxy). If you want to run an image from a private repository on Docker Hub, or an image from a private Docker registry, a Kubernetes Secret is required. Please check Working with secrets, in Chapter 2, Walking through Kubernetes Concepts, for the instructions.
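For instance, the connectivity test and one common way to pass a proxy to the Docker daemon look like the following; the drop-in path follows the linked Docker systemd documentation, and the proxy URL is an example to adapt:

// pull the image on the node to verify registry access
$ docker pull nginx
// example systemd drop-in that passes the proxy to the Docker daemon
$ cat /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:3128"
$ sudo systemctl daemon-reload && sudo systemctl restart docker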

How to do it...

We will use the official Docker image of nginx as an example. The image is provided on Docker Hub (https://hub.docker.com/_/nginx/), and also on the Docker Store (https://store.docker.com/images/nginx).

Many official and public images are available on Docker Hub or the Docker Store, so you do not need to build them from scratch. Just pull them and add your custom settings on top of them.

Docker Store versus Docker Hub

As you may be aware, there is a more familiar official repository, Docker Hub, which was launched for the community to share base images. Compared with Docker Hub, the Docker Store focuses on enterprise applications. It provides a place for enterprise-level Docker images, which could be free or paid software. You may feel more confident using the more reliable images on the Docker Store.

Running an HTTP server (nginx)

On the Kubernetes master, we can use kubectl run to create a certain number of containers. The Kubernetes master will then schedule the pods onto the nodes to run. The general command format is as follows:

$ kubectl run <deployment name> --image=<image name> --replicas=<number of replicas> [--port=<exposing port>]

The following example will create two replicas with the name my-first-nginx from the nginx image and expose port 80. We can deploy one or more containers in what is referred to as a pod. In this case, we will deploy one container per pod. Just as with normal Docker behavior, if the nginx image doesn't exist locally, it will be pulled from Docker Hub by default:


// run a deployment with 2 replicas for the image nginx and expose the container port 80
$ kubectl run my-first-nginx --image=nginx --replicas=2 --port=80
deployment "my-first-nginx" created

The name of deployment <my-first-nginx> cannot be duplicated

A resource (a pod, service, deployment, and so on) in one Kubernetes namespace cannot have a duplicated name. If you run the preceding command twice, the following error will pop up:


Error from server (AlreadyExists): deployments.extensions "my-first-nginx" already exists

Let's move on and check the current status of all the pods with kubectl get pods. Normally, the status of the pods will stay Pending for a while, since it takes some time for the nodes to pull the image from the registry:

// get all pods
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
my-first-nginx-7dcd87d4bf-jp572 1/1 Running 0 7m
my-first-nginx-7dcd87d4bf-ns7h4 1/1 Running 0 7m

If the pod status is not running for a long time

You can always use kubectl get pods to check the current status of the pods, and kubectl describe pods $pod_name to check the detailed information of a pod. If you make a typo in the image name, you might get the ErrImagePull error message, and if you are pulling the images from a private repository or registry without proper credentials, you might get the ImagePullBackOff message. If the status stays Pending for a long time, check the node capacity and make sure you are not running more replicas than the nodes can accommodate. If there are other unexpected error messages, you can either stop the pods or the entire deployment to force the master to schedule the tasks again:
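The following commands sketch that troubleshooting flow, using the pod and deployment names from this recipe as examples:

// inspect a problematic pod in detail
$ kubectl describe pod my-first-nginx-7dcd87d4bf-jp572
// delete a single pod; the deployment will schedule a replacement
$ kubectl delete pod my-first-nginx-7dcd87d4bf-jp572
// or remove the whole deployment and start again from kubectl run
$ kubectl delete deployment my-first-nginx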

You can also check the details about the deployment to see whether all the pods are ready:

// check the status of your deployment
$ kubectl get deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
my-first-nginx 2 2 2 2 2m

Exposing the port for external access

We might also want to create an external IP address for the nginx deployment. On cloud providers that support an external load balancer (such as Google Compute Engine), using the LoadBalancer type will provision a load balancer for external access. On the other hand, even if you're not running on a platform that supports an external load balancer, you can still expose the port by creating a Kubernetes service as follows. We'll describe how to access it externally later:

// expose port 80 for the deployment named my-first-nginx
$ kubectl expose deployment my-first-nginx --port=80 --type=LoadBalancer
service "my-first-nginx" exposed

We can see the service status we just created:

// get all services
$ kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 2h
my-first-nginx LoadBalancer 10.102.141.22 <pending> 80:31620/TCP 3m

You may find an additional service named kubernetes if the service daemons run as containers (for example, when using kubeadm as the management tool). It exposes the REST API of the Kubernetes API server internally. The pending state of the my-first-nginx service's external IP indicates that it is waiting for a public IP from the cloud provider. Take a look at Chapter 6, Building Kubernetes on AWS, and Chapter 7, Building Kubernetes on GCP, for more details.

Congratulations! You just ran your first container with a Kubernetes pod and exposed port 80 with the Kubernetes service.

Stopping the application

We can stop the application by deleting the deployment and the service. Before doing so, we suggest you read through the following code to understand better how it works:

// stop deployment named my-first-nginx
$ kubectl delete deployment my-first-nginx
deployment.extensions "my-first-nginx" deleted

// stop service named my-first-nginx
$ kubectl delete service my-first-nginx
service "my-first-nginx" deleted

How it works...

Let's take a look inside the service using the kubectl describe command. We created a Kubernetes service with the type LoadBalancer, which dispatches the traffic to two endpoints, 192.168.79.9 and 192.168.79.10, on port 80:

$ kubectl describe service my-first-nginx
Name: my-first-nginx
Namespace: default
Labels: run=my-first-nginx
Annotations: <none>
Selector: run=my-first-nginx
Type: LoadBalancer
IP: 10.103.85.175
Port: <unset> 80/TCP
TargetPort: 80/TCP
NodePort: <unset> 31723/TCP
Endpoints: 192.168.79.10:80,192.168.79.9:80
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>

The port here is the abstract service port, which allows any other resource to access the service within the cluster. The nodePort indicates the external port that allows external access. The targetPort is the port on the container that traffic is forwarded to; by default, it is the same as the service port.

In the following diagram, external access reaches the service via the nodePort. The service acts as a load balancer, dispatching the traffic to the pods using port 80. The pods then pass the traffic into the corresponding containers using targetPort 80:
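Concretely, using the values from the describe output above (node IPs will vary in your cluster), the same nginx content can be reached at each layer:

// service port, from inside the cluster
$ curl 10.103.85.175:80
// nodePort, from outside the cluster via any node's IP (replace <node-ip>)
$ curl <node-ip>:31723
// targetPort, directly on one of the pod endpoints
$ curl 192.168.79.10:80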

From any node or the master, once the inter-connection network is set up, you should be able to access the nginx service using its ClusterIP 10.103.85.175 on port 80:

// curl from service IP
$ curl 10.103.85.175:80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>

We will get the same result if we curl the target port of the pod directly:

// curl from endpoint, the content is the same as previous nginx html
$ curl 192.168.79.10:80
<!DOCTYPE html>
<html>
...

If you'd like to try out external access, use your browser to access the external IP address. Please note that the external IP address depends on which environment you're running in.

In Google Compute Engine, you can access it via a ClusterIP with a proper firewall rules setting:

$ curl http://<clusterIP>

In a custom environment, such as an on-premises data center, you can go through the IP address of any node to access it:

$ curl http://<nodeIP>:<nodePort>

You should be able to see the following page using a web browser:

See also

We have run our very first container in this section. Go ahead and read the next chapter to acquire more knowledge about Kubernetes:

  • Chapter 2, Walking through Kubernetes Concepts

Key benefits

  • Use containers to manage, scale and orchestrate apps in your organization
  • Transform the latest concepts of Kubernetes 1.10 into practical examples
  • Expert techniques for orchestrating containers effectively

Description

Kubernetes is an open source orchestration platform to manage containers in a cluster environment. With Kubernetes, you can configure and deploy containerized applications easily. This book gives you a quick brush-up on how Kubernetes works with containers, and an overview of the main Kubernetes concepts, such as Pods, Deployments, and Services. This book explains how to create Kubernetes clusters and run applications with proper authentication and authorization configurations. With real-world recipes, you'll learn how to create high-availability Kubernetes clusters on AWS, GCP, and in on-premises data centers with a proper logging and monitoring setup. You'll also learn some useful tips about how to build a continuous delivery pipeline for your application. Upon completion of this book, you will be able to use Kubernetes in production and will have a better understanding of how to manage containers using Kubernetes.

Who is this book for?

This book is for system administrators, developers, DevOps engineers, or any stakeholder who wants to understand how Kubernetes works using a recipe-based approach. Basic knowledge of Kubernetes and Containers is required.

What you will learn

  • Build your own container cluster
  • Deploy and manage highly scalable, containerized applications with Kubernetes
  • Build high-availability Kubernetes clusters
  • Build a continuous delivery pipeline for your application
  • Track metrics and logs for every container running in your cluster
  • Streamline the way you deploy and manage your applications with large-scale container orchestration

Product Details

Publication date : May 30, 2018
Length : 554 pages
Edition : 2nd
Language : English
ISBN-13 : 9781788837606




Table of Contents

10 Chapters
Building Your Own Kubernetes Cluster
Walking through Kubernetes Concepts
Playing with Containers
Building High-Availability Clusters
Building Continuous Delivery Pipelines
Building Kubernetes on AWS
Building Kubernetes on GCP
Advanced Cluster Administration
Logging and Monitoring
Other Books You May Enjoy

