Hands-On Microservices with Kubernetes

Introduction to Kubernetes for Developers

In this chapter, we will introduce you to Kubernetes. Kubernetes is a big platform and it's difficult to do justice to it in just one chapter. Luckily, we have a whole book to explore it. Don't worry if you feel a little overwhelmed. I'll mention many concepts and capabilities briefly. In later chapters, we will cover many of these in detail, as well as the connections and interactions between those Kubernetes concepts. To spice things up and get hands-on early, you will also create a local Kubernetes cluster (Minikube) on your machine. This chapter will cover the following topics:

  • Kubernetes in a nutshell
  • The Kubernetes architecture
  • Kubernetes and microservices
  • Creating a local cluster

Technical requirements

In this chapter, you will need the following tools:

  • Docker
  • Kubectl
  • Minikube

Installing Docker

Installing kubectl

Installing Minikube

To install Minikube, follow the instructions here: https://kubernetes.io/docs/tasks/tools/install-minikube/.

Note that you need to install a hypervisor too. On macOS, I find VirtualBox the most reliable. You may prefer another hypervisor, such as HyperKit. There will be more detailed instructions later when you get to play with Minikube.

The code

The code for this chapter (and the rest of the book) is available at https://github.com/the-gigi/hands-on-microservices-with-kubernetes-code.

Kubernetes in a nutshell

In this section, you'll get a sense of what Kubernetes is all about, its history, and how it became so popular.

Kubernetes – the container orchestration platform

The primary function of Kubernetes is deploying and managing a large number of container-based workloads on a fleet of machines (physical or virtual). This means that Kubernetes provides the means to deploy containers to the cluster. It makes sure to comply with various scheduling constraints and pack the containers efficiently into the cluster nodes. In addition, Kubernetes automatically watches your containers and restarts them if they fail. Kubernetes will also relocate workloads off problematic nodes to other nodes. Kubernetes is an extremely flexible platform. It relies on a provisioned infrastructure layer of compute, memory, storage, and networking, and, with these resources, it works its magic.

The history of Kubernetes

Kubernetes and the entire cloud-native scene are moving at breakneck speed, but let's take a moment to reflect on how we got here. It will be a very short journey because Kubernetes came out of Google in June 2014, just a few years ago. When Docker became popular, it changed how people package, distribute, and deploy software. But, it soon became apparent that Docker doesn't scale on its own for large distributed systems. A few orchestration solutions became available, such as Apache Mesos, and later, Docker's own Swarm. But, they never measured up to Kubernetes. Kubernetes was conceptually based on Google's Borg system. It brought together the design and technical excellence of a decade of Google engineering, but it was a new open source project. At OSCON 2015, Kubernetes 1.0 was released and the floodgates opened. The growth of Kubernetes, its ecosystem, and the community behind it has been as impressive as its technical excellence.

Kubernetes means helmsman in Greek. You'll notice many nautical terms in the names of Kubernetes-related projects.

The state of Kubernetes

Kubernetes is now a household name. The DevOps world pretty much equates container orchestration with Kubernetes. All major cloud providers offer managed Kubernetes solutions. It is ubiquitous in enterprise and in startup companies. While Kubernetes is still young and innovation keeps happening, it is all happening in a very healthy way. The core is rock solid, battle tested, and used in production across lots and lots of companies. There are very big players collaborating and pushing Kubernetes forward, such as Google (obviously), Microsoft, Amazon, IBM, and VMware.

The Cloud Native Computing Foundation (CNCF) open source organization offers certification. Every 3 months, a new Kubernetes release comes out, the result of a collaboration between hundreds of volunteers and paid engineers. A large ecosystem of both commercial and open source projects surrounds the main project. You will see later how Kubernetes' flexible and extensible design encourages this ecosystem and helps integrate Kubernetes into any cloud platform.

Understanding the Kubernetes architecture

Kubernetes is a marvel of software engineering. The architecture and design of Kubernetes are a big part of its success. Each cluster has a control plane and a data plane. The control plane consists of several components, such as an API server, a metadata store for keeping the state of the cluster, and multiple controllers that are responsible for managing the nodes in the data plane and providing access to users. The control plane in production will be distributed across multiple machines for high availability and robustness. The data plane consists of multiple nodes, or workers. The control plane will deploy and run your pods (groups of containers) on these nodes, and then watch for changes and respond.

Here is a diagram that illustrates the overall architecture:

Let's review in detail the control plane and the data plane, as well as kubectl, which is the command-line tool you use to interact with the Kubernetes cluster.

The control plane

The control plane consists of several components:

  • API server
  • The etcd metadata store
  • Scheduler
  • Controller manager
  • Cloud controller manager

Let's examine the role of each component.

The API server

The kube-apiserver is a massive REST server that exposes the Kubernetes API to the world. You can have multiple instances of the API server in your control plane for high availability. The API server keeps the cluster state in etcd.

The etcd store

The complete cluster state is stored in etcd (https://coreos.com/etcd/), a consistent, reliable, distributed key-value store. The etcd store is an open source project (originally developed by CoreOS).

It is common to have three or five instances of etcd for redundancy. If you lose the data in your etcd store, you lose your cluster.

The scheduler

The kube-scheduler is responsible for scheduling pods to worker nodes. It implements a sophisticated scheduling algorithm that takes a lot of information into account, such as resource availability on each node, various constraints specified by the user, types of available nodes, resource limits and quotas, and other factors, such as affinity, anti-affinity, tolerations, and taints.

The controller manager

The kube-controller-manager is a single process that contains multiple controllers, for simplicity. These controllers watch for events and changes to the cluster and respond accordingly:

  • Node controller: Responsible for noticing and responding when nodes go down.
  • Replication controller: This makes sure that there is the correct number of pods for each replica set or replication controller object.
  • Endpoints controller: This populates an Endpoints object for each service, listing the service's pods.
  • Service account and token controllers: These initialize new namespaces with default service accounts and corresponding API access tokens.

The data plane

The data plane is the collection of the nodes in the cluster that run your containerized workloads as pods. The data plane and control plane can share physical or virtual machines. This happens, of course, when you run a single node cluster, such as Minikube. But, typically, in a production-ready deployment, the data plane will have its own nodes. There are several components that Kubernetes installs on each node in order to communicate, watch, and schedule pods: kubelet, kube-proxy, and the container runtime (for example, the Docker daemon).

The kubelet

The kubelet is a Kubernetes agent. It's responsible for talking to the API server and for running and managing the pods on the node. Here are some of the responsibilities of the kubelet:

  • Downloading pod secrets from the API server
  • Mounting volumes
  • Running the pod container via the Container Runtime Interface (CRI)
  • Reporting the status of the node and each pod
  • Probing container liveness

The kube proxy

The kube-proxy is responsible for the networking aspects of the node. It operates as a local front for services and can forward TCP and UDP packets. It discovers the IP addresses of services via DNS or environment variables.

The container runtime

Kubernetes eventually runs containers, even if they are organized in pods. Kubernetes supports different container runtimes. Originally, only Docker was supported. Now, Kubernetes runs containers through an interface called CRI, which is based on gRPC.

Each container runtime that implements the CRI can be used on a node controlled by the kubelet.

Kubectl

Kubectl is a tool you should get very comfortable with. It is your command-line interface (CLI) to your Kubernetes cluster. We will use kubectl extensively throughout the book to manage and operate Kubernetes. Here is a short list of the capabilities kubectl puts literally at your fingertips:

  • Cluster management
  • Deployment
  • Troubleshooting and debugging
  • Resource management (Kubernetes objects)
  • Configuration and metadata

Just type kubectl to get a complete list of all the commands and kubectl <command> --help for more detailed info on specific commands.
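For example, here are a few everyday commands; the pod and file names are hypothetical:

# Inspect the cluster and its nodes
$ kubectl cluster-info
$ kubectl get nodes

# List, describe, and tail the logs of pods
$ kubectl get pods --all-namespaces
$ kubectl describe pod my-pod
$ kubectl logs -f my-pod

# Create or update resources from a manifest file
$ kubectl apply -f my-manifest.yaml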

Kubernetes and microservices – a perfect match

Kubernetes is a fantastic platform with amazing capabilities and a wonderful ecosystem. How does it help you with your system? As you'll see, there is a very good alignment between Kubernetes and microservices. The building blocks of Kubernetes, such as namespaces, pods, deployments, and services, map directly to important microservices concepts and an agile software development life cycle (SDLC). Let's dive in.

Packaging and deploying microservices

When you employ a microservice-based architecture, you'll have lots of microservices. Those microservices, in general, may be developed independently, and deployed independently. The packaging mechanism is simply containers. Every microservice you develop will have a Dockerfile. The resulting image represents the deployment unit for that microservice. In Kubernetes, your microservice image will run inside a pod (possibly alongside other containers). But an isolated pod, running on a node, is not very resilient. The kubelet on the node will restart the pod's container if it crashes, but if something happens to the node itself, the pod is gone. Kubernetes has abstractions and resources that build on the pod.

ReplicaSets are sets of pods with a certain number of replicas. When you create a ReplicaSet, Kubernetes will make sure that the number of pods you specify always runs in the cluster. The deployment resource takes it a step further and provides an abstraction that aligns exactly with the way you think about microservices. When you have a new version of a microservice ready, you will want to deploy it. Here is a Kubernetes deployment manifest:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.15.4
        ports:
        - containerPort: 80

The file can be found at https://github.com/the-gigi/hands-on-microservices-with-kubernetes-code/blob/master/ch1/nginx-deployment.yaml.

This is a YAML file (https://yaml.org/) that has some fields that are common to all Kubernetes resources, and some fields that are specific to deployments. Let's break this down piece by piece. Almost everything you learn here will apply to other resources:

  • The apiVersion field marks the Kubernetes resource version. A specific version of the Kubernetes API server (for example, v1.13.0) can work with different versions of different resources. Resource versions have two parts: an API group (in this case, apps) and a version number (v1). The version number may include alpha or beta designations:
apiVersion: apps/v1
  • The kind field specifies what resource or API object we are dealing with. You will meet many kinds of resources in this chapter and later:
kind: Deployment
  • The metadata section contains the name of the resource (nginx) and a set of labels, which are just key-value string pairs. The name is used to refer to this particular resource. The labels allow for operating on a set of resources that share the same label. Labels are very useful and flexible. In this case, there is just one label (app: nginx):
metadata:
  name: nginx
  labels:
    app: nginx

  • Next, we have a spec field. This is a ReplicaSet spec. You could create a ReplicaSet directly, but it would be static. The whole purpose of deployments is to manage its set of replicas. What's in a ReplicaSet spec? Obviously, it contains the number of replicas (3). It has a selector with a set of matchLabels (also app: nginx), and it has a pod template. The ReplicaSet will manage pods that have labels that match matchLabels:
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    ...
  • Let's have a look at the pod template. The template has two parts: metadata and a spec. The metadata is where you specify the labels. The spec describes the containers in the pod. There may be one or more containers in a pod. In this case, there is just one container. The key field for a container is the image (often a Docker image), where you packaged your microservice. That's the code we want to run. There is also a name (nginx) and a set of ports:
metadata:
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.15.4
    ports:
    - containerPort: 80

There are more fields that are optional. If you want to dive in deeper, check out the API reference for the deployment resource at https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.13/#deployment-v1-apps.
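To tie this together, here is one way you might apply the manifest and watch the deployment converge, assuming you saved it locally as nginx-deployment.yaml; these are standard kubectl commands rather than anything specific to this book:

$ kubectl apply -f nginx-deployment.yaml
$ kubectl rollout status deployment/nginx
$ kubectl get pods -l app=nginx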

Exposing and discovering microservices

We deployed our microservice with a deployment. Now, we need to expose it, so that it can be used by other services in the cluster, and possibly also make it visible outside the cluster. Kubernetes provides the Service resource for that purpose. Kubernetes services are backed by pods, identified by labels:

apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    protocol: TCP
  selector:
    app: nginx

Services discover each other inside the cluster using DNS or environment variables. This is the default behavior. But, if you want to make a service accessible to the outside world, you will normally create an Ingress object or a load balancer. We will explore this topic in detail later.
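As a quick sketch of what discovery looks like from inside the cluster (the pod name some-pod is hypothetical, and the wget call assumes the image ships with wget), the nginx service above is reachable through its cluster DNS name, and Kubernetes also injects environment variables into pods created after the service:

# DNS-based discovery: <service>.<namespace>.svc.cluster.local
$ kubectl exec -it some-pod -- wget -qO- http://nginx.default.svc.cluster.local

# Environment variable-based discovery (variables such as NGINX_SERVICE_HOST
# and NGINX_SERVICE_PORT are injected into pods created after the service)
$ kubectl exec some-pod -- env | grep NGINX_SERVICE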

Securing microservices

Kubernetes was designed for running large-scale critical systems, where security is of paramount concern. Microservices are often more challenging to secure than monolithic systems because there is so much internal communication across many boundaries. Also, microservices encourage agile development, which leads to a constantly changing system. There is no steady state you can secure once and be done with. You must constantly adapt the security of the system to the changes. Kubernetes comes pre-packed with several concepts and mechanisms for the secure development, deployment, and operation of your microservices. You still need to employ best practices, such as the principle of least privilege, defense in depth, and minimizing the blast radius. Here are some of the security features of Kubernetes.

Namespaces

Namespaces let you isolate different parts of your cluster from each other. You can create as many namespaces as you want and scope many resources and operations, including limits and quotas, to their namespace. Pods running in a namespace can only directly access resources in their own namespace. To access other namespaces, they must go through public APIs.
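As a minimal sketch (the namespace name and the limits are arbitrary), creating a namespace and attaching a resource quota to it could look like this:

apiVersion: v1
kind: Namespace
metadata:
  name: staging
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: staging-quota
  namespace: staging
spec:
  hard:
    pods: "20"
    requests.cpu: "4"
    requests.memory: 8Gi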

Service accounts

Service accounts provide identity to your microservices. Each service account will have certain privileges and access rights associated with its account. Service accounts are pretty simple:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: custom-service-account

You can associate service accounts with a pod (for example, in the pod spec of a deployment) and the microservices that run inside the pod will have that identity and all the privileges and restrictions associated with that account. If you don't assign a service account, then the pod will get the default service account of its namespace. Each service account is associated with a secret used to authenticate it.
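For example, attaching the service account defined above to a pod takes a single extra field in the pod spec; the rest of this pod is a hypothetical sketch:

apiVersion: v1
kind: Pod
metadata:
  name: some-microservice
spec:
  serviceAccountName: custom-service-account
  containers:
  - name: main
    image: nginx:1.15.4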

Secrets

Kubernetes provides secret management capabilities to all microservices. Secrets can be encrypted at rest in etcd (since Kubernetes 1.7), and are always encrypted on the wire (over HTTPS). Secrets are managed per namespace. Secrets are mounted in pods as either files (secret volumes) or environment variables. There are multiple ways to create secrets. Secrets can contain two maps: data and stringData. The values in the data map can be arbitrary, but must be base64-encoded. Refer to the following example:

apiVersion: v1
kind: Secret
metadata:
  name: custom-secret
type: Opaque
data:
  username: YWRtaW4=
  password: MWYyZDFlMmU2N2Rm

Here is how a pod can load secrets as a volume:

apiVersion: v1
kind: Pod
metadata:
  name: db
spec:
  containers:
  - name: mypod
    image: postgres
    volumeMounts:
    - name: db_creds
      mountPath: "/etc/db_creds"
      readOnly: true
  volumes:
  - name: db_creds
    secret:
      secretName: custom-secret

The end result is that the DB credentials secret, which is managed outside the pod by Kubernetes, shows up as regular files inside the pod, accessible under the /etc/db_creds path.
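Alternatively, the same secret can be surfaced to a container as environment variables. Here is a sketch of the relevant container fields; the variable names DB_USERNAME and DB_PASSWORD are arbitrary choices:

containers:
- name: mypod
  image: postgres
  env:
  - name: DB_USERNAME
    valueFrom:
      secretKeyRef:
        name: custom-secret
        key: username
  - name: DB_PASSWORD
    valueFrom:
      secretKeyRef:
        name: custom-secret
        key: password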

Secure communication

Kubernetes utilizes client-side certificates to fully authenticate both sides of any external communication (for example, kubectl). All communication to the Kubernetes API from outside should be over HTTPS. Internal cluster communication between the API server and the kubelet on the node is over HTTPS too (the kubelet endpoint). But, it doesn't use a client certificate by default (you can enable it).

Communication between the API server and nodes, pods, and services is, by default, over HTTP and is not authenticated. You can upgrade it to HTTPS, but note that the client certificate is not checked, so don't run your worker nodes on public networks.

Network policies

In a distributed system, beyond securing each container, pod, and node, it is critical to also control communication over the network. Kubernetes supports network policies, which give you full flexibility to define and shape the traffic and access across the cluster.
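As a minimal sketch (the role: api label is hypothetical, and a network plugin that enforces policies, such as Calico, must be installed), a policy that only allows pods labeled role: api to reach the nginx pods on port 80 might look like this:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-api-to-nginx
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: nginx
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: api
    ports:
    - protocol: TCP
      port: 80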

Authenticating and authorizing microservices

Authentication and authorization are also related to security, by limiting access to trusted users and to limited aspects of Kubernetes. Organizations have a variety of ways to authenticate their users. Kubernetes supports many of the common authentication schemes, such as X.509 certificates, and HTTP basic authentication (not very secure), as well as an external authentication server via webhook that gives you ultimate control over the authentication process. The authentication process just matches the credentials of a request with an identity (either the original or an impersonated user). What that user is allowed to do is controlled by the authorization process. Enter RBAC.

Role-based access control

Role-based access control (RBAC) is not required! You can perform authorization using other mechanisms in Kubernetes. However, it is a best practice. RBAC is based on two concepts: role and binding. A role is a set of permissions on resources defined as rules. There are two types of roles: Role, which applies to a single namespace, and ClusterRole, which applies to all namespaces in a cluster.

Here is a role in the default namespace that allows the getting, watching, and listing of all pods. Each role has three components: API groups, resources, and verbs:

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""] # "" indicates the core API group
  resources: ["pods"]
  verbs: ["get", "watch", "list"]

Cluster roles are very similar, except there is no namespace field because they apply to all namespaces.
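For illustration, here is what an equivalent cluster-wide role could look like; the name is hypothetical, and only the kind changes while the namespace field is dropped:

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: pod-reader-cluster
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]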

A binding associates a list of subjects (users, user groups, or service accounts) with a role. There are two types of bindings, RoleBinding and ClusterRoleBinding, which correspond to Role and ClusterRole. Here is a RoleBinding that grants the pod-reader role to a user:

kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: pod-reader
  namespace: default
subjects:
- kind: User
  name: gigi # Name is case sensitive
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role # must be Role or ClusterRole
  name: pod-reader # must match the name of the Role or ClusterRole you bind to
  apiGroup: rbac.authorization.k8s.io

It's interesting that you can bind a ClusterRole to a subject in a single namespace. This is convenient for defining roles that should be used in multiple namespaces, once as a cluster role, and then binding them to specific subjects in specific namespaces.

The cluster role binding is similar, but must bind a cluster role and always applies to the whole cluster.
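Here is a matching sketch of a ClusterRoleBinding that grants the hypothetical cluster role above to the same user across the whole cluster:

kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: pod-reader-global
subjects:
- kind: User
  name: gigi
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: pod-reader-cluster
  apiGroup: rbac.authorization.k8s.io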

Note that RBAC is used to grant access to Kubernetes resources. It can regulate access to your service endpoints, but you may still need fine-grained authorization in your microservices.

Upgrading microservices

Deploying and securing microservices is just the beginning. As you develop and evolve your system, you'll need to upgrade your microservices. There are many important considerations regarding how to go about this that we will discuss later (versioning, rolling updates, blue-green, and canary). Kubernetes provides direct support for many of these concepts out of the box, and the ecosystem built on top of it provides many flavors and opinionated solutions.

The goal is often zero downtime and safe rollback if a problem occurs. Kubernetes deployments provide the primitives, such as updating a deployment, pausing a roll-out, and rolling back a deployment. Specific workflows are built on these solid foundations.
The mechanics of upgrading a service typically involve upgrading its image to a new version and sometimes changes to its support resources and access: volumes, roles, quotas, limits, and so on.
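For example, with the nginx deployment from earlier, the basic primitives are standard kubectl commands; the new image tag used here is just an illustration:

# Roll out a new image version and watch its progress
$ kubectl set image deployment/nginx nginx=nginx:1.16.0
$ kubectl rollout status deployment/nginx

# Pause, resume, or roll back if something goes wrong
$ kubectl rollout pause deployment/nginx
$ kubectl rollout resume deployment/nginx
$ kubectl rollout undo deployment/nginx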

Scaling microservices

There are two aspects to scaling a microservice with Kubernetes. The first aspect is scaling the number of pods backing a particular microservice. The second aspect is the total capacity of the cluster. You can easily scale a microservice explicitly by updating the number of replicas of a deployment, but that requires constant vigilance on your part. For services that have large variations in the volume of requests they handle over long periods (for example, business hours versus off hours or weekdays versus weekends), it might take a lot of effort. Kubernetes provides horizontal pod autoscaling, which is based on CPU, memory, or custom metrics, and can scale your service up and down automatically.
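For reference, explicit scaling is a one-liner that simply sets the replica count on the deployment:

$ kubectl scale deployment nginx --replicas=5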

Here is how to scale our nginx deployment, which is currently fixed at three replicas, to between two and five replicas, depending on the average CPU usage across all instances:

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: nginx
  namespace: default
spec:
  maxReplicas: 5
  minReplicas: 2
  targetCPUUtilizationPercentage: 90
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx

The outcome is that Kubernetes will watch the CPU utilization of the pods that belong to the nginx deployment. When the average CPU over a certain period of time (5 minutes, by default) exceeds 90%, it will add replicas until the maximum of 5 is reached, or until utilization drops below 90%. The horizontal pod autoscaler (HPA) can scale down too, but it will always maintain a minimum of two replicas, even if CPU utilization is zero.
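The same autoscaler can also be created imperatively, without writing a manifest, which is handy for quick experiments:

$ kubectl autoscale deployment nginx --min=2 --max=5 --cpu-percent=90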

Monitoring microservices

Your microservices are deployed and running on Kubernetes. You can update the version of your microservices whenever it is needed. Kubernetes takes care of healing and scaling automatically. However, you still need to monitor your system and keep track of errors and performance. This is important for addressing problems, but also for informing you on potential improvements, optimizations, and cost cutting.

There are several categories of information that are relevant and that you should monitor:

  • Third-party logs
  • Application logs
  • Application errors
  • Kubernetes events
  • Metrics

When considering a system composed of multiple microservices and multiple supporting components, the volume of logs will be substantial. The solution is central logging, where all the logs go to a single place where you can slice and dice them at will. Errors can be logged, of course, but often it is useful to report errors with additional metadata, such as the stack trace, and review them in their own dedicated environment (for example, Sentry or Rollbar). Metrics are useful for detecting performance and system health problems or trends over time.

Kubernetes provides several mechanisms and abstractions for monitoring your microservices. The ecosystem provides a number of useful projects too.

Logging

There are several ways to implement central logging with Kubernetes:

  • Have a logging agent that runs on every node
  • Inject a logging sidecar container to every application pod
  • Have your application send its logs directly to a central logging service

There are pros and cons to each approach. But, the main thing is that Kubernetes supports all approaches and makes container and pod logs available for consumption.
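Whichever approach you choose for central logging, the raw material is always available through kubectl; the pod and container names here are hypothetical:

# Tail the logs of a pod, or of a specific container in a multi-container pod
$ kubectl logs -f my-pod
$ kubectl logs -f my-pod -c my-container

# Fetch the logs of the previous (crashed) instance of a container
$ kubectl logs my-pod --previous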

Metrics

Kubernetes comes with cAdvisor (https://github.com/google/cadvisor), a tool for collecting container metrics that is integrated into the kubelet binary. Kubernetes used to provide a metrics server called Heapster that required additional backends and a UI. But, these days, the best-in-class metrics solution is the open source Prometheus project. If you run Kubernetes on Google's GKE, then Google Cloud Monitoring is a great option that doesn't require additional components to be installed in your cluster. Other cloud providers also have integrations with their monitoring solutions (for example, CloudWatch on EKS).
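Once a metrics pipeline (for example, the metrics-server add-on) is running in the cluster, you can get a quick view of resource usage straight from kubectl:

$ kubectl top nodes
$ kubectl top pods --all-namespaces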

Creating a local cluster

One of the strengths of Kubernetes as a deployment platform is that you can create a local cluster and, with relatively little effort, have a realistic environment that is very close to your production environment. The main benefit is that developers can test their microservices locally and collaborate with the rest of the services in the cluster. When your system is composed of many microservices, the more significant tests are often integration tests and even configuration and infrastructure tests, as opposed to unit tests. Kubernetes makes that kind of testing much easier and requires far less brittle mocking.

In this section, you will install a local Kubernetes cluster and some additional projects, and then have some fun exploring it using the invaluable kubectl command-line tool.

Installing Minikube

Minikube is a single node Kubernetes cluster that you can install anywhere. I used macOS here, but, in the past, I used it successfully on Windows too. Before installing Minikube itself, you must install a hypervisor. I prefer HyperKit:

$ curl -LO https://storage.googleapis.com/minikube/releases/latest/docker-machine-driver-hyperkit \
&& chmod +x docker-machine-driver-hyperkit \
&& sudo mv docker-machine-driver-hyperkit /usr/local/bin/ \
&& sudo chown root:wheel /usr/local/bin/docker-machine-driver-hyperkit \
&& sudo chmod u+s /usr/local/bin/docker-machine-driver-hyperkit

But, I've run into trouble with HyperKit from time to time. If you can't overcome the issues, I suggest using VirtualBox as the hypervisor instead. Run the following command to install VirtualBox via Homebrew:

$ brew cask install virtualbox

Now, you can install Minikube itself. Again, Homebrew is the best way to go:

$ brew cask install minikube

If you're not on macOS, follow the official instructions here: https://kubernetes.io/docs/tasks/tools/install-minikube/.

You must turn off any VPN before starting Minikube with HyperKit. You can restart your VPN after Minikube has started.

Minikube supports multiple versions of Kubernetes. At the moment, the default version is 1.10.0, but 1.13.0 is already out and supported, so let's use that version:

$ minikube start --vm-driver=hyperkit --kubernetes-version=v1.13.0

If you're using VirtualBox as your hypervisor, you don't need to specify --vm-driver:

$ minikube start --kubernetes-version=v1.13.0

You should see the following:

$ minikube start --kubernetes-version=v1.13.0
Starting local Kubernetes v1.13.0 cluster...
Starting VM...
Downloading Minikube ISO
178.88 MB / 178.88 MB [============================================] 100.00% 0s
Getting VM IP address...
E0111 07:47:46.013804 18969 start.go:211] Error parsing version semver: Version string empty
Moving files into cluster...
Downloading kubeadm v1.13.0
Downloading kubelet v1.13.0
Finished Downloading kubeadm v1.13.0
Finished Downloading kubelet v1.13.0
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Stopping extra container runtimes...
Starting cluster components...
Verifying kubelet health ...
Verifying apiserver health ...
Kubectl is now configured to use the cluster.
Loading cached images from config file.


Everything looks great. Please enjoy minikube!

Minikube will automatically download the Minikube ISO (178.88 MB) the first time you start your Minikube cluster.

At this point, your Minikube cluster is ready to go.

Troubleshooting Minikube

If you run into some trouble (for example, if you forgot to turn off your VPN), try to delete your Minikube installation and restart it with verbose logging:

$ minikube delete
$ rm -rf ~/.minikube
$ minikube start --vm-driver=hyperkit --kubernetes-version=v1.13.0 --logtostderr --v=3

If your Minikube installation just hangs (maybe waiting for SSH), you might have to reboot to unstick it. If that doesn't help, try the following:

sudo mv /var/db/dhcpd_leases /var/db/dhcpd_leases.old
sudo touch /var/db/dhcpd_leases

Then, reboot again.

Verifying your cluster

If everything is OK, you can check your Minikube version:

$ minikube version
minikube version: v0.31.0

Minikube has many other useful commands. Just type minikube to see the list of commands and flags.
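A few sub-commands I find myself reaching for regularly are shown below; all of them are standard Minikube commands:

$ minikube status    # is the cluster up?
$ minikube ip        # the IP address of the Minikube VM
$ minikube ssh       # open a shell inside the Minikube VM
$ minikube stop      # stop the cluster (minikube start resumes it)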

Playing with your cluster

Minikube is running, so let's have some fun. Your kubectl is going to serve you well in this section. Let's start by examining our node:

$ kubectl get nodes
NAME       STATUS   ROLES    AGE   VERSION
minikube   Ready    master   4m    v1.13.0

Your cluster already has some pods and services running. It turns out that Kubernetes dogfoods its own machinery, and many of its own components run as plain services and pods. But, those pods and services run in dedicated namespaces. Here are all the namespaces:

$ kubectl get ns
NAME          STATUS   AGE
default       Active   18m
kube-public   Active   18m
kube-system   Active   18m

To see all the services in all the namespaces, you can use the --all-namespaces flag:

$ kubectl get svc --all-namespaces
NAMESPACE     NAME                   TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)         AGE
default       kubernetes             ClusterIP   10.96.0.1      <none>        443/TCP         19m
kube-system   kube-dns               ClusterIP   10.96.0.10     <none>        53/UDP,53/TCP   19m
kube-system   kubernetes-dashboard   ClusterIP   10.111.39.46   <none>        80/TCP          18m

The Kubernetes API server itself is running as a service in the default namespace, while kube-dns and the kubernetes-dashboard run in the kube-system namespace.

To explore the dashboard, you can run the dedicated Minikube command, minikube dashboard. You can also use kubectl, which is more universal and will work on any Kubernetes cluster:

$ kubectl port-forward -n kube-system deployment/kubernetes-dashboard 9090

Then, browse to http://localhost:9090 to see the dashboard.

Installing Helm

Helm is the Kubernetes package manager. It doesn't come with Kubernetes, so you have to install it. Helm has two components: a server-side component called tiller, and a CLI called helm.

Let's install helm locally first, using Homebrew:

$ brew install kubernetes-helm

Then, initialize both the server-side and client-side components:

$ helm init
$HELM_HOME has been configured at /Users/gigi.sayfan/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Happy Helming!

With Helm in place, you can easily install all kinds of goodies in your Kubernetes cluster. There are currently 275 charts (chart is the Helm term for a package) in the stable chart repository:

$ helm search | wc -l
275

For example, check out all the charts that match db:

$ helm search db
NAME                                 CHART VERSION   APP VERSION    DESCRIPTION
stable/cockroachdb                   2.0.6           2.1.1          CockroachDB is a scalable, survivable, strongly-consisten...
stable/hlf-couchdb                   1.0.5           0.4.9          CouchDB instance for Hyperledger Fabric (these charts are...
stable/influxdb                      1.0.0           1.7            Scalable datastore for metrics, events, and real-time ana...
stable/kubedb                        0.1.3           0.8.0-beta.2   DEPRECATED KubeDB by AppsCode - Making running production...
stable/mariadb                       5.2.3           10.1.37        Fast, reliable, scalable, and easy to use open-source rel...
stable/mongodb                       4.9.1           4.0.3          NoSQL document-oriented database that stores JSON-like do...
stable/mongodb-replicaset            3.8.0           3.6            NoSQL document-oriented database that stores JSON-like do...
stable/percona-xtradb-cluster        0.6.0           5.7.19         free, fully compatible, enhanced, open source drop-in rep...
stable/prometheus-couchdb-exporter   0.1.0           1.0            A Helm chart to export the metrics from couchdb in Promet...
stable/rethinkdb                     0.2.0           0.1.0          The open-source database for the realtime web
jenkins-x/cb-app-slack               0.0.1                          A Slack App for CloudBees Core
stable/kapacitor                     1.1.0           1.5.1          InfluxDB's native data processing engine. It can process ...
stable/lamp                          0.1.5           5.7            Modular and transparent LAMP stack chart supporting PHP-F...
stable/postgresql                    2.7.6           10.6.0         Chart for PostgreSQL, an object-relational database manag...
stable/phpmyadmin                    2.0.0           4.8.3          phpMyAdmin is an mysql administration frontend
stable/unifi                         0.2.1           5.9.29         Ubiquiti Network's Unifi Controller

We will use Helm a lot throughout the book.
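Installing a chart is a single command. As a sketch, here is how installing PostgreSQL from the stable repository looks with Helm 2; the release name my-postgres is arbitrary:

$ helm install stable/postgresql --name my-postgres
$ helm ls
$ helm delete my-postgres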

Summary

In this chapter, you received a whirlwind tour of Kubernetes and got an idea of how well it aligns with microservices. The extensible architecture of Kubernetes empowers a large community of enterprise organizations, startup companies, and open source organizations to collaborate and create an ecosystem around Kubernetes that multiplies its benefits and ensures its staying power. The concepts and abstractions built into Kubernetes are very well suited for microservice-based systems. They support every phase of the SDLC, from development, through testing, and deployments, and all the way to monitoring and troubleshooting. The Minikube project lets every developer run a local Kubernetes cluster, which is great for experimenting with Kubernetes itself, as well as testing locally in an environment that is very similar to the production environment. The Helm project is a fantastic addition to Kubernetes and provides great value as the de facto package management solution. In the next chapter, we will dive into the world of microservices and learn why they are the best approach for developing complex and fast-moving distributed systems that run in the cloud.
