Creating a multi-node cluster with KinD
In this section, we will create a multi-node cluster using KinD. We will also repeat the deployment of the echo server we deployed on Minikube and observe the differences. Spoiler alert – everything will be faster and easier!
Quick introduction to KinD
KinD stands for Kubernetes in Docker. It is a tool for creating ephemeral clusters (no persistent storage). It was built primarily for running the Kubernetes conformance tests. It supports Kubernetes 1.11+. Under the covers, it uses kubeadm to bootstrap Docker containers as nodes in the cluster. KinD is a combination of a library and a CLI. You can use the library in your code for testing or other purposes. KinD can create highly available clusters with multiple master nodes. Finally, KinD is a CNCF-conformant Kubernetes installer. It had better be if it is used for the conformance tests of Kubernetes itself :-).
KinD is super fast to start, but it has some limitations too: there is no persistent storage, and there is no support for runtimes other than Docker yet.
Let's install KinD and get going.
Installing KinD
You must have Docker installed, as the KinD cluster nodes literally run as Docker containers. If you have Go 1.11+ installed, you can install the KinD CLI via:
$ GO111MODULE="on" go get sigs.k8s.io/kind@v0.8.1
Otherwise, on macOS, type:
$ curl -Lo ./kind-darwin-amd64 https://github.com/kubernetes-sigs/kind/releases/download/v0.8.1/kind-darwin-amd64
$ chmod +x ./kind-darwin-amd64
$ mv ./kind-darwin-amd64 /usr/local/bin/kind
On Windows, type (in PowerShell):
c:\> curl.exe -Lo kind-windows-amd64.exe https://github.com/kubernetes-sigs/kind/releases/download/v0.8.1/kind-windows-amd64
c:\> Move-Item .\kind-windows-amd64.exe c:\windows\kind.exe
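Either way, you can verify the installation by checking the version KinD reports (it should print v0.8.1):
$ kind version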
Creating the cluster with KinD
Creating a cluster is super easy:
$ kind create cluster
Creating cluster "kind" ...
Ensuring node image (kindest/node:v1.16.3)
Preparing nodes
Creating kubeadm config
Starting control-plane
Cluster creation complete. You can now use the cluster with:
export KUBECONFIG="$(kind get kubeconfig-path --name="kind")"
kubectl cluster-info
KinD suggests that you export KUBECONFIG, but as I mentioned earlier, I prefer to copy the config file to ~/.kube/config so I do not have to export it again if I want to access the cluster from another terminal window:
$ cp $(kind get kubeconfig-path --name="kind") ~/.kube/config
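Note that this workflow matches the KinD version that produced the output above. In more recent KinD releases (v0.6 and later), the kind get kubeconfig-path command was removed; instead, KinD merges the cluster credentials into ~/.kube/config automatically under a context named kind-kind, and you can re-export them at any time with:
$ kind export kubeconfig --name kind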
Now, we can access the cluster using kubectl:
$ k cluster-info
Kubernetes master is running at https://localhost:58560
KubeDNS is running at https://localhost:58560/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
However, this creates a single-node cluster:
$ k get nodes
NAME STATUS ROLES AGE VERSION
kind-control-plane Ready master 11m v1.16.3
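Since the nodes are plain Docker containers, you can also inspect them with standard Docker tooling; for example (the output shown is illustrative):
$ docker ps --format '{{.Names}}\t{{.Image}}' --filter name=kind
kind-control-plane	kindest/node:v1.16.3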
Let's delete it and create a multi-node cluster:
$ kind delete cluster
Deleting cluster "kind" ...
To create a multi-node cluster, we need to provide a configuration file with the specification of our nodes. Here is a configuration file that will create a cluster with one control-plane node and two worker nodes:
kind: Cluster
apiVersion: kind.sigs.k8s.io/v1alpha3
nodes:
- role: control-plane
- role: worker
- role: worker
Let's save the configuration file as kind-multi-node-config.yaml and create the cluster:
$ kind create cluster --config kind-multi-node-config.yaml
Creating cluster "kind" ...
Ensuring node image (kindest/node:v1.16.3)
Preparing nodes
Creating kubeadm config
Starting control-plane
Joining worker nodes
Cluster creation complete. You can now use the cluster with:
export KUBECONFIG="$(kind get kubeconfig-path --name="kind")"
kubectl cluster-info
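As a quick sanity check, you can ask KinD to list the clusters it manages:
$ kind get clusters
kind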
Yeah, it works! We have a local three-node cluster now:
$ k get nodes
NAME STATUS ROLES AGE VERSION
kind-control-plane Ready master 12m v1.16.3
kind-worker NotReady <none> 11m v1.16.3
kind-worker2 NotReady <none> 11m v1.16.3
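The worker nodes may report NotReady for a few moments until the pod network is fully up, as in the output above. If you want to block until every node is ready, kubectl can wait for you (the 120s timeout here is an arbitrary choice):
$ k wait --for=condition=Ready nodes --all --timeout=120s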
KinD is also kind enough (see what I did there?) to let us create highly available (HA) clusters with multiple control-plane nodes for redundancy. Let's give it a try and see what it looks like with two control-plane nodes and two worker nodes:
kind: Cluster
apiVersion: kind.sigs.k8s.io/v1alpha3
nodes:
- role: control-plane
- role: control-plane
- role: worker
- role: worker
Let's save the configuration file as kind-ha-multi-node-config.yaml, delete the current cluster, and create a new HA cluster:
$ kind delete cluster
Deleting cluster "kind" ...
$ kind create cluster --config kind-ha-multi-node-config.yaml
Creating cluster "kind" ...
Ensuring node image (kindest/node:v1.16.3)
Preparing nodes
Starting the external load balancer
Creating kubeadm config
Starting control-plane
Joining more control-plane nodes
Joining worker nodes
Cluster creation complete. You can now use the cluster with:
export KUBECONFIG="$(kind get kubeconfig-path --name="kind")"
kubectl cluster-info
Hmmm... there is something new here. Now, KinD creates an external load balancer and joins more control-plane nodes before joining the worker nodes. The load balancer is necessary to distribute requests across all the control-plane nodes.
Note that the external load balancer does not show as a node using kubectl:
$ k get nodes
NAME STATUS ROLES AGE VERSION
kind-control-plane Ready master 8m31s v1.16.3
kind-control-plane2 Ready master 8m14s v1.16.3
kind-worker Ready <none> 7m35s v1.16.3
kind-worker2 Ready <none> 7m35s v1.16.3
However, KinD has its own get nodes command, which does show the load balancer:
$ kind get nodes
kind-control-plane2
kind-worker
kind-control-plane
kind-worker2
kind-external-load-balancer
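The load balancer itself is just one more Docker container (KinD runs an HAProxy-based image for it), which is why Docker can show it even though kubectl cannot:
$ docker ps --filter name=kind-external-load-balancer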
Doing work with KinD
Let's deploy our echo service on the KinD cluster. It starts the same way:
$ k create deployment echo --image=gcr.io/google_containers/echoserver:1.8
deployment.apps/echo created
$ k expose deployment echo --type=NodePort --port=8080
service/echo exposed
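Before poking at the service, we can wait for the deployment to finish rolling out; this blocks until the echo pod is ready:
$ k rollout status deployment echo
deployment "echo" successfully rolled out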
Checking our services, we can see the echo service front and center:
$ k get svc echo
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
echo NodePort 10.105.48.21 <none> 8080:31550/TCP 3m5s
However, there is no external IP for the service. With Minikube, we got the IP of the Minikube node itself via $(minikube ip) and could use it in combination with the node port to access the service. That is not an option with KinD clusters. One quick alternative is kubectl port-forward, sketched right below; the more general approach, which we will use here, is to access the echo service through a proxy.
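Here is the port-forward option first, as a minimal sketch (the local port 8080 on the left side of the mapping is an arbitrary choice):
$ k port-forward service/echo 8080:8080 &
$ curl http://localhost:8080
Remember to kill the background port-forward process when you are done.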
Accessing Kubernetes services locally through a proxy
Later in the book, we will go into a lot of detail about networking, services, and how to expose them outside the cluster. Here, I am just showing how to get it done and keeping you in suspense for now. First, we need to run the kubectl proxy command, which exposes the API server, pods, and services on localhost:
$ k proxy &
[1] 10653
Starting to serve on 127.0.0.1:8001
Then, we can access the echo service through a specially crafted proxy URL that includes the exposed port (8080) and NOT the node port. The path follows the pattern /api/v1/namespaces/<namespace>/services/<service>:<port>/proxy/.
I use HTTPie here. You can use curl too. To install HTTPie, follow the instructions here: https://httpie.org/doc#installation:
$ http http://localhost:8001/api/v1/namespaces/default/services/echo:8080/proxy/
HTTP/1.1 200 OK
Content-Length: 534
Content-Type: text/plain
Date: Thu, 28 May 2020 21:27:56 GMT
Server: echoserver
Hostname: echo-74545d499-wqkn9
Pod Information:
    -no pod information available-

Server values:
    server_version=nginx: 1.13.3 - lua: 10008

Request Information:
    client_address=10.40.0.0
    method=GET
    real path=/
    query=
    request_version=1.1
    request_uri=http://localhost:8080/

Request Headers:
    accept=*/*
    accept-encoding=gzip, deflate
    host=localhost:8001
    user-agent=HTTPie/0.9.9
    x-forwarded-for=127.0.0.1, 172.17.0.1
    x-forwarded-uri=/api/v1/namespaces/default/services/echo:8080/proxy/

Request Body:
    -no body in request-
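If you prefer curl, the same specially crafted URL works unchanged:
$ curl http://localhost:8001/api/v1/namespaces/default/services/echo:8080/proxy/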
We will take a deep dive into exactly what is going on in a future chapter (Chapter 12, Serverless Computing on Kubernetes). For now, let's check out my favorite local cluster solution: k3d.