Creating a multi-node cluster with k3d
In this section, we’ll create a multi-node cluster using k3d from Rancher. We won’t repeat the deployment of the echo server because it is identical to what we did on the KinD cluster, including accessing it through a proxy. Spoiler alert – creating clusters with k3d is even faster and more user-friendly than with KinD!
Quick introduction to k3s and k3d
Rancher created k3s, which is a lightweight Kubernetes distribution. Rancher says that k3s is 5 less than k8s, if that makes any sense. The basic idea is to remove features and capabilities that most people don’t need, such as:
- Non-default features
- Legacy features
- Alpha features
- In-tree storage drivers
- In-tree cloud providers
K3s removed Docker completely and uses containerd instead. You can still bring Docker back if you depend on it. Another major change is that k3s stores its state in an SQLite DB instead of etcd. For networking and DNS, k3s uses Flannel and CoreDNS.
K3s also added a simplified installer that takes care of SSL and certificate provisioning.
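For reference, installing k3s directly on a Linux host is the documented one-line script below. We won’t need it in this section, since k3d runs k3s inside Docker for us:
curl -sfL https://get.k3s.io | sh -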
The end result is astonishing – a single binary (less than 40MB) that needs only 512MB of memory.
Unlike Minikube and KinD, k3s is actually designed for production. The primary use cases are edge computing, IoT, and CI systems, and it is optimized for ARM devices.
OK. That’s k3s, but what’s k3d? K3d takes all the goodness that is k3s, packages it in Docker (similar to KinD), and adds a friendly CLI to manage it.
Let’s install k3d and see for ourselves.
Installing k3d
Installing k3d on macOS is as simple as:
brew install k3d
And on Windows, it is just:
choco install -y k3d
On Windows, optionally add this alias to your WSL .bashrc file:
alias k3d='k3d.exe'
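On Linux, you can use the install script published by the k3d project. The URL below is the one documented at the time of writing; verify it against the k3d docs before piping it into your shell:
curl -s https://raw.githubusercontent.com/k3d-io/k3d/main/install.sh | bash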
Let’s see what we have:
$ k3d version
k3d version v5.4.1
k3s version v1.22.7-k3s1 (default)
As you can see, k3d reports its own version as well as the default k3s version it will use, so all is well. Now, we can create a cluster with k3d.
Creating the cluster with k3d
Are you ready to be amazed? Creating a single-node cluster with k3d takes less than 20 seconds!
$ time k3d cluster create
INFO[0000] Prep: Network
INFO[0000] Created network 'k3d-k3s-default'
INFO[0000] Created image volume k3d-k3s-default-images
INFO[0000] Starting new tools node...
INFO[0000] Starting Node 'k3d-k3s-default-tools'
INFO[0001] Creating node 'k3d-k3s-default-server-0'
INFO[0001] Creating LoadBalancer 'k3d-k3s-default-serverlb'
INFO[0002] Using the k3d-tools node to gather environment information
INFO[0002] HostIP: using network gateway 172.19.0.1 address
INFO[0002] Starting cluster 'k3s-default'
INFO[0002] Starting servers...
INFO[0002] Starting Node 'k3d-k3s-default-server-0'
INFO[0008] All agents already running.
INFO[0008] Starting helpers...
INFO[0008] Starting Node 'k3d-k3s-default-serverlb'
INFO[0015] Injecting records for hostAliases (incl. host.k3d.internal) and for 2 network members into CoreDNS configmap...
INFO[0017] Cluster 'k3s-default' created successfully!
INFO[0018] You can now use it like this:
kubectl cluster-info
real 0m18.154s
user 0m0.005s
sys 0m0.000s
Without a load balancer, it takes less than 8 seconds!
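If you want to reproduce that measurement, delete the default cluster first and recreate it with the --no-lb flag, which tells k3d to skip the serverlb container (timings will vary on your machine):
$ time k3d cluster create --no-lb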
What about multi-node clusters? We saw that KinD was much slower, especially when creating an HA cluster with multiple control plane nodes and an external load balancer.
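k3d can build that kind of HA cluster too; multiple control plane nodes are requested with the --servers flag. Here is a sketch we won’t run in this section (the cluster name ha-demo is just a placeholder):
k3d cluster create ha-demo --servers 3 --agents 3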
Let’s delete the single-node cluster first:
$ k3d cluster delete
INFO[0000] Deleting cluster 'k3s-default'
INFO[0000] Deleting cluster network 'k3d-k3s-default'
INFO[0000] Deleting 2 attached volumes...
WARN[0000] Failed to delete volume 'k3d-k3s-default-images' of cluster 'k3s-default': failed to find volume 'k3d-k3s-default-images': Error: No such volume: k3d-k3s-default-images -> Try to delete it manually
INFO[0000] Removing cluster details from default kubeconfig...
INFO[0000] Removing standalone kubeconfig file (if there is one)...
INFO[0000] Successfully deleted cluster k3s-default!
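The warning about the image volume is harmless; in this run the volume simply didn’t exist anymore. If a volume does get left behind, you can remove it manually with Docker, as the message suggests:
$ docker volume ls | grep k3d
$ docker volume rm k3d-k3s-default-images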
Now, let’s create a cluster with 3 worker nodes. That takes a little over 30 seconds:
$ time k3d cluster create --agents 3
INFO[0000] Prep: Network
INFO[0000] Created network 'k3d-k3s-default'
INFO[0000] Created image volume k3d-k3s-default-images
INFO[0000] Starting new tools node...
INFO[0000] Starting Node 'k3d-k3s-default-tools'
INFO[0001] Creating node 'k3d-k3s-default-server-0'
INFO[0001] Creating node 'k3d-k3s-default-agent-0'
INFO[0002] Creating node 'k3d-k3s-default-agent-1'
INFO[0002] Creating node 'k3d-k3s-default-agent-2'
INFO[0002] Creating LoadBalancer 'k3d-k3s-default-serverlb'
INFO[0002] Using the k3d-tools node to gather environment information
INFO[0002] HostIP: using network gateway 172.22.0.1 address
INFO[0002] Starting cluster 'k3s-default'
INFO[0002] Starting servers...
INFO[0002] Starting Node 'k3d-k3s-default-server-0'
INFO[0008] Starting agents...
INFO[0008] Starting Node 'k3d-k3s-default-agent-0'
INFO[0008] Starting Node 'k3d-k3s-default-agent-2'
INFO[0008] Starting Node 'k3d-k3s-default-agent-1'
INFO[0018] Starting helpers...
INFO[0019] Starting Node 'k3d-k3s-default-serverlb'
INFO[0029] Injecting records for hostAliases (incl. host.k3d.internal) and for 5 network members into CoreDNS configmap...
INFO[0032] Cluster 'k3s-default' created successfully!
INFO[0032] You can now use it like this:
kubectl cluster-info
real 0m32.512s
user 0m0.005s
sys 0m0.000s
Let’s verify the cluster works as expected:
$ k cluster-info
Kubernetes control plane is running at https://0.0.0.0:60490
CoreDNS is running at https://0.0.0.0:60490/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://0.0.0.0:60490/api/v1/namespaces/kube-system/services/https:metrics-server:https/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
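k3d merges the new cluster’s credentials into your default kubeconfig automatically, which is why kubectl works right away. If you ever need the kubeconfig by itself, the kubeconfig sub-command can print it (a quick sketch):
$ k3d kubeconfig get k3s-default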
Here are the nodes. Note that there is just one control plane node called k3d-k3s-default-server-0:
$ k get nodes
NAME                       STATUS   ROLES                  AGE     VERSION
k3d-k3s-default-server-0   Ready    control-plane,master   5m33s   v1.22.7+k3s1
k3d-k3s-default-agent-0    Ready    <none>                 5m30s   v1.22.7+k3s1
k3d-k3s-default-agent-2    Ready    <none>                 5m30s   v1.22.7+k3s1
k3d-k3s-default-agent-1    Ready    <none>                 5m29s   v1.22.7+k3s1
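k3d can even add nodes to a running cluster with the node sub-command. Here is a hedged sketch (the node name extra-agent is just a placeholder):
$ k3d node create extra-agent --cluster k3s-default --role agent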
You can stop and start clusters, create multiple clusters, and list existing clusters using the k3d CLI. Here are all the commands (a few examples follow the listing); feel free to explore further:
$ k3d
Usage:
  k3d [flags]
  k3d [command]
Available Commands:
  cluster      Manage cluster(s)
  completion   Generate completion scripts for [bash, zsh, fish, powershell | psh]
  config       Work with config file(s)
  help         Help about any command
  image        Handle container images.
  kubeconfig   Manage kubeconfig(s)
  node         Manage node(s)
  registry     Manage registry/registries
  version      Show k3d and default k3s version
Flags:
  -h, --help         help for k3d
      --timestamps   Enable Log timestamps
      --trace        Enable super verbose output (trace logging)
      --verbose      Enable verbose output (debug logging)
      --version      Show k3d and default k3s version
Use "k3d [command] --help" for more information about a command.
You can repeat the steps for deploying, exposing, and accessing the echo service on your own; they work just as they did with KinD. A rough sketch of those steps follows.
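The commands below only sketch the general shape; the image name and port are placeholders, so substitute the echo server image and port you used in the KinD section, and access it through kubectl proxy exactly as you did there:
kubectl create deployment echo --image=<your-echo-image>
kubectl expose deployment echo --port=<port>
kubectl proxy &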
OK. We created clusters using Minikube, KinD, and k3d. Let’s compare them so you can decide which one works for you.