Implementing taints and tolerations
Taints and tolerations in Kubernetes work like node selectors in reverse. Rather than nodes attracting Pods because they carry the proper labels, which a selector then consumes, we taint nodes, which repels Pods from being scheduled on them, and then mark our Pods with tolerations, which allow them to be scheduled on the tainted nodes.
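Under the hood, a taint is nothing more than a key/value/effect entry in the Node object's spec. Here is a minimal sketch of what one looks like on the node side, with placeholder names throughout (a concrete command-line example follows below):

apiVersion: v1
kind: Node
metadata:
  name: some-node          # placeholder node name
spec:
  taints:
  - key: "example-key"     # placeholder taint key
    value: "example-value"
    effect: "NoSchedule"   # repel Pods that do not tolerate this taint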
As mentioned at the beginning of the chapter, Kubernetes uses system-created taints to mark nodes as unhealthy and prevent new workloads from being scheduled on them. For instance, the out-of-disk taint will prevent any new Pods from being scheduled on a node carrying that taint.
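To see which taints, system-created or otherwise, are currently set on a node, you can describe the node and look at the Taints field; node1 here is just a placeholder node name:

> kubectl describe node node1 | grep Taints

On a healthy, untainted node this typically prints Taints: <none>.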
Let's take the same example use case that we had with node selectors and apply it using taints and tolerations. Since this is basically the reverse of our previous setup, let's first give our node a taint using the kubectl taint
command:
> kubectl taint nodes node2 cpu_speed=slow:NoSchedule
Let's pick apart this command: it adds a taint with the key cpu_speed, the value slow, and the effect NoSchedule to node2. The NoSchedule effect means that no new Pods will be scheduled onto the node unless they tolerate the taint, while Pods already running there are left in place.
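On the Pod side, the matching toleration simply restates the taint's key, value, and effect. Here is a minimal sketch of a Pod that tolerates our cpu_speed taint; the Pod name and container image are placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: slow-node-app      # placeholder name
spec:
  containers:
  - name: main
    image: nginx           # placeholder image
  tolerations:
  - key: "cpu_speed"
    operator: "Equal"      # key and value must both match the taint
    value: "slow"
    effect: "NoSchedule"

With operator: Equal, the toleration matches only taints with the same key and value; operator: Exists matches on the key alone. Also note that tolerating a taint merely allows the Pod onto node2 rather than forcing it there, so if the Pod must run on that node, combine the toleration with a node selector or node affinity.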