Summary
In this chapter, we’ve learned a few important aspects of running and operating workloads with K8s. We’ve seen how pod scheduling works in Kubernetes, its stages (filtering and scoring), and how we can control placement decisions with nodeSelector, nodeName, affinity and anti-affinity settings, as well as topology spread constraints. The extensive features of the Kubernetes scheduler cover virtually every scenario for controlling where a workload is placed among the nodes of a cluster. With those controls, we can spread the Pods of one application across nodes in multiple AZs for HA; schedule Pods that require specialized hardware (for example, a GPU) only to nodes that have it available; co-locate multiple applications on the same nodes with affinity; and much more.
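As a quick illustration, here is a sketch of a Pod spec that combines two of these controls: a nodeSelector restricting placement to GPU nodes and a topology spread constraint spreading replicas across zones. The `gpu: "true"` node label and the image name are hypothetical; `topology.kubernetes.io/zone` is the standard well-known zone label.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-worker
  labels:
    app: gpu-worker
spec:
  # Only schedule onto nodes carrying this (hypothetical) label
  nodeSelector:
    gpu: "true"
  # Spread Pods with the same app label evenly across AZs
  topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: topology.kubernetes.io/zone
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        app: gpu-worker
  containers:
  - name: worker
    image: example.com/gpu-worker:1.0  # placeholder image
```

With `whenUnsatisfiable: DoNotSchedule`, a Pod stays Pending rather than violating the spread; `ScheduleAnyway` would make the constraint a soft preference instead.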
Next, we’ve seen that resource requests help Kubernetes make better scheduling decisions, and resource limits are there to protect the cluster...
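A minimal sketch of how requests and limits appear in a container spec (the values here are illustrative, not recommendations):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
  - name: web
    image: nginx:1.25
    resources:
      requests:          # used by the scheduler to find a node with capacity
        cpu: 250m
        memory: 256Mi
      limits:            # enforced at runtime to protect the node
        cpu: 500m
        memory: 512Mi
```

The scheduler filters out nodes whose unreserved capacity is below the requests, while limits cap actual usage: CPU beyond the limit is throttled, and memory beyond the limit gets the container OOM-killed.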