Chapter 10: Running Production-Grade Kubernetes Workloads
In the previous chapters, we focused on containerization concepts and the fundamental Kubernetes building blocks, such as Pods, Jobs, and ConfigMaps. Our journey so far has covered mostly single-machine scenarios, where the application requires only one container host or Kubernetes Node. For production-grade Kubernetes workloads, you have to consider additional aspects, such as scalability, high availability (HA), and load balancing, which always requires orchestrating containers running across multiple hosts.
Briefly, container orchestration is a way of managing the life cycles of multiple containers in large, dynamic environments. This ranges from provisioning and deploying containers to managing networks, providing redundancy and HA, automatically scaling container instances up and down, performing automated health checks, and gathering telemetry (logs and metrics). Solving the problem of efficient...