Summary
Let's reflect a bit on how far we've come since Chapter 11, Build Your Own HA Cluster, where we started talking about running Kubernetes in a highly available manner. We covered how to set up a secure production cluster in the cloud using infrastructure-as-code tools such as Terraform, and how to secure the workloads it runs. We also looked at the modifications our applications needed in order to scale well, for both the stateful and stateless versions of the application.
Then, in this chapter, we looked at how to extend the management of our application runtimes using data, introducing Prometheus, Grafana, and the Kubernetes Metrics Server. We then leveraged that information with the HorizontalPodAutoscaler (HPA) and the Cluster Autoscaler, so we can rest assured that our cluster is always appropriately sized and ready to respond to spikes in demand automatically, without having to pay for overprovisioned hardware.
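To recap the autoscaling pattern in concrete terms, a minimal HPA manifest might look like the following sketch. The Deployment name `myapp` and the 70% CPU target are hypothetical placeholders, not values from the chapter:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-hpa          # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp            # hypothetical Deployment to scale
  minReplicas: 2           # floor, so the app stays available
  maxReplicas: 10          # ceiling, so costs stay bounded
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out when average CPU exceeds 70%
```

The HPA reads CPU utilization from the Metrics Server and adjusts the replica count between the floor and ceiling; when the new replicas no longer fit on existing nodes, the Cluster Autoscaler adds nodes, and removes them again as demand subsides.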