Implementing autoscaling for Kubernetes services
Kubernetes is a powerful orchestration platform that lets you control how many, or how few, system resources a given workload can consume. Because Kubernetes can run on-premises, on cloud virtual machines, or as a managed service, there are several options for configuring autoscaling. These range from patterns built into Kubernetes itself, to features of cloud-managed services, to third-party plugins purpose-built for specific scenarios.
Native Kubernetes options
As an orchestrator, Kubernetes offers a rich ecosystem that allows you to use as little or as much of the cluster's compute power as needed, in a variety of ways. Some features let you control how applications scale out based on the primary indicators we covered in the previous section. In this section, we will start with a couple of native options that can...
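As one illustration of a native option, the built-in HorizontalPodAutoscaler resource can scale a workload out and in based on an observed indicator such as CPU utilization. The following is a minimal sketch of an HPA manifest; the Deployment name `my-app` and the thresholds shown are hypothetical values chosen for the example:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa          # hypothetical name for this example
spec:
  scaleTargetRef:           # the workload this autoscaler manages
    apiVersion: apps/v1
    kind: Deployment
    name: my-app            # assumed Deployment name
  minReplicas: 2            # never scale in below 2 pods
  maxReplicas: 10           # cap scale-out at 10 pods
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # target 70% average CPU across pods
```

With a manifest like this applied (for example, via `kubectl apply -f`), the control plane periodically compares observed CPU utilization against the target and adjusts the replica count within the configured bounds.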