Autoscaling Deployments and StatefulSets Based on Resource Usage

Change is the essential process of all existence.

- Spock

By now, you have probably realized that one of the critical aspects of a system based on Kubernetes is its high level of dynamism. Almost nothing is static. We define Deployments or StatefulSets, and Kubernetes distributes the Pods across the cluster. In most cases, those Pods rarely sit in one place for long. Rolling updates result in Pods being re-created and potentially moved to other nodes. A failure of any kind provokes rescheduling of the affected resources. Many other events cause the Pods to move around. A Kubernetes cluster is like a beehive. It's full of life, and it's always in motion.

The dynamic nature of a Kubernetes cluster is not due only to our (human) actions or to rescheduling caused by failures. Autoscaling is to blame as well. We should fully embrace Kubernetes' dynamic nature and move towards autonomous and self-sufficient clusters capable of serving the needs of our applications without (much) human involvement. To accomplish that, we need to provide sufficient information that will allow Kubernetes to scale the applications as well as the nodes that constitute the cluster. In this chapter, we'll focus on the former. We'll explore the basic and most commonly used ways to auto-scale Pods based on memory and CPU consumption. We'll accomplish that using HorizontalPodAutoscaler.

HorizontalPodAutoscaler's only function is to automatically scale the number of Pods in a Deployment, a StatefulSet, or a few other types of resources. It accomplishes that by observing the CPU and memory consumption of the Pods and acting when they reach pre-defined thresholds.
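
To give you a feeling for what such a definition looks like, the snippet that follows is a minimal sketch of a HorizontalPodAutoscaler. The Deployment name (api), the replica limits, and the thresholds are illustrative placeholders rather than values taken from the examples we'll run later, and the apiVersion you should use depends on your Kubernetes version.

apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: api
spec:
  scaleTargetRef:            # the resource whose replicas will be scaled
    apiVersion: apps/v1
    kind: Deployment
    name: api
  minReplicas: 2
  maxReplicas: 5
  metrics:
  - type: Resource           # act on the average CPU usage of the Pods
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80
  - type: Resource           # act on the average memory usage of the Pods
    resource:
      name: memory
      target:
        type: Utilization
        averageUtilization: 80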

HorizontalPodAutoscaler is implemented as a Kubernetes API resource and a controller. The resource determines the behavior of the controller. The controller periodically adjusts the number of replicas in a StatefulSet or a Deployment to match the observed average CPU utilization to the target specified by a user.
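
Ignoring tolerances, stabilization windows, and a few other details, the core of that adjustment is a simple ratio between the observed and the desired metric values:

desiredReplicas = ceil(currentReplicas * currentMetricValue / desiredMetricValue)

If, for example, three replicas are running at an average of 120% CPU utilization while the target is 80%, the controller will scale the Deployment to ceil(3 * 120 / 80) = 5 replicas, within the minimum and maximum limits we set.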

We'll see HorizontalPodAutoscaler in action soon and comment on its specific features through practical examples. But before we get there, we need a Kubernetes cluster as well as a source of metrics.
