Becoming KCNA Certified

You're reading from Becoming KCNA Certified: Build a strong foundation in cloud native and Kubernetes and pass the KCNA exam with ease.

Product type: Paperback
Published: Feb 2023
Publisher: Packt
ISBN-13: 9781804613399
Length: 306 pages
Edition: 1st Edition
Author (1): Dmitry Galkin
Table of Contents (22 chapters)

Preface
Part 1: The Cloud Era
Chapter 1: From Cloud to Cloud Native and Kubernetes
Chapter 2: Overview of CNCF and Kubernetes Certifications
Part 2: Performing Container Orchestration
Chapter 3: Getting Started with Containers
Chapter 4: Exploring Container Runtimes, Interfaces, and Service Meshes
Part 3: Learning Kubernetes Fundamentals
Chapter 5: Orchestrating Containers with Kubernetes
Chapter 6: Deploying and Scaling Applications with Kubernetes
Chapter 7: Application Placement and Debugging with Kubernetes
Chapter 8: Following Kubernetes Best Practices
Part 4: Exploring Cloud Native
Chapter 9: Understanding Cloud Native Architectures
Chapter 10: Implementing Telemetry and Observability in the Cloud
Chapter 11: Automating Cloud Native Application Delivery
Part 5: KCNA Exam and Next Steps
Chapter 12: Practicing for the KCNA Exam with Mock Papers
Chapter 13: The Road Ahead
Assessments
Index
Other Books You May Enjoy

Summary

In this chapter, we've learned several important aspects of running and operating workloads with K8s. We've seen how pod scheduling works in Kubernetes, its stages (filtering and scoring), and how we can control placement decisions with nodeSelector, nodeName, affinity and anti-affinity settings, and topology spread constraints. The extensive features of the Kubernetes scheduler let us cover virtually any scenario for controlling where a workload is placed among the nodes of a cluster. With these controls, we can spread the Pods of one application across nodes in multiple AZs for HA, schedule Pods that require specialized hardware (for example, a GPU) only onto nodes that have it available, co-locate multiple applications on the same nodes with affinity, and much more, as sketched below.
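
To make those controls concrete, here is a minimal, illustrative Pod spec (not taken from this chapter; the label keys, values, and image are assumptions) that combines a nodeSelector for GPU nodes, pod anti-affinity for HA, and a topology spread constraint across zones:

# Illustrative example only; label keys/values and the image are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: web
  labels:
    app: web
spec:
  # Filtering: only consider nodes carrying this (example) GPU label.
  nodeSelector:
    gpu: "true"
  affinity:
    # Keep replicas of the same app on different nodes for HA.
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app: web
          topologyKey: kubernetes.io/hostname
  # Spread matching Pods evenly across availability zones.
  topologySpreadConstraints:
    - maxSkew: 1
      topologyKey: topology.kubernetes.io/zone
      whenUnsatisfiable: ScheduleAnyway
      labelSelector:
        matchLabels:
          app: web
  containers:
    - name: web
      image: nginx:1.25

With a manifest like this, the scheduler first filters out nodes without the gpu="true" label, then places the surviving candidates so that replicas land on different nodes and stay balanced across zones.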

Next, we’ve seen that resource requests help Kubernetes make better scheduling decisions, and resource limits are there to protect the cluster...
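
As a small illustration of that point (the values below are arbitrary examples, not from the book), a container's resources block might look like this; the requests inform the scheduler's placement decision, while the limits cap what the container may actually consume:

# Illustrative example only; resource values are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: api
spec:
  containers:
    - name: api
      image: nginx:1.25
      resources:
        requests:
          cpu: "250m"      # used for scheduling: node must have this much free
          memory: "256Mi"
        limits:
          cpu: "500m"      # hard cap enforced at runtime to protect the node
          memory: "512Mi"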
