Becoming KCNA Certified

From Cloud to Cloud Native and Kubernetes

In this chapter, you’ll see how computing has evolved over the past 20 or so years, what the cloud is and how it appeared, and how IT landscapes have changed with the introduction of containers. You’ll learn about fundamentals such as Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), Software-as-a-Service (SaaS), and Function-as-a-Service (FaaS). You’ll also learn about the transition from monolithic to microservice architectures and get a first glimpse of Kubernetes.

This chapter does not map directly to a specific KCNA exam objective, but these topics are crucial for anyone who’d like to tie their career to modern infrastructures. If you are already familiar with the basic terms, feel free to quickly verify your knowledge by going directly to the recap questions. If not, don’t be surprised that things are not covered in great detail, as this is an introductory chapter, and we’ll dive deeper into all of the topics in later chapters.

We’re going to cover the following topics in this chapter:  

  • The cloud and Before Cloud (B.C.)
  • Evolution of the cloud and cloud-native
  • Containers and container orchestration
  • Monolithic versus microservices applications
  • Kubernetes and its origins

Let’s get started!

The cloud and Before Cloud (B.C.)

The cloud has triggered a major revolution and accelerated innovation, but before we learn about the cloud, let’s see how things were done before the era of the cloud.

In the times before the term cloud computing was used, one physical server would only be able to run a single operating system (OS) at a time. These systems would typically host a single application, meaning two things:

  • If an application was not used, the computing resources of the server where it ran were wasted
  • If an application was used very actively and needed a larger server or more servers, it would take days or even weeks to get new hardware procured, delivered, cabled, and installed

Moving on, let’s have a look at an important aspect of computing – virtualization.

Virtualization

Virtualization technology and virtual machines (VMs) first appeared back in the 1960s, but it was not until the early 2000s that virtualization technologies such as Xen and the Kernel-based Virtual Machine (KVM) started to become mainstream.

Virtualization allows us to run multiple VMs on a single physical server using hypervisors, where a hypervisor is software that emulates hardware resources such as the CPU and RAM. Effectively, it shares the processor time and memory of the underlying physical server by slicing them between multiple VMs.

It means that each VM will be very similar to the physical server, but with a virtual CPU, memory, disks, and network cards instead of physical ones. Each VM will also have an OS on which you can install applications. The following figure demonstrates a virtualized deployment with two VMs running on the same physical server:

Figure 1.1 – Comparison of traditional and virtualized deployments

This concept of sharing hardware resources between the so-called guest VMs is what made it possible to utilize hardware more effectively and reduce any waste of computing resources. It means we might not need to purchase a whole new server in order to run another application.

The obvious benefits that came along with virtualization are as follows:

  • Less physical hardware required
  • Fewer data center personnel required
  • Lower acquisition and maintenance costs
  • Lower power consumption

Besides, provisioning a new VM would take minutes and not days or weeks of waiting for new hardware. However, to scale beyond the capacities of the hardware already installed in the corporate data center, we would still need to order, configure, and cable new physical servers and network equipment – and that has all changed with the introduction of cloud computing.

The cloud

At a very basic level, the cloud is virtualization on demand. It allows us to spawn VMs accessible over the network as a service, when requested by the customers.

Cloud computing

This is the delivery of computational resources as a service, where the actual hardware is owned and managed by the cloud provider rather than a corporate IT department.

The cloud has ignited a major revolution in computing. It is no longer necessary to buy and manage your own hardware to build and run applications and VMs. The cloud provider takes full care of hardware procurement, installation, and maintenance and ensures the efficient utilization of resources by serving hundreds or thousands of customers securely on shared hardware. Each customer pays only for the resources they use. Today, it is common to distinguish the following three cloud types:

  • Public – The most popular type. A public cloud is operated by a third-party company and available for use by any paying customer. Public clouds are typically used by thousands of organizations at the same time. Examples of public cloud providers include Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP).
  • Private – Used by a single, typically large, organization or enterprise. The operations and maintenance might be done by the organization itself or by a private cloud provider. Examples include Rackspace Private Cloud and VMware Private Cloud.
  • Hybrid – This is the combination of a public and private cloud, in a case where an organization has a private cloud but uses some of the services from a public cloud at the same time.

However, the cloud is not just VMs reachable over the network. There are tens or even hundreds of services offered by cloud providers. Today, you can request and use network-attached storage, virtual network devices, firewalls, load balancers, VMs with GPUs or specialized hardware, managed databases, and more almost immediately.

Now, let’s see in more detail how cloud services can be delivered and consumed.

Evolution of the cloud and cloud-native

Besides the huge variety of cloud services you can find today, there is also a difference in how the services are offered. It is common to distinguish between four cloud service delivery models that help meet different needs:

  • IaaS – The most flexible model with the basic services provided: VMs, virtual routers, block devices, load balancers, and so on. This model also assumes the most customer responsibility. Users of IaaS have access to their VMs and must configure their OS, install updates, and set up, manage, and secure their applications. AWS Elastic Compute Cloud (EC2), AWS Elastic Block Store (EBS), and Google Compute Engine VMs are all examples of IaaS.
  • PaaS – This helps to focus on the development and management of applications by taking away the need to install OS upgrades or do any lower-level maintenance. As a PaaS customer, you are still responsible for your data, identity and access, and your application life cycle. Examples include Heroku and Google App Engine.
  • SaaS – Takes the responsibilities even further away from the customers. Typically, these are fully managed applications that just work, such as Slack or Gmail.
  • FaaS – A newer delivery model that appeared around 2010 and is also known as Serverless today. A FaaS customer is responsible only for defining the functions that are triggered by events. Functions can be written in one of many popular programming languages, and customers don’t have to worry about server or OS management, deployment, or scaling (see the sketch right after this list). Examples of FaaS include AWS Lambda, Google Cloud Functions, and Microsoft Azure Functions.
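
To make the FaaS model more concrete, here is a minimal sketch of an event-triggered function written in Python, following the common AWS Lambda handler convention. The event fields used here are hypothetical and depend entirely on the event source; the point is that the customer writes only this function, while the provider handles everything needed to run and scale it:

    import json

    def handler(event, context):
        # The cloud provider invokes this function whenever the configured
        # event fires (an HTTP request, a file upload, a queue message, and so on).
        name = event.get("name", "world")
        return {
            "statusCode": 200,
            "body": json.dumps({"message": f"Hello, {name}!"}),
        }

There is no server to provision or OS to patch; the function simply runs when triggered, and the customer is billed per invocation.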

These models might sound a bit complicated, so let’s draw a simple analogy with cars and transportation.

On-premises, traditional data centers are like having your own car. You are buying it, and you are responsible for its insurance and maintenance, the replacement of broken parts, passing regular inspections, and so on.

IaaS is more like leasing a car for some period of time. You pay monthly lease payments, you drive it, you fill it with gas, and you wash it, but you don’t actually own the car and you can give it back when you don’t need it anymore.

PaaS can be compared with car-sharing. You don’t own the car, you don’t need to wash it, do any maintenance, or even refill it most of the time, but you still drive it yourself.

Following the analogy, SaaS is like calling a taxi. You don’t need to own the car or even drive it.

Finally, Serverless or FaaS can be compared to a bus from a user perspective. You just hop on and ride to your destination – no maintenance, no driving, and no ownership.

Hopefully, this makes things clearer. The big difference from traditional on-premises setups, where a company is solely responsible for everything – hardware maintenance, data security, and more – is that in the cloud, a so-called shared responsibility model applies.

Shared responsibility model

Defines the obligations of the cloud provider and the cloud customer. These responsibilities depend on the service provided – in the case of an IaaS service, the customer has more responsibility compared to PaaS or SaaS. For example, the cloud provider is always responsible for preventing unauthorized access to data center facilities and the stability of the power supply and underlying network connectivity.

The following figure visually demonstrates the difference between the responsibilities:

Figure 1.2 – Comparison of cloud delivery models

As cloud technologies and providers have evolved over the past 20 years, so have the architectures of the applications that run in the cloud, and a new term has emerged – cloud-native. Most of the time, it refers to the architectural approach, but you will often encounter cloud-native applications or cloud-native software as well.

Cloud-native

This is an approach to building and running applications on modern, dynamic infrastructures such as clouds. It emphasizes application workloads with high resiliency, scalability, a high degree of automation, ease of management, and observability.

Despite the presence of the word cloud, it does not mean that a cloud-native application must run strictly in a public, private, or hybrid cloud. For example, you can develop a cloud-native application and run it on-premises with Kubernetes.

Cloud-native should not be confused with Cloud Service Providers (CSPs), or simply cloud providers, and cloud-native is also not the same as cloud-first, so remember the following:

Cloud-native ≠ CSP ≠ cloud-first

For the sake of completeness, let’s define the other two.

A CSP is a third-party company offering cloud computing services such as IaaS, PaaS, SaaS, or FaaS. Cloud-first simply stands for a strategy where the cloud is the default choice for either optimizing existing IT infrastructure or for launching new applications.

Don’t worry if those definitions do not make total sense just yet – we will dedicate a whole section to cloud-native that explains all its aspects in detail. For now, let’s have a quick introduction to containers and their orchestration.

Containers and container orchestration

At a very high level, containers are another form of lightweight virtualization, also known as OS-level virtualization. However, containers are different from VMs with their own advantages and disadvantages.

The major difference is that with VMs, we can slice and share one physical server between many VMs, each running its own OS. With containers, we can slice and share an OS kernel between multiple containers, and each container gets its own isolated OS environment. Let’s see this in more detail.

Containers

These are portable units of software that include application code with runtimes, dependencies, and system libraries. Containers share one OS kernel, but each container can have its own isolated OS environment with different packages, system libraries, tools, its own storage, networking, users, processes, and groups.

Portability is important and needs to be elaborated on. An application packaged into a container image is guaranteed to run on another host because the container brings its own isolated environment with it. Starting a container on another host does not interfere with that environment or with the containerized application.

A major advantage is also that containers are a lot more lightweight and efficient compared to VMs. They consume fewer resources (CPU and RAM) than VMs and start almost instantly because they don’t need to bootstrap a complete OS with a kernel. For example, if a physical server is capable of running 10 VMs, then the same physical server might be able to run 30, 40, or possibly even more containers, each with its own application (the exact number depends on many factors, including the type of workload, so those values are for demonstration purposes only and do not represent any formula).

Containers are also much smaller than VMs in disk size, because they don’t package a full OS with thousands of libraries. Only applications with dependencies and a minimal set of OS packages are included in container images. That makes container images small, portable, and easy to download or share.

Container images

These are essentially templates of container OS environments that we can use to create multiple containers with the same application and environment. Every time we execute an image, a container is created.

Speaking in numbers, a container image of a popular Linux distribution such as Ubuntu Server 20.04 weighs about 70 MB, whereas a KVM QCOW2 virtual machine image of the same Ubuntu Server will weigh roughly 500 MB. Specialized Linux container images such as Alpine can be as small as 5 to 10 MB and provide the bare minimum functionality to install and run applications.

Containers are also agnostic to where they run – whether on physical servers, on-premises VMs, or the cloud, containers can run in any of these locations with the help of container runtimes.

Container runtimes

A container runtime is a special software needed to run containers on a host OS. It is responsible for creating, starting, stopping, and deleting containers based on the container images it downloads. Examples of container runtimes include containerd, CRI-O, and Docker Engine.
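
To illustrate what a container runtime does, the following Python sketch asks a locally running Docker Engine to pull a small image and start a short-lived container from it. It assumes Docker Engine is installed and the Docker SDK for Python (the docker package) is available; other runtimes expose different APIs, but the pull–create–start–remove life cycle is the same:

    import docker  # Docker SDK for Python; assumes a local Docker Engine is running

    client = docker.from_env()  # connect to the local container runtime

    # Pull the alpine image (only a few MB) and run a throwaway container from it.
    output = client.containers.run(
        "alpine:3.18",                       # container image
        ["echo", "hello from a container"],  # command to run inside the container
        remove=True,                         # delete the container once it exits
    )
    print(output.decode())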

Figure 1.3 demonstrates the differences between virtualized and containerized deployments:

Figure 1.3 – Comparison of virtualized and container deployments

Now, a question you might be asking yourself is this: if containers are so great, why would anyone use VMs, and why do cloud providers still offer so many VM types?

Here is the scenario where VMs have an advantage over containers: they provide better security due to stronger isolation, because they don’t directly share the host kernel. That means that if an application running in a container is breached by a hacker, the chances that the attacker can reach the other containers on the same host are much higher than they would be with regular VMs.

We will dive deeper into the technology behind OS-level virtualization and explore the low-level differences between VMs and containers in later chapters.

As containers gained momentum and received wider adoption over the years, it quickly became apparent that managing containers on a large scale can be quite a challenge. The industry needed tools to orchestrate and manage the life cycle of container-based applications.

This had to do with the increasing number of containers that companies and teams had to operate: as the infrastructure tools evolved, so did the application architectures, transforming from large monolithic designs into small, distributed, and loosely coupled microservices.

Monolithic versus microservices applications

To understand the difference between monolithic and microservice-based applications, let us reflect on a real-world example. Imagine that a company runs an online hotel booking business. All reservations are made and paid for by the customers via a corporate web service.

The traditional monolithic architecture for this kind of web application would have bundled all the functionality into one single, complex software that might have included the following:

  • Customer dashboard
  • Customer identity and access management
  • Search engine for hotels based on criteria
  • Billing and integration with payment providers
  • Reservation system for hotels
  • Ticketing and support chat

A monolithic application is tightly coupled (bundled) with all the business and user logic and must be developed and updated as a whole. That means if a change to the billing code has to be made, the entire application has to be updated, carefully tested, and released to the production environment. Each change, even a small one, could potentially break the whole application and impact the business by making it unavailable for an extended period.

With a microservices architecture, this very same application could be split into several smaller pieces that communicate with each other over the network, each fulfilling its own purpose. Billing, for example, could be performed by four smaller services:

  • Currency converter
  • Credit card provider integration
  • Bank wire transfer processing
  • Refund processing

Essentially, microservices are a group of small applications where each is responsible for its own small task. These small applications communicate with each other over the network and work together as a part of a larger application.
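
As a toy illustration, here is what one such small application could look like: a hypothetical currency-converter microservice for the billing example above, written with only the Python standard library. The endpoint, port, and fixed exchange rates are made up for demonstration; a real service would also need error handling, logging, and health checks:

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.parse import urlparse, parse_qs

    RATES = {"EUR": 1.0, "USD": 1.08, "GBP": 0.86}  # hypothetical fixed exchange rates

    class ConverterHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # Expected request: GET /convert?amount=10&from=EUR&to=USD
            query = parse_qs(urlparse(self.path).query)
            amount = float(query.get("amount", ["0"])[0])
            src = query.get("from", ["EUR"])[0]
            dst = query.get("to", ["USD"])[0]
            converted = amount / RATES[src] * RATES[dst]
            body = json.dumps({"amount": round(converted, 2), "currency": dst}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        # Other services (reservations, refunds, and so on) call this one over
        # the network instead of linking its code into a single large binary.
        HTTPServer(("0.0.0.0", 8080), ConverterHandler).serve_forever()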

The following figure demonstrates the differences between monolithic and microservice architectures:

Figure 1.4 – Comparison of monolithic and microservice architectures

This way, all other parts of the web application can also be split into multiple smaller independent applications (microservices) communicating over the network. The advantages of this approach include the following:

  • Each microservice can be developed by its own team
  • Each microservice can be released and updated separately
  • Each microservice can be deployed and scaled independently of others
  • A single microservice outage will only impact a small part of the overall functionality of the app

Microservices are an important part of cloud-native architectures, and we will review in detail the benefits as well as the challenges associated with microservices in Chapter 9, Understanding Cloud Native Architectures. For the moment, let’s get back to containers and why they need to be orchestrated.

When each microservice is packaged into a container, the total number of containers can easily reach tens or even hundreds for especially large and complex applications. In such a complex distributed environment, things can quickly get out of our control.

A container orchestration system is what helps us to keep control over a large number of containers. It simplifies the management of containers by grouping application containers into deployments and automating operations such as the following:

  • Scaling microservices depending on the workload
  • Releasing new versions of microservices and their updates
  • Scheduling containers based on host utilizations and requirements
  • Automatically restarting containers that fail or failing over the traffic

As of today, there are many container and workload orchestration systems available, including these:

  • Kubernetes
  • OpenShift (also known as Open Kubernetes Distribution (OKD))
  • HashiCorp Nomad
  • Docker Swarm
  • Apache Mesos

As you already know from the book title, we will only focus on Kubernetes, and there won’t be any sort of comparison made between these five. In fact, Kubernetes has an overwhelmingly larger market share and, over the years, has become the de facto platform for orchestrating containers. With a high degree of confidence, you can concentrate on learning about Kubernetes and forget about the others, at least for the moment.

Kubernetes and its origins

Let’s start with a brief history. The name Kubernetes originates from Greek and means pilot or helmsman – a person steering a ship (that is why there is a steering wheel in the logo). The steering wheel has seven bars, and the number seven has a special meaning for Kubernetes: the team originally working on it called it Project Seven, after the character Seven of Nine from the well-known TV series Star Trek: Voyager.

Figure 1.5 – The Kubernetes logo

Kubernetes was initially developed by Google and released as an open source project in 2014. Google was a pioneer, having already run its services in containers for more than a decade by that time, and the release of Kubernetes triggered another small revolution in the industry. Many businesses had realized the benefits of using containers and needed a solution that would simplify container orchestration at scale. Kubernetes turned out to be that solution, as we will see soon.

Kubernetes (K8s)

Kubernetes is an open source platform for container orchestration. Kubernetes features an extensible and declarative API that allows you to automatically reach the desired state of resources. It allows flexible scheduling, autoscaling, rolling updates, and self-healing of container-based workloads.

(Online and in documentation, a shorter abbreviation, K8s, can often be encountered – where eight is the number of letters between “K” and “s”.)

Kubernetes has inherited many of its features and best ideas from Borg – an internal container cluster management system powering thousands of different applications at Google. Many Borg engineers participated in the development of Kubernetes and were able to address relevant pain points based on their experience of operating a huge fleet of containers over the years.

Soon after its initial release, Kubernetes rapidly gained the attention of the open source community and attracted many talented contributors from all over the world. Today, Kubernetes is among the top three biggest open source projects on GitHub (https://github.com/kubernetes), with more than 80,000 stars and 3,000 contributors. It was also the first project to graduate from the Cloud Native Computing Foundation (CNCF), a non-profit organization under the Linux Foundation created with the goal of advancing container and cloud-native technologies.

One of the most important features of Kubernetes is the concept of the desired state. Kubernetes operates in a way where we define the state of the application containers we want to have, and Kubernetes will automatically ensure the state is reached. Kubernetes constantly observes the state of all deployed containers and makes sure this state matches what we’ve requested.

Let’s consider the following example. Imagine that we run a simple microservice-based application on a Kubernetes cluster with three hosts. We define a specification that requires Kubernetes to run these:

  • Two identical containers for the frontend
  • Three identical containers for the backend
  • Two containers with volumes serving the data persistence

Unexpectedly, one of the three hosts fails, and the frontend and backend containers that were running on it become unavailable. Kubernetes observes the changed number of hosts in the cluster and the reduced number of frontend and backend containers. It automatically starts one new frontend and one new backend container on the two remaining operational hosts to bring the system back to its desired state. This process is known as self-healing.

Kubernetes can do way more than scheduling and restarting failed containers – we can also define a Kubernetes specification that requires the number of microservice containers to increase automatically based on the current demand. For example, in the preceding scenario, we can specify that with an increased workload, we want to run five replicas of the frontend and five replicas of the backend. Alternatively, in the case of low application demand, we can automatically decrease the number of containers for each microservice to two. This process is known as autoscaling.
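
For a feel of how a desired state is declared in practice, here is a small sketch using the official Kubernetes Python client to request two identical frontend containers, matching the example above. It assumes the kubernetes package is installed and a kubeconfig for a running cluster is available; the image name and labels are hypothetical, and in day-to-day work the same desired state is usually written as a YAML manifest and applied with kubectl:

    from kubernetes import client, config  # official Kubernetes Python client

    config.load_kube_config()  # use the local kubeconfig to reach the cluster
    apps = client.AppsV1Api()

    # Declare the desired state: two identical frontend containers.
    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="frontend"),
        spec=client.V1DeploymentSpec(
            replicas=2,  # Kubernetes keeps exactly this many copies running
            selector=client.V1LabelSelector(match_labels={"app": "frontend"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "frontend"}),
                spec=client.V1PodSpec(
                    containers=[client.V1Container(name="frontend",
                                                   image="example/frontend:1.0")]
                ),
            ),
        ),
    )

    apps.create_namespaced_deployment(namespace="default", body=deployment)
    # If a host fails and a frontend container disappears, Kubernetes starts a
    # replacement elsewhere to bring the count back to 2 (self-healing); changing
    # replicas later, or attaching an autoscaler, adjusts the count automatically.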

This example demonstrates the basic capabilities of Kubernetes. In Part 3, we will explore more Kubernetes features and try some of them firsthand.

Important note

While being a container orchestrator, Kubernetes does not have its own container runtime. Instead, it has integration with popular container runtimes such as containerd and can work with multiple runtimes within a Kubernetes cluster.

You will often see references to Kubernetes clusters because a typical Kubernetes installation is used to manage hundreds of containers spread across multiple hosts. Single-host Kubernetes installations are only suitable for learning or local development, not for production usage.

To sum up, Kubernetes has laid down the path for massive container adoption and is a thriving open source ecosystem that is still growing with new projects graduating from the CNCF every year. In this book, we will cover the Kubernetes API, components, resources, features, and operational aspects in depth, and learn more about projects that can be used with Kubernetes to extend its functionality.

Summary

In this chapter, we learned about the concepts of the cloud and containers, and the evolution of computing over the last 20 to 30 years. In the era before the cloud, traditional deployments with one or a few applications per physical server caused a lot of inefficiency and wasted resources with underutilized hardware and high costs of ownership.

When virtualization technologies came along, it became possible to run many applications per physical server using VMs. This addressed the pitfalls of traditional deployments and allowed us to deliver new applications more quickly and with significantly lower costs.

Virtualization paved the way for the cloud services that are delivered via four different models today: IaaS, PaaS, SaaS, and FaaS or Serverless. Customer responsibilities differ by cloud service and delivery model.

This progress never stopped – now, cloud-native as an approach to building and running applications has emerged. Cloud-native applications are designed and built with an emphasis on scalability, resilience, ease of management, and a high degree of automation.

Over recent years, container technology has developed and gained momentum. Containers use virtualization at the OS level and each container represents a virtual OS environment. Containers are faster, more efficient, and more portable compared to VMs.

Containers enabled us to develop and manage modern applications based on a microservices architecture. Microservices were a step ahead compared to traditional monoliths – all-in-one, behemoth applications.

While containers are one of the most efficient ways to run cloud-native applications, it becomes hard to manage large numbers of containers. Therefore, containers are best managed using an orchestrator such as Kubernetes.

Kubernetes is an open source container orchestration system that originated from Google and automates many operational aspects of containers. Kubernetes will schedule, start, stop, and restart containers and increase or decrease the number of containers based on the provided specification automatically. Kubernetes makes it possible to implement self-healing and autoscaling based on the current demand.

Questions

At the end of each chapter, you’ll find recap questions that allow you to test your understanding. Questions might have multiple correct answers. The correct answers can be found in the Assessment section of the Appendix:

  1. Which of the following describes traditional deployments on physical servers (pick two)?
    1. Easy maintenance
    2. Underutilized hardware
    3. Low energy consumption
    4. High upfront costs
  2. Which advantages do VMs have compared to containers?
    1. They are more reliable
    2. They are more portable
    3. They are more secure
    4. They are more lightweight
  3. What describes the difference between VMs and containers (pick two)?
    1. VM images are small and container images are large
    2. VM images are large and container images are small
    3. VMs share the OS kernel and containers don’t
    4. Containers share the OS kernel and VMs don’t
  4. At which level do containers operate?
    1. The orchestrator level
    2. The hypervisor level
    3. The programming language level
    4. The OS level
  5. What is typically included in a container image (pick two)?
    1. An OS kernel
    2. A minimal set of OS libraries and packages
    3. A graphical desktop environment
    4. A packaged microservice
  6. Which advantages do containers have compared to VMs (pick multiple)?
    1. They are more secure
    2. They are more lightweight
    3. They are more portable
    4. They are faster to start
  7. Which software is needed to start and run containers?
    1. A container runtime
    2. A hypervisor
    3. Kubernetes
    4. VirtualBox
  8. Which of the following can be used to orchestrate containers?
    1. containerd
    2. CRI-O
    3. Kubernetes
    4. Serverless
  9. Which of the following is a cloud service delivery model (pick multiple)?
    1. IaaS, PaaS
    2. SaaS, FaaS
    3. DBaaS
    4. Serverless
  10. Which of the following statements about cloud-native is true?
    1. It is an architectural approach
    2. It is the same as a cloud provider
    3. It is similar to cloud-first
    4. It is software that only runs in the cloud
  11. Which of the following descriptors applies to cloud-native applications (pick two)?
    1. High degree of automation
    2. High scalability and resiliency
    3. Can only run in a private cloud
    4. Can only run in a public cloud
  12. Which of the following statements is true about monolithic applications?
    1. They are easy to update
    2. Their components communicate with each other over the network
    3. They include all the business logic and interfaces
    4. They can be scaled easily
  13. Which of the following statements is true for microservices (pick multiple)?
    1. They can only be used for the backend
    2. They work together as a part of a bigger application
    3. They can be developed by multiple teams
    4. They can be deployed independently
  14. Which of the following can be done with Kubernetes (pick multiple)?
    1. Self-healing in case of failure
    2. Autoscaling containers
    3. Spawning VMs
    4. Scheduling containers on different hosts
  15. Which project served as an inspiration for Kubernetes?
    1. OpenStack
    2. Docker
    3. Borg
    4. OpenShift

Key benefits

  • Gain an in-depth understanding of cloud-native computing and Kubernetes concepts
  • Prepare for the KCNA exam with the help of practical examples and mock exams
  • Manage your applications better with Kubernetes container orchestration

Description

The job market related to the cloud and cloud-native technologies is both growing and becoming increasingly competitive, making certifications like KCNA a great way to stand out from the crowd and learn about the latest advancements in cloud technologies. Becoming KCNA Certified doesn't just give you the practical skills needed to deploy and connect applications in Kubernetes, but it also prepares you to pass the Kubernetes and Cloud Native Associate (KCNA) exam on your first attempt. The book starts by introducing you to cloud-native computing, containers, and Kubernetes through practical examples, allowing you to test the theory out for yourself. You'll learn how to configure and provide storage for your Kubernetes-managed applications and explore the principles of modern cloud-native architecture and application delivery, giving you a well-rounded view of the subject. Once you've been through the theoretical and practical aspects of the book, you'll get the chance to test what you’ve learnt with two mock exams, with explanations of the answers, so you'll be well-prepared to appear for the KCNA exam. By the end of this Kubernetes book, you'll have everything you need to pass the KCNA exam and forge a career in Kubernetes and cloud-native computing.

Who is this book for?

This book is for DevOps engineers, system administrators, developers, fresh IT graduates, or anyone interested in cloud native architecture, applications, and technologies. Those with relevant work experience who want to upskill so they can manage their applications better with Kubernetes will also find this book helpful. Familiarity with IT fundamentals, networks, and the command line interface (CLI) is required, but no prior knowledge of Kubernetes, Docker, or cloud services is needed to get started with this book.

What you will learn

  • Get to grips with Cloud Native Computing Foundation (CNCF) and its projects
  • Build, configure, and run containers with Docker
  • Bootstrap minimal Kubernetes clusters for learning
  • Manage and encrypt container traffic with Service Mesh
  • Deploy, configure, and update applications on Kubernetes
  • Control and connect the applications that run on Kubernetes
  • Manage storage and provide observability on Kubernetes
  • Automate software development with CI/CD and GitOps

Product Details

Publication date: Feb 10, 2023
Length: 306 pages
Edition: 1st
Language: English
ISBN-13: 9781804613399


Table of Contents

Part 1: The Cloud Era
Chapter 1: From Cloud to Cloud Native and Kubernetes
Chapter 2: Overview of CNCF and Kubernetes Certifications
Part 2: Performing Container Orchestration
Chapter 3: Getting Started with Containers
Chapter 4: Exploring Container Runtimes, Interfaces, and Service Meshes
Part 3: Learning Kubernetes Fundamentals
Chapter 5: Orchestrating Containers with Kubernetes
Chapter 6: Deploying and Scaling Applications with Kubernetes
Chapter 7: Application Placement and Debugging with Kubernetes
Chapter 8: Following Kubernetes Best Practices
Part 4: Exploring Cloud Native
Chapter 9: Understanding Cloud Native Architectures
Chapter 10: Implementing Telemetry and Observability in the Cloud
Chapter 11: Automating Cloud Native Application Delivery
Part 5: KCNA Exam and Next Steps
Chapter 12: Practicing for the KCNA Exam with Mock Papers
Chapter 13: The Road Ahead
Assessments
Index
Other Books You May Enjoy

