The Kubernetes Bible

You're reading from The Kubernetes Bible: The definitive guide to deploying and managing Kubernetes across cloud and on-prem environments

Product type: Paperback
Published in: Nov 2024
Publisher: Packt
ISBN-13: 9781835464717
Length: 720 pages
Edition: 2nd Edition

Authors (2): Gineesh Madapparambath, Russ McKendrick

Table of Contents (24 chapters)

Preface
1. Kubernetes Fundamentals (Free Chapter)
2. Kubernetes Architecture – from Container Images to Running Pods
3. Installing Your First Kubernetes Cluster
4. Running Your Containers in Kubernetes
5. Using Multi-Container Pods and Design Patterns
6. Namespaces, Quotas, and Limits for Multi-Tenancy in Kubernetes
7. Configuring Your Pods Using ConfigMaps and Secrets
8. Exposing Your Pods with Services
9. Persistent Storage in Kubernetes
10. Running Production-Grade Kubernetes Workloads
11. Using Kubernetes Deployments for Stateless Workloads
12. StatefulSet – Deploying Stateful Applications
13. DaemonSet – Maintaining Pod Singletons on Nodes
14. Working with Helm Charts and Operators
15. Kubernetes Clusters on Google Kubernetes Engine
16. Launching a Kubernetes Cluster on Amazon Web Services with Amazon Elastic Kubernetes Service
17. Kubernetes Clusters on Microsoft Azure with Azure Kubernetes Service
18. Security in Kubernetes
19. Advanced Techniques for Scheduling Pods
20. Autoscaling Kubernetes Pods and Nodes
21. Advanced Kubernetes: Traffic Management, Multi-Cluster Strategies, and More
22. Other Books You May Enjoy
23. Index

Understanding monoliths and microservices

Let’s put Kubernetes and Docker to one side for the moment, and instead, let’s talk a little bit about how the internet and software development evolved together over the past 20 years. This will help you to gain a better understanding of where Kubernetes sits and the problems it solves.

Understanding the growth of the internet since the late 1990s

Since the late 1990s, the popularity of the internet has grown rapidly. Back in the 1990s, and even in the early 2000s, the internet was used by only a small fraction of the world's population. Today, billions of people use the internet for email, web browsing, video games, and more.

There are now billions of people online, using the internet for a huge variety of needs, and those needs are served by a multitude of applications running on a multitude of devices.

Additionally, the number of connected devices has increased, as each person may now have several different kinds of device connected to the internet: laptops, desktops, smartphones, TVs, tablets, and more.

Today, we use the internet to shop, work, entertain ourselves, read, and much more. It has entered almost every part of our society and has driven a profound paradigm shift over the last 20 years. All of this has made software development more important than ever.

Understanding the need for more frequent software releases

To cope with this ever-increasing number of users who are always demanding more in terms of features, the software development industry had to evolve in order to make new software releases faster and more frequent.

Indeed, back in the 1990s, you could build an application, deploy it to production, and update it once or twice a year. Today, companies must be able to update their software in production several times a day, whether to ship a new feature, integrate with a social media platform, support the latest popular smartphone, or release a patch for a security vulnerability identified the day before. Everything is far more complex today, and you must move faster than before.

We constantly need to update our software, and in the end, the survival of many companies directly depends on how often they can offer releases to their users. But how do we accelerate software development life cycles so that we can deliver new versions of our software to our users more frequently?

Companies' IT departments had to evolve, both organizationally and technically. Organizationally, they changed the way they managed projects and teams, shifting to agile methodologies. Technically, technologies such as cloud computing platforms, virtualization, and containers were widely adopted and helped align technical agility with organizational agility. All of this serves one goal: more frequent software releases. So, let's focus on this evolution next.

Understanding the organizational shift to agile methodologies

From a purely organizational point of view, agile methodologies such as Scrum, Kanban, and DevOps became the standard way to organize IT teams.

Typical IT departments that do not apply agile methodologies are often made of three different teams, each of them having a single responsibility in the development and release process life cycle.

Rest assured, even though we are currently discussing agile methodologies and the history of the internet, this book is really about Kubernetes! We just need to explain some of the problems that we have faced before introducing Kubernetes for real!

Before the adoption of agile methodologies, development and operations often worked in separate silos. This could lead to inefficiency and communication gaps. Agile methodologies helped bridge these gaps and foster collaboration. The three isolated teams are shown below.

  • The business team: They’re like the voice of the customer. Their job is to explain what features are needed in the app to meet user needs. They translate business goals into clear instructions for the developers.
  • The development team: These are the engineers who bring the app to life. They translate the business team’s feature requests into code, building the functionalities and features users will interact with. Clear communication from the business team is crucial. If the instructions aren’t well defined, it can be like a game of telephone – misunderstandings lead to delays and rework.
  • The operations team: They're the keepers of the servers. Their main focus is keeping the app running smoothly. New features can be disruptive because they require updates, which carry risk. In the past, they weren't always aware of which new features were coming because they weren't involved in the planning.

These are what we call silos, as illustrated in Figure 1.1:

Figure 1.1: Isolated teams in a typical IT department

The roles are clearly defined, people from the different teams do not work together that much, and when something goes wrong, everyone loses time finding the right information from the right person.

This kind of siloed organization has led to major issues:

  • A significantly longer development time
  • Greater risk in the deployment of a release that might not work at all in production

And that's essentially what agile methodologies and DevOps fixed: they brought people together in multidisciplinary teams instead of isolated silos.

DevOps is a collaborative culture and set of practices that aims to bridge the gap between development (Dev) and operations (Ops) teams. DevOps promotes collaboration and automation throughout the software lifecycle, from development and testing to deployment and maintenance.

An agile team consists of a product owner describing concrete features by writing them as user stories that are readable by the developers who are working in the same team as them. Developers should have visibility of the production environment and the ability to deploy on top of it, preferably using a continuous integration and continuous deployment (CI/CD) approach. Testers should also be part of agile teams in order to write tests.
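Concretely, the CI/CD approach mentioned above is usually encoded as a pipeline definition that runs tests, builds, and deployments automatically on every change. The following is a minimal, hypothetical sketch in GitHub Actions syntax; the `make test` target and the deploy script are illustrative assumptions, not from the book:

```yaml
# A minimal CI/CD pipeline sketch (GitHub Actions syntax).
# Step names, the test target, and the deploy script are illustrative.
name: ci-cd
on:
  push:
    branches: [main]
jobs:
  build-test-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run tests
        run: make test                          # tests written within the team
      - name: Build image
        run: docker build -t my-app:${{ github.sha }} .
      - name: Deploy
        run: ./scripts/deploy.sh                # hypothetical deploy script
```

With a pipeline like this, every push to the main branch is tested, built, and deployed without manual handoffs between teams, which is exactly the visibility and autonomy described above.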

With the collaborative approach, the teams will get better and clearer visibility of the full picture, as illustrated in the following diagram.

Figure 1.2: Team collaboration breaks silos

Simply understand that, by adopting agile methodologies and DevOps, these silos were broken and multidisciplinary teams capable of formalizing a need, implementing it, testing it, releasing it, and maintaining it in the production environment were created. Table 1.1 presents a shift from traditional development to agile and DevOps methodology.

| Feature | Traditional Development | Agile & DevOps |
| --- | --- | --- |
| Team structure | Siloed departments (development, operations) | Cross-functional, multi-disciplinary teams |
| Work style | Isolated workflows, limited communication | Collaborative, iterative development cycles |
| Ownership | Development hands off to operations for deployment and maintenance | "You build it, you run it" – teams own the entire lifecycle |
| Focus | Features and functionality | Business value, continuous improvement |
| Release cycle | Long release cycles, infrequent deployments | Short sprints, frequent releases with feedback loops |
| Testing | Separate testing phase after development | Integrated testing throughout the development cycle |
| Infrastructure | Static, manually managed infrastructure | Automated infrastructure provisioning and management (DevOps) |

Table 1.1: DevOps vs traditional development – a shift in collaboration

So, we’ve covered the organizational transition brought about by the adoption of agile methodologies. Now, let’s discuss the technical evolution that we’ve gone through over the past several years.

Understanding the shift from on-premises to the cloud

Having agile teams is very nice, but agility must also be applied to how software is built and hosted.

With the aim to always achieve faster and more recurrent releases, agile software development teams had to revise two important aspects of software development and release:

  • Hosting
  • Software architecture

Today, apps serve not just a few hundred users but potentially millions of concurrent users. More users on the internet also means more computing power is needed to handle them. Indeed, hosting an application became a very big challenge.

In the early days of web hosting, businesses primarily relied on two main approaches to housing their applications: one of these approaches is on-premises hosting. This method involved physically owning and managing the servers that ran their applications. There are two main ways to achieve on-premises hosting:

  1. Dedicated servers: This involved leasing dedicated server hardware from an established data center provider. The hosting provider managed the physical infrastructure (power, cooling, security), but the responsibility for server configuration, software installation, and ongoing maintenance fell to the business. This offered greater control and customization than shared hosting but still required significant in-house technical expertise.
  2. Building your own data center: This option involved a massive investment by the company to build and maintain its own physical data center facility. This included purchasing server hardware, networking equipment, and storage solutions, and implementing robust power, cooling, and security measures. While offering the highest level of control and security, this approach was very expensive and resource-intensive and was typically only undertaken by large corporations with significant IT resources.

Also note that on-premises hosting also encompasses managing the operating system, security patches, backups, and disaster recovery plans for the servers. Companies often needed a dedicated IT staff to manage and maintain their on-premises infrastructure, adding to the overall cost.

When your user base grows, you need more powerful machines to handle the load. On-premises, the solution is to purchase a more powerful server and reinstall your app on it, or to order and rack new hardware if you manage your own data center. Neither approach is very flexible, and many companies still using on-premises solutions today run into exactly this limitation.

The game-changer was the adoption of the other approach: the public cloud, the opposite of on-premises. The idea behind cloud computing is that big companies such as Amazon, Google, and Microsoft, which own many data centers, built virtualization on top of their massive infrastructure and made the creation and management of virtual machines accessible through APIs. In other words, you can get virtual machines with just a few clicks or a few commands.
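To make the "few commands" point concrete, here is an illustrative sketch using Google Cloud's `gcloud` CLI. The instance name, zone, and machine type are placeholder values, and an authenticated account with a configured project is assumed, so this is not runnable as-is:

```shell
# Create a virtual machine through the cloud provider's API.
# Names and zones below are placeholders.
gcloud compute instances create demo-vm \
    --zone=us-central1-a \
    --machine-type=e2-small

# List running instances to confirm provisioning.
gcloud compute instances list

# Destroy the instance when it is no longer needed; machines
# become disposable resources rather than physical assets.
gcloud compute instances delete demo-vm --zone=us-central1-a
```

Compare this with ordering, racking, and cabling a physical server: the whole lifecycle of a machine collapses into a few API calls.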

The following table provides high-level information about why cloud computing is good for organizations.

| Feature | On-Premises | Cloud |
| --- | --- | --- |
| Scalability | Limited – requires purchasing new hardware when scaling up | Highly scalable – easy to add or remove resources on demand |
| Flexibility | Inflexible – changes require physical hardware adjustments | Highly flexible – resources can be provisioned and de-provisioned quickly |
| Cost | High upfront cost for hardware, software licenses, and IT staff | Low upfront cost – pay-as-you-go model for resources used |
| Maintenance | Requires dedicated IT staff for maintenance and updates | Minimal maintenance required – cloud provider manages infrastructure |
| Security | High level of control over security, but requires significant expertise | Robust security measures implemented by cloud providers |
| Downtime | Recovery from hardware failures can be time-consuming | Cloud providers offer high availability and disaster recovery features |
| Location | Limited to the physical location of the data center | Access from anywhere with an internet connection |

Table 1.2: Importance of cloud computing for organizations

We will learn how cloud computing technology has helped organizations scale their IT infrastructure in the next section.

Understanding why the cloud is well suited for scalability

Today, virtually anyone can get hundreds or thousands of servers, in just a few clicks, in the form of virtual machines or instances created on physical infrastructure maintained by cloud providers such as Amazon Web Services, Google Cloud Platform, and Microsoft Azure. A lot of companies decided to migrate their workloads from on-premises to a cloud provider, and their adoption has been massive over the last few years.

Thanks to that, computing power is now one of the easiest things to obtain.

Cloud computing platforms are now a standard hosting option in an agile team's toolbox. The main reason for this is that the cloud is extremely well suited to modern development.

Virtual machine configurations, CPUs, OSes, network rules, and more are publicly displayed and fully configurable, so there are no secrets for your team in terms of what the production environment is made of. Because of the programmable nature of cloud providers, it is very easy to replicate a production environment in a development or testing environment, providing more flexibility to teams, and helping them face their challenges when developing software. That’s a useful advantage for an agile development team built around the DevOps philosophy that needs to manage the development, release, and maintenance of applications in production.

Cloud providers offer many benefits, as follows:

  • Elasticity and scalability
  • Helping to break down silos, fitting well with agile methodologies and DevOps
  • Low costs and flexible billing models
  • No physical servers to manage
  • Virtual machines that can be destroyed and recreated at will
  • More flexibility than renting a bare-metal machine monthly

Due to these benefits, the cloud is a wonderful asset in the arsenal of an agile development team. Essentially, you can build and replicate a production environment over and over again without the hassle of managing the physical machine by yourself. The cloud enables you to scale your app based on the number of users using it or the computing resources they are consuming. You’ll make your app highly available and fault tolerant. The result is a better experience for your end users.

IMPORTANT NOTE

Please note that Kubernetes can run both in the cloud and on-premises. Kubernetes is very versatile; you can even run it on a Raspberry Pi. Kubernetes and the public cloud are a good match, but you are not required to run it in the cloud.

Now that we have explained the changes the cloud produced, let’s move on to software architecture because, over the years, a few things have also changed there.

Exploring the monolithic architecture

In the past, applications were mostly composed of monoliths. A typical monolith application consists of a simple process, a single binary, or a single package, as shown in Figure 1.3.

This single component is responsible for implementing the entire business logic of the software. Monoliths are a good choice if you want to develop simple applications that won't necessarily be updated frequently in production. Why? Because monoliths have one major drawback: if your monolith becomes unstable or crashes for some reason, your entire application becomes unavailable:

Figure 1.3: A monolith application consists of one big component that contains all your software

The monolithic architecture can save you a lot of time during development, and that's perhaps the only benefit you'll find in choosing it. It also has many disadvantages. Here are a few of them:

  • A failed deployment to production can break your whole application.
  • Scaling activities become difficult to achieve; if scaling fails, the whole application might become unavailable.
  • A failure of any kind on a monolith can lead to a complete outage of your app.

In the 2010s, these drawbacks started to cause real problems. With the increase in the frequency of deployments, it became necessary to think of a new architecture that would be capable of supporting frequent deployments and shorter update cycles, while reducing the risk or general unavailability of the application. This is why the microservices architecture was designed.

Exploring the microservices architecture

The microservices architecture consists of developing your software application as a suite of independent micro-applications. Each of these applications, which is called a microservice, has its own versioning, life cycle, environment, and dependencies. Additionally, it can have its own deployment life cycle. Each of your microservices must only be responsible for a limited number of business rules, and all your microservices, when used together, make up the application. Think of a microservice as real full-featured software on its own, with its own life cycle and versioning process.

Since each microservice holds only a subset of the features of the entire application, microservices must be accessible in order to expose their functions. You need to get data out of a microservice, and you might also want to push data into it. You can make your microservices accessible through widely supported protocols such as HTTP or AMQP, and they need to be able to communicate with each other.

That's why microservices are generally built as web services that expose their functionality through well-defined APIs. While HTTP (or HTTPS) REST APIs are a popular choice due to their simplicity and widespread adoption, other protocols, such as GraphQL, AMQP, and gRPC, are gaining traction and are commonly used.

The key requirement is that a microservice provides a well-documented and discoverable API endpoint, regardless of the chosen protocol. This allows other microservices to seamlessly interact and exchange data.

This is something that greatly differs from the monolithic architecture:

Figure 1.4: A microservice architecture where different microservices communicate via the HTTP protocol
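As a minimal sketch of such an HTTP API (the inventory service, its SKU data, and the endpoint path are invented for illustration), the following uses only the Python standard library: one tiny service exposes a REST endpoint, and a client function shows how another microservice would call it:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class InventoryHandler(BaseHTTPRequestHandler):
    """A tiny 'inventory' microservice exposing one REST endpoint."""

    STOCK = {"sku-123": 42}  # in-memory data owned by this service only

    def do_GET(self):
        sku = self.path.rstrip("/").split("/")[-1]
        if sku in self.STOCK:
            body = json.dumps({"sku": sku, "quantity": self.STOCK[sku]}).encode()
            self.send_response(200)
        else:
            body = json.dumps({"error": "unknown sku"}).encode()
            self.send_response(404)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

def start_service(port=0):
    """Start the service in a background thread; port 0 picks a free port."""
    server = HTTPServer(("127.0.0.1", port), InventoryHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

def get_quantity(base_url, sku):
    """Another microservice would consume the API like this, over plain HTTP."""
    with urllib.request.urlopen(f"{base_url}/inventory/{sku}") as resp:
        return json.loads(resp.read())["quantity"]

if __name__ == "__main__":
    server = start_service()
    url = f"http://127.0.0.1:{server.server_address[1]}"
    print(get_quantity(url, "sku-123"))  # -> 42
    server.shutdown()
```

The caller only needs to know the URL and the JSON shape of the response, not the service's language, database, or internals; that contract is what makes the well-defined API the unit of integration.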

Another key aspect of the microservice architecture is that microservices need to be decoupled: if a microservice becomes unavailable or unstable, it must not affect the other microservices or the entire application’s stability. You must be able to provision, scale, start, update, or stop each microservice independently without affecting anything else. If your microservices need to work with a database engine, bear in mind that even the database must be decoupled. Each microservice should have its own database and so on. So, if the database of microservice A crashes, it won’t affect microservice B:

Figure 1.5: A microservice architecture where different microservices communicate with each other and with a dedicated database server; this way, the microservices are isolated and have no common dependencies

The key rule is to decouple as much as possible so that your microservices are fully independent. Because they are meant to be independent, microservices can also have completely different technical environments and be implemented in different languages. You can have one microservice implemented in Go, another one in Java, and another one in PHP, and all together they form one application. In the context of a microservice architecture, this is not a problem. Because HTTP is a standard, they will be able to communicate with each other even if their underlying technologies are different.
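The decoupling rule above can be sketched in a few lines: a caller treats a dependency's failure as a degraded response rather than a fatal error, so one unavailable microservice does not take the whole application down. The recommendation service and its URL here are hypothetical:

```python
import json
import urllib.error
import urllib.request

def fetch_recommendations(base_url, user_id, timeout=0.5):
    """Call a (hypothetical) recommendation microservice over HTTP.

    If the service is down or slow, degrade gracefully instead of
    failing the whole request, so the caller stays available.
    """
    try:
        url = f"{base_url}/recommendations/{user_id}"
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return json.loads(resp.read())
    except (urllib.error.URLError, TimeoutError, json.JSONDecodeError):
        # Fallback: an empty list lets the page render without recommendations.
        return []
```

A page that calls this function still loads when the recommendation service is unreachable; it just shows no recommendations, which is the isolation the architecture is after.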

Microservices must be decoupled from other microservices, but they must also be decoupled from the operating system running them. Microservices should not operate at the host system level but at the upper level. You should be able to provision them, at will, on different machines without needing to rely on a strong dependency on the host system; that’s why microservice architectures and containers are a good combination.

If you need to release a new feature in production, you simply deploy the microservices that are impacted by the new feature version. The others can remain the same.

As you can imagine, the microservice architecture has tremendous advantages in the context of modern application development:

  • It is easier to enforce recurring production deliveries with minimal impact on the stability of the whole application.
  • You can upgrade a single microservice at a time, rather than the whole application.
  • Scaling activities are smoother since you might only need to scale specific services.

However, on the other hand, the microservice architecture has a couple of disadvantages too:

  • The architecture requires more planning and is harder to develop.
  • Managing each microservice's dependencies can become problematic.

Microservice applications are considered hard to develop. This approach might be hard to understand, especially for junior developers. Dependency management can also become complex since all microservices can potentially have different dependencies.

Choosing between monolithic and microservices architectures

Building a successful software application requires careful planning, and one of the key decisions you’ll face is which architecture to use. Two main approaches dominate the scene: monoliths and microservices:

  • Monoliths: Imagine a compact, all-in-one system. That’s the essence of a monolith. Everything exists in a single codebase, making development and initial deployment simple for small projects or teams with limited resources. Additionally, updates tend to be quick for monoliths because there’s only one system to manage.
  • Microservices: Think of a complex application broken down into independent, modular components. Each service can be built, scaled, and deployed separately. This approach shines with large, feature-rich projects and teams with diverse skillsets. Microservices provide flexibility and potentially fast development cycles. However, they also introduce additional complexity in troubleshooting and security management.

Ultimately, the choice between a monolith and microservices hinges on your specific needs. Consider your project’s size, team structure, and desired level of flexibility. Don’t be swayed by trends – pick the architecture that empowers your team to develop and manage your application efficiently.

Kubernetes provides flexibility. It caters to both monoliths and microservices, allowing you to choose the architecture that best suits your project's needs.

In the next section, we will learn about containers and how they help microservice software architectures.
