Bootstrapping Service Mesh Implementations with Istio: Build reliable, scalable, and secure microservices on Kubernetes with Service Mesh


Introducing Service Meshes

Service Mesh is an advanced and complex topic. If you have experience using the cloud and Kubernetes, and developing and building applications using microservices architecture, then certain benefits of Service Mesh will be obvious to you. In this chapter, we will familiarize ourselves with and refresh some key concepts without going into too much detail. We will look at the problems you experience when deploying and operating applications built using microservices architecture and deployed in containers in the cloud, or even in traditional data centers. Subsequent chapters will focus on Istio, so it is good to take some time to read through this chapter to prepare yourself for the learning ahead.

In this chapter, we’re going to cover the following main topics:

  • Cloud computing and its advantages
  • Microservices architecture
  • Kubernetes and how it influences design thinking
  • An introduction to Service Mesh

The concepts in this chapter will help you build an understanding of Service Mesh and why it is needed. The chapter will also provide you with guidance on identifying signals and symptoms in your IT environment that indicate you need to implement Service Mesh. If you don’t have hands-on experience with large-scale deployment architecture using Kubernetes, the cloud, and microservices architecture, then this chapter will familiarize you with these concepts and give you a good start toward understanding the more complex subjects in subsequent chapters. Even if you are already familiar with these concepts, it is still a good idea to read this chapter to refresh your memory.

Revisiting cloud computing

In this section, we will look at what cloud computing is in simple terms, what benefits it provides, and how it influences design thinking as well as software development processes.

Cloud computing is utility-style computing, with a business model similar to that of businesses selling utilities such as LPG and electricity to our homes. You don’t need to manage the production, distribution, or operation of electricity. Instead, you focus on consuming it effectively and efficiently by just plugging your device into the socket on the wall, using the device, and paying for what you consume. Although this example is very simple, it is still very relevant as an analogy. Cloud computing providers provide access to compute, storage, databases, and a plethora of other services, including Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS), over the internet.

Figure 1.1 – Cloud computing options


Figure 1.1 illustrates the cloud computing options most commonly used:

  • IaaS provides infrastructure such as networking to connect your application with other systems in your organization, as well as everything else you would like to connect to. IaaS gives you access to computational infrastructure to run your application, equivalent to Virtual Machines (VMs) or bare-metal servers in traditional data centers. It also provides storage to host data for your applications to run and operate. Some of the most popular IaaS offerings are Amazon EC2, Azure Virtual Machines, Google Compute Engine, Alibaba E-HPC (which is very popular in China and the Greater China region), and VMware vCloud Air.
  • PaaS is another kind of offering that provides you with the flexibility to focus on building applications rather than worrying about how your application will be deployed, monitored, and so on. PaaS includes all that you get from IaaS but also middleware to deploy your applications, development tools to help you build applications, databases to store data, and so on. PaaS is especially beneficial for companies adopting microservices architecture. When adopting microservices architecture, you also need to build an underlying infrastructure to support microservices. The ecosystem required to support microservices architecture is expensive and complex to build. Making use of PaaS to deploy microservices makes microservices architecture adoption much faster and easier. There are many examples of popular PaaS services from cloud providers. However, we will be using Amazon Elastic Kubernetes Service (EKS) as a PaaS to deploy the sample application we will explore hands-on with Istio.
  • SaaS is another kind of offering that provides a complete software solution that you can use as a service. It is easy to get confused between PaaS and SaaS services, so to make things simple, you can think of SaaS as services that you can consume without needing to write or deploy any code. For example, it’s highly likely that you are using an email service as SaaS with the likes of Gmail. Moreover, many organizations use productivity software that is SaaS, and popular examples are services such as Microsoft Office 365. Other examples include CRM systems such as Salesforce and enterprise resource planning (ERP) systems. Salesforce also provides a PaaS offering where Salesforce apps can be built and deployed. Salesforce Essentials for small businesses, Sales Cloud, Marketing Cloud, and Service Cloud are SaaS offerings, whereas Salesforce Platform, which is a low-code service for users to build Salesforce applications, is a PaaS offering. Other popular examples of SaaS are Google Maps, Google Analytics, Zoom, and Twilio.

Cloud services providers also provide different kinds of cloud offerings, with varying business models, access methods, and target audiences. Out of many such offerings, the most common are a public cloud, a private cloud, a hybrid cloud, and a community cloud:

  • A public cloud is the one you most probably are familiar with. This offering is available over the internet and is accessible to anyone and everyone with the ability to subscribe, using a credit card or similar payment mechanism.
  • A private cloud is a cloud offering that is accessible to a restricted set of users over the internet or a restricted private network. A private cloud can be an organization providing IaaS or PaaS to its own IT users; there are also service providers who provide private clouds to organizations. A private cloud delivers a high level of security and is widely used by organizations that handle highly sensitive data.
  • A hybrid cloud refers to an environment where public and private clouds are collectively used. Also, a hybrid cloud is commonly used when more than one cloud offering is in use – for example, an organization using both AWS and Azure with applications deployed and data flowing across the two. A hybrid cloud is a good option when there are data and applications that are required to be hosted in a private cloud due to security reasons. Conversely, there may be other applications that don’t need to reside in the private cloud and can benefit from the scalability and elasticity features of a public cloud. Rather than restricting yourself to a public or private cloud, or one cloud provider or another, you should reap the benefit of the strengths of various cloud providers and create an IT landscape that is secure, resilient, elastic, and cost-effective.
  • A community cloud is a cloud offering available only to a defined set of organizations and users. A good example is AWS GovCloud (US), a community cloud for the US government; this kind of cloud restricts who can use it – AWS GovCloud can only be used by US government departments and agencies.

Now that you understand the crux of cloud computing, let’s look at some of its key advantages in the following section.

Advantages of cloud computing

Cloud computing enables organizations to easily access all kinds of technologies without going through high upfront investment in expensive hardware and software procurement. By utilizing cloud computing, organizations achieve agility, as they can innovate faster by having access to high-end compute power and infrastructure (such as a load balancer, compute instances, and so on) and also to software services (such as machine learning, analytics, messaging infrastructure, AI, databases, and so on) that can be integrated as building blocks in a plug-and-play style to build software applications.

For example, if you’re building a software application, then most probably it will need the following:

  • Load balancers
  • Databases
  • Compute servers to run and host the application
  • Storage to host the application binaries, logs, and so on
  • A messaging system for asynchronous communication

You would need to procure, set up, and configure this infrastructure in an on-premises data center. This activity, though important for launching and operationalizing your applications in production, does not differentiate you from your competition. High availability and resiliency of your software application infrastructure are simply requirements for sustaining and surviving in the digital world. To compete with and beat your competition, you need to focus on customer experience and on constantly delivering benefits to your consumers.

When deploying on-premises, you need to factor in all upfront costs of procuring infrastructure, which include the following:

  • Network devices and bandwidth
  • Load balancers
  • A firewall
  • Servers and storage
  • Rack space
  • Any new software required to run the application

All the preceding costs will incur Capital Expenditures (CapEx) for the project. You will also need to factor in the setup cost, which includes the following:

  • Network, compute servers, and cabling
  • Virtualization, operating systems, and base configuration
  • Setup of middleware such as application servers and web servers (if using containerization, then the setup of container platforms, databases, and messaging)
  • Logging, auditing, alarming, and monitoring components

All the preceding will incur CapEx for the project but may fall under the organization’s Operating Expenses (OpEx).

On top of these costs, the most important factor to consider is the time and human resources required to procure, set up, and make the infrastructure ready for use. This significantly impacts your ability to launch features and services on the market (also called agility and time to market).

When using the cloud, these resources can be acquired on a pay-as-you-go model. Where you need compute and storage, they can be procured in the form of IaaS, and where you need middleware, it can be procured in the form of PaaS. You will also realize that some of the functionality you need to build might already be available as SaaS. This expedites your software delivery and time to market. On the cost front, some of the costs will still incur CapEx for your project, but your organization can claim them as OpEx, which has certain benefits from a tax point of view. Whereas it previously took months of preparation to set up everything you needed to deploy your application, it can now be done in days or weeks.

Cloud computing also changes the way you design, develop, and operate IT systems. In Chapter 4, we will look at cloud-native architecture and how it differs from traditional architecture.

Cloud computing makes it easier to build and ship software applications with low upfront investments. The following section describes microservices architecture and how it is used to build and deliver highly scalable and resilient applications.

Understanding microservices architecture

Before we discuss microservices architecture, let’s first discuss monolithic architecture. It’s highly likely that you will have encountered one, or probably even participated in building one. To understand it better, let’s take a scenario and see how it has traditionally been solved using monolithic architecture.

Let’s imagine a book publisher who wants to start an online bookstore. The online bookstore needs to provide the following functionalities to its readers:

  • Readers should be able to browse all the books available for purchase.
  • Readers should be able to select the books they want to order and save them to a shopping cart. They should also be able to manage their shopping cart.
  • Readers should be able to then authorize payment for the book order using a credit card.
  • Readers should have the books delivered to their shipping address once payment is complete.
  • Readers should be able to sign up, store details including their shipping address, and bookmark favorite books.
  • Readers should be able to sign in, check what books they have purchased, download any purchased electronic copies, and update shipping details and any other account information.

There will be many more requirements for an online bookstore, but for the purpose of understanding monolithic architecture, let’s try to keep it simple by limiting the scope to these requirements.

It is worth mentioning Conway’s law here, which observes that the design of monolithic systems often reflects the communication structure of an organization:

Any organization that designs a system (defined broadly) will produce a design whose structure is a copy of the organization’s communication structure.

– Melvin E. Conway

There are various ways to design this system; we can follow traditional design patterns such as model-view-controller (MVC), but to do a fair comparison with microservices architecture, let’s make use of hexagonal architecture. We will also be using hexagonal architecture in microservices architecture.

With a logical view of hexagonal architecture, business logic sits in the center. Then, there are adaptors to handle requests coming from outside as well as to send requests outside, which are called inbound and outbound adaptors respectively. The business logic has one or more ports, which are basically a defined set of operations that define how adaptors can interact with business logic as well as how business logic can invoke external systems. The ports through which external systems interact with business logic are called inbound ports, whereas the ports through which business logic interacts with external systems are called outbound ports.

We can summarize the execution flow in a hexagonal architecture in the following two points:

  • User interface and REST API adaptors for web and mobile invoke business logic via inbound adaptors
  • Business logic invokes external-facing adaptors such as databases and external systems via outbound adaptors

One last but very important point to make about hexagonal architecture is that business logic is made up of modules that are a collection of domain objects. To know more about domain-driven design definitions and patterns, you can read the reference guide written by Eric Evans at https://domainlanguage.com/wp-content/uploads/2016/05/DDD_Reference_2015-03.pdf.

Returning to our online bookstore application, the following will be the core modules:

  • Order management: Managing customer orders, shopping carts, and updates on order progress
  • Customer management: Managing customer accounts, including sign-up, sign-in, and subscriptions
  • Payment management: Managing payments
  • Product catalog: Managing all the products available
  • Shipping: Managing the delivery of orders
  • Inventory: Managing up-to-date information on inventory levels

With these in mind, let’s draw the hexagonal architecture for this system.

Figure 1.2 – The online book store application monolith


Though the architecture follows hexagonal architecture and some principles of domain-driven design, it is still packaged as one deployable or executable unit, depending on the underlying programming language you are using to write it. For example, if you are using Java, the deployable artifact will be a WAR file, which will then be deployed on an application server.

The monolithic application looks awesome when it’s greenfield but nightmarish when it becomes brownfield, in which case it would need to be updated or extended to incorporate new features and changes.

Monolithic architectures are difficult to understand, evolve, and enhance because the code base is big and, with time, gets humongous in size and complexity. This means it takes a long time to make code changes and to ship the code to production. Code changes are expensive and require thorough regression testing. The application is difficult and expensive to scale, and there is no option to allocate dedicated computing resources to individual components of the application. All resources are allocated holistically to the application and are consumed by all parts of it, irrespective of their importance in its execution.

The other issue is lock-in to one technology for the whole code base. What this basically means is that you need to constrain yourself to one or a few technologies to support the whole code base. Technology lock-in is detrimental to efficient outcomes, including performance and reliability, as well as to the amount of effort required to achieve an outcome. You should be using the technologies that are the best fit for solving a problem. For example, you can use TypeScript for the UI, Node.js for the API, Golang for modules needing concurrency or maybe for writing the core modules, and so on. With a monolithic architecture, you are stuck with the technologies you used in the past, which might not be the right fit for solving the current problem.

So, how does microservices architecture solve this problem? Microservices is an overloaded term, and there are many definitions of it; in other words, there is no single definition of microservices. A few well-known personalities have contributed their own definitions of microservices architecture:

The term Microservices architecture has sprung up over the last few years to describe a particular way of designing software applications as suites of independently deployable services. While there is no precise definition of this architectural style, there are certain common characteristics around organization around business capability, automated deployment, intelligence in the endpoints, and decentralized control of languages and data.

– Martin Fowler and James Lewis

The definition was published at https://martinfowler.com/articles/microservices.html and is dated March 25, 2014, so you can ignore “sprung up over the last few years” in the description, as microservices architecture has since become mainstream and pervasive.

Another definition of microservices is from Adrian Cockcroft: “Loosely coupled service-oriented architecture with bounded contexts.”

In microservices architecture, the term micro is a topic of intense debate, and the questions often asked are, “How micro should microservices be?” and “How should I decompose my application?”. There is no easy answer to this; you can follow various decomposition strategies, using domain-driven design and decomposing applications into services based on business capability, functionality, the responsibility or concern of each service or module, scalability, bounded context, and blast radius. There are numerous articles and books written on the topic of microservices and decomposition strategies, so I am sure you can find enough to read about strategies for sizing your microservices.

Let’s get back to the online bookstore application and redesign it using microservices architecture. The following diagram represents the online bookstore application built using microservices architecture principles. The individual services still follow hexagonal architecture; for brevity, we have not represented the inbound and outbound ports and adaptors. You can assume that the ports and adaptors are within each hexagon itself.

Figure 1.3 – The online bookstore microservices architecture


Microservices architecture provides several benefits over monolithic architecture. Having independent modules segregated based on functionality and decoupled from each other unlocks the monolithic shackles that drag down the software development process. Microservices can be built faster at a comparatively lower cost than a monolith, are well suited to continuous deployment processes, and, thus, have a faster time to production. With microservices architecture, developers can release code to production as frequently as they want. The smaller code base of a microservice is easy to understand, and developers only need to understand their microservice and not the whole application. Also, multiple developers can work on different microservices within the application without any risk of code being overwritten or of impacting each other’s work. Your application, now made up of microservices, can leverage polyglot programming to deliver performance efficiency, with less effort for more outcomes, and use best-of-breed technologies to solve each problem.

Microservices, as self-contained, independently deployable units, provide you with fault isolation and a reduced blast radius. For example, assume that one of the microservices starts experiencing exceptions, performance degradation, memory leakage, and so on. In this case, because the service is deployed as a self-contained unit with its own resource allocation, the problem will not affect other microservices; they will not be impacted by its overconsumption of memory, CPU, storage, network, or I/O.

Microservices are also easier to deploy because you can use varying deployment options, depending on each microservice’s requirements and what is available to you – for example, you can have one set of microservices deployed on a serverless platform and, at the same time, another set on a container platform, along with another set on virtual machines. Unlike with monolithic applications, you are not bound to one deployment option.

While microservices provide numerous benefits, they also come with added complexity, because you now have many more units to deploy and manage. Not following correct decomposition strategies can also create micro-monoliths that are nightmarish to manage and operate. Another important aspect is communication between microservices. As there will be lots of microservices that need to talk to each other, it is very important that communication between microservices is swift, performant, reliable, resilient, and secure. In the Getting to know Service Mesh section, we will dig deeper into what we mean by these terms.

For now, with a good understanding of microservices architecture, it’s time to look at Kubernetes, which is also the de facto platform for deploying microservices.

Understanding Kubernetes

When designing and deploying microservices, it is easy to manage a small number of microservices. As the number of microservices grows, so does the complexity of managing them. The following list showcases some of the complexities caused by the adoption of microservices architecture:

  • Microservices will have specific deployment requirements in terms of the kind of base operating systems, middleware, database, and compute/memory/storage. Also, the number of microservices will be large, which, in turn, means that you will need to provide resources to every microservice. Moreover, to keep the cost down, you will need to be efficient with the allocation of resources and their utilization.
  • Every microservice will have a different deployment frequency. For example, any updates to payment microservices might be on a monthly basis, whereas updates to frontend UI microservices might be on a weekly or daily basis.
  • Microservices need to communicate with each other, for which they need to know about each other’s existence, and they should have application networking in place to communicate efficiently.
  • Developers who are building microservices need to have consistent environments for all stages of the development life cycle so that there are no unknowns, or near-unknowns, about the behavior of microservices when deployed in a production environment.
  • There should be a continuous deployment process in place to build and deploy microservices. If you don’t have an automated continuous deployment process, then you will need an army of people to support microservices deployments.
  • With so many microservices deployed, it is inevitable that there will be failures, but you cannot burden the microservices developer to solve those problems. Cross-cutting concerns such as resiliency, deployment orchestration, and application networking should be easy to implement and should not distract the focus of microservice developers. These cross-cutting concerns should be facilitated by the underlying platform and should not be incorporated into the microservices code.

Kubernetes, also abbreviated as K8s, is an open source system that originated at Google. Kubernetes provides automated deployment, scaling, and management of containerized applications. It provides scalability without you needing to hire an army of DevOps engineers. It suits all levels of complexity – that is, it works at a small scale as well as at an enterprise scale. Google, as well as many other organizations, runs a huge number of containers on the Kubernetes platform.

Important note

A container is a self-contained deployment unit that contains all code and associated dependencies – including operating system, system, and application libraries – packaged together. Containers are instantiated from images, which are lightweight executable packages. A Pod is the deployable unit in Kubernetes and is comprised of one or more containers, with each container in the Pod sharing resources such as storage and network. A Pod’s contents are always co-located and co-scheduled, and run in a shared context.
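To make this concrete, here is a minimal sketch of a Pod manifest with two containers sharing the Pod’s network namespace and a common volume (all names and images here are hypothetical):

```yaml
# A minimal two-container Pod (hypothetical names and images).
# Both containers share the Pod's network namespace, so they can
# talk over localhost, and both mount the same emptyDir volume.
apiVersion: v1
kind: Pod
metadata:
  name: catalog
  labels:
    app: catalog
spec:
  containers:
    - name: catalog-app              # main application container
      image: example.io/catalog:1.0  # hypothetical image
      ports:
        - containerPort: 8080
      volumeMounts:
        - name: shared-data
          mountPath: /data
    - name: log-forwarder            # helper container co-scheduled in the same Pod
      image: example.io/log-forwarder:1.0
      volumeMounts:
        - name: shared-data
          mountPath: /data
          readOnly: true
  volumes:
    - name: shared-data
      emptyDir: {}                   # ephemeral storage shared by both containers
```

Because the two containers are co-located and co-scheduled, the helper can reach the application on localhost:8080 and read the files it writes to the shared /data volume.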

The following are some of the benefits of the Kubernetes platform:

  • Kubernetes provides automated and reliable deployments by taking care of rollouts and rollbacks. During deployments, Kubernetes progressively rolls out changes while monitoring microservices’ health to ensure that there is no disruption to the processing of a request. If there is a risk to the overall health of microservices, then Kubernetes will roll back the changes to bring the microservices back to a healthy state.
  • If you are using the cloud, then different cloud providers have different storage types. When running in data centers, you will be using various network storage types. When using Kubernetes, you don’t need to worry about underlying storage, as it takes care of it. It abstracts the complexity of underlying storage types and provides an API-driven mechanism for developers to allocate storage to the containers.
  • Kubernetes takes care of DNS and IP allocation for Pods; it also provides a mechanism for microservices to discover each other using simple DNS conventions. When more than one copy of a service is running, Kubernetes also takes care of load balancing between them.
  • Kubernetes automatically takes care of the scalability requirements of Pods. Depending on resource utilization, Pods are automatically scaled up, which means that the number of running Pods is increased, or scaled down, which means that the number of running Pods is reduced. Developers don’t have to worry about how to implement scalability; instead, they just need to specify the target average utilization of CPU, memory, or other custom metrics, along with scaling limits.
  • In a distributed system, failures are bound to happen. Similarly, in microservices deployments, Pods and containers will become unhealthy and unresponsive. Such scenarios are handled by Kubernetes by restarting the failed containers, rescheduling containers to other worker nodes if underlying nodes are having issues, and replacing containers that have become unhealthy.
  • As discussed earlier, one of the challenges of microservices architecture is that it is resource-hungry, and resources should be allocated efficiently and effectively. Kubernetes takes on that responsibility by maximizing the utilization of resources without impairing availability or sacrificing the performance of containers.

Figure 1.4 – The online bookstore microservice deployed on Kubernetes


The preceding diagram is a visualization of the online bookstore application built using microservices architecture and deployed on Kubernetes.
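Many of the benefits listed above are expressed declaratively. As a rough sketch (all names, images, and numbers here are hypothetical), a Deployment can declare a progressive rollout strategy, health probes, and resource requests, while a HorizontalPodAutoscaler scales it on average CPU utilization:

```yaml
# Hypothetical Deployment: rolling updates, health checks, and
# resource requests/limits are declared rather than scripted.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: frontend
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # roll out changes progressively
      maxSurge: 1
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
        - name: frontend
          image: example.io/frontend:2.3  # hypothetical image
          readinessProbe:                 # gate traffic on health
            httpGet:
              path: /healthz
              port: 8080
          resources:
            requests:                     # used for scheduling and bin-packing
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 500m
              memory: 256Mi
---
# Autoscale between 3 and 10 replicas on average CPU utilization.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: frontend
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: frontend
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

If a rollout produces unhealthy Pods, Kubernetes stops sending them traffic and the Deployment can be rolled back to the previous revision; the autoscaler adjusts replica counts without any application code changes.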

Getting to know Service Mesh

In the previous sections, we read about monolithic architecture and its advantages and disadvantages. We also read about how microservices solve the problem of scalability and provide the flexibility to rapidly deploy and push software changes to production. The cloud makes it easier for an organization to focus on innovation without worrying about expensive, lengthy hardware procurement and high upfront CapEx costs. The cloud also facilitates microservices architecture, not only by providing on-demand infrastructure but also by providing various ready-to-use platforms and building blocks, such as PaaS and SaaS. When organizations are building applications, they don’t need to reinvent the wheel every time; instead, they can leverage ready-to-use databases, various platforms including Kubernetes, and Middleware as a Service (MWaaS).

In addition to the cloud, microservice developers also leverage containers, which makes microservices development much easier by providing a consistent environment and compartmentalization to help achieve modular and self-contained architecture of microservices. On top of containers, the developer should also use a container orchestration platform such as Kubernetes, which simplifies the management of containers and takes care of concerns such as networking, resource allocation, scalability, reliability, and resilience. Kubernetes also helps to optimize the infrastructure cost by providing better utilization of underlying hardware. When you combine the cloud, Kubernetes, and microservices architecture, you have all the ingredients you need to deliver potent software applications that not only do the job you want them to do but also do it cost-effectively.

So, the question on your mind must be, “Why do I need a Service Mesh?” or “Why do I need a Service Mesh if I am using the cloud, Kubernetes, and microservices?” It is a great question to ask and think about. The answer becomes evident once you have reached a stage where you are confidently deploying microservices on Kubernetes and then hit a tipping point where networking between microservices becomes too complex to address using Kubernetes’ native features.

Fallacies of distributed computing

The fallacies of distributed computing are a set of eight assertions made by L. Peter Deutsch and others at Sun Microsystems. They are false assumptions that software developers often make when designing distributed applications: the network is reliable; latency is zero; bandwidth is infinite; the network is secure; the topology doesn’t change; there is one administrator; transport cost is zero; and the network is homogeneous.

At the beginning of the Understanding Kubernetes section, we looked at the challenges developers face when implementing microservices architecture. Kubernetes provides various features for the deployment of containerized microservices as well as container/Pod life cycle management through declarative configuration, but it falls short of solving communication challenges between microservices. When talking about the challenges of microservices, we used terms such as application networking to describe communication challenges. So, let’s try to first understand what application networking is and why it is so important for the successful operations of microservices.

Application networking is a loosely used term; there are various interpretations of it depending on the context in which it is used. In the context of microservices, we refer to application networking as the enabler of distributed communication between microservices. Microservices can be deployed in one Kubernetes cluster or across multiple clusters, over any kind of underlying infrastructure. They can also be deployed in non-Kubernetes environments, in the cloud, on-premises, or both. For now, let’s keep our focus on Kubernetes and application networking within Kubernetes.

Irrespective of where microservices are deployed, you need a robust application network in place for them to talk to each other. The underlying platform should facilitate not just communication but resilient communication: communication that has a high probability of succeeding even when the ecosystem around it is in adverse conditions.

Apart from the application network, you also need visibility of the communication happening between microservices; this is called observability. Observability is important because it tells you how the microservices are interacting with each other. It is also important that microservices communicate securely: the communication should be encrypted and defended against man-in-the-middle attacks, and every microservice should have an identity and be able to prove that it is authorized to communicate with other microservices.

So, why Service Meshes? Why can’t these requirements be addressed in Kubernetes? The answer lies in Kubernetes’ architecture and what it was designed to do. As mentioned before, Kubernetes is application life cycle management software. It provides application networking, observability, and security, but at a very basic level that is not sufficient to meet the requirements of modern and dynamic microservices architecture. This doesn’t mean that Kubernetes is not modern software; indeed, it is very sophisticated, cutting-edge technology, but its focus is container orchestration.

Traffic management in Kubernetes is handled by the Kubernetes network proxy, also called kube-proxy. kube-proxy runs on each node in the Kubernetes cluster. It communicates with the Kubernetes API server to get information about Kubernetes services, which are another level of abstraction to expose a set of Pods as a network service. kube-proxy implements a form of virtual IP for services by setting iptables rules that define how traffic for a service is routed to its endpoints, which are essentially the underlying Pods hosting the application.

To understand it better, let’s look at the following example. To run this example, you will need minikube and kubectl on your computing device. If you don’t have this software installed, then I suggest you hold off from installing it, as we will be going through the installation steps in Chapter 2.

We will create a Kubernetes deployment and service by following the example in https://minikube.sigs.k8s.io/docs/start/:

$ kubectl create deployment hello-minikube --image=k8s.gcr.io/echoserver:1.4
deployment.apps/hello-minikube created

We just created a deployment object named hello-minikube. Let’s execute the kubectl describe command:

$ kubectl describe deployment/hello-minikube
Name:                   hello-minikube
…….
Selector:               app=hello-minikube
…….
Pod Template:
  Labels:  app=hello-minikube
  Containers:
   echoserver:
    Image:        k8s.gcr.io/echoserver:1.4
    ..

From the preceding code block, you can see that a Pod has been created, containing a container instantiated from the k8s.gcr.io/echoserver:1.4 image. Let’s now check the Pods:

$ kubectl get po
hello-minikube-6ddfcc9757-lq66b   1/1     Running   0          7m45s

The preceding output confirms that a Pod has been created. Now, let’s create a service and expose it so that it is accessible on a static port on each node of the cluster, also called a NodePort:

$ kubectl expose deployment hello-minikube --type=NodePort --port=8080
service/hello-minikube exposed

Let’s describe the service:

$ kubectl describe services/hello-minikube
Name:                     hello-minikube
Namespace:                default
Labels:                   app=hello-minikube
Annotations:              <none>
Selector:                 app=hello-minikube
Type:                     NodePort
IP:                       10.97.95.146
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31286/TCP
Endpoints:                172.17.0.5:8080
Session Affinity:         None
External Traffic Policy:  Cluster

From the preceding output, you can see that a Kubernetes service named hello-minikube has been created and is accessible on port 31286, also called NodePort. We also see that there is an Endpoints object with the 172.17.0.5:8080 value. Soon, we will see the connection between NodePort and Endpoints.

Let’s dig deeper and look at what is happening in iptables. If you would like to see what the preceding service returns, you can run minikube service hello-minikube. We are using macOS, where minikube runs as a VM, so we need to ssh into the minikube VM to inspect iptables. On Linux host machines, this step is not required:

$ minikube ssh

Let’s check the iptables:

$ sudo iptables -L KUBE-NODEPORTS -t nat
Chain KUBE-NODEPORTS (1 references)
target     prot opt source               destination
KUBE-MARK-MASQ  tcp  --  anywhere             anywhere             /* default/hello-minikube */ tcp dpt:31286
KUBE-SVC-MFJHED5Y2WHWJ6HX   tcp  --  anywhere             anywhere             /* default/hello-minikube */ tcp dpt:31286

We can see that there are two iptables rules associated with the hello-minikube service. Let’s look further into these iptables rules:

$ sudo iptables -L KUBE-MARK-MASQ -t nat
Chain KUBE-MARK-MASQ (23 references)
target     prot opt source               destination
MARK       all  --  anywhere             anywhere             MARK or 0x4000
$ sudo iptables -L KUBE-SVC-MFJHED5Y2WHWJ6HX -t nat
Chain KUBE-SVC-MFJHED5Y2WHWJ6HX (2 references)
target     prot opt source               destination
KUBE-SEP-EVPNTXRIBDBX2HJK   all  --  anywhere             anywhere             /* default/hello-minikube */

The first rule, KUBE-MARK-MASQ, simply adds an attribute called a packet mark, with a 0x4000 value, to all traffic destined for port 31286.

The second rule, KUBE-SVC-MFJHED5Y2WHWJ6HX, is routing the traffic to another rule, KUBE-SEP-EVPNTXRIBDBX2HJK. Let’s look further into it:

$ sudo iptables -L KUBE-SEP-EVPNTXRIBDBX2HJK -t nat
Chain KUBE-SEP-EVPNTXRIBDBX2HJK (1 references)
target     prot opt source               destination
KUBE-MARK-MASQ  all  --  172.17.0.5           anywhere             /* default/hello-minikube */
DNAT       tcp  --  anywhere             anywhere             /* default/hello-minikube */ tcp to:172.17.0.5:8080

Note that this rule has a destination network address translation (DNAT) target pointing to 172.17.0.5:8080, which is the Endpoints value we saw when we created the service.

Let’s scale the number of Pod replicas:

$ kubectl scale deployment/hello-minikube --replicas=2
deployment.apps/hello-minikube scaled

Describe the service to find any changes:

$ kubectl describe services/hello-minikube
Name:                     hello-minikube
Namespace:                default
Labels:                   app=hello-minikube
Annotations:              <none>
Selector:                 app=hello-minikube
Type:                     NodePort
IP:                       10.97.95.146
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31286/TCP
Endpoints:                172.17.0.5:8080,172.17.0.7:8080
Session Affinity:         None
External Traffic Policy:  Cluster

Note that the value of the endpoint has changed; let’s also describe the hello-minikube endpoint:

$ kubectl describe endpoints/hello-minikube
Name:         hello-minikube
…
Subsets:
  Addresses:          172.17.0.5,172.17.0.7
  NotReadyAddresses:  <none>
  Ports:
    Name     Port  Protocol
    ----     ----  --------
    <unset>  8080  TCP

Note that the endpoint is now targeting 172.17.0.7 along with 172.17.0.5; 172.17.0.7 is the new Pod that was created as a result of increasing the number of replicas to 2.

Figure 1.5 – Service, endpoints, and Pods

Let’s check the iptables rules now:

$ sudo iptables -t nat -L KUBE-SVC-MFJHED5Y2WHWJ6HX
Chain KUBE-SVC-MFJHED5Y2WHWJ6HX (2 references)
target     prot opt source               destination
KUBE-SEP-EVPNTXRIBDBX2HJK  all  --  anywhere              anywhere             /* default/hello-minikube */ statistic mode random probability 0.50000000000
KUBE-SEP-NXPGMUBGGTRFLABG  all  --  anywhere              anywhere             /* default/hello-minikube */

You will find that an additional rule, KUBE-SEP-NXPGMUBGGTRFLABG, has been added, and because of the statistic mode random probability of 0.5, each packet handled by KUBE-SVC-MFJHED5Y2WHWJ6HX is distributed 50–50 between KUBE-SEP-EVPNTXRIBDBX2HJK and KUBE-SEP-NXPGMUBGGTRFLABG.
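To see why the probability value of 0.5 appears and how it generalizes, here is a small Python sketch (our own illustration, not kube-proxy code): for n endpoints, kube-proxy emits sequentially evaluated rules in which rule i matches with probability 1/(n - i), and the last rule is unconditional, which yields a uniform split overall.

```python
import random
from collections import Counter

def kube_proxy_rule_probabilities(n_endpoints):
    """Rule i (0-based) matches with probability 1/(n - i); the last
    rule is unconditional. Evaluated in order, as iptables does,
    this produces a uniform split across endpoints."""
    return [1.0 / (n_endpoints - i) for i in range(n_endpoints)]

def pick_endpoint(probabilities):
    # Walk the rules in order until one matches.
    for i, p in enumerate(probabilities[:-1]):
        if random.random() < p:
            return i
    return len(probabilities) - 1

print(kube_proxy_rule_probabilities(2))  # [0.5, 1.0], the 0.5 we saw above
print(kube_proxy_rule_probabilities(3))  # three endpoints: 1/3, then 1/2, then 1

probs = kube_proxy_rule_probabilities(3)
counts = Counter(pick_endpoint(probs) for _ in range(90_000))
# Each of the three endpoints receives roughly 30,000 packets.
```

So, with two endpoints, a single 0.5 rule followed by an unconditional one is all that is needed for an even split.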

Let’s also quickly examine the new chain added after we changed the number of replicas to 2:

$ sudo iptables -t nat -L KUBE-SEP-NXPGMUBGGTRFLABG
Chain KUBE-SEP-NXPGMUBGGTRFLABG (1 references)
target     prot opt source               destination
KUBE-MARK-MASQ  all  --  172.17.0.7           anywhere             /* default/hello-minikube */
DNAT       tcp  --  anywhere             anywhere             /* default/hello-minikube */ tcp to:172.17.0.7:8080

Note that another DNAT entry has been added for 172.17.0.7. So, essentially, the new chain and the previous one are now routing traffic to corresponding Pods.

So, if we summarize everything, kube-proxy runs on every Kubernetes node and keeps a watch on service and endpoint resources. Based on service and endpoint configurations, kube-proxy then creates iptables rules to take care of routing data packets between the consumer/client and the Pod.

The following diagram depicts the creation of iptables rules via kube-proxy and how consumers connect with Pods.

Figure 1.6 – The client connecting to a Pod based on the iptables rule chain

kube-proxy can also run in another mode called IP Virtual Server (IPVS). For ease of reference, here’s how this term is defined on the official Kubernetes website:

In IPVS mode, kube-proxy watches Kubernetes Services and Endpoints, calls the netlink interface to create IPVS rules accordingly, and synchronizes IPVS rules with Kubernetes Services and Endpoints periodically. This control loop ensures that the IPVS status matches the desired state. When accessing a Service, IPVS directs traffic to one of the backend Pods.

Tip

To find out the mode in which kube-proxy is running, you can use $ curl localhost:10249/proxyMode. On Linux, you can run curl directly, but on macOS, you need to run curl from within the minikube VM.

So, what is wrong with kube-proxy using iptables or IPVS?

kube-proxy doesn’t provide any fine-grained configuration; all settings are applied to all traffic on that node. Also, kube-proxy can only do simple TCP, UDP, and SCTP stream forwarding, or round-robin forwarding across a set of backends. As the number of Kubernetes services grows, so does the number of rulesets in iptables, and because iptables rules are processed sequentially, performance degrades as the number of microservices grows. iptables also only supports simple probability-based traffic distribution, which is very rudimentary. Kubernetes offers a few other tricks, but not enough to deliver resilient communication between microservices. For microservice communication to be resilient, you need more than iptables-based traffic management.
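To get a feel for why sequential rule evaluation degrades at scale, here is a toy comparison of iptables-style sequential matching against IPVS-style hash-based lookup (illustrative Python, not actual kube-proxy code; the rule count and addresses are made up):

```python
import timeit

# A pretend rule table: 5,000 services, as a large cluster might have.
rules = [(f"10.96.{i // 256}.{i % 256}", f"endpoint-{i}") for i in range(5000)]

def linear_match(dst_ip):
    # iptables-style: rules are evaluated sequentially until one matches.
    for ip, target in rules:
        if ip == dst_ip:
            return target

ipvs_table = dict(rules)  # IPVS-style: hash-based lookup

worst_case = rules[-1][0]  # a service whose rule sits at the end of the chain
sequential = timeit.timeit(lambda: linear_match(worst_case), number=100)
hashed = timeit.timeit(lambda: ipvs_table[worst_case], number=100)
print(f"sequential scan is ~{sequential / hashed:.0f}x slower here")
```

The gap widens as more services are added, which is one reason IPVS mode scales better than iptables mode.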

Let’s now talk about a couple of capabilities required to have resilient, fault-tolerant communication.

Retry mechanism, circuit breaking, timeouts, and deadlines

If one Pod is not functioning, then the traffic should automatically be sent to another Pod. Also, retries need to be done under constraints so as not to make the communication worse. For example, if a call fails, then maybe the system needs to wait before retrying; if a retry is not successful, then maybe it’s better to increase the wait time; and if it is still not successful, maybe it’s worth abandoning retry attempts and breaking the circuit for subsequent connections.
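The retry-with-growing-wait idea above is commonly called exponential backoff with jitter. Here is a hedged Python sketch of the logic a developer would otherwise hand-roll in every service (the function and its names are ours, not from any library; a Service Mesh moves exactly this kind of logic out of application code):

```python
import random
import time

def call_with_retries(request_fn, max_attempts=4, base_delay=0.1):
    """Hypothetical retry helper: wait between attempts, doubling the
    wait each time and adding jitter, so a struggling upstream service
    is not hammered with requests."""
    for attempt in range(max_attempts):
        try:
            return request_fn()
        except ConnectionError:
            if attempt == max_attempts - 1:
                raise  # abandon retries; a circuit breaker could trip here
            # Exponential backoff with jitter.
            time.sleep(base_delay * (2 ** attempt) * (1 + random.random()))

attempts = {"count": 0}

def flaky_call():
    # Simulated upstream: fails twice, then succeeds.
    attempts["count"] += 1
    if attempts["count"] < 3:
        raise ConnectionError("upstream unavailable")
    return "200 OK"

print(call_with_retries(flaky_call, base_delay=0.01))  # 200 OK
```

Note that the caller gives up after a bounded number of attempts rather than retrying forever, which is the constraint the text refers to.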

Circuit breaking takes its name from the electrical circuit breaker, which automatically trips when a fault makes the system unsafe to operate. Similarly, consider microservices communication where one service is calling another, and the called service is not responding, is responding so slowly that it is detrimental to the calling service, or this behavior has reached a predefined threshold. In such a case, it is better to trip (stop) the circuit (communication) so that when the calling (downstream) service calls the underlying (upstream) service, the communication fails straight away. The reason it makes sense to stop the downstream system from calling the upstream system is to stop resources such as network bandwidth, threads, I/O, CPU, and memory from being wasted on an activity that has a significantly high probability of failing. Circuit breaking doesn’t resolve the communication problem; instead, it stops the problem from jumping boundaries and impacting other systems.

Timeouts are also important during microservices communication, so that downstream services only wait for a response from the upstream system for as long as the response would be valid or worth waiting for. Deadlines build further on timeouts; you can see them as timeouts for the whole request, not just one connection. By specifying a deadline, a downstream system tells the upstream system the overall maximum time permissible for processing the request, including subsequent calls to other upstream microservices involved in processing the request.
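The tripping behavior described above can be captured in a few lines. The following is a minimal illustrative circuit breaker in Python (the class and attribute names are ours, not from any library; real meshes implement this in the proxy, outside application code):

```python
import time

class CircuitBreaker:
    """After `threshold` consecutive failures, the circuit opens and
    calls fail fast for `reset_timeout` seconds instead of wasting
    bandwidth, threads, and CPU on calls that will very likely fail."""

    def __init__(self, threshold=3, reset_timeout=30.0):
        self.threshold = threshold
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None

    def call(self, fn):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # half-open: let one trial call through
        try:
            result = fn()
        except Exception:
            self.failures += 1
            if self.failures >= self.threshold:
                self.opened_at = time.monotonic()  # trip the circuit
            raise
        self.failures = 0  # a success closes the circuit again
        return result
```

Once open, the breaker rejects calls immediately; after the reset timeout it lets a single trial call through (a half-open state) to probe whether the upstream service has recovered.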

Important note

In a microservices architecture, downstream systems are the ones that rely on the upstream system. If service A calls service B, then service A will be called downstream and service B will be called upstream. When drawing a north–south architecture diagram to show a data flow between A and B, you will usually draw A at the top with an arrow pointing down toward B, which makes it confusing to call A downstream and B upstream. To make it easy to remember, you can draw the analogy that a downstream system depends on an upstream system. This way, microservice A depends on microservice B; hence, A is downstream and B is upstream.

Blue/green and canary deployments

Blue/green deployments are scenarios where you deploy a new (green) version of a service side by side with the previous/existing (blue) version. You perform stability checks to ensure that the green environment can handle live traffic, and if it can, you transfer the traffic from the blue environment to the green one.

Blue and green can be different versions of a service in one cluster, or services in independent clusters. If something goes wrong with the green environment, you can switch the traffic back to the blue environment. The transfer of traffic from blue to green can also happen gradually (canary deployment) in various ways; for example, at a certain rate, such as 90:10 in the first 10 minutes, 70:30 in the next 10 minutes, 50:50 in the next 20 minutes, and 0:100 after that. Another option is to apply such a schedule only to a certain class of traffic, for example, all requests carrying a certain HTTP header value. While in a blue/green deployment you run like-for-like deployments side by side, in a canary deployment you can deploy just a subset of what you would deploy in the green environment. These features are difficult to achieve in Kubernetes because it does not support fine-grained distribution of traffic.

The following diagram depicts blue/green and canary deployments.

Figure 1.7 – Blue/green deployment
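The staged traffic shift described above amounts to a simple weight schedule. Here is a small Python sketch of such a schedule (the percentages mirror the illustrative example in the text; a layer-7 proxy, not Kubernetes itself, would apply weights like these):

```python
def canary_weights(minutes_elapsed):
    """Staged blue-to-green traffic split; returns
    (blue_percent, green_percent) for a given point in the rollout."""
    schedule = [
        (10, (90, 10)),  # first 10 minutes
        (20, (70, 30)),  # next 10 minutes
        (40, (50, 50)),  # next 20 minutes
    ]
    for upper_bound_minutes, weights in schedule:
        if minutes_elapsed < upper_bound_minutes:
            return weights
    return (0, 100)      # after 40 minutes, all traffic goes to green

print(canary_weights(5))   # (90, 10)
print(canary_weights(45))  # (0, 100)
```

In plain Kubernetes, the closest approximation is adjusting replica counts behind a shared Service selector, which gives only coarse ratios; a Service Mesh can apply percentage weights like these directly at layer 7.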

To handle concerns such as blue/green and canary deployments, we need something that can handle traffic at layer 7 rather than layer 4. There are frameworks, such as Netflix Open Source Software (OSS), that solve distributed system communication challenges, but in doing so, they shift the responsibility of solving application networking challenges to microservice developers. Solving these concerns in application code is not only expensive and time-consuming but also distracts from the overall goal, which is delivering business outcomes. Moreover, frameworks and libraries such as Netflix OSS are written in certain programming languages, which constrains developers to use only the technologies and languages supported by a specific framework, going against the polyglot concept.

What is needed is a kind of proxy that can work alongside an application without requiring the application to have any knowledge of the proxy itself. The proxy should not just proxy the communication but also have intricate knowledge of the services doing the communication, along with the context of the communication. The application/service can then focus on business logic and let the proxy handle all concerns related to communication with other services. Envoy is one such proxy, working at layer 7 and designed to run alongside microservices. When it does so, it forms a transparent communication mesh with the other Envoy proxies running alongside their respective microservices. Each microservice communicates only with Envoy, on localhost, and Envoy takes care of the communication with the rest of the mesh. In this communication model, the microservices don’t need to know about the network. Envoy is extensible because it has a pluggable filter chain mechanism for network layers 3, 4, and 7, allowing new filters to be added as needed to perform various functions, such as TLS client certificate authentication and rate limiting.

So, how are Service Meshes related to Envoy? A Service Mesh is an infrastructure layer responsible for application networking. The following diagram depicts the relationship between the Service Mesh control plane, the Kubernetes API server, the Service Mesh sidecar, and the other containers in the Pod.

Figure 1.8 – Service Mesh sidecars, data, and the control plane

A Service Mesh provides a data plane, which is basically a collection of application-aware proxies, such as Envoy, that are controlled by a set of components called the control plane. In a Kubernetes-based environment, the service proxies are injected as sidecars into Pods without needing any modification to the existing containers within the Pod. A Service Mesh can be added to Kubernetes as well as to traditional environments, such as virtual machines. Once added to the runtime ecosystem, the Service Mesh takes care of the application networking concerns we discussed earlier, such as load balancing, timeouts, retries, canary and blue/green deployments, security, and observability.

Summary

In this chapter, we started with monolithic architecture and discussed the drag it causes on adding new capabilities and on time to market. Monolithic architectures are brittle and expensive to change. We read about how microservices architecture breaks that inertia and provides the momentum required to meet the ever-changing and never-ending appetite of digital consumers. We also saw how microservices architecture is modular, with every module being self-contained and able to be built and deployed independently of the others. Applications built using microservices architecture can make use of best-of-breed technologies suitable for solving individual problems.

We then discussed the cloud and Kubernetes. The cloud provides utility-style computing with a pay-as-you-go model. Common cloud services include IaaS, PaaS, and SaaS. The cloud provides access to all infrastructure you may need without you needing to worry about the procurement of expensive hardware, data center costs, and so on. The cloud also provides you with software building blocks with which you can reduce your software development cycle. In microservices architecture, containers are the way to package application code. They provide consistency of environments and isolation between services, solving the noisy neighbor problem.

Kubernetes, on the other hand, makes the usage of containers easier by providing container life cycle management and solving many of the challenges of running containers in production. As the number of microservices grows, you start facing challenges regarding traffic management between microservices. Kubernetes does provide traffic management based on kube-proxy and iptables-based rules, but it falls short of providing application networking.

We finally discussed Service Mesh, an infrastructure layer on top of Kubernetes that is responsible for application networking. The way it works is by providing a data plane, which is basically a collection of application-aware service proxies, such as Envoy, that are then controlled by a set of components called the control plane.

In the next chapter, we will read about Istio, one of the most popular Service Mesh implementations.


Key benefits

  • Learn the design, implementation, and troubleshooting of Istio in a clear and concise format
  • Grasp concepts, ideas, and solutions that can be readily applied in real work environments
  • See Istio in action through examples that cover Terraform, GitOps, AWS, Kubernetes, and Go

Description

Istio is a game-changer in managing connectivity and operational efficiency of microservices, but implementing and using it in applications can be challenging. This book will help you overcome these challenges and gain insights into Istio's features and functionality layer by layer with the help of easy-to-follow examples. It will let you focus on implementing and deploying Istio on the cloud and in production environments instead of dealing with the complexity of demo apps.  You'll learn the installation, architecture, and components of Istio Service Mesh, perform multi-cluster installation, and integrate legacy workloads deployed on virtual machines. As you advance, you'll understand how to secure microservices from threats, perform multi-cluster deployments on Kubernetes, use load balancing, monitor application traffic, implement service discovery and management, and much more. You’ll also explore other Service Mesh technologies such as Linkerd, Consul, Kuma, and Gloo Mesh. In addition to observing and operating Istio using Kiali, Prometheus, Grafana and Jaeger, you'll perform zero-trust security and reliable communication between distributed applications. After reading this book, you'll be equipped with the practical knowledge and skills needed to use and operate Istio effectively.

Who is this book for?

The book is for DevOps engineers, SREs, cloud and software developers, sysadmins, and architects who have been using microservices in Kubernetes-based environments. It addresses challenges in application networking during microservice communications. Working experience on Kubernetes, along with knowledge of DevOps, application networking, security, and programming languages like Golang, will assist with understanding the concepts covered.

What you will learn

  • Get an overview of Service Mesh and the problems it solves
  • Become well-versed with the fundamentals of Istio, its architecture, installation, and deployment
  • Extend the Istio data plane using WebAssembly (Wasm) and learn why Envoy is used as a data plane
  • Understand how to use OPA Gatekeeper to automate Istio's best practices
  • Manage communication between microservices using Istio
  • Explore different ways to secure the communication between microservices
  • Get insights into traffic flow in the Service Mesh
  • Learn best practices to deploy and operate Istio in production environments

Product Details

Publication date : Apr 21, 2023
Length : 418 pages
Edition : 1st
Language : English
ISBN-13 : 9781803235967




Table of Contents

17 Chapters

Part 1: The Fundamentals
Chapter 1: Introducing Service Meshes
Chapter 2: Getting Started with Istio
Chapter 3: Understanding Istio Control and Data Planes
Part 2: Istio in Practice
Chapter 4: Managing Application Traffic
Chapter 5: Managing Application Resiliency
Chapter 6: Securing Microservices Communication
Chapter 7: Service Mesh Observability
Part 3: Scaling, Extending, and Optimizing
Chapter 8: Scaling Istio to Multi-Cluster Deployments Across Kubernetes
Chapter 9: Extending Istio Data Plane
Chapter 10: Deploying Istio Service Mesh for Non-Kubernetes Workloads
Chapter 11: Troubleshooting and Operating Istio
Index
Other Books You May Enjoy

Customer reviews

Rating: 4.6 out of 5 (10 ratings)
5 star: 60%
4 star: 40%
3 star: 0%
2 star: 0%
1 star: 0%
Brodie May 23, 2023
Full star icon Full star icon Full star icon Full star icon Full star icon 5
I've read two books on Istio and have been using Service Mesh the last two years. This book really conceptualize the inception of architecture along with building blocks of istio with helpful resources and demos included. I was able to review this pre-release and was ecstatic to finish this book. Highly recommend.
Amazon Verified review Amazon
rahulsoni Apr 25, 2023
Full star icon Full star icon Full star icon Full star icon Full star icon 5
I was amazed by the thorough and practical approach of Anand Rai in this book. This book provides an in-depth overview of Istio and its various components, while also offering a clear and concise guide to implementing and managing service mesh on Kubernetes.The book is well-structured, easy to read, and includes practical examples that helped me understand the concepts better. Overall, I highly recommend this book.Anand Rai has done an excellent job of making a complex topic accessible to anyone with an interest in microservices architecture.
Amazon Verified review Amazon
arunvel arunachalam Oct 29, 2023
Full star icon Full star icon Full star icon Full star icon Full star icon 5
What an excellent book, author has taken much efforts to explain complex topics lucidly.I would recomment this book to anyone who wants to implement or is planning to implement Service Mesh (Istio) in their organization. Also helpful for implementing zero-trust network.Excellent ReadTake a bow Anand.
Amazon Verified review Amazon
Piyush Khare Apr 22, 2023
Full star icon Full star icon Full star icon Full star icon Full star icon 5
This book provides a comprehensive guide to Istio, the popular service mesh technology used to manage cloud-based applications. The book is written in a very well structured manner, with each chapter building upon the previous one to provide a clear understanding of the technology.Overall it is an excellent resource for anyone looking to gain a comprehensive understanding of Istio and its implementation. The book covers all aspects of Istio, from installation to troubleshooting, and provides clear explanations and practical examples, making it an invaluable resource for both beginners and experts alike.
Amazon Verified review Amazon
Darpan Apr 23, 2023
Full star icon Full star icon Full star icon Full star icon Full star icon 5
This is an excellent book that provides comprehensive and practical insights into implementing Istio Service Mesh on Kubernetes. The book begins with a brief introduction to Service Mesh and the problems it solves in microservices architecture. The author dives deep into Istio control plane components followed by Envoy proxy. The book covers Istio's architecture and how it works in a Kubernetes environment, including deploying Istio on Kubernetes.The book is well-structured and easy to read, making it accessible to both beginners and experienced professionals. With its practical examples, the book enables readers to implement Istio Service Mesh and build reliable, scalable, and secure microservices on Kubernetes with ease. I highly recommend this book to anyone looking to improve their knowledge of Istio Service Mesh.
Amazon Verified review

FAQs

How do I buy and download an eBook?

Where there is an eBook version of a title available, you can buy it from the book details for that title. Add either the standalone eBook or the eBook and print book bundle to your shopping cart. Your eBook will show in your cart as a product on its own. After completing checkout and payment in the normal way, you will receive your receipt on the screen containing a link to a personalised PDF download file. This link will remain active for 30 days. You can download backup copies of the file by logging in to your account at any time.

If you already have Adobe Reader installed, then clicking on the link will download and open the PDF file directly. If you don't, save the PDF file on your machine and download Adobe Reader to view it.

Please Note: Packt eBooks are non-returnable and non-refundable.

Packt eBook and Licensing When you buy an eBook from Packt Publishing, completing your purchase means you accept the terms of our licence agreement. Please read the full text of the agreement. In it we have tried to balance the need for the ebook to be usable for you the reader with our needs to protect the rights of us as Publishers and of our authors. In summary, the agreement says:

  • You may make copies of your eBook for your own use onto any machine
  • You may not pass copies of the eBook on to anyone else
How can I make a purchase on your website?

If you want to purchase a video course, eBook, or Bundle (Print+eBook), please follow the steps below:

  1. Register on our website using your email address and a password.
  2. Search for the title by name or ISBN using the search option.
  3. Select the title you want to purchase.
  4. Choose the format you wish to purchase the title in; if you order the Print Book, you get a free eBook copy of the same title. 
  5. Proceed with the checkout process (payment can be made using Credit Card, Debit Card, or PayPal).
Where can I access support around an eBook?
  • If you experience a problem using or installing Adobe Reader, contact Adobe directly.
  • To view the errata for the book, see www.packtpub.com/support and view the page for your title.
  • To view your account details or to download a new copy of the book, go to www.packtpub.com/account
  • If a problem is not resolved, contact us directly via www.packtpub.com/contact-us
What eBook formats does Packt support?

Our eBooks are currently available in a variety of formats, such as PDF and ePub. This may well change in the future with trends and developments in technology, but please note that our PDFs are not in the Adobe eBook Reader format, which carries greater security restrictions.

You will need to use Adobe Reader v9 or later in order to read Packt's PDF eBooks.

What are the benefits of eBooks?
  • You can get the information you need immediately
  • You can easily take them with you on a laptop
  • You can download them an unlimited number of times
  • You can print them out
  • They are copy-paste enabled
  • They are searchable
  • There is no password protection
  • They are lower in price than print editions
  • They save resources and space
What is an eBook?

Packt eBooks are a complete electronic version of the print edition, available in PDF and ePub formats. Every piece of content down to the page numbering is the same. Because we save the costs of printing and shipping the book to you, we are able to offer eBooks at a lower cost than print editions.

When you have purchased an eBook, simply log in to your account and click on the link in Your Download Area. We recommend saving the file to your hard drive before opening it.

For optimal viewing of our eBooks, we recommend downloading and installing the free Adobe Reader, version 9.