Mastering Service Mesh: Enhance, secure, and observe cloud-native applications with Istio, Linkerd, and Consul

Anjali Khatri, Khatri
4.4 (5 Ratings)
Paperback | Mar 2020 | 626 pages | 1st Edition



Monolithic Versus Microservices

The purpose of this book is to walk you through the service mesh architecture. We will cover the three main open source service mesh providers: Istio, Linkerd, and Consul. First, we will look at how the evolution of technology led to the service mesh. In this chapter, we will cover the application development journey from monolithic to microservices.

The monolithic framework grew out of the technology stack that became available 20+ years ago. As hardware and software virtualization improved significantly, a new wave of innovation started with the adoption of microservices by Netflix, Amazon, and other companies around 2011. This trend began with the redesign of monolithic applications into small, independent microservices.

Before we compare monolithic and microservices architectures, let's take a step back and review what led to where we are today. This chapter briefly traces the evolution of early computer machines, hardware virtualization, software virtualization, and the transition from monolithic to microservices-based applications.

In this chapter, we will cover the following topics:

  • Early computer machines
  • Monolithic applications
  • Microservices applications

Early computer machines

IBM launched its first commercial computer (https://ibm.biz/Bd294n), the IBM 701, in 1953; it was the most powerful high-speed electronic calculator of its time. Further progression of the technology produced mainframes, a revolution that started in the mid-1950s (https://ibm.biz/Bd294p).

Even before co-founding Intel in 1968 with Robert Noyce, Gordon Moore had espoused his theory of Moore's Law (https://intel.ly/2IY5qLU) in 1965, which states that the number of transistors incorporated in a chip will approximately double every 24 months. This exponential growth continues to this day, though the trend may not hold for much longer.

IBM created its first official VM product called VM/370 in 1972 (http://www.vm.ibm.com/history), followed by hardware virtualization on the Intel/AMD platform in 2005 and 2006. Monolithic applications were the only choice on early computing machines.

Early machines ran only one operating system. As time passed and machines grew in size, the need to run multiple operating systems on one physical machine led to hardware virtualization: slicing a machine into smaller virtual machines.

Hardware virtualization

Hardware virtualization led to the proliferation of virtual machines in data centers. Greg Kalinsky, EVP and CIO of Geico, mentioned the use of 70,000 virtual machines in his keynote address at the IBM Think 2019 conference. Managing virtual machines required a different set of tools. In this area, VMware was very successful in the Intel market, whereas on POWER, IBM's Hardware Management Console (HMC) was widely used to create Logical Partitions (LPARs) through PowerVM. Hardware virtualization has its own overheads, but it has been very popular for running multiple operating systems on the same physical machine.

Monolithic applications often have different OS and language runtime requirements, and virtual machines made it possible to run them all on the same hardware. During this period of hardware virtualization, work on enterprise applications using the Service-Oriented Architecture (SOA) and the Enterprise Service Bus (ESB) started to evolve, which led to large monolithic applications.

Software virtualization

The next wave of innovation was software virtualization through containerization technology. Though not new, software virtualization started to gain serious traction once tooling made it easier to adopt. Docker was an early pioneer in this space, making software virtualization available to general IT professionals.

Solomon Hykes started dotCloud in 2010 and renamed it Docker in 2013. Software virtualization became possible due to advances in technology that provide namespace, filesystem, and process isolation while still sharing the same kernel, whether running on bare metal or in a virtual machine.

Software virtualization using containers provides better resource utilization than running multiple virtual machines, often improving effective resource utilization by 30% to 40%. A virtual machine usually takes seconds to minutes to initialize, whereas a container shares the kernel that is already running, so its startup time is much quicker.

In fact, Google had used software virtualization at very large scale, running containers for close to 10 years through its internal project known as Borg. When Google published a research paper about its approach to managing data centers with containerization technology at the EuroSys 2015 conference (https://goo.gl/Ez99hu), it piqued the interest of many technologists. At the very same time, Docker exploded in popularity during 2014 and 2015, making software virtualization simple enough to use.

One of the main benefits of software virtualization (also known as containerization) is that it eliminates dependency problems for a particular piece of software. For example, glibc is the main building-block library on Linux, and hundreds of libraries depend on a particular version of it. We can build a Docker container with one version of glibc and run it on a machine that has a later version. Normally, maintaining two software stacks built against different versions of glibc is very complex, but containers make this simple. Docker is credited with creating a simple user interface that made software packaging easy and accessible to developers.

Software virtualization made it possible to run different monolithic applications on the same hardware (bare metal) or within the same virtual machine. It also led to smaller services (each a complete business function) being packaged as independent software units. This is when the era of microservices started.

Container orchestration

It is easy to manage a few containers and their deployment, but as the number of containers increases, a container orchestration platform makes deployment and management simpler through declarative prescriptions. As containerization proliferated in 2015, orchestration platforms evolved alongside it. Docker shipped its own open source container orchestration platform, Docker Swarm, a clustering and scheduling tool for Docker containers.

Apache Mesos, though not directly comparable to Docker Swarm, was built using the same principles as the Linux kernel, acting as an abstraction layer between applications and the underlying machines. It was meant for distributed computing and acts as a cluster manager with APIs for resource management and scheduling.

Kubernetes was the open source evolution of Google's Borg project. Its first version was released in 2015, and it became the first incubated project of the Cloud Native Computing Foundation (https://cncf.io).

Major companies such as Google, Red Hat, Huawei, ZTE, VMware, Cisco, Docker, AWS, IBM, and Microsoft contribute to the Kubernetes open source platform, which has become a modern cluster manager and container orchestration platform. It's no surprise that Kubernetes has become the de facto platform, used by all major cloud providers, with 125 companies and more than 2,800 contributors working on it (https://www.stackalytics.com/cncf?module=kubernetes).

As container orchestration began to simplify cluster management, it became easy to run microservices in a distributed environment, which made microservices-based applications loosely coupled systems with horizontal scale-out possibilities.

Horizontal scale-out distributed computing is not new, with IBM's shared-nothing architecture for the Db2 database (monolithic application) being in use since 1998. What's new is the loosely coupled microservices that can run and scale out easily using a modern cluster manager.

Monolithic applications using a three-tier architecture, such as Model-View-Controller (MVC), or SOA were the dominant architectural patterns on bare-metal or virtualized machines. These patterns were well suited to static data center environments, where machines could be identified by IP address and changes were managed through DNS. This started to change with distributed applications that could run on any machine (meaning the IP address could change) in the case of failures. The approach slowly shifted from a static to a dynamic data center, where identification is done through the name of the microservice rather than the IP address of the machine or pod where the workload runs.

This fundamental shift from static to dynamic infrastructure is the basis of the evolution from monolithic to microservices architecture. Monolithic applications are tightly coupled, with a single code base that is released in one instance for the entire application stack. Changing a single component without affecting the others is very difficult, but the monolith offers simplicity. Microservices applications, on the other hand, are loosely coupled, and their multiple code bases can be released independently of each other. Changing a single component is easy, but the overall system loses the simplicity of a monolith.

We will cover a brief history of monolithic and microservices applications in the next sections to establish context. This will help us transition to the specific goals of this book.

Monolithic applications

The application evolution journey from monolithic to microservices can be seen in the following diagram:

Monolithic applications were created from small applications and then built up to create a tiered architecture that separated the frontend from the backend, and the backend from the data sources. In this architecture, the frontend manages user interaction, the middle tier manages the business logic, and the backend manages data access. This can be seen in the following diagram:

In the preceding diagram, the middle tier, also known as the business logic, is tightly bound to the frontend and the backend. This is a one-dimensional monolithic experience where all the tiers are in one straight line.

The three-tier modular client-server architecture, consisting of a frontend tier, an application tier, and a database tier, is now 20+ years old. It served its purpose of letting people build complex enterprise applications, with known limitations around complexity, software upgrades, and zero downtime.

A large development team commits its code to a source code repository such as GitHub. Before CI/CD pipelines existed, the deployment process from code commit to production was manual. Releases had to be tested manually, although there were some automated test cases. Organizations would declare a code freeze while moving code into production. Applications became overly large, complex, and very difficult to maintain in the long term, and once the original developers were no longer available, adding enhancements became very difficult and time-consuming.

To overcome these limitations, the concept of SOA started to evolve around 2002, and the ESB evolved to establish communication links between the different applications in an SOA.

Brief history of SOA and ESB

The one-dimensional model of the three-tier architecture was split into a multi-dimensional SOA, where inter-service communication was enabled through ESB using the Simple Object Access Protocol (SOAP) and other web services standards.

SOA, along with ESB, could be used to break down a large three-tier application into services, where applications were built using these reusable services. The services could be dynamically discovered using service metadata through a metadata repository. With SOA, each functionality is built as a coarse-grained service that's often deployed inside an application server.

Multiple services need to be integrated to create composite services that are exposed through the ESB layer, which becomes a centralized bus for communication. This can be seen in the following diagram:

The preceding diagram shows the consumer and provider model connected through the ESB. The ESB also contains significant business logic, making it a monolithic entity in which the same runtime is shared by all developers to develop and deploy their service integrations.

In the next section, we'll talk about API gateways. The concept of the API gateway evolved around 2008 with the advent of smartphones, whose rich client applications needed easy and secure connectivity to backend services.

API Gateway

SOA/web services were not ideal for exposing business functionality as APIs due to the complex nature of web service-related technologies, in which SOAP is used as the message format for service-to-service communication. SOAP was also used for securing web services and for defining service discovery metadata. SOAP lacked a self-service model, which hindered the development of an ecosystem around it.

The term application programming interface (API) is used for exposing a service over REST (HTTP/JSON) or as a web service (SOAP/HTTP). An API gateway, typically built on top of existing SOA/ESB implementations, could be used to expose business functionality securely as a managed service. This can be seen in the following diagram:

In the preceding diagram, the API gateway is used to expose the three-tier and SOA/ESB-based services; the business logic contained in the ESB still hinders the development of independent services.

With the availability of containerization, the new microservices paradigm started to evolve from the SOA/ESB architecture in 2012 and seriously took off in 2015.

Drawbacks of monolithic applications

Monolithic applications are simple to develop, deploy, and scale as long as they are small in nature.

As the size and complexity of monoliths grow, various disadvantages arise, such as the following:

  • Development is slow.
  • Large monolithic code bases intimidate new developers.
  • The application is difficult to understand and modify.
  • Software releases are painful and occur infrequently.
  • The IDE and web container become overloaded.
  • Continuous deployment is difficult; a code freeze period is required for each deployment.
  • Scaling the application can be difficult due to an increase in data volume.
  • Scaling development can be difficult.
  • Requires long-term commitment to a technology stack.
  • Lack of reliability due to difficulty in testing the application thoroughly.

Enterprise application development is coordinated among many smaller teams that can work independently of each other. As an application grows in size, the aforementioned complexities lead teams to look for better approaches, resulting in the adoption of microservices.

Microservices applications

A small number of developers recognized the need for new thinking very early on and started working on the evolution of a new architecture, called microservices, early in 2014.

Early pioneers

A few individuals took the leap from monolithic applications to small, manageable services in their respective companies. Among the most notable is Jeff Bezos, Amazon's CEO, who famously issued a mandate at Amazon (https://bit.ly/2Hb3NI5) in 2002. It stated that all employees had to adopt a service interface methodology, with all communication happening over the network. This daring initiative replaced the monolith with a collection of loosely coupled services. One nugget of wisdom from Jeff Bezos is the two-pizza team: an individual team shouldn't be larger than what two pizzas can feed. This colloquial wisdom is at the heart of shorter development cycles, increased deployment frequency, and faster time to market.

Netflix adopted microservices early on. It's important to mention Netflix's Open Source Software (OSS) Center contribution at https://netflix.github.io. Netflix also created a suite of automated open source tools, the Simian Army (https://github.com/Netflix/SimianArmy), to stress-test its massive cloud infrastructure. The rate at which Netflix has adopted and implemented new technologies is phenomenal.

Lyft adopted microservices and created an open source distributed proxy for services and applications known as Envoy (https://www.envoyproxy.io/), which would later become a core part of some of the most popular service mesh implementations, such as Istio and Consul.

Though this book is not about developing microservices applications, we will briefly discuss the microservices architecture so that it is relevant from the perspective of a service mesh.

From the early 2000s, when machines were still bare metal, three-tier monolithic applications ran on more than one machine, leading to a concept of distributed computing that was very tightly coupled. Bare metal evolved into VMs, and monolithic applications into SOA/ESB with an API gateway. This trend continued until 2015, when the advent of containers disrupted the SOA/ESB way of thinking in favor of self-contained, independently managed services. Due to this, the term microservice was coined.

The term microservice was first used at a workshop of software architects in 2011 (https://bit.ly/1KljYiZ), where it described a common architectural style as a fine-grained SOA.

Chris Richardson created https://microservices.io in January 2014 to document architecture and design patterns.

James Lewis and Martin Fowler published their blog post (https://martinfowler.com/articles/microservices.html) about microservices in March 2014, and this blog post popularized the term microservices.

The microservices boom started with easy containerization that was made possible by Docker and through a de facto container orchestration platform known as Kubernetes, which was created for distributed computing.

What is a microservice?

SOA/ESB transitions naturally toward microservices, in which services are decoupled from a monolithic ESB. Let's go over the core characteristics of microservices:

  • Each service is autonomous: it is developed and deployed independently.
  • Each microservice can be scaled independently if it receives more traffic, without having to scale the other microservices.
  • Each microservice is designed around a business capability, so that each service serves a specific business goal, following the simple principle that it does only one thing, and does it well.
  • Since services do not share the same execution runtime, each microservice can be developed in a different language (in a polyglot fashion), giving developers the agility to pick the best programming language for their own service.
  • The microservices architecture eliminates the need for a centralized ESB. Business logic, including inter-service communication, is handled through smart endpoints and dumb pipes: the centralized business logic of the ESB is distributed among the microservices through smart endpoints, and a primitive messaging system, or dumb pipe, is used for service-to-service communication over a lightweight protocol such as REST or gRPC.
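The smart endpoints and dumb pipes idea can be sketched in a few lines of Python. This is a hedged illustration, not code from the book: the "inventory" service, its route, and the SKU data are all hypothetical, and Python's standard library stands in for a real microservice framework. The business logic lives entirely in the endpoint; the pipe between services is plain HTTP/JSON.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# Hypothetical "inventory" microservice: a smart endpoint that owns its
# business logic and data, exposed over a dumb pipe (plain HTTP/JSON).
STOCK = {"sku-1": 12, "sku-2": 0}

class InventoryHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        sku = self.path.strip("/")
        body = json.dumps({"sku": sku, "in_stock": STOCK.get(sku, 0)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), InventoryHandler)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# A second service (say, "orders") consumes inventory over the same
# lightweight protocol -- there is no centralized ESB in the middle.
port = server.server_address[1]
with urlopen(f"http://127.0.0.1:{port}/sku-1") as resp:
    data = json.loads(resp.read())

print(data["in_stock"])
server.shutdown()
```

In a production system, each side of this exchange would be a separately deployed process, but the shape of the interaction is the same: the intelligence sits in the services, not the transport.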

The evolution from SOA/ESB to the microservices pattern was mainly influenced by the need to adapt to smaller teams that are independent of each other and to provide a self-service model for consuming the services those teams create. At the time of writing, microservices is a winning pattern being adopted by many enterprises to modernize their existing monolithic application stacks.

Evolution of microservices

The following diagram shows the evolution of the application architecture from a three-tier architecture to SOA/ESB and then to microservices in terms of flexibility toward scalability and decoupling:

Microservices have evolved from the tiered and SOA architectures and are becoming the accepted pattern for building modern applications. This is due to the following reasons:

  • Extreme scalability
  • Extreme decoupling
  • Extreme agility

These are key points regarding the design of a distributed scalable application where developers can pick the best programming language of their choice to develop their own service.

A major difference between monolithic and microservices applications is that, with microservices, the services are loosely coupled and communicate via dumb pipes using lightweight protocols such as REST or gRPC. One way to achieve loose coupling is to use a separate data store for each service. This isolates services from each other, since one service is never blocked by another service holding a data lock. Separate data stores also allow microservices to scale up and down, along with their data stores, independently of all the other services.
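The separate-data-store idea can be sketched with in-process classes standing in for network services (all names and data here are hypothetical, not from the book): each service owns a private store, and the order service reaches inventory data only through the inventory service's public API, never through its tables.

```python
# Each hypothetical service below owns a private data store; the only way
# to reach another service's data is through its public API.

class InventoryService:
    def __init__(self):
        self._store = {"sku-1": 5}   # private to this service

    def in_stock(self, sku: str) -> int:
        """The service's public API -- the only supported access path."""
        return self._store.get(sku, 0)

class OrderService:
    def __init__(self, inventory: InventoryService):
        self._store = []             # separate store: orders only
        self._inventory = inventory  # a handle to the other service's API

    def place_order(self, sku: str) -> bool:
        # No shared tables and no cross-service locks: we ask, we don't join.
        if self._inventory.in_stock(sku) > 0:
            self._store.append(sku)
            return True
        return False

inventory = InventoryService()
orders = OrderService(inventory)
print(orders.place_order("sku-1"))   # stock exists, order accepted
print(orders.place_order("sku-9"))   # unknown SKU, order rejected
```

Because neither `_store` is visible to the other service, either one can change its storage technology or scale its database without coordinating with the other.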

Having already pointed out the early pioneers in microservices, we will now look at the microservices architecture in more detail.

Microservices architecture

The aim of a microservice architecture is to completely decouple application components from one another so that they can be maintained, scaled, and deployed independently. It's an evolution of application architecture, SOA, and published APIs:

  • SOA: Focuses on reuse, technical integration issues, and technical APIs
  • Microservices: Focus on functional decomposition, business capabilities, and business APIs

In his paper, Martin Fowler states that the microservice architecture would have been better named the micro-component architecture because it is really about breaking apps up into smaller pieces (micro-components). For more information, see Microservices, by Martin Fowler, at https://martinfowler.com/articles/microservices.html. Also, check out Kim Clark's IBM blog post on microservices at https://developer.ibm.com/integration/blog/2017/02/09/microservices-vs-soa, where he argues that microservices are really micro-components.

The following diagram shows the microservice architecture in which different clients consume the same services. Each service can use the same/different language and can be deployed/scaled independently of each other:

Each microservice runs in its own process. Services are optimized for a single function, and each must have one, and only one, reason to change. Communication between services is done through REST APIs and message brokers. CI/CD is defined per service. Services evolve at different paces, and the scaling policy for each service can be different.

Benefits and drawbacks of microservices

The explosion of microservices is not an accident, and it is mainly due to rapid development and scalability:

  • Rapid development: Develop and deploy a single service independently, focusing only on the interface and functionality of that service rather than the functionality of the entire system.
  • Scalability: Scale a service independently without affecting others. This is simple and easy to do in a Kubernetes environment.

The other benefits of microservices are as follows:

  • Each service can use a different language (better polyglot adaptability).
  • Services are developed on their own timetables so that the new versions are delivered independently of other services.
  • The development of microservices is suited for cross-functional teams.
  • Improved fault isolation.
  • Eliminates any long-term commitment to a technology stack.

However, microservices are not a panacea and come with drawbacks:

  • The complexity of a distributed system.
  • Increased resource consumption.
  • Inter-service communication over the network adds latency and new failure modes.
  • Testing dependencies in a microservices-based application without a tool can be very cumbersome.
  • When a service fails, it becomes very difficult to identify the cause of the failure.
  • A microservice can't fetch data from other services through simple queries. Instead, it must implement queries using APIs.
  • Microservices lead to more Ops (operations) overhead.
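The query drawback is usually addressed with API composition: instead of a SQL join across two services' databases, a composing service calls each owning service's API and merges the results in memory. A minimal hedged sketch follows; the service calls are stubbed with local functions, and every name and value is hypothetical.

```python
# Hypothetical API-composition sketch: no cross-service SQL join exists,
# so the composer calls each owning service and merges in memory.

def get_customer(customer_id):
    """Stands in for a call to a customer service's API."""
    return {"id": customer_id, "name": "Ada"}

def get_orders(customer_id):
    """Stands in for a call to an order service's API."""
    return [{"order_id": 1, "total": 42.0}, {"order_id": 2, "total": 7.5}]

def customer_overview(customer_id):
    customer = get_customer(customer_id)   # one API call per owning service
    orders = get_orders(customer_id)
    return {**customer,
            "order_count": len(orders),
            "lifetime_value": sum(o["total"] for o in orders)}

print(customer_overview(7))
```

The cost of this pattern is exactly the drawback listed above: what used to be a single query becomes several API calls plus merge logic that the composing service must own and test.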

There is no perfect silver bullet, and technology continues to emerge and evolve. Next, we'll discuss the future of microservices.

Future of microservices

Microservices can be deployed in a distributed environment using a container orchestration platform such as Kubernetes or Docker Swarm, or an on-premises Platform as a Service (PaaS) such as Pivotal Cloud Foundry or Red Hat OpenShift.

A service mesh helps reduce or overcome the aforementioned challenges and Ops overheads, such as the operations overhead of manageability, serviceability, metering, and testing. This is made simpler by service mesh providers such as Istio, Linkerd, or Consul.

As with every technology, there is no perfect solution; each has its own benefits and drawbacks, colored by an individual's perception of and bias toward a particular technology. Sometimes, the drawbacks of a particular technology outweigh the benefits it accrues.

In the last 20 years, we have seen the evolution of monolithic applications to three-tier ones, to the adoption of the SOA/ESB architecture, and then the transition to microservices. We are already witnessing a framework evolution around microservices using service mesh, which is what this book is based on.

Summary

In this chapter, we reviewed the evolution of computers and how hardware virtualization made it possible to run multiple virtual machines on a single computer. We learned about the tiered application journey that started 20+ years ago on bare-metal machines and witnessed the transition of three-tier applications to the SOA/ESB architecture. The evolution of software virtualization drove the explosion of containerization, which in turn drove the evolution of the SOA/ESB architecture into microservices. Then, we learned about the benefits and drawbacks of microservices. You can apply this knowledge of microservices to a business's need for rapid development and scalability in order to achieve time-to-market goals.

In the next chapter, we will move on to cloud-native applications and understand what is driving the motivation of various enterprises to move from monolithic to cloud-native applications. The purpose of this book is to go into the details of the service mesh architecture, and this can't be done without learning about the cloud-native architecture.

Questions

  1. Microservices applications are difficult to test.

A) True
B) False

  2. Monolithic/microservices applications are related to dynamic infrastructures.

A) True
B) False

  3. Monolithic applications are best if they are small in size.

A) True
B) False

  4. When a microservice fails, debugging becomes very difficult.

A) True
B) False

  5. Large monolithic applications are very difficult to maintain and patch in the long term.

A) True
B) False

Further reading

  • Microservices Patterns, Richardson, Chris (2018). Shelter Island, NY: Manning
  • Microservices Resource Guide, Fowler, M. (2019), martinfowler.com. Available at https://martinfowler.com/microservices, accessed March 3, 2019
  • Microservices for the Enterprise, Indrasiri., K., and Siriwardena, P. (2018). [S.l.]: Apress.
  • From Monolithic Three-tiers Architectures to SOA versus Microservices, Maresca, P. (2015), TheTechSolo, available at https://bit.ly/2GYhYk, accessed March 3, 2019
  • Retire the Three-Tier Application Architecture to Move Toward Digital Business, Thomas, A., and Gupta, A. (2016), Gartner.com, available at https://gtnr.it/2Fl787w, accessed March 3, 2019
  • Microservices Lead the New Class of Performance Management Solutions, LightStep. (2019), available at https://lightstep.com/blog/microservices-trends-report-2018, accessed March 3, 2019
  • What year did Bezos issue the API Mandate at Amazon?, Schroeder, G. (2016), available at https://bit.ly/2Hb3NI5, accessed March 3, 2019
  • Kubernetes Components, Kubernetes.io. (2019), available at https://bit.ly/2JyhIGt, accessed March 3, 2019
  • Microservices implementation Netflix stack – Tharanga Thennakoon – Medium, Thennakoon, T. (2017), available at https://bit.ly/2NCDzPZ, accessed March 3, 2019

Key benefits

  • Manage your cloud-native applications easily using service mesh architecture
  • Learn about Istio, Linkerd, and Consul – the three primary open source service mesh providers
  • Explore tips, techniques, and best practices for building secure, high-performance microservices

Description

Although microservices-based applications support DevOps and continuous delivery, they can also add to the complexity of testing and observability. The implementation of a service mesh architecture, however, allows you to secure, manage, and scale your microservices more efficiently. With the help of practical examples, this book demonstrates how to install, configure, and deploy an efficient service mesh for microservices in a Kubernetes environment. You'll get started with a hands-on introduction to the concepts of cloud-native application management and service mesh architecture, before learning how to build your own Kubernetes environment. While exploring later chapters, you'll get to grips with the three major service mesh providers: Istio, Linkerd, and Consul. You'll be able to identify their specific functionalities, from traffic management, security, and certificate authority through to sidecar injections and observability. By the end of this book, you will have developed the skills you need to effectively manage modern microservices-based applications.

Who is this book for?

This book is for solution architects and network administrators, as well as DevOps and site reliability engineers who are new to the cloud-native framework. You will also find this book useful if you're looking to build a career in DevOps, particularly in operations. Working knowledge of Kubernetes and of building cloud-native microservices is necessary to get the most out of this book.

What you will learn

  • Compare the functionalities of Istio, Linkerd, and Consul
  • Become well-versed with service mesh control and data plane concepts
  • Understand service mesh architecture with the help of hands-on examples
  • Work through hands-on exercises in traffic management, security, policy, and observability
  • Set up secure communication for microservices using a service mesh
  • Explore service mesh features such as traffic management, service discovery, and resiliency
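As a taste of the traffic-management exercises covered in the Istio chapters, a canary-style traffic split can be expressed declaratively with a VirtualService and a DestinationRule. The sketch below uses Istio's standard traffic-splitting API; the `reviews` service name and the `v1`/`v2` subsets are hypothetical placeholders, not an excerpt from the book's own examples:

```yaml
# Hypothetical sketch: route 90% of traffic to subset v1 and 10% to
# subset v2 of a "reviews" service, assuming pods are labeled
# version: v1 / version: v2 and the Istio sidecar is injected.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
    - reviews
  http:
    - route:
        - destination:
            host: reviews
            subset: v1
          weight: 90
        - destination:
            host: reviews
            subset: v2
          weight: 10
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  subsets:
    - name: v1
      labels:
        version: v1
    - name: v2
      labels:
        version: v2
```

Manifests like this would typically be applied with `kubectl apply -f`; shifting the weights is then a one-line change, which is what makes mesh-level traffic management attractive compared with building routing logic into the application.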

Product Details

Publication date : Mar 30, 2020
Length : 626 pages
Edition : 1st
Language : English
ISBN-13 : 9781789615791
Vendor : Google



Packt Subscriptions

See our plans and pricing:

€18.99 billed monthly
  • Unlimited access to Packt's library of 7,000+ practical books and videos
  • Constantly refreshed with 50+ new titles a month
  • Exclusive early access to books as they're written
  • Solve problems while you work with advanced search and reference features
  • Offline reading on the mobile app
  • Simple pricing, no contract

€189.99 billed annually
  • All monthly-plan features, plus:
  • Choose a DRM-free eBook or video every month to keep
  • Own as many other DRM-free eBooks or videos as you like for just €5 each
  • Exclusive print discounts

€264.99 billed in 18 months
  • All annual-plan features

Frequently bought together

  • The Kubernetes Workshop (€32.99)
  • Learn Kubernetes Security (€29.99)
  • Mastering Service Mesh (€36.99)

Total: €99.97

Table of Contents

30 Chapters

Section 1: Cloud-Native Application Management
  • Monolithic Versus Microservices
  • Cloud-Native Applications
Section 2: Architecture
  • Service Mesh Architecture
  • Service Mesh Providers
  • Service Mesh Interface and SPIFFE
Section 3: Building a Kubernetes Environment
  • Building Your Own Kubernetes Environment
Section 4: Learning about Istio through Examples
  • Understanding the Istio Service Mesh
  • Installing a Demo Application
  • Installing Istio
  • Exploring Istio Traffic Management Capabilities
  • Exploring Istio Security Features
  • Enabling Istio Policy Controls
  • Exploring Istio Telemetry Features
Section 5: Learning about Linkerd through Examples
  • Understanding the Linkerd Service Mesh
  • Installing Linkerd
  • Exploring the Reliability Features of Linkerd
  • Exploring the Security Features of Linkerd
  • Exploring the Observability Features of Linkerd
Section 6: Learning about Consul through Examples
  • Understanding the Consul Service Mesh
  • Installing Consul
  • Exploring the Service Discovery Features of Consul
  • Exploring Traffic Management in Consul
Assessment
Other Books You May Enjoy

Customer reviews

Rating distribution: 4.4 out of 5 (5 ratings)
5 star: 80%
4 star: 0%
3 star: 0%
2 star: 0%
1 star: 20%

harish, Apr 19, 2020 (5 stars, Amazon Verified review)
The book is very well written and provides hands-on guidance for operations professionals. If you are a DevOps professional, or any developer who'd use microservices to deploy their application, then this book should be part of your learning toolkit. Learning and mastering service mesh will alleviate your need to build traffic management, telemetry, or security features into your applications. I am a beginner in the cloud-native applications space and I was able to easily follow, learn, and practice the concepts in this book. It's a great time to buy a book like this and learn new skills.

Vladi, Sep 18, 2023 (5 stars, Amazon Verified review)
A very nice introduction to the service mesh world; I especially like the links to other materials and the balance of practice and theory.

Rick, Jan 26, 2021 (5 stars, Amazon Verified review)
Liked the way the topics were put together for cloud-native applications. Definitely recommended for a deep dive into the world of cloud security.

Yudha Herwono, Dec 08, 2020 (5 stars, Amazon Verified review)
I was looking for a book that could provide more in-depth knowledge about service mesh. This book certainly fits the bill. It is well written and provides the necessary background information to understand the concepts. Highly recommended.

S. Desrosiers, Dec 30, 2020 (2 stars, Amazon Verified review)
Some Istio features are simply not explained. That's not the authors' fault, but the book was already out of date by the time it was released. The English is dreadfully poor, which makes it painful to read.

FAQs

What is included in a Packt subscription?

A subscription provides you with full access to view all Packt and licensed content online, including exclusive access to Early Access titles. Depending on the tier chosen, you can also earn credits and discounts to use toward owning content.

How can I cancel my subscription?

To cancel your subscription, simply go to the account page, found in the top right of the page or at https://subscription.packtpub.com/my-account/subscription. From there you will see the 'Cancel subscription' button in the grey box containing your subscription information.

What are credits?

Credits can be earned by reading 40 sections of any title within the payment cycle (a month starting from the day of subscription payment). You also earn a credit every month if you subscribe to our annual or 18-month plans. Credits can be used to buy DRM-free books, the same way that you would pay for a book. Your credits can be found on the subscription homepage, subscription.packtpub.com, by clicking on the 'My Library' dropdown and selecting 'Credits'.

What happens if an Early Access course is cancelled?

Projects are rarely cancelled, but sometimes it's unavoidable. If an Early Access course is cancelled or excessively delayed, you can exchange your purchase for another course. For further details, please contact us here.

Where can I send feedback about an Early Access title?

If you have any feedback about the product you're reading, or about Early Access in general, please fill out a contact form here and we'll make sure the feedback gets to the right team.

Can I download the code files for Early Access titles?

We try to ensure that all books in Early Access have code available to use, download, and fork on GitHub. This helps us be more agile in the development of the book, and helps keep the often-changing code base of new versions and new technologies as up to date as possible. Unfortunately, however, there will be rare cases when it is not possible for us to have downloadable code samples available until publication.

When we publish the book, the code files will also be available to download from the Packt website.

How accurate is the publication date?

The publication date is as accurate as we can be at any point in the project. Unfortunately, delays can happen. Often those delays are out of our control, such as changes to the technology code base or delays in the tech release. We do our best to give you an accurate estimate of the publication date at any given time, and as more chapters are delivered, the delivery date becomes more accurate.

How will I know when new chapters are ready?

We'll let you know every time there has been an update to a course that you've bought in Early Access. You'll get an email to let you know there has been a new chapter, or a change to a previous chapter. The new chapters are automatically added to your account, so you can also check back there any time you're ready, and download or read them online.

I am a Packt subscriber; do I get Early Access?

Yes, all Early Access content is fully available through your subscription. You will need a paid subscription or an active trial in order to access all titles.

How is Early Access delivered?

Early Access is currently only available as a PDF or through our online reader. As we make changes or add new chapters, the files in your Packt account will be updated so you can download them again or view them online immediately.

How do I buy Early Access content?

Early Access is a way of getting our content to you quicker, but the method of buying an Early Access course is still the same. Just find the course you want to buy, go through the check-out steps, and you'll get a confirmation email from us with information and a link to the relevant Early Access courses.

What is Early Access?

Keeping up to date with the latest technology is difficult; new versions, new frameworks, and new techniques appear all the time. This feature gives you a head start on our content as it's being created. With Early Access, you'll receive each chapter as it's written and get regular updates throughout the product's development, as well as the final course as soon as it's ready.

We created Early Access as a means of giving you the information you need as soon as it's available. As we go through the process of developing a course, 99% of it can be ready, but we can't publish until that last 1% falls into place. Early Access helps to unlock the potential of our content early, so you can start your learning when you need it most. You not only get access to every chapter as it's delivered, edited, and updated, but you'll also get the finalized, DRM-free product to download in any format you want when it's published. As a member of Packt, you'll also be eligible for our exclusive offers, including a free course every day and discounts on new and popular titles.