Mastering Service Mesh

You're reading from Mastering Service Mesh: Enhance, secure, and observe cloud-native applications with Istio, Linkerd, and Consul

Product type: Paperback
Published in: Mar 2020
Publisher: Packt
ISBN-13: 9781789615791
Length: 626 pages
Edition: 1st Edition
Authors (2):
Arrow left icon
Vikram Khatri Vikram Khatri
Author Profile Icon Vikram Khatri
Vikram Khatri
Anjali Khatri Anjali Khatri
Author Profile Icon Anjali Khatri
Anjali Khatri
Arrow right icon
View More author details
Table of Contents (31 chapters)

Preface
1. Section 1: Cloud-Native Application Management
2. Monolithic Versus Microservices (FREE CHAPTER)
3. Cloud-Native Applications
4. Section 2: Architecture
5. Service Mesh Architecture
6. Service Mesh Providers
7. Service Mesh Interface and SPIFFE
8. Section 3: Building a Kubernetes Environment
9. Building Your Own Kubernetes Environment
10. Section 4: Learning about Istio through Examples
11. Understanding the Istio Service Mesh
12. Installing a Demo Application
13. Installing Istio
14. Exploring Istio Traffic Management Capabilities
15. Exploring Istio Security Features
16. Enabling Istio Policy Controls
17. Exploring Istio Telemetry Features
18. Section 5: Learning about Linkerd through Examples
19. Understanding the Linkerd Service Mesh
20. Installing Linkerd
21. Exploring the Reliability Features of Linkerd
22. Exploring the Security Features of Linkerd
23. Exploring the Observability Features of Linkerd
24. Section 6: Learning about Consul through Examples
25. Understanding the Consul Service Mesh
26. Installing Consul
27. Exploring the Service Discovery Features of Consul
28. Exploring Traffic Management in Consul
29. Assessment
30. Other Books You May Enjoy

Early computing machines

IBM launched its first commercial computer (https://ibm.biz/Bd294n), the IBM 701, in 1953; it was the most powerful high-speed electronic calculator of its time. Continued progress produced the mainframe, a revolution that began in the mid-1950s (https://ibm.biz/Bd294p).

In 1965, three years before co-founding Intel with Robert Noyce, Gordon Moore formulated what became known as Moore's Law (https://intel.ly/2IY5qLU): the number of transistors incorporated in a chip doubles approximately every 24 months. This exponential growth has continued to this day, though the trend may not last much longer.
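Moore's observation can be expressed as a two-line arithmetic sketch (the function name and sample figures below are illustrative, not drawn from Moore's paper):

```python
def projected_transistors(base_count, base_year, year, doubling_months=24):
    """Moore's law as stated above: the transistor count doubles
    roughly every `doubling_months` months after `base_year`."""
    doublings = (year - base_year) * 12 / doubling_months
    return base_count * 2 ** doublings

# Ten years at a 24-month doubling period means five doublings: 2**5 = 32x.
print(projected_transistors(1_000, 1965, 1975))  # → 32000.0
```

Compounding is what makes the curve dramatic: the same rule applied over 40 years predicts a million-fold increase.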

IBM created its first official VM product, VM/370, in 1972 (http://www.vm.ibm.com/history); hardware virtualization followed on the Intel/AMD platform in 2005 and 2006. On early computing machines, monolithic applications were the only choice.

Early machines ran only one operating system. As machines grew in size, the need to run multiple operating systems by slicing a machine into smaller virtual machines drove the virtualization of hardware.

Hardware virtualization

Hardware virtualization led to a proliferation of virtual machines in data centers. Greg Kalinsky, EVP and CIO of Geico, mentioned the use of 70,000 virtual machines in his keynote address at the IBM Think 2019 conference. Managing virtual machines required a different set of tools: VMware was very successful in the Intel market, whereas on POWER, IBM's Hardware Management Console (HMC) was widely used to create Logical Partitions (LPARs) through PowerVM. Hardware virtualization carries its own overhead, but it has been very popular for running multiple operating systems on the same physical machine.

Monolithic applications often have different OS and language runtime requirements, and virtualization made it possible to run them on the same hardware in separate virtual machines. During this period of hardware virtualization, enterprise applications built on the Service-Oriented Architecture (SOA) and the Enterprise Service Bus (ESB) began to evolve, which led to large monolithic applications.

Software virtualization

The next wave of innovation was software virtualization through containerization technology. Though not new, software virtualization started to gain serious traction once tooling made it easy to adopt. Docker was the early pioneer in this space, making software virtualization accessible to general IT professionals.

Solomon Hykes started dotCloud in 2010 and renamed it Docker in 2013. Software virtualization became practical thanks to advances in the kernel that provide namespace, filesystem, and process isolation while sharing the same kernel, whether running on bare metal or inside a virtual machine.

Software virtualization using containers provides better resource utilization than running multiple virtual machines, with typical gains of 30% to 40% in effective resource utilization. A virtual machine usually takes seconds to minutes to initialize, whereas a container shares the host's kernel, so its startup time is much shorter.

In fact, Google had been using containerization at very large scale for close to 10 years through its internal project, Borg. When Google published a research paper at the EuroSys 2015 conference (https://goo.gl/Ez99hu) about its approach to managing data centers with containerization technology, it piqued the interest of many technologists. At the very same time, Docker exploded in popularity during 2014 and 2015, making software virtualization simple enough to use.

One of the main benefits of software virtualization (also known as containerization) is eliminating dependency problems for a particular piece of software. For example, glibc is the main building-block library on Linux, and hundreds of libraries depend on a particular version of it. We can build a Docker container that bundles one version of glibc and run it on a machine that has a later version installed. Maintaining two software stacks built against different versions of such a deep dependency is normally very complex, but containers make it simple. Docker is credited with creating a simple user interface that made software packaging easy and accessible to developers.
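As a sketch of this idea, a container image can pin the base distribution (and therefore the glibc version) the application links against, independent of the host's glibc. The image tag, package, and binary below are illustrative placeholders:

```dockerfile
# Pin an older base image so the application links against that
# image's glibc, regardless of the glibc installed on the host.
FROM ubuntu:18.04

# Install runtime dependencies inside the image, so they resolve
# against the container's libraries, not the host's.
RUN apt-get update && apt-get install -y --no-install-recommends \
        libcurl4 \
    && rm -rf /var/lib/apt/lists/*

# Hypothetical application binary built against this glibc version
COPY ./myapp /usr/local/bin/myapp
CMD ["/usr/local/bin/myapp"]
```

Everything the application loads resolves inside the image, so the host only needs a compatible kernel, not a matching library stack.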

Software virtualization made it possible to run different monolithic applications on the same hardware (bare metal) or within the same virtual machine. It also led to smaller services (each a complete business function) being packaged as independent software units. This is when the era of microservices began.

Container orchestration

A few containers are easy to deploy and manage by hand. As the number of containers grows, a container orchestration platform makes deployment and management simpler through declarative prescriptions. As containerization proliferated in 2015, orchestration platforms evolved alongside it. Docker shipped its own open source container orchestration platform, Docker Swarm, a clustering and scheduling tool for Docker containers.
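A Swarm stack file illustrates the declarative style: you state the desired image and replica count, and the scheduler converges the cluster to that state. The service name, image, and ports here are placeholders:

```yaml
# docker-stack.yml -- a minimal sketch of a declarative Swarm service
version: "3.8"
services:
  web:                     # hypothetical service name
    image: nginx:alpine    # any container image works here
    deploy:
      replicas: 3          # Swarm keeps three tasks running at all times
    ports:
      - "8080:80"
```

Deploying with `docker stack deploy -c docker-stack.yml demo` creates the service; Swarm reschedules tasks as needed to keep three replicas running.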

Apache Mesos, though not directly comparable to Docker Swarm, was built on principles similar to the Linux kernel's: it is an abstraction layer between applications and the machines they run on. Designed for distributed computing, it acts as a cluster manager with an API for resource management and scheduling.

Kubernetes is the open source evolution of Google's Borg project. Its first version was released in 2015, and it became the first incubated project of the Cloud Native Computing Foundation (https://cncf.io).

Major companies such as Google, Red Hat, Huawei, ZTE, VMware, Cisco, Docker, AWS, IBM, and Microsoft contribute to the Kubernetes open source platform, and it has become a modern cluster manager and container orchestration platform. It is no surprise that Kubernetes has become the de facto platform, used by all major cloud providers, with 125 companies working on it and more than 2,800 contributors (https://www.stackalytics.com/cncf?module=kubernetes).

As container orchestration simplified cluster management, running microservices in a distributed environment became easy, making microservices-based applications loosely coupled systems with horizontal scale-out possibilities.
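A minimal Kubernetes Deployment sketches this loose coupling and horizontal scale-out: the replica count is declared once, and the cluster manager maintains it. The service name and image below are hypothetical:

```yaml
# Declare the desired replica count; the cluster converges to it.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cart                # hypothetical microservice name
spec:
  replicas: 4               # horizontal scale-out: change this and re-apply
  selector:
    matchLabels:
      app: cart
  template:
    metadata:
      labels:
        app: cart
    spec:
      containers:
        - name: cart
          image: example.com/cart:1.0   # placeholder image
          ports:
            - containerPort: 8080
```

Scaling out becomes a one-field edit (or `kubectl scale deployment cart --replicas=8`) rather than provisioning new machines.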

Horizontal scale-out distributed computing is not new; IBM's shared-nothing architecture for the Db2 database (a monolithic application) has been in use since 1998. What is new is loosely coupled microservices that can run and scale out easily under a modern cluster manager.

Monolithic applications using a three-tier architecture, such as Model-View-Controller (MVC), or SOA were common architectural patterns on bare-metal or virtualized machines. These patterns suited static data center environments, where machines were identified by IP address and changes were managed through DNS. This began to change with distributed applications that can run on any machine, meaning the IP address can change after a failure. The industry slowly shifted from a static data center approach to a dynamic one, where a workload is identified by the name of its microservice rather than the IP address of the machine or pod it runs on.
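In Kubernetes, for example, this name-based identification is expressed as a Service: clients address a stable DNS name while the pod IPs behind it change freely. The names below are illustrative:

```yaml
# Clients reach this workload at the stable DNS name
# `cart.default.svc.cluster.local`, regardless of which
# pod IPs currently back the service.
apiVersion: v1
kind: Service
metadata:
  name: cart            # hypothetical service name
spec:
  selector:
    app: cart           # matches pods by label, not by IP address
  ports:
    - port: 80          # port exposed to clients under the service name
      targetPort: 8080  # port the container actually listens on
```

Failed or rescheduled pods re-register behind the same name, so callers never track individual machine addresses.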

This fundamental shift from static to dynamic infrastructure underpins the evolution from a monolithic to a microservices architecture. A monolithic application is tightly coupled: a single code base released as one unit for the entire application stack. Changing a single component without affecting the others is very difficult, but the model is simple. Microservices applications are loosely coupled: multiple code bases released independently of one another. Changing a single component is easy, but the overall system loses the simplicity of the monolith.

We will cover a brief history of monolithic and microservices applications in the next section in order to develop a context. This will help us transition to the specific goals of this book.
