Cloud-Native Observability with OpenTelemetry: Learn to gain visibility into systems by combining tracing, metrics, and logging with OpenTelemetry

Chapter 1: The History and Concepts of Observability

The term observability has only been around in the software industry for a short time, but the concepts and goals it represents have been around for much longer. Indeed, ever since the earliest days of computing, programmers have been trying to answer the question: is the system doing what I think it should be?

For some, observability consists of buying a one-size-fits-all solution that includes logs, metrics, and traces, then configuring some off-the-shelf integrations and calling it a day. These tools can be used to increase visibility into a piece of software's behavior by providing mechanisms to produce and collect telemetry. The following are some examples of telemetry that can be added to a system:

  • Keeping a count of the number of requests received
  • Adding a log entry when an event occurs
  • Recording a value for current memory consumption on a machine
  • Tracing a request from a client all the way to a backend service

However, producing high-quality telemetry is only one part of the observability challenge. The other part is ensuring that events occurring across the different types of telemetry can be correlated in meaningful ways during analysis. The goal of observability is to answer questions that you may have about the system:

  • If a problem occurred in production, what evidence would you have to be able to identify it?
  • Why is this service suddenly overwhelmed when it was fine just a minute ago?
  • If a specific condition from a client triggers an anomaly in some underlying service, would you know it without customers or support calling you?

These are some of the questions that the domain of observability can help answer. Observability is about empowering the people who build and operate distributed applications to understand their code's behavior while running in production. In this chapter, we will explore the following:

  • Understanding cloud-native applications
  • Looking at the shift to DevOps
  • Reviewing the history of observability
  • Understanding the history of OpenTelemetry
  • Understanding the concepts of OpenTelemetry

Before we begin looking at the history of observability, it's important to understand the changes in the software industry that have led to the need for observability in the first place. Let's start with the shift to the cloud.

Understanding cloud-native applications

The way applications are built and deployed has changed drastically in the past few years with the increased adoption of the internet. An unprecedented increase in demand for software-powered services (for example, streaming media, social networks, and online shopping) has raised expectations for those services to be readily available. This increase in demand has also fueled the need for developers to be able to scale their applications quickly. Cloud providers, such as Microsoft, Google, and Amazon, offer infrastructure to run applications at the click of a button, at a fraction of the cost and risk of deploying servers in traditional data centers. This enables developers to experiment more freely and reach a wider audience. Alongside this infrastructure, these cloud providers also offer managed services for databases, networking infrastructure, message queues, and many other services that, in the past, organizations would have run internally.

One of the advantages these cloud providers offer is freeing organizations to focus on the code that matters to their businesses, rather than on costly and time-consuming hardware deployments or on operating services they lack expertise in. To take full advantage of the cloud, developers started looking at how applications originally developed as monoliths could be re-architected for these platforms. The following are challenges that can be encountered when deploying monoliths to a cloud provider:

  • Scaling a monolith is traditionally done by increasing the number of resources available to the monolith, also known as vertical scaling. Vertically scaling applications can only go as far as the largest available resource offered by a cloud provider.
  • Improving the reliability of a monolith means deploying multiple instances to handle multiple failures, thus avoiding downtime. This is also known as horizontal scaling. Depending on the size of the monolith, this could quickly ramp up costs. This can also be wasteful if not all components of the monolith need to be replicated.

The specific challenges of building applications on cloud platforms have led developers to increasingly adopt a service-oriented architecture, or microservice architecture, that organizes applications as loosely coupled services, each with limited scope. The following figure shows a monolith architecture on the left, where all the services in the application are tightly coupled and operate within the same boundary. In contrast, the microservices architecture on the right shows us that the services are loosely coupled, and each service operates independently:

Figure 1.1 – Monolith versus microservices architecture

Applications built using a microservices architecture give developers the ability to scale only the components needed to handle additional load, making horizontal scaling a much more attractive option. As is often the case, a new architecture comes with its own set of trade-offs and challenges. The following are some of the new challenges that cloud-native architecture presents and that did not exist in traditional monolithic systems:

  • Latency is introduced where none existed before, causing applications to fail in unexpected ways.
  • Dependencies can and will fail, so applications must be built defensively to minimize cascading failures.
  • Managing configuration and secrets across services is difficult.
  • Service orchestration becomes complex.

With this change in architecture, the scope of each application is reduced significantly, making it easier to understand the needs of scaling each component. However, the increased number of independent services and added complexity also creates challenges for traditional operations (ops) teams, meaning organizations would also need to adapt.

Looking at the shift to DevOps

The shift to microservices has, in turn, led to a shift in how development teams are organized. Instead of a single large team managing a monolithic application, many teams each manage their own microservices. In traditional software development, a software development team would normally hand off the software once it was deemed complete. The handoff would be to an operations team, who would deploy the software and operate it in a production environment. As the number of services and teams grew, organizations found themselves growing their operations teams to unmanageable sizes, and quite often, those teams were still unable to keep up with the demands of the changing software.

This, in turn, led an increasing number of development teams to transition from the traditional split between development and operations toward hybrid DevOps teams. Using the DevOps approach, development teams write, test, build, package, deploy, and operate the code they develop. This ownership of the code through all stages of its life cycle empowers many developers and organizations to accelerate their feature development. This approach, of course, comes with its own challenges:

  • Increased dependencies across development teams mean it's possible that no one has a full picture of the entire application.
  • Keeping track of changes across an organization can be difficult. This makes the answer to the "what caused this outage?" question more challenging to find.

Individual teams must also become familiar with many more tools, which can lead to too much focus on the tools themselves rather than on their purpose. The quick adoption of DevOps created a new problem: without the right amount of visibility across the systems an organization manages, teams struggle to identify the root causes of the issues they encounter. This can lead to longer and more frequent outages, severely impacting the health and happiness of people across organizations. Let's look at how the methods of observing systems have evolved to adapt to this changing landscape.

Reviewing the history of observability

In many ways, understanding what a computer is doing is both the fun and the challenge of working with software. The ability to understand how systems are behaving has gone through quite a few iterations since the early 2000s. Entire markets have been created to address this need, such as systems monitoring, log management, and application performance monitoring. As is often the case, when new challenges come knocking, the doors of opportunity open to those willing to tackle them. Over the same period, countless vendors and open source projects have sprung up to help the people who build and operate services manage their systems. The term observability, however, is a recent addition to the software industry and comes from control theory.

Wikipedia (https://en.wikipedia.org/wiki/Observability) defines observability as:

"In control theory, observability is a measure of how well internal states of a system can be inferred from knowledge of its external outputs."

Observability is an evolution of its predecessors, built on lessons learned through years of experience and trial and error. To better understand where observability is today, it's important to understand where some of the methods used today by cloud-native application developers come from, and how they have changed over time. We'll start by looking at the following:

  • Centralized logging
  • Metrics and dashboards
  • Tracing and analysis

Centralized logging

One of the first pieces of software a programmer writes when learning a new language is a form of observability: "Hello, World!". Printing some text to the terminal is usually one of the quickest ways to provide users with feedback that things are working, and that's why "Hello, World" has been a tradition in computing since the late 1960s.

One of my favorite methods for debugging is still to add print statements across the code when things aren't working. I've even used this method to troubleshoot an application distributed across multiple servers before, although I can't say it was my proudest moment, as it caused one of our services to go down temporarily because of a typo in an unfamiliar editor. Print statements are great for simple debugging, but unfortunately, this only scales so far.

Once an application is large enough or distributed across enough systems, searching through the logs on individual machines is not practical. Applications can also run on ephemeral machines that may no longer be present when we need those logs. Combined, all of this created a need to make the logs available in a central location for persistent storage and searchability, and thus centralized logging was born.

Many vendors provide a destination for logs, along with features for searching and alerting on them. Many open source projects have also tackled the challenges of standardizing log formats, providing transport mechanisms, and storing the logs.

Centralized logging additionally provides the opportunity to produce metrics about the data across the entire system.

Using metrics and dashboards

Metrics are possibly the most well-known of the tools available in the observability space. Think of the temperature on a thermometer, the speed on the speedometer of a car, or the time on a watch. We humans love measuring and quantifying things. From the early days of computing, keeping track of how resources were utilized was critical to ensuring that multi-user environments provided a good experience for all users of the system.

Nowadays, measuring application and system performance via the collection of metrics is common practice in software development. This data is converted into graphs to generate meaningful visualizations for those in charge of monitoring the health of a system.

These metrics can also be used to configure alerting when certain thresholds are reached, such as when an error rate exceeds an acceptable percentage. In certain environments, metrics are used to automate workflows in reaction to changes in the system, such as increasing the number of application instances or rolling back a bad deployment. As with logging, many vendors and open source projects have, over time, provided their own solutions for metrics, dashboards, monitoring, and alerting.

Let's now look at tracing and analysis.

Applying tracing and analysis

Tracing an application means having the ability to run through the application code and ensure it's doing what is expected. This can often, but not always, be achieved in development using a debugger such as GDB (https://www.gnu.org/software/gdb/) or, for Python, PDB (https://docs.python.org/3/library/pdb.html). It becomes impossible when debugging an application that is spread across multiple services on different hosts across a network. Researchers at Google published a white paper on a large-scale distributed tracing system built internally: Dapper (https://research.google/pubs/pub36356/). In this paper, they describe the challenges of distributed systems, as well as the approach that was taken to address the problem. This research is the basis of distributed tracing as it exists today. After the paper was published, several open source projects sprang up to provide users with the tools to trace and visualize applications using distributed tracing.

As you can imagine, with so many tools, it can be daunting to even know where to begin on the journey to making a system observable. Users and organizations must spend time and effort upfront to even get started. This can be challenging when other deadlines are looming. Not only that, but the time investment needed to instrument an application can be significant depending on the complexity of the application, and the return on that investment sometimes isn't made clear until much later. The time and money invested, as well as the expertise required, can make it difficult to change from one tool to another if the initial implementation no longer fits your needs as the system evolves.

Such a wide array of methods, tools, libraries, and standards has also caused fragmentation in the industry and the open source community. This has led to libraries supporting one format or another. This leaves it up to the user to fix any gaps within the environments themselves. This also means there is effort required to maintain feature parity across different projects. All of this could be addressed by bringing the people working in these communities together.

With a better understanding of different tools at the disposal of application developers, their evolution, and their role, we can start to better appreciate the scope of what OpenTelemetry is trying to solve.

Understanding the history of OpenTelemetry

In early 2019, the OpenTelemetry project was announced as a merger of two existing open source projects: OpenTracing and OpenCensus. Although the initial goal of this endeavor was to bring these two projects together, its ambition to provide an observability framework for cloud-native software goes much further than that. Since OpenTelemetry combines concepts from both OpenTracing and OpenCensus, let's first look at each of these projects individually. The merger was announced in the following tweet:

https://twitter.com/opencensusio/status/1111388599994318848.

Figure 1.2 – Screenshot of the aforementioned tweet

OpenTracing

The OpenTracing (https://opentracing.io) project, started in 2016, focused on increasing the adoption of distributed tracing as a means for users to better understand their systems. One of the challenges the project identified was that adoption was difficult because of the cost of instrumentation and the lack of consistent, quality instrumentation in third-party libraries. OpenTracing addressed this problem by providing a specification for an application programming interface (API). This API could be leveraged independently of the implementation that generated distributed traces, allowing application developers and library authors to embed calls to it in their code. By default, the API would act as a no-op, meaning those calls wouldn't do anything unless an implementation was configured.

Let's see what this looks like in code. A call to the API to trace a specific piece of code resembles the following example. You'll notice the code accesses a global variable to obtain a Tracer via the global_tracer method. A Tracer in OpenTracing, and in OpenTelemetry (as we'll discuss later in Chapter 2, OpenTelemetry Signals – Tracing, Metrics, and Logging, and Chapter 4, Distributed Tracing – Tracing Code Execution), is a mechanism used to generate trace data. Using a globally configured tracer means that there's no configuration required in this instrumentation code – it can be done completely separately. The next line starts a span, the primary building block of a trace. We'll discuss this further in Chapter 2, OpenTelemetry Signals – Tracing, Metrics, and Logging, but it is shown here to give you an idea of how a Tracer is used in practice:

import opentracing

tracer = opentracing.global_tracer()
with tracer.start_active_span('doWork'):
    # do work
    pass

The default no-op implementation meant that code could be instrumented without the authors having to make decisions about how the data would be generated or collected at instrumentation time. It also meant that users of instrumented libraries who didn't want to use distributed tracing in their applications could still use the library without incurring a performance penalty, simply by not configuring it. On the other hand, users who wanted distributed tracing could choose how this information would be generated: they would choose a Tracer implementation and configure it. To comply with the specification, a Tracer implementation only needed to adhere to the defined API (https://github.com/opentracing/opentracing-python/blob/master/opentracing/tracer.py), which includes the following methods:

  • Start a new span.
  • Inject an existing span's context into a carrier.
  • Extract a span's context from a carrier.

Along with the specification for this API, OpenTracing also provides semantic conventions. These conventions offer guidelines to improve the quality of the telemetry emitted by instrumentation. We'll discuss semantic conventions further when exploring the concepts of OpenTelemetry.

OpenCensus

OpenCensus (https://opencensus.io) started as an internal project at Google, called Census, but was open sourced and gained popularity with the wider community in 2017. The project provided libraries to make the generation and collection of both traces and metrics simpler for application developers. It also provided the OpenCensus Collector, an independently run agent that acted as a destination for telemetry from applications and could be configured to process the data before sending it along to backends for storage and analysis. Telemetry sent to the collector was transmitted using a wire format specified by OpenCensus. The collector was an especially powerful component of OpenCensus. As shown in Figure 1.3, many applications could be configured to send data to a single destination, and that destination could then control the flow of the data without any further modification of the application code:

Figure 1.3 – OpenCensus Collector data flow

The concepts of the API supporting distributed tracing in OpenCensus were similar to those of OpenTracing's API. In contrast to OpenTracing, however, the project provided a tightly coupled API and Software Development Kit (SDK), meaning users could use OpenCensus without having to install and configure a separate implementation. Although this simplified the user experience for application developers, it also meant that, in certain languages, authors of third-party libraries wanting to instrument their code would have to depend on the SDK and all its dependencies. As mentioned before, OpenCensus also provided an API to generate application metrics. It introduced several concepts that would become influential in OpenTelemetry:

  • Measurement: This is the recorded output of a measure, or a generated metric point.
  • Measure: This is a defined metric to be recorded.
  • Aggregation: This describes how the measurements are aggregated.
  • Views: These combine measures and aggregations to determine how the data should be exported.

To collect metrics from their applications, developers defined a measure instrument to record measurements, and then configured a view with an aggregation to emit the data to a backend. The supported aggregations were count, distribution, sum, and last value.
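
To make these terms concrete, the following is a minimal sketch of recording a metric with the opencensus-python library. It follows the shape of the project's published examples, but the measure name, unit, and recorded value are invented for illustration, and exact module paths may vary between OpenCensus releases:

from opencensus.stats import aggregation as aggregation_module
from opencensus.stats import measure as measure_module
from opencensus.stats import stats as stats_module
from opencensus.stats import view as view_module
from opencensus.tags import tag_map as tag_map_module

# Measure: the metric to be recorded (hypothetical name, description, and unit).
latency_ms = measure_module.MeasureFloat("task_latency", "Task latency", "ms")

# View: combines the measure with an aggregation to decide what gets exported.
latency_view = view_module.View(
    "task_latency_last_value",
    "Last recorded task latency",
    [],  # no tag columns
    latency_ms,
    aggregation_module.LastValueAggregation(),
)
stats_module.stats.view_manager.register_view(latency_view)

# Measurement: a single recorded value for the measure.
mmap = stats_module.stats.stats_recorder.new_measurement_map()
mmap.measure_float_put(latency_ms, 12.3)
mmap.record(tag_map_module.TagMap())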

As the two projects gained popularity, the pain for users only grew. The existence of both projects made it unclear which one users should rely on, and using the two together was not easy. One of the core components of distributed tracing is the ability to propagate context between the different applications in a distributed system, and this didn't work out of the box between the two projects. If a user wanted to collect traces and metrics, they would have to use OpenCensus, but if they wanted to use libraries that only supported OpenTracing, they would have to use both – OpenTracing for distributed traces, and OpenCensus for metrics.

It was a mess, and when there are too many standards, the way to solve all the problems is to invent a new standard! The following XKCD comic captures the sentiment very aptly:

Figure 1.4 – How standards proliferate comic (credit: XKCD, https://xkcd.com/927/)

Sometimes a new standard is a correct solution, especially when that solution:

  • Is built using the lessons learned from its predecessors
  • Brings together the communities behind other standards
  • Supersedes two existing competing standards

The OpenCensus and OpenTracing organizers worked together to ensure the new standard would support a migration path for existing users of both communities, allowing the projects to eventually become deprecated. This would also make the lives of users easier by offering a single standard to use when instrumenting applications. There was no longer any need to guess what project to use!

Observability for cloud-native software

OpenTelemetry aims to standardize how applications are instrumented and how telemetry data is generated, collected, and transmitted. It also aims to give users the tools necessary to correlate that telemetry across systems, languages, and applications, allowing them to better understand their software. One of the initial goals of the project was to ensure that all the functionality that was key to OpenCensus and OpenTracing users would become part of the new project. The focus on pre-existing users also led the project organizers to establish a migration path to ease the transition from OpenTracing and OpenCensus to OpenTelemetry. To accomplish its lofty goals, OpenTelemetry provides the following:

  • An open specification
  • Language-specific APIs and SDKs
  • Instrumentation libraries
  • Semantic conventions
  • An agent to collect telemetry
  • A protocol to organize, transmit, and receive the data

The project kicked off with the initial commit on May 1, 2019, and brought together the leaders from OpenCensus and OpenTracing. The project is governed by a governance committee that holds elections annually, with elected representatives serving on the committee for two-year terms. The project also has a technical committee that oversees the specification, drives project-wide discussion, and reviews language-specific implementations. In addition, there are various special interest groups (SIGs) in the project, focused on features or technologies supported by the project. Each language implementation has its own SIG with independent maintainers and approvers managing separate repositories with tools and processes tailored to the language. The initial work for the project was heavily focused on the open specification. This provides guidance for the language-specific implementations. Since its first commit, the project has received contributions from over 200 organizations, including observability leaders and cloud providers, as well as end users of OpenTelemetry. At the time of writing, OpenTelemetry has implementations in 11 languages and 18 special interest or working groups.

Since the initial merger of OpenCensus and OpenTracing, communities from additional open source projects have participated in OpenTelemetry efforts, including members of the Prometheus and OpenMetrics projects. Now that we have a better understanding of how OpenTelemetry was brought to life, let's take a deeper look at the concepts of the project.

Understanding the concepts of OpenTelemetry

OpenTelemetry is a large ecosystem. Before diving into the code, having a general understanding of the concepts and terminology used in the project will help us. The project is composed of the following:

  • Signals
  • Pipelines
  • Resources
  • Context propagation

Let's look at each of these aspects.

Signals

With its goal of providing an open specification for encompassing such a wide variety of telemetry data, the OpenTelemetry project needed to agree on a term to organize the categories of concern. Eventually, it was decided to call these signals. A signal can be thought of as a standalone component that can be configured, providing value on its own. The community decided to align its work into deliverables around these signals to deliver value to its users as soon as possible. The alignment of the work and separation of concerns in terms of signals has allowed the community to focus its efforts. The tracing and baggage signals were released in early 2021, soon followed by the metrics signal. Each signal in OpenTelemetry comes with the following:

  • A set of specification documents providing guidance to implementors of the signal
  • A data model expressing how the signal is to be represented in implementations
  • An API that can be used by application and library developers to instrument their code
  • The SDK needed to allow users to produce telemetry using the APIs
  • Semantic conventions that can be used to get consistent, high-quality data
  • Instrumentation libraries to simplify usage and adoption

The initial signals defined by OpenTelemetry were tracing, metrics, logging, and baggage. Signals are a core concept of OpenTelemetry and, as such, we will become quite familiar with them.

Specification

One of the most important aspects of OpenTelemetry is ensuring that users can expect a similar experience regardless of the language they're using. This is accomplished by defining the standards for what is expected of OpenTelemetry-compliant implementations in an open specification. The process used for writing the specification is flexible, but large new features or sections of functionality are often proposed by writing an OpenTelemetry Enhancement Proposal (OTEP). The OTEP is submitted for review and is usually provided along with prototype code in multiple languages, to ensure the proposal isn't too language-specific. Once an OTEP is approved and merged, the writing of the specification begins. The entire specification lives in a repository on GitHub (https://github.com/open-telemetry/opentelemetry-specification) and is open for anyone to contribute or review.

Data model

The data model defines the representation of the components that form a specific signal. It provides the specifics of what fields each component must have and describes how all the components interact with one another. This piece of the signal definition is particularly important to give clarity as to what use cases the APIs and SDKs will support. The data model also explains to developers implementing the standard how the data should behave.

API

Instrumenting applications can be quite expensive, depending on the size of your code base. Providing users with an API lets them instrument their code in a vendor-agnostic way. The API is decoupled from the code that generates the telemetry, giving users the flexibility to swap out the underlying implementation as they see fit. This interface can also be relied upon by library and framework authors, and configured to emit telemetry data only by the end users who wish to do so. A user who instruments their code using the API but does not configure the SDK will not see any telemetry produced; this is by design.
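
As a brief illustration, the following is a minimal sketch using the OpenTelemetry Python API; the instrumentation name is arbitrary. Because only the API is used and no SDK is configured, the tracer returned is a no-op and the span is silently discarded:

from opentelemetry import trace

# With only the API in play, get_tracer returns a no-op tracer and the
# span below produces no telemetry until an SDK is configured.
tracer = trace.get_tracer("example.instrumentation")
with tracer.start_as_current_span("do-work"):
    pass  # instrumented work would happen here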

SDK

The SDK does the bulk of the heavy lifting in OpenTelemetry. It implements the underlying system that generates, aggregates, and transmits telemetry data. The SDK provides the controls to configure how telemetry should be collected, where it should be transmitted, and how. Configuration of the SDK is supported via in-code configuration, as well as via environment variables defined in the specification. As it is decoupled from the API, using the SDK provided by OpenTelemetry is an option for users, but it is not required. Users and vendors are free to implement their own SDKs if doing so will better fit their needs.

Semantic conventions

Producing telemetry can be a daunting task: you can name things however you wish, but inconsistent naming makes the data difficult to analyze. For example, if server A labels the duration of a request http.server.duration and server B labels it http.server.request_length, calculating the total duration of a request across both servers requires additional knowledge of this difference, and likely additional operations. One way in which OpenTelemetry tries to make this easier is by offering semantic conventions – definitions for different types of applications and workloads that improve the consistency of telemetry. Some of the types of applications or protocols covered by semantic conventions include the following:

  • HTTP
  • Database
  • Message queues
  • Function-as-a-Service (FaaS)
  • Remote procedure calls (RPC)
  • Process metrics

The full list of semantic conventions is quite extensive and can be found in the specification repository. The following table shows a sample of the semantic conventions for tracing database queries:

Table 1.1 – Database semantic conventions as defined in the OpenTelemetry specification (https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/trace/semantic_conventions/database.md#connection-level-attributes)

The consistency of the telemetry reported ultimately determines how useful that data is to the people analyzing it. Semantic conventions provide guidelines for both what telemetry should be reported and how to identify it. They are a powerful tool for developers learning their way around observability.
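
As an illustration, the following sketch sets a few of the database attributes from Table 1.1 on a span via the OpenTelemetry Python API. The span name and attribute values are invented, and attribute keys evolve with the specification, so treat this as an example rather than a reference:

from opentelemetry import trace

tracer = trace.get_tracer("db.example")
with tracer.start_as_current_span("SELECT shop.orders") as span:
    # Connection- and call-level attributes from the database semantic conventions.
    span.set_attribute("db.system", "postgresql")
    span.set_attribute("db.name", "shop")
    span.set_attribute("db.statement", "SELECT * FROM orders WHERE id = ?")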

Instrumentation libraries

To ensure users can get up and running quickly, instrumentation libraries are made available by OpenTelemetry SIGs in various languages. These libraries provide instrumentation for popular open source projects and frameworks. For example, in Python, the instrumentation libraries include Flask, Requests, Django, and others. The mechanisms used to implement these libraries are language-specific and may be used in combination with auto-instrumentation to provide users with telemetry with close to zero code changes required. The instrumentation libraries are supported by the OpenTelemetry organization and adhere to semantic conventions.
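
For example, the Flask instrumentation library can be applied with a couple of lines. This is a minimal sketch using the opentelemetry-instrumentation-flask package; the application and route are chosen purely for illustration:

from flask import Flask
from opentelemetry.instrumentation.flask import FlaskInstrumentor

app = Flask(__name__)
# Wrap the application so each incoming request is traced using the
# HTTP semantic conventions, without touching the handler code.
FlaskInstrumentor().instrument_app(app)

@app.route("/")
def index():
    return "hello"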

Signals represent the core of the telemetry data that is generated by instrumenting cloud-native applications. They can be used independently, but the real power of OpenTelemetry is to allow its users to correlate data across signals to get a better understanding of their systems. Now that we have a general understanding of what they are, let's look at the other concepts of OpenTelemetry.

Pipelines

To be useful, the telemetry data captured by each signal must eventually be exported to a data store, where storage and analysis can occur. To accomplish this, each signal implementation offers a series of mechanisms to generate, process, and transmit telemetry. We can think of this as a pipeline, as represented in the following figure:

Figure 1.5 – Telemetry pipeline

The components in the telemetry pipeline are typically initialized early in the application code to ensure no meaningful telemetry is missed.

Important note

In many languages, the pipeline is configurable via environment variables. This will be explored further in Chapter 7, Instrumentation Libraries.

Once configured, the application generally only needs to interact with the generator to record telemetry, and the pipeline will take care of collecting and sending the data. Let's look at each component of the pipeline now.

Providers

The starting point of the telemetry pipeline is the provider. A provider is a configurable factory that is used to give application code access to an entity used to generate telemetry data. Although multiple providers may be configured within an application, a default global provider may also be made available via the SDK. Providers should be configured early in the application code, prior to any telemetry data being generated.

Telemetry generators

To generate telemetry at different points in the code, the telemetry generator instantiated by a provider is made available in the SDK. This generator is what most users will interact with through the instrumentation of their application and the use of the API. Generators are named differently depending on the signal: the tracing signal calls this a tracer, the metrics signal a meter. Their purpose is generally the same – to generate telemetry data. When instantiating a generator, applications and instrumenting libraries must pass a name to the provider. Optionally, users can specify a version identifier to the provider as well. This information will be used to provide additional information in the telemetry data generated.

Processors

Once the telemetry data has been generated, processors provide the ability to further modify its contents. Processors may determine the frequency at which data should be processed or how the data should be exported.

Exporters

The last step before telemetry leaves the context of an application is the exporter. The job of the exporter is to translate OpenTelemetry's internal data model into the format expected by the configured destination. The OpenTelemetry project supports multiple export formats and protocols:

  • OpenTelemetry protocol
  • Console
  • Jaeger
  • Zipkin
  • Prometheus
  • OpenCensus

The pipeline allows telemetry data to be produced and emitted. We'll configure pipelines many times over the following chapters, and we'll see how the flexibility provided by the pipeline accommodates many use cases.
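
Putting the pieces together, the following sketch wires up a minimal tracing pipeline with the OpenTelemetry Python SDK. The console exporter and the names used here are illustrative choices rather than a recommended production setup:

from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

# Provider: the configurable factory at the start of the pipeline.
provider = TracerProvider()
# Processor and exporter: batch finished spans and write them to the console.
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

# Generator: obtained from the provider by name and optional version.
tracer = trace.get_tracer("shopping-cart", "0.1.0")
with tracer.start_as_current_span("checkout"):
    pass  # application work happens here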

Resources

At their most basic, resources can be thought of as a set of attributes applied to different signals. Conceptually, a resource identifies the source of the telemetry data, whether a machine, a container, or a function. This information can be used at analysis time to correlate different events occurring in the same resource. Resource attributes are added to the telemetry data from signals at export time, before the data is emitted to a backend. Resources are typically configured at the start of an application, are associated with the providers, and tend not to change throughout the lifetime of the application. Some typical resource attributes include the following (see the sketch after this list):

  • A unique name for the service: service.name
  • The version identifier for a service: service.version
  • The name of the host where the service is running: host.name
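
A minimal sketch of attaching such attributes with the OpenTelemetry Python SDK follows; the attribute values are made up for illustration:

from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider

# The resource is associated with the provider, so every span it produces
# carries these attributes when exported.
resource = Resource.create(
    {
        "service.name": "shopping-cart",
        "service.version": "0.1.0",
        "host.name": "cart-host-01",
    }
)
provider = TracerProvider(resource=resource)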

Additionally, the specification defines resource detectors to further enrich the data. Although resources can be set manually, resource detectors provide convenient mechanisms to automatically populate environment-specific data. For example, the Google Cloud Platform (GCP) resource detector (https://www.npmjs.com/package/@opentelemetry/resource-detector-gcp) interacts with the Google API to fill in the following data:

Table 1.2 – GCP resource detector attributes

Resources and resource detectors adhere to semantic conventions. Resources are a key component in making telemetry rich, meaningful, and consistent across an application. Another important aspect of ensuring the data is meaningful is context propagation.

Context propagation

One area of observability that is particularly powerful and challenging is context propagation. A core concept of distributed tracing, context propagation provides the ability to pass valuable contextual information between services that are separated by a logical boundary. Context propagation is what allows distributed tracing to tie requests together across multiple systems. OpenTelemetry, as OpenTracing did before it, has made this a core component of the project. In addition to tracing, context propagation allows for user-defined values (known as baggage) to be propagated. Baggage can be used to annotate telemetry across signals.
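
As a small sketch of baggage in the OpenTelemetry Python API (the key and value are invented for illustration), a value set in one part of the code can be read back wherever the same context is active:

from opentelemetry import baggage, context

# set_baggage returns a new context containing the entry; attach makes it current.
ctx = baggage.set_baggage("customer.tier", "gold")
token = context.attach(ctx)
try:
    print(baggage.get_baggage("customer.tier"))  # -> "gold"
finally:
    context.detach(token)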

The OpenTelemetry specification defines a context API as part of context propagation. This API is independent of the signals that may use it. Some languages already have built-in context mechanisms, such as the contextvars module in Python 3.7+ and the context package in Go; the specification recommends that context API implementations leverage these existing mechanisms. OpenTelemetry also provides the interfaces and implementations of the mechanisms required to propagate context across boundaries. The following abbreviated code shows how two services, A and B, would share context:

from opentelemetry.propagate import extract, inject

class ServiceA:
    def client_request(self):
        headers = {}  # carrier for the outgoing request
        inject(headers)  # inject the current context into the carrier
        # make a request to ServiceB and pass in headers

class ServiceB:
    def handle_request(self, headers):
        # receive a request from ServiceA and extract the propagated context
        context = extract(headers)

In Figure 1.6, we can see a comparison between two requests from service A to service B. The top request is made without propagating the context, with the result that service B has neither the trace information nor the baggage that service A does. In the bottom request, this contextual data is injected when service A makes a request to service B, and extracted by service B from the incoming request, ensuring service B now has access to the propagated data:

Figure 1.6 – Request between service A and B with and without context propagation

The propagation of context we have demonstrated allows backends to tie the two sides of the request together, and it also allows service B to make use of the baggage set in service A. The challenge with context propagation is that when it isn't working, it's hard to know why: the context may not be propagated correctly because of configuration issues, or there may be a networking problem. This is a concept we'll revisit many times throughout the book.

Summary

In this chapter, we've looked at what observability is, and the challenges it can solve as regards the use of cloud-native applications. By exploring the different mechanisms available to generate telemetry and improve the observability of applications, we were also able to gain an understanding of how the observability landscape has evolved, as well as where some challenges remain.

Exploring the history behind the OpenTelemetry project gave us an understanding of its origin and goals. We then familiarized ourselves with the components forming the tracing, metrics, and logging signals, as well as pipelines, to give us the terminology and building blocks needed to start producing telemetry with OpenTelemetry. This learning will allow us to tackle the first challenge of observability – producing high-quality telemetry. Understanding resources and context propagation will help us correlate events across services and signals, allowing us to tackle the second challenge – connecting the data to better understand systems.

Let's now take a closer look at how this all works together in practice. In the next chapter, we will dive deeper into the concepts of distributed tracing, metrics, logs, and semantic conventions by launching a grocery store application instrumented with OpenTelemetry. We will then explore the telemetry generated by this distributed system.


Key benefits

  • Get to grips with OpenTelemetry, an open-source cloud-native software observability standard
  • Use vendor-neutral tools to instrument applications to produce better telemetry and improve observability
  • Understand how telemetry data can be correlated and interpreted to understand distributed systems

Description

Cloud-Native Observability with OpenTelemetry is a guide to helping you look for answers to questions about your applications. This book teaches you how to produce telemetry from your applications using an open standard to retain control of data. OpenTelemetry provides the tools necessary for you to gain visibility into the performance of your services. It allows you to instrument your application code through vendor-neutral APIs, libraries and tools. By reading Cloud-Native Observability with OpenTelemetry, you’ll learn about the concepts and signals of OpenTelemetry - traces, metrics, and logs. You’ll practice producing telemetry for these signals by configuring and instrumenting a distributed cloud-native application using the OpenTelemetry API. The book also guides you through deploying the collector, as well as telemetry backends necessary to help you understand what to do with the data once it's emitted. You’ll look at various examples of how to identify application performance issues through telemetry. By analyzing telemetry, you’ll also be able to better understand how an observable application can improve the software development life cycle. By the end of this book, you’ll be well-versed with OpenTelemetry, be able to instrument services using the OpenTelemetry API to produce distributed traces, metrics and logs, and more.

Who is this book for?

This book is for software engineers, library authors, and systems operators looking to better understand their infrastructure, services and applications by leveraging telemetry data like never before. Working knowledge of Python programming is assumed for the example applications that you’ll be building and instrumenting using the OpenTelemetry API and SDK. Some familiarity with Go programming, Linux, and Docker is preferable to help you set up additional components in various examples throughout the book.

What you will learn

  • Understand the core concepts of OpenTelemetry
  • Explore concepts in distributed tracing, metrics, and logging
  • Discover the APIs and SDKs necessary to instrument an application using OpenTelemetry
  • Explore what auto-instrumentation is and how it can help accelerate application instrumentation
  • Configure and deploy the OpenTelemetry Collector
  • Get to grips with how different open-source backends can be used to analyze telemetry data
  • Understand how to correlate telemetry in common scenarios to get to the root cause of a problem

Product Details

Publication date: May 04, 2022
Length: 386 pages
Edition: 1st
Language: English
ISBN-13: 9781801077705

Table of Contents

Section 1: The Basics
Chapter 1: The History and Concepts of Observability
Chapter 2: OpenTelemetry Signals – Traces, Metrics, and Logs
Chapter 3: Auto-Instrumentation
Section 2: Instrumenting an Application
Chapter 4: Distributed Tracing – Tracing Code Execution
Chapter 5: Metrics – Recording Measurements
Chapter 6: Logging – Capturing Events
Chapter 7: Instrumentation Libraries
Section 3: Using Telemetry Data
Chapter 8: OpenTelemetry Collector
Chapter 9: Deploying the Collector
Chapter 10: Configuring Backends
Chapter 11: Diagnosing Problems
Chapter 12: Sampling
Other Books You May Enjoy

Customer reviews

Top Reviews
Rating distribution
Full star icon Full star icon Full star icon Full star icon Half star icon 4.8
(8 Ratings)
5 star 75%
4 star 25%
3 star 0%
2 star 0%
1 star 0%
Filter icon Filter
Top Reviews

Filter reviews by




Sean Spencer Jun 13, 2022
Full star icon Full star icon Full star icon Full star icon Full star icon 5
Before reading this book, I had tried to get involved in the OpenTelemetry project, both as a user and as a contributor, but I never really had a picture or mental model for what all the manifold pieces were really doing or what they were there for.This book brings that mental model to the fore, moreso than any other documentation or resource I've found so far. Truly indispensable.
Amazon Verified review Amazon
Phillip Carter May 31, 2022
Full star icon Full star icon Full star icon Full star icon Full star icon 5
I really enjoyed the book overall. If you're new to OpenTelemetry and you're trying to use it to get Observability into your systems, get this book right now!The book covers all of the essentials: the main concepts, how to set up tracing/metrics/logs with the SDK, best practices for generating data, setting up and using different features of the OpenTelemetry Collector, strategies for deploying the collector in production, and an overview of Sampling strategies. In time, the OpenTelemetry documentation may be comprehensive enough to obviate the contents of this book. But until then, get this book! It might take a while before that happens.The highlights for me were:* The succinct yet complete overview of the OpenTelemetry collector* Best practices around generating telemetry data (in particular, how to set up Resources and what kinds of stuff you'd like to capture)* Really all of Section 2* Overview of the different major concepts (tracing/metrics/logs signals, semantic conventions, etc.)The areas I felt could improve:* Spending time on history of the project felt unnecessary to me, even though I can see why it would be added. As someone who came into OpenTelemetry and Observability in Summer of 2021, I feel like the only thing I need to know is that there's some legacy stuff called OpenTracing and OpenCensus and if I run into it, OpenTelemetry lets me interoperate with it. But otherwise, I'd rather not spend time learning about these older projects when I'm looking to learn OpenTelemetry instead* Second 2 had several chapters showing manual instrumentation, and had a sort of coup de grace with the autoinstrumentation. I can understand why that particular decision was made, but I feel like it could have also been inverted: lead with autoinstrumentation and how powerful it is, then enrich what's generated on my behalf. This is perhaps more of a philosophical point than anything else.I've been working in and around OpenTelemetry since Fall 2021, and it's been difficult to comprehend due to its vastness and lack of documentation. This book was very helpful, and I learned a lot from it, even though I've been contributing to OpenTelemetry for a few months.I'll reiterate my first point: if you are new to OpenTelemetry and you're trying to use it, get this book now! You won't find a better, more comprehensive guide.
Amazon Verified review Amazon
Ocelotl May 23, 2022
Full star icon Full star icon Full star icon Full star icon Full star icon 5
Great book! Clearly written, will help anyone who is interested in OpenTelemetry get up and running. Experienced users will also enjoy the detailed explanations on how the standard works at a higher level.
Amazon Verified review Amazon
Matt W May 10, 2022
Full star icon Full star icon Full star icon Full star icon Full star icon 5
OpenTelemetry is becoming the go-to standard for open source, vendor neutral observability. The vast ecosystem of tools is both powerful and flexible once you can get past the learning curve. This is the most up to date and comprehensive book on the topic and it will prepare you with all the knowledge you need to put OpenTelemetry to use in your organization.
Amazon Verified review Amazon
Peter May 10, 2022
Full star icon Full star icon Full star icon Full star icon Full star icon 5
This book has a good primer on how the industry has reached the point it has with telemetry and tracing. Then dives into the specifics of open telemetry. If you're just getting started, I think this is a good place to start.
Amazon Verified review Amazon
Get free access to Packt library with over 7500+ books and video courses for 7 days!
Start Free Trial

FAQs

What is the delivery time and cost of print book? Chevron down icon Chevron up icon

Shipping Details

USA:

'

Economy: Delivery to most addresses in the US within 10-15 business days

Premium: Trackable Delivery to most addresses in the US within 3-8 business days

UK:

Economy: Delivery to most addresses in the U.K. within 7-9 business days.
Shipments are not trackable

Premium: Trackable delivery to most addresses in the U.K. within 3-4 business days!
Add one extra business day for deliveries to Northern Ireland and Scottish Highlands and islands

EU:

Premium: Trackable delivery to most EU destinations within 4-9 business days.

Australia:

Economy: Can deliver to P. O. Boxes and private residences.
Trackable service with delivery to addresses in Australia only.
Delivery time ranges from 7-9 business days for VIC and 8-10 business days for Interstate metro
Delivery time is up to 15 business days for remote areas of WA, NT & QLD.

Premium: Delivery to addresses in Australia only
Trackable delivery to most P. O. Boxes and private residences in Australia within 4-5 days based on the distance to a destination following dispatch.

India:

Premium: Delivery to most Indian addresses within 5-6 business days

Rest of the World:

Premium: Countries in the American continent: Trackable delivery to most countries within 4-7 business days

Asia:

Premium: Delivery to most Asian addresses within 5-9 business days

Disclaimer:
All orders received before 5 PM U.K time would start printing from the next business day. So the estimated delivery times start from the next day as well. Orders received after 5 PM U.K time (in our internal systems) on a business day or anytime on the weekend will begin printing the second to next business day. For example, an order placed at 11 AM today will begin printing tomorrow, whereas an order placed at 9 PM tonight will begin printing the day after tomorrow.


Unfortunately, due to several restrictions, we are unable to ship to the following countries:

  1. Afghanistan
  2. American Samoa
  3. Belarus
  4. Brunei Darussalam
  5. Central African Republic
  6. Democratic Republic of the Congo
  7. Eritrea
  8. Guinea-Bissau
  9. Iran
  10. Lebanon
  11. Libyan Arab Jamahiriya
  12. Somalia
  13. Sudan
  14. Russian Federation
  15. Syrian Arab Republic
  16. Ukraine
  17. Venezuela
What is a customs duty/charge?

Customs duties are charges levied on goods when they cross international borders: a tax imposed on imported goods. These duties are collected by special authorities and bodies created by local governments and are meant to protect local industries, economies, and businesses.

Do I have to pay customs charges for my print book order?

Orders shipped to countries listed under the EU27 will not incur customs charges; these are paid by Packt as part of the order.

List of EU27 countries: www.gov.uk/eu-eea

For shipments to destinations outside the EU27, customs duty or localized taxes may apply and will be charged by the recipient country. These charges must be paid by the customer and are not included in the shipping charges applied to the order.

How do I know my customs duty charges?

The amount of duty payable varies greatly depending on the imported goods, the country of origin, and several other factors, such as the total invoice amount, weight and dimensions, and other criteria applicable in your country.

For example:

  • If you live in Mexico and the declared value of your ordered items is over $50, you will have to pay an additional import tax of 19%, which will be $9.50, to the courier service in order to receive your package.
  • Whereas if you live in Turkey and the declared value of your ordered items is over €22, you will have to pay an additional import tax of 18%, which will be €3.96, to the courier service in order to receive your package.
How can I cancel my order?

Cancellation Policy for Published Printed Books:

You can cancel any order within 1 hour of placing it. Simply contact customercare@packt.com with your order details or payment transaction ID. If your order has already started the shipment process, we will do our best to stop it. However, if it is already on the way to you, then when you receive it, you can contact us at customercare@packt.com using the returns and refunds process.

Please understand that Packt Publishing cannot provide refunds or cancel any order except in the cases described in our Return Policy (for example, Packt Publishing will replace your printed book if it arrives damaged or with a material defect); otherwise, Packt Publishing will not accept returns.

What is your returns and refunds policy?

Return Policy:

We want you to be happy with your purchase from Packtpub.com, and we will not hassle you with returning print books to us. If the print book you receive from us is incorrect, damaged, doesn't work, or is unacceptably late, please contact the Customer Relations Team at customercare@packt.com with the order number and issue details, as explained below:

  1. If you ordered an eBook, Video, or Print Book incorrectly or accidentally, please contact the Customer Relations Team at customercare@packt.com within one hour of placing the order and we will replace the item or refund its cost.
  2. If your eBook or Video file is faulty, or a fault occurs while it is being made available to you (i.e., during download), contact the Customer Relations Team within 14 days of purchase at customercare@packt.com, who will be able to resolve the issue for you.
  3. You will have a choice of a replacement or a refund for the problem items (damaged, defective, or incorrect).
  4. Once the Customer Care Team confirms that you will be refunded, you should receive the refund within 10 to 12 working days.
  5. If you are requesting a refund for only one book from a multi-item order, we will refund the appropriate single item.
  6. Where the items were shipped under a free shipping offer, there will be no shipping costs to refund.

In the unlikely event that your printed book arrives damaged or with a material defect, contact our Customer Relations Team at customercare@packt.com within 14 days of receiving the book, with appropriate evidence of the damage, and we will work with you to secure a replacement copy if necessary. Please note that each printed book you order from us is individually made on a print-on-demand basis by Packt's professional book-printing partner.

What tax is charged?

Currently, no tax is charged on the purchase of any print book (subject to change based on laws and regulations). A localized VAT fee is charged only to our European and UK customers on the eBooks, Videos, and subscriptions that they buy. GST is charged to Indian customers for eBook and Video purchases.

What payment methods can I use?

You can pay with the following card types:

  1. Visa Debit
  2. Visa Credit
  3. MasterCard
  4. PayPal