Protocol Buffers Handbook
Getting deeper into Protobuf internals and its usage

Clément Jean

Paperback | Apr 2024 | 226 pages | 1st Edition


Serialization Primer

Welcome to the first chapter of this book: Serialization Primer. Whether you are experienced with serialization and deserialization or new to the concepts, this chapter will establish what they are and how they relate to Protocol Buffers (Protobuf). So, let’s get started!

At its heart, Protobuf is all about serialization and deserialization. So, before we start writing our own proto files, generating code, and using that code, we need to understand what Protobuf is used for and why. In this primer, we will lay the foundations on which we can build up our knowledge of Protobuf.

In this chapter, we are going to cover the following topics:

  • Serialization’s goals
  • Why use Protobuf?

By the end of this chapter, you will be confident in your knowledge of serialization and deserialization. You will understand what Protobuf does and why it is important to have such a data format in our software engineering toolbox.

Technical requirements

All the code examples that you will see in this chapter can be found under the directory called chapter1 in the GitHub repository (https://github.com/PacktPublishing/Protocol-Buffers-Handbook).

Serialization’s goals

Serialization and its counterpart, deserialization, enable us to handle data seamlessly across time and machines. This is highly relevant to current workloads: we process ever more data in batches, and ever more distributed systems need to send data over the wire. In this section, we will dive a little deeper into serialization to understand how it encodes data for later use.

How does it all work?

In order to process data across time and machines, we need to take data that is in volatile memory (memory that needs constant power in order to retain data) and save it to non-volatile memory (memory that retains stored information even after power is removed), or send it over the wire. And because we need to be able to make sense of the data that was saved to a file or sent over the wire, this data needs to have a certain structure.

The structure of the data is called the data format. It is a way of encoding data that is understood by the deserialization process in order to recreate the original data. Protobuf is one such data format.

Once this data is encoded in a certain way, we can save it to a file, send it across the network, and so on. And when the time comes to read this data, deserialization kicks in. It will take the data encoded following the data format rules (serialized data) and turn it back into data in volatile memory. Figure 1.1 summarizes graphically the data lifecycle during serialization and deserialization.

Figure 1.1 – Data lifecycle during serialization/deserialization

To summarize, in order to handle data across time and machines, we need to serialize it: we take some data and encode it following the rules of a specific data format. Later, when we want to reuse the data, we deserialize the encoded form. This is the big picture of serialization and deserialization. Let us now see what the different data formats are and the metrics we can use to compare them.
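To make this lifecycle concrete, here is a minimal Python sketch that uses JSON as the data format, chosen only because it requires no schema; Protobuf would follow the exact same save/load pattern:

import json

record = {"person": {"name": "Clement"}}  # data in volatile memory

# Serialization: encode the in-memory data and save it to non-volatile memory.
with open("person.json", "w") as f:
    json.dump(record, f)

# Deserialization: read the encoded data back into volatile memory.
with open("person.json") as f:
    restored = json.load(f)

assert restored == record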

The different data formats

As mentioned earlier in this chapter, a data format is a way of encoding data that can be interpreted by both the application doing the serialization and the one doing the deserialization. There are many such formats, so it is impractical to list them all. However, we can divide them into two categories:

  • Text
  • Binary

The most commonly used text data formats are JSON and XML. And for binary, there is Protobuf, but you might also know Apache Avro, Cap’n Proto, and others.

As with everything in software engineering, there are trade-offs for using one or the other. But before diving into these trade-offs, we need to understand the criteria on which we will base our comparison. Here are the criteria:

  • Serialized data size: This is the size in bytes of the data after serialization
  • Availability of data: This is the time it takes to deserialize data and use it in our code
  • Readability of data: As humans, is it possible to read the serialized data without too much effort?

On top of that, certain data formats, such as Protobuf, rely on a data schema to define the structure of the data. This brings another set of criteria that can be used to determine which data format is best for your use case:

  • Type safety: Can we make sure, at compile time, that the provided data is correct?
  • Readability of schema: As humans, is it possible to read the schema without too much effort?

Let us now dive deeper into each criterion so that, later on, we can evaluate how Protobuf fares against the competition.

Serialized data size

As the goal of serializing data is to send it across the wire or save it in non-volatile storage, there is a direct benefit to having a smaller payload size: we save disk space and/or bandwidth. In other words, with the same amount of storage or bandwidth, smaller payloads allow us to save/send more data.

Now, it is important to mention that, depending on the context, the smallest serialized data size is not always a good thing. We will discuss this in the next subsection, but serializing to fewer bytes means moving away from human readability and toward machine readability.

We can illustrate this with the following JavaScript Object Notation (JSON):

{
  "person": {
    "name": "Clement"
  }
}

Let’s compare it with the hexadecimal representation of the equivalent Protobuf-serialized data:

0a 07 43 6c 65 6d 65 6e 74

In the former code block, we can see the structure of the object clearly, but the latter is serialized into fewer bytes (9 instead of 39). There is a trade-off here: while our example does not show real-life data, we get the idea that a smaller size saves bandwidth/storage while at the same time hurting readability.
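As a hedged sketch of this comparison, the following Python snippet measures both payloads; the Protobuf bytes are the hand-encoded wire format from the text (a real program would obtain them from protoc-generated code), and the exact JSON size depends on formatting:

import json

# JSON payload, pretty-printed as in the example above.
as_json = json.dumps({"person": {"name": "Clement"}}, indent=2).encode("utf-8")
# Protobuf payload, hand-typed from the hexadecimal above.
as_protobuf = bytes.fromhex("0a 07 43 6c 65 6d 65 6e 74")

print(len(as_json), "bytes as JSON")          # a few dozen bytes, formatting-dependent
print(len(as_protobuf), "bytes as Protobuf")  # 9 bytes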

Finally, serializing to fewer bytes has another benefit: since there are fewer bytes, we can go over all the data in less time and thus deserialize it faster. This is linked to the concept of availability of data, which is what we are going to see now.

Availability of data

It should come as no surprise that the more bytes we deal with, the more processing power it takes. It is important to understand that deserialization is not free: the more human-readable the data is, the more data massaging is required and thus the more processing power is used.

The availability of data is the time it takes for deserialization to make the data usable in our code. If we were to write pseudocode for calculating it, we would have something like the following code block:

start = timeNowMs()
deserialize(data)
end = timeNowMs()
availability = end - start

It is as simple as this. Now, I believe we can agree that there is not much of a trade-off here. As impatient and/or performance-driven programmers, we prefer to lower the time it takes to deserialize data. The faster we get our data, the less it costs in bandwidth, developer time, and so on.
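In real code, the measurement is just as short. Here is a hedged Python version of the pseudocode; it times JSON deserialization only because JSON requires no schema setup:

import json
import time

payload = '{"person": {"name": "Clement"}}'

start = time.perf_counter()  # start = timeNowMs()
data = json.loads(payload)   # deserialize(data)
end = time.perf_counter()    # end = timeNowMs()

availability = end - start   # measured in seconds here, not milliseconds
print(f"data available after {availability * 1000:.3f} ms")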

Readability of data

We mentioned the readability of data when we were talking about serialized data size. Let us now dive deeper into what it means. The readability of data refers to how easy it is to understand the structure of the data after serialization. While readability and understandability are subjective, some formats are inherently harder to read than others.

Text formats are generally easier to understand because they are designed for humans. In these formats, we want to be able to quickly edit values with a simple text editor. This is the case for XML, JSON, YAML, and others. Furthermore, the ownership of objects is clearly defined by nesting objects into other ones. All of this is very intuitive, and it makes the data human readable.

On the other hand, binary formats are designed to be read by computers. While it is feasible to read them with knowledge of the format and some extra effort, the overall goal is to let the computer serialize/deserialize the data faster. Open such serialized data in a text editor and you will see that, sometimes, it is not even possible to distinguish the bytes. Instead, you have to use tools such as hexdump to see a hexadecimal representation of the binary.

To summarize, the readability of data is a trade-off that mainly hinges on who the reader of the data is. If it is humans, go with text formats; if it is machines, choose a binary format, since machines can process it faster.

Type safety

As mentioned, some data formats rely on us to describe the data through schemas. If you have worked with JSON Schema or any other kind of schema, you know that we can explicitly assign types to fields in order to limit the possible values. For example, say we have the following Protobuf schema (note that the syntax is not correct because it is simplified):

message Person {
  string name;
}

With this Protobuf schema, we would only be able to set strings as the name field’s value, not integers, floats, or anything else. On the other hand, consider the following JSON (without a JSON Schema):

{
  "person": {
    "name": "Clément"
  }
}

Nothing could stop us from setting name to a number, an array, or anything else.
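The following Python sketch shows this failure mode. Parsing the JSON succeeds even with a wrongly typed name, whereas a protoc-generated class (the person_pb2 module here is hypothetical) rejects the bad value immediately:

import json

# Schemaless JSON: a wrongly typed value is accepted without complaint.
doc = json.loads('{"person": {"name": 42}}')
print(type(doc["person"]["name"]))  # <class 'int'> - nothing flagged the mistake

# With a protoc-generated class (hypothetical person_pb2 module), the same
# mistake fails fast:
#   import person_pb2
#   person = person_pb2.Person()
#   person.name = 42  # raises TypeError before the data ever leaves your code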

This is the idea of type safety. Generally, we are trying to make the feedback loop for developers shorter. If we are setting the wrong type of data to a field, the earlier we know about it, the better. We should not be surprised by an exception when the code is in production.

However, it is important to note that there are also benefits to having dynamic typing. The main one is considered to be productivity. It is way faster to hack and iterate with JSON than to first define the schema, generate code, and import the generated code into the user code.

Once again, this is a trade-off. But this time, this is more about the maintainability and scale of your project. If you have a small project, why incur the overhead of having the extra complexity? But if you have a medium/large size project, you want to have a more rigorous process that will shorten the feedback loop for your developers.

Readability of schema

Finally, we can talk about the readability of the schema. Once again, this is a subjective matter, but there are points worth mentioning. The readability of a schema can be described as how easy it is to understand the relationships between objects and what the data represents.

Although it may appear that this applies solely to formats reliant on schemas, that’s not the case. In reality, certain data formats are both the schema and the data they represent; JSON is an example of that. With nesting and typing of data, JSON can represent objects and their relationships, as well as what the internal data looks like. An example of that is the following JSON:

{
  "person": {
    "name": "Clément",
    "age": 100,
    "friends": [
      { "name": "Mark" },
      { "name": "John" }
    ]
  }
}

It describes a person whose name (a string) is Clément, whose age (a number) is 100, and who has friends (an array) called Mark and John. As you can see, this is very intuitive. However, such a schema is not type-safe.

If we care about type safety, then we cannot choose to use straight JSON. We could set the wrong kind of data to the name, age, and friends fields. Furthermore, nothing indicates whether the objects in the friends field are persons. This is mostly because, in JSON, we cannot create definitions of objects and reuse them as types. All types are implicit.

Now, consider the following Protobuf schema (note that this is simplified):

message Person {
  string name;
  uint32 age;
  repeated Person friends; // equivalent of a list
}

Here we only have the schema, no data. However, we can quickly see that we have the explicit string, uint32, and Person types. This will prevent a lot of bugs by catching them at compile time. We simply cannot set a string as age, a number as name, and so on.

There are many more criteria for the readability of a schema (boilerplate, language concepts, and so on), but we can probably agree that, for onboarding developers and maintaining the project, type explicitness is important and will catch bugs early in the development process.

To summarize, we have another trade-off. The added complexity of an external schema might be too much for small projects. However, for medium and large projects, the benefits that come with a type-safe schema are worth spending a little more time on schema definition.

You have probably noticed that there are a lot of trade-offs. All of these are driven by business requirements but also by subjectivity. In this section, we saw the five important trade-offs that you need to consider before choosing a data format for your project. We saw that the size of serialized data matters when we want to save resources (e.g., storage and bandwidth). Then, we saw what the availability of data means in the context of deserialization and how the size of the serialized data impacts it. After that, we talked about the readability of serialized data and said that whether it matters depends on who is reading the data. And finally, we talked about the readability and type safety of the schema. Explicit types ensure that only the right data gets serialized and deserialized, and they also make the schema itself more approachable for new developers.

What about Protobuf?

So far, we have talked only a little bit about Protobuf. In this section, we are going to explain the advantages and disadvantages of Protobuf, basing our analysis on the criteria we saw in the previous section.

Serialized data size

Due to numerous optimizations and its binary format, Protobuf is one of the most compact data formats in this area. This is especially true for serializing numbers. Let us see why this is the case.

The main reason why Protobuf is good at serializing data in a small number of bytes is that it is a binary format. In Protobuf, additional structural elements such as curly braces, colons, and brackets, typically found in formats such as JSON, XML, and YAML, are absent. Instead, Protobuf data is represented simply as raw bytes.

On top of that, we have optimizations such as bitpacking and the use of varints, which help a lot. They make it possible to serialize the same data into fewer bytes than we would get by naively serializing the data’s in-memory binary representation as it is.

Bitpacking is a method that compresses data into as few bits as possible. Notice that we are talking about bits, not bytes, here. You can think of bitpacking as fitting multiple pieces of data into the same bytes. Here is an example of bitpacking in Protobuf:

0a  -> 00001 010
    -> 010 is 2 (the length-delimited wire type, used for strings)
    -> 00001 is 1 (the field tag)

Do not worry too much about what this data represents just yet. We are going to see that in Chapter 5, Serialization Internals, when we talk about the internals of serialization. The most important thing to understand is that we have one byte containing two pieces of metadata. At scale, this reduces the payload size quite a lot.

Most of the integers in Protobuf are varints. Varint is short for variable-size integer, and varints map integers to different numbers of bytes: smaller values are encoded in fewer bytes, and bigger values in more bytes. Let us see an example (decimal -> byte(s) in hexadecimal):

1 -> 01
128 -> 80 01
16,384 -> 80 80 01
...

As you can see, compared to fixed-size integers (four or eight bytes), this saves space in a lot of cases. However, it can also result in inefficient encoding: a large 32-bit number (normally 4 bytes) can end up taking 5 bytes. Here is an example of such a case:

2,147,483,647 (the maximum signed 32-bit value) -> ff ff ff ff 07

That is five bytes instead of the four we would get with a fixed-size integer.
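As a hedged illustration of the rule (not the actual Protobuf library code), here is a small Python encoder that reproduces the mappings above: 7 payload bits per byte, with the high bit set on every byte except the last:

def encode_varint(n: int) -> bytes:
    """Encode a non-negative integer as a Protobuf varint."""
    out = bytearray()
    while True:
        n, low = n >> 7, n & 0x7F
        if n:
            out.append(low | 0x80)  # more bytes follow: set the continuation bit
        else:
            out.append(low)         # last byte: continuation bit left clear
            return bytes(out)

assert encode_varint(1) == bytes.fromhex("01")
assert encode_varint(128) == bytes.fromhex("80 01")
assert encode_varint(16384) == bytes.fromhex("80 80 01")
assert encode_varint(2_147_483_647) == bytes.fromhex("ff ff ff ff 07")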

While we did not get into too much detail about the serialization internals that keep the serialized data small, we learned about the two main optimizations used in Protobuf. We saw that it uses bitpacking to compress data into a smaller number of bits; this is how Protobuf limits the impact of metadata on the serialized data size. Then, we saw that Protobuf uses varints, which map smaller integers to fewer bytes; this is how most integers are serialized in Protobuf.

Availability of data

Deserializing data in Protobuf is fast because we are parsing binary instead of text. Now, this is hard to demonstrate because it is highly dependent on the programming language you are using and the data you have. However, a study conducted by Auth0 (https://auth0.com/blog/beating-json-performance-with-protobuf/) showed that Protobuf, in their use case, has better availability than JSON.

What is interesting in this study is that they found Protobuf messages were available in their JavaScript code in 4% less time than JSON. This might seem like a marginal improvement, but consider that JSON is essentially JavaScript’s object literal format: to access Protobuf data in JavaScript code, the binary first has to be converted into JavaScript objects. Even with that conversion overhead, Protobuf manages to be faster than parsing JSON directly.

Be aware that every implementation will give you different numbers; some are more optimized than others. However, a lot of implementations are just wrappers around the C implementation, which means you will get roughly consistent deserialization performance.

Finally, it is important to mention that serialization and deserialization speeds depend a lot on the underlying data; some data formats perform better on certain kinds of data. So, if you are considering a data format, I urge you to do some benchmarking first. It will prevent you from being surprised.

Readability of data

As you might expect, readability of data is not where Protobuf shines. As the serialized data is in binary, it requires more effort or tools for humans to be able to understand it. Let’s take a look at the following hexadecimal:

0a 07 43 6c 65 6d 65 6e 74 10 64 1a 06 0a 04 4d 61 72 6b 1a 06 0a 04 4a 6f 68 6e

We have no way to guess that the preceding hexadecimal represents a Person object.
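To make “extra effort or tools” concrete, here is a hedged Python sketch of the kind of decoding a tool has to perform. It walks only the two wire types used in this payload, is nowhere near a full Protobuf parser, and note that even then, nothing tells us the message is a Person:

def parse_fields(buf: bytes):
    # Walk the wire format: each field starts with a key byte (we assume
    # field numbers < 16, so keys fit in one byte).
    i = 0
    while i < len(buf):
        key = buf[i]; i += 1
        field, wire_type = key >> 3, key & 0x07
        if wire_type == 0:                    # varint (e.g., uint32)
            value, shift = 0, 0
            while True:
                b = buf[i]; i += 1
                value |= (b & 0x7F) << shift
                shift += 7
                if not b & 0x80:
                    break
        elif wire_type == 2:                  # length-delimited (string, sub-message)
            length = buf[i]; i += 1           # assumes lengths < 128
            value = buf[i:i + length]; i += length
        else:
            raise NotImplementedError(f"wire type {wire_type}")
        yield field, wire_type, value

data = bytes.fromhex(
    "0a 07 43 6c 65 6d 65 6e 74 10 64 1a 06 0a 04 4d 61 72 6b 1a 06 0a 04 4a 6f 68 6e"
)
for field, wire_type, value in parse_fields(data):
    print(field, wire_type, value)
# 1 2 b'Clement'     <- name
# 2 0 100            <- age
# 3 2 b'\n\x04Mark'  <- friends[0], itself a serialized Person
# 3 2 b'\n\x04John'  <- friends[1]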

Now, while the default way of serializing data is not human-readable, Protobuf provides libraries to serialize data into JSON or into the Protobuf text format. We can take the previous hexadecimal data and get text output similar to the following (Protobuf text format) code block:

name: "Clément"
age: 100
friends: {
  name: "Mark"
}
friends: {
  name: "John"
}

Furthermore, this text format can also be used to create binary. This means that we could have a configuration file written in text and still use the data with Protobuf.

So, as we saw, Protobuf-serialized data is not human-readable by default. It is binary, and it takes extra effort or tools to read. However, since Protobuf provides a way to serialize data to its own text format or to JSON, we can still have human-readable serialized data. This is nice for configuration files, test data, and so on.
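As a hedged sketch of these conversions, assuming person_pb2 is a hypothetical module generated by protoc from a schema defining the Person message used above, the Python library’s text_format and json_format helpers produce the readable forms, and the text format parses back into a message:

from google.protobuf import json_format, text_format

import person_pb2  # hypothetical protoc-generated module for the Person schema

person = person_pb2.Person(name="Clement", age=100)
person.friends.add(name="Mark")
person.friends.add(name="John")

print(person.SerializeToString())           # compact binary, not human-readable
print(text_format.MessageToString(person))  # Protobuf text format
print(json_format.MessageToJson(person))    # JSON

# The text format round-trips: parse it back into a message (and thus to binary).
copy = text_format.Parse(text_format.MessageToString(person), person_pb2.Person())
assert copy == person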

Type safety

An advantage that schema-backed data formats have is that the schemas are generally defined with types in mind. Protobuf defines objects called messages, which contain fields that are themselves typed. We saw a few examples previously in this chapter. This means that we can check at compile time that the values provided for certain fields are correct.

On top of that, since Protobuf has an extensive set of types, it is possible to optimize for specific use cases. An example of this is the type of the age field that we saw in Person. In JSON Schema, for example, we would have a type called integer; however, this does not ensure that we do not provide negative numbers. In Protobuf, we can use a uint32 (unsigned 32-bit integer) or a uint64 (unsigned 64-bit integer) and thus ensure, at compile time, that we do not get negative numbers.

Finally, Protobuf also lets us nest types and reference them by their fully qualified names. This basically means that types have scopes. Let us take an example to make things clearer. Say we have the following definitions (note that this is simplified):

message PhoneNumber {
  enum Type {
    Mobile,
    Work,
    Fax
  }
  Type type;
  string number;
}
message Email {
  enum Type {
    Personal,
    Work
  }
  Type type;
  string email;
}

As we can see, we defined two enums called Type. This is necessary because PhoneNumber supports Fax and Mobile but Email does not. Now, when we are dealing with the Type enum in PhoneNumber, we refer to it as PhoneNumber.Type, and when we deal with the one in Email, we refer to it as Email.Type. These are the fully qualified names of the types; they differentiate the two types, which share a name, by giving each a scope.

Now, let us think about what would happen if we had the following definitions:

enum Type {
  Mobile,
  Work,
  Fax,
  Personal
}
message PhoneNumber {
  Type type;
  string number;
}
message Email {
  Type type;
  string email;
}

We could still create valid Email and PhoneNumber instances because all the values exist in the single Type enum. However, in this case, we could also create invalid Emails and PhoneNumbers; for example, nothing would stop us from creating an Email with the Mobile type. Without scoped types, the only way to regain type safety would be to create two separately named enums:

enum PhoneType { /*...*/ }
enum EmailType { /*...*/ }

This is verbose and unnecessary, since Protobuf lets us create nested types that are referred to as PhoneNumber.Type and Email.Type.
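Here is a hedged sketch of how that scoping surfaces in generated code, assuming the schema above were written in real proto3 syntax (with field numbers and uppercase values such as MOBILE, FAX, and PERSONAL) and compiled into a hypothetical contacts_pb2 module:

import contacts_pb2  # hypothetical protoc-generated module

phone = contacts_pb2.PhoneNumber(
    type=contacts_pb2.PhoneNumber.FAX,  # a PhoneNumber.Type value
    number="555-0100",                  # hypothetical number
)
email = contacts_pb2.Email(
    type=contacts_pb2.Email.PERSONAL,   # an Email.Type value; FAX does not exist here
    email="clement@example.com",
)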

To summarize, Protobuf lets us use types in a very explicit and safe way. We have an extensive set of types at our disposal that let us ensure our data is correct at compile time. Finally, we saw that nested types, referenced by their fully qualified names, let us differentiate types that share a name and ensure that only correct values get set on fields.

Readability of schema

Finally, Protobuf has readable schemas because it was designed to be self-documenting and to read like a programming language. We find many concepts we are already familiar with, such as types, comments, and imports. All of this makes onboarding new developers and working with Protobuf-backed APIs easy.

The first thing worth mentioning is that we can have comments in our schemas to describe what the fields and types do. Some of the best examples of this are in the proto files that ship with Protobuf itself. Let’s look at duration.proto (simplified for brevity) in the following code block:

// A Duration represents a signed, fixed-length span of time represented
// as a count of seconds and fractions of seconds at nanosecond
// resolution. It is independent of any calendar and concepts like "day"
// or "month".
message Duration {
  // Signed seconds of the span of time. Must be from -315,576,000,000
  // to +315,576,000,000 inclusive. Note: these bounds are computed from:
  // 60 sec/min * 60 min/hr * 24 hr/day * 365.25 days/year * 10000 years
  int64 seconds;

  // Signed fractions of a second at nanosecond resolution of the span
  // of time. Durations less than one second are represented with a 0
  // `seconds` field and a positive or negative `nanos` field. For durations
  // of one second or more, a non-zero value for the `nanos` field must be
  // of the same sign as the `seconds` field. Must be from -999,999,999
  // to +999,999,999 inclusive.
  int32 nanos;
}

We can see a comment explaining the purpose of Duration and two others explaining what the fields represent, what their possible values are, and so on. This is important because it helps users use the types properly, and it lets us make assumptions about the data we receive.

Next, we can import other schema files to reuse types. This helps a lot, especially when the project becomes bigger. Instead of duplicating code or writing everything in one file, we can separate schemas by concern and reuse definitions in multiple places. Furthermore, since an import refers to a schema file by its path, we can open that file and look at the definitions inside it. This requires knowing a little more about the compiler and its options, but other than that, we can access the code and start poking around.

Finally, as we mentioned earlier, Protobuf has explicit typing. By looking at a field, we know exactly what kind of value we need to set and what the boundaries are (if any). Furthermore, after learning a little bit about the internals of serialization in Protobuf, you will be able to optimize the types that you use in a granular way.

Summary

In this chapter, we defined what serialization and deserialization are, we saw the goals we try to achieve with them, and we talked about where Protobuf stands. We saw that there are a lot of trade-offs when choosing a data format that fulfills all the requirements. We also saw that, in serialization, we do not only care about one criterion; rather, there are many criteria that we need to look at. And finally, we saw the advantages and disadvantages of Protobuf as a data format.

In the next chapter, we will talk about the Protobuf language and all its concepts. We will see how to define types, import other schemas, and other language concepts.

Quiz

  1. Serialization to file is the action of taking data from ___ memory and putting it in ___ memory.
    1. Non-volatile, volatile
    2. Volatile, non-volatile
  2. Deserialization from file is the action of taking data from ___ memory and putting it in ___ memory.
    1. Non-volatile, volatile
    2. Volatile, non-volatile
  3. What is availability in the context of serialization and deserialization?
    1. The time it takes for deserialization to make the data available in your code
    2. The time it takes for serialization to make the data available in a file
    3. None of the above
  4. When is serialized data considered human-readable?
    1. If we can, by any means, decode the data
    2. If a machine can easily read it
    3. If humans can easily understand the structure of the data

Answers

  1. B
  2. A
  3. A
  4. C

Key benefits

  • Encode and decode complex data structures, enhancing data interchange efficiency across systems
  • Understand Protobuf by mastering the syntax, schema evolution, customizations, and more
  • Integrate Protobuf into your preferred language ecosystem, ensuring interoperability and effective collaboration
  • Purchase of the print or Kindle book includes a free PDF eBook

Description

Explore how Protocol Buffers (Protobuf) serializes structured data and provides a language-neutral, platform-neutral, and extensible solution. With this guide to mastering Protobuf, you'll build your skills to effectively serialize, transmit, and manage data across diverse platforms and languages. This book will help you enter the world of Protocol Buffers by unraveling the intricate nuances of Protobuf syntax and showing you how to define complex data structures. As you progress, you’ll learn schema evolution, ensuring seamless compatibility as your projects evolve. The book also covers advanced topics such as custom options and plugins, allowing you to tailor validation processes to your specific requirements. You’ll understand how to automate project builds using cutting-edge tools such as Buf and Bazel, streamlining your development workflow. With hands-on projects in Go and Python, you’ll learn how to apply Protobuf concepts in practice. Later chapters will show you how to integrate data interchange capabilities across different programming languages, enabling efficient collaboration and system interoperability. By the end of this book, you’ll have a solid understanding of Protobuf internals, enabling you to discern when and how to use Protobuf and when to redefine your approach to data serialization.

Who is this book for?

This book is for software developers, from novices to experienced programmers, who are interested in harnessing the power of Protocol Buffers. It's particularly valuable for those seeking efficient data serialization solutions for APIs, microservices, and data-intensive applications. The content covered in this book accommodates diverse programming backgrounds, offering essential knowledge to both beginners and seasoned developers.

What you will learn

  • Focus on efficient data interchange with advanced serialization techniques
  • Master Protocol Buffers syntax and schema evolution
  • Perform custom validation via Protoc plugins for precise data integrity
  • Integrate languages seamlessly for versatile system development
  • Automate project building with Buf and Bazel
  • Get to grips with Go and Python integration for real-world Protobuf applications
  • Streamline collaboration through system interoperability with Protobuf

Product Details

Publication date: Apr 30, 2024
Length: 226 pages
Edition: 1st
Language: English
ISBN-13: 9781805124672


Table of Contents

Chapter 1: Serialization Primer
Chapter 2: Protobuf is a Language
Chapter 3: Describing Data with Protobuf Text Format
Chapter 4: The Protobuf Compiler
Chapter 5: Serialization Internals
Chapter 6: Schema Evolution over Time
Chapter 7: Implementing the Address Book in Go
Chapter 8: Implementing the Address Book in Python
Chapter 9: Developing a Protoc Plugin in Golang
Chapter 10: Advanced Build
Index
Other Books You May Enjoy
