gRPC Go for Professionals

Protobuf Primer

As we now understand the basic networking concepts behind gRPC, we can touch upon another pillar in the construction of your gRPC APIs. This pillar is Protocol Buffers, more commonly known as Protobuf. It is an important part of the communication process because, as we saw in the previous chapter, every message is encoded into binary, and this is exactly what Protobuf is doing for us in gRPC. In this chapter, the goal is to understand what Protobuf is and why it is needed for high-efficiency communication. Finally, we are going to look at some details concerning the serialization and deserialization of messages.

In this chapter, we’re going to cover the following main topics:

  • Protobuf is an Interface Description Language (IDL)
  • Serialization/deserialization
  • Protobuf versus JSON
  • Encoding details
  • Common types
  • Services

Prerequisites

You can find the code for this chapter at https://github.com/PacktPublishing/gRPC-Go-for-Professionals/tree/main/chapter2. In this chapter, we are going to discuss how Protocol Buffers serializes and deserializes data. While this can be done by writing code, we are going to stay away from that in order to learn how to use the protoc compiler to debug and optimize our Protobuf schemas. Thus, if you want to reproduce the examples shown, you will need to download the protoc compiler from the Protobuf GitHub Releases page (https://github.com/protocolbuffers/protobuf/releases). The easiest way to get started is to download the binary releases. These releases are named with this convention: protoc-${VERSION}-${OS}-${ARCHITECTURE}. Uncompress the zip file and follow the readme.txt instructions (note: we will use Well-Known Types later, so make sure you also install the includes). After that, you should be able to run the following command:

$ protoc --version

Finally, as always, you will be able to find the companion code in the GitHub repository under the folder for the current chapter (chapter2).

Protobuf is an IDL

Protobuf is a language. More precisely, it is an IDL. It is important to make such a distinction because, as we will see in more detail later, in Protobuf, we do not write any logic the way we do in a programming language; instead, we write data schemas, which are contracts to be used for serialization and to be fulfilled by deserialization. So, before explaining all the rules that we need to follow when writing a .proto file and going through all the details about serialization and deserialization, we first need to get a sense of what an IDL is and what the goal of such a language is.

An IDL, as we saw earlier, is an acronym for Interface Description Language, and as we can see, the name contains three parts. The first part, Interface, describes a piece of code that sits in between two or more applications and hides the complexity of implementation. As such, we do not make any assumptions about the hardware on which an application is running, the OS on which it runs, or the programming language in which it is written. This interface is, by design, hardware-, OS-, and language-agnostic. This is important for Protobuf and several other serialization data schemas because it lets developers write the code once and use it across different projects.

The second part is Description, and this sits on top of the concept of Interface. Our interface is describing what the two applications can expect to receive and what they are expected to send to each other. This includes describing some types and their properties, the relationship between these types, and the way these types are serialized and deserialized. As this may be a bit abstract, let us look at an example in Protobuf. If we wanted to create a type called Account that contains an ID, a username, and the rights this account has, we could write the following:

syntax = "proto3";
enum AccountRight {
  ACCOUNT_RIGHT_UNSPECIFIED = 0;
  ACCOUNT_RIGHT_READ = 1;
  ACCOUNT_RIGHT_READ_WRITE = 2;
  ACCOUNT_RIGHT_ADMIN = 3;
}
message Account {
  uint64 id = 1;
  string username = 2;
  AccountRight right = 3;
}

If we skip some of the details that are not important at this stage, we can see that we define the following:

  • An enumeration listing all the possible rights, plus an extra default value called ACCOUNT_RIGHT_UNSPECIFIED
  • A message (equivalent to a class or struct) listing the three properties that an Account type should have

Again, without looking at the details, it is readable, and the relationship between Account and AccountRight is easy to understand.

Finally, the last part is Language. This is here to say that, as with every language—computer ones or not—we have rules that we need to follow so that another human, or a compiler, can understand our intent. In Protobuf, we write our code to please the compiler (protoc), and then it does all the heavy lifting for us. It will read our code and generate code in the language that we need for our application, and then our user code will interact with the generated code. Let us look at a simplified output of what the Account type defined previously would give in Go:

type AccountRight int32
const (
  AccountRight_ACCOUNT_RIGHT_UNSPECIFIED AccountRight = 0
  AccountRight_ACCOUNT_RIGHT_READ AccountRight = 1
  AccountRight_ACCOUNT_RIGHT_READ_WRITE AccountRight = 2
  AccountRight_ACCOUNT_RIGHT_ADMIN AccountRight = 3
)
type Account struct {
  Id uint64 `protobuf:"varint,1,…`
  Username string `protobuf:"bytes,2,…`
  Right AccountRight `protobuf:"varint,3,…`
}

In this code, there are important things to notice. Let us break this code into pieces:

type AccountRight int32
const (
  AccountRight_ACCOUNT_RIGHT_UNSPECIFIED AccountRight = 0
  AccountRight_ACCOUNT_RIGHT_READ AccountRight = 1
  AccountRight_ACCOUNT_RIGHT_READ_WRITE AccountRight = 2
  AccountRight_ACCOUNT_RIGHT_ADMIN AccountRight = 3
)

Our AccountRight enum is defined as constants with values of type int32. Each enum variant’s name is prefixed with the name of the enum, and each constant has the value that we set after the equals sign in the Protobuf code. These values are called field tags, and we will introduce them later in this chapter.

Now, take a look at the following code:

type Account struct {
  Id uint64 `protobuf:"varint,1,…`
  Username string `protobuf:"bytes,2,…`
  Right AccountRight `protobuf:"varint,3,…`
}

Here, we have our Account message transpiled to a struct with Id, Username, and Right exported fields. Each of these fields has a type that is converted from a Protobuf type to a Golang type. In our example here, Go types and Protobuf types have the exact same names, but it is important to know that in some cases, the types will translate differently. Such an example is double in Protobuf, which will translate to float64 for Go. Finally, we have the field tags, referenced in the metadata following the field. Once again, their meaning will be explained later in this chapter.

So, to recapitulate, an IDL is a piece of code sitting between different applications and describing objects and their relationships by following certain defined rules. This IDL, in the case of Protobuf, will be read, and it will be used to generate code in another language. And after that, this generated code will be used by the user code to serialize and deserialize data.

Serialization and deserialization

Serialization and deserialization are two concepts that are used in many ways and in many kinds of applications. This section is going to discuss these two concepts in the context of Protobuf. So, even if you feel confident about your understanding of these two notions, it is important to get your head straight and understand them properly. Once you do, it will be easier to deal with the Encoding details section where we are going to delve deeper into how Protobuf serializes and deserializes data under the hood.

Let us start with serialization and then let us touch upon deserialization, which is just the opposite process. The goal of serialization is to store data, generally in a more compact or readable representation, to use it later. For Protobuf, this serialization happens on the data that you set in your generated code’s objects. For example, if we set the Id, Username, and Right fields in our Account struct, this data will be what Protobuf will work on. It will turn each field into a binary representation with different algorithms depending on the field type. And after that, we use this in-memory binary to either send data over the network (with gRPC, for example) or store it in more persistent storage.

Once it is time for us to use this serialized data again, Protobuf will perform deserialization. This is the process of reading the binary created earlier and populating the data back into an object in your favorite programming language to be able to act on it. Once again, Protobuf will use different algorithms depending on the type of data to read the underlying binary and know how to set or not set each of the fields of the object in question.

To summarize, Protobuf performs binary serialization to make data more compact than other formats such as XML or JSON. To do so, it will read data from the different fields of the generated code’s object, turn it into binary with different algorithms, and then when we finally need the data, Protobuf will read the data and populate the fields of a given object.
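
To make this concrete, here is a minimal Go sketch of that round trip, assuming the Account message shown earlier has been compiled by protoc into a generated package. The import path example.com/accountpb is hypothetical; yours depends on your go_package option.

package main

import (
  "fmt"
  "log"

  "google.golang.org/protobuf/proto"

  accountpb "example.com/accountpb" // hypothetical path to the generated code
)

func main() {
  // Populate the generated struct with our data.
  in := &accountpb.Account{
    Id:       42,
    Username: "gopher",
    Right:    accountpb.AccountRight_ACCOUNT_RIGHT_READ,
  }

  // Serialization: turn the fields into the compact binary representation.
  data, err := proto.Marshal(in)
  if err != nil {
    log.Fatal(err)
  }

  // Deserialization: read the binary back into a fresh object.
  out := &accountpb.Account{}
  if err := proto.Unmarshal(data, out); err != nil {
    log.Fatal(err)
  }
  fmt.Println(len(data), out.GetUsername())
}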

Protobuf versus JSON

If you’ve already worked on the backend or even the frontend, there is a 99.99 percent chance that you’ve worked with JSON. It is by far the most popular data schema out there, and there are reasons why that is the case. In this section, we are going to discuss the pros and cons of both JSON and Protobuf, and we are going to explain which one is more suitable for which situation. The goal here is to be objective because, as engineers, we need to choose the right tool for the right job.

As we could write chapters about the pros and cons of each technology, we are going to reduce the scope of these advantages and disadvantages to three categories. These categories are the ones that developers care the most about when developing applications, as detailed here:

  • Size of serialized data: We want to reduce the bandwidth when sending data over the network
  • Readability of the data schema and the serialized data: We want to be able to have a descriptive schema so that newcomers or users can quickly understand it, and we want to be able to visualize the data serialized for debugging or editing purposes
  • Strictness of the schema: This quickly becomes a requirement when APIs grow, and we need to ensure the correct type of data is being sent and received between different applications

Serialized data size

In serialization, the Holy Grail is, in a lot of use cases, reducing the size of your data. This is because most often, we want to send that data to another application across the network, and the lighter the payload, the faster it should arrive on the other side. In this space, Protobuf is the clear winner against JSON. This is the case because JSON serializes to text whereas Protobuf serializes to binary and thus has more room to improve how compact the serialized data is. An example of that is numbers. If you set a number to the id field in JSON, you would get something like this:

{ "id": 123 }

First, we have some boilerplate with the braces, but most importantly we have a number that takes three characters, or three bytes. In Protobuf, if we set the same value to the same field, we would get the hexadecimal shown in the following callout.

Important note

In the chapter2 folder of the companion GitHub repository, you will find the files needed to reproduce all the results in this chapter. With protoc, we will be able to display the hexadecimal representation of our serialized data. To do that, you can run the following command:

Linux/Mac: cat ${INPUT_FILE_NAME}.txt | protoc --encode=${MESSAGE_NAME} ${PROTO_FILE_NAME}.proto | hexdump -C

Windows (PowerShell): (Get-Content ${INPUT_FILE_NAME}.txt | protoc --encode=${MESSAGE_NAME} ${PROTO_FILE_NAME}.proto) -join "`n" | Format-Hex

For example:

$ cat account.txt | protoc --encode=Account account.proto | hexdump -C

00000000 08 7b |.{|

00000002

Right now, this might look like magic numbers, but we are going to see in the next section how it is encoded into two bytes. Now, two bytes instead of three might look negligible but imagine this kind of difference at scale, and you would have wasted millions of bytes.
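
If you would rather measure this from Go code than with protoc, a rough comparison can be made by serializing the same value with encoding/json and with the Protobuf runtime. This sketch reuses the hypothetical accountpb package from earlier; for the single field id = 123, it prints 10 bytes for JSON versus 2 bytes for Protobuf.

package main

import (
  "encoding/json"
  "fmt"
  "log"

  "google.golang.org/protobuf/proto"

  accountpb "example.com/accountpb" // hypothetical generated package
)

func main() {
  // {"id":123} serialized as text.
  jsonBytes, err := json.Marshal(map[string]uint64{"id": 123})
  if err != nil {
    log.Fatal(err)
  }

  // The same value serialized as Protobuf binary (08 7b).
  protoBytes, err := proto.Marshal(&accountpb.Account{Id: 123})
  if err != nil {
    log.Fatal(err)
  }

  fmt.Printf("JSON: %d bytes, Protobuf: %d bytes\n", len(jsonBytes), len(protoBytes))
}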

Readability

The next important thing about data schema serialization is readability. However, readability is a little bit too broad, especially in the context of Protobuf. As we saw, as opposed to JSON, Protobuf separates the schema from the serialized data. We write the schema in a .proto file and then the serialization will give us some binary. In JSON, the schema is the actual serialized data. So, to be clearer and more precise about readability, let us split readability into two parts: the readability of the schema and the readability of the serialized data.

As for the readability of the schema, this is a matter of preference, but there are a few points that make Protobuf stand out. The first is that Protobuf can contain comments, and this is nice to have for extra documentation describing requirements. JSON does not allow comments in the schema, so we must find a different way to provide documentation. Generally, it is done with GitHub wikis or other external documentation platforms. This is a problem because this kind of documentation quickly becomes outdated as the project and the team working on it get bigger. A simple oversight and your documentation no longer describes the real state of your API. With Protobuf, it is still possible to have outdated documentation, but as the documentation is closer to the code, there is more incentive and awareness to change the related comment.

The second feature that makes Protobuf more readable is the fact that it has explicit types. JSON has types, but they are implicit. You know that a field contains a string if its value is surrounded by double quotes, a number when the value is only digits, and so on. In Protobuf, especially for numbers, we get more information out of types. If we have an int32 type, we obviously know that this is a number, but on top of that, we know that it can accept negative numbers, and we know the range of numbers that can be stored in this field. Explicit types are important not only for security (more on that later) but also for letting developers know the details of each field and letting them describe their schemas accurately to fulfill the business requirements.

For readability of the schema, I think we can agree that Protobuf is the winner here because it can be written as self-documenting code and we get explicit types for every field in objects.

As for the readability of serialized data, JSON is the clear winner here. As mentioned, JSON is both the data schema and the serialized data. What you see is what you get. Protobuf, however, serializes the data to binary, and it is way harder to read that, even if you know how Protobuf serializes and deserializes data. In the end, this is a trade-off between readability and serialized data size here. Protobuf will outperform JSON on serialized data and is way more explicit on the readability of the data schema. However, if you need human-readable data that can be edited by hand, Protobuf is not the right fit for your use case.

Schema strictness

Finally, the last category is the strictness of the schema. This is usually a nice feature to have when your team and your project scale because it ensures that the schema is correctly populated, and for a certain target language, it shortens the feedback loop for the developers.

With a Protobuf schema, the data is always valid because every field has an explicit type that can only contain certain values. We simply cannot pass a string to a field that was expecting a number, or a negative number to a field that was expecting a positive number. This is enforced in the generated code, either by runtime checks for dynamic languages or at compile time for typed languages. In our case, since Go is a typed language, we will have compile-time checks.

And finally, in typed languages, a schema shortens the feedback loop because instead of having a runtime check that might or might not trigger an error, we simply have a compilation error. This makes our software more reliable, and developers can feel confident that if they were able to compile, the data set into the object would be valid.

In pure JSON, we cannot ensure that our schema is correct at compile time. Most often, developers will add extra tooling such as JSON Schema to get this kind of assurance at runtime. This adds complexity to our project and requires every developer to be disciplined because they could simply write their code without ever defining a schema. In Protobuf, we do schema-driven development. The schema comes first, and then our application revolves around the generated types. Furthermore, we have assurance at compile time that the values we set are correct, and we do not need to replicate the setup across all our microservices or subprojects. In the end, we spend less time on configuration and more time thinking about our data schemas and the data encoding.

Encoding details

Up until now, we talked a lot about “algorithms”; however, we did not get too much into the specifics. In this section, we are going to see the major algorithms that are behind the serialization and deserialization processes in Protobuf. We are first going to see all the types that we can use for our fields, then with that, we are going to divide them into three categories, and finally, we are going to explain which algorithm is used for each category.

In Protobuf, types that are considered simple and that are provided by Protobuf out of the box are called scalar types. We can use 15 such types, as listed here:

  • int32
  • int64
  • uint32
  • uint64
  • sint32
  • sint64
  • fixed32
  • fixed64
  • sfixed32
  • sfixed64
  • double
  • float
  • string
  • bytes
  • bool

And out of these 15 types, 10 are for integers (the first 10 in the list). These types might be intimidating at first, but do not worry too much about how to choose between them right now; we are going to discuss that throughout this section. The most important thing to understand right now is that two-thirds of the types are for integers, and this shows what Protobuf is good at: encoding integers.

Now that we know the scalar types, let us separate these types into three categories. However, we are not here to make simple categories such as numbers, arrays, and so on. We want to make categories that are related to the Protobuf serialization algorithms. In total, we have three: fixed-size numbers, variable-size integers (varints), and length-delimited types. Here is a table with each category populated:

Fixed-size numbers   Varints   Length-delimited types
fixed32              int32     string
fixed64              int64     bytes
sfixed32             uint32
sfixed64             uint64
double               sint32
float                sint64
                     bool

Let’s go through each now.

Fixed-size numbers

The easiest one to understand for developers who are used to typed languages is fixed-size numbers. If you worked with lower-level languages in which you tried to optimize storage space, you know that we can, on most hardware, store an integer in 32 bits (4 bytes) or in 64 bits (8 bytes). fixed32 and fixed64 are just binary representations of a normal number that you would have in languages that give you control over the storage size of your integers (for example, Go, C++, Rust, and so on). If we serialize the number 42 into a fixed32 type, we will have the following:

$ cat fixed.txt | protoc --encode=Fixed32Value
  wrappers.proto | hexdump -C
00000000  0d 2a 00 00 00                          |.*...|
00000005

Here, 2a is 42, and 0d is a combination of the field tag and the type of the field (more about that later in this section). In the same manner, if we serialize 42 in a fixed64 type, we will have the following:

$ cat fixed.txt | protoc --encode=Fixed64Value
  wrappers.proto | hexdump -C
00000000  09 2a 00 00 00 00 00 00  00         |.*.......|
00000009

And the only thing that changed is the combination between the type of the field and the field tag (09). This is mostly because we changed the type to 64-bit numbers.

Two other scalar types that are easy to understand are float and double. Once again, Protobuf produces the binary representation of these types. If we encode 42.42 as float, we will get the following output:

$ cat floating_point.txt | protoc --encode=FloatValue
  wrappers.proto | hexdump -C
00000000  0d 14 ae 29 42                          |...)B|
00000005

In this case, this is a little bit more complicated to decode, but this is simply because float numbers are encoded differently. If you are interested in this kind of data storage, you can look at the IEEE Standard for Floating-Point Arithmetic (IEEE 754), which explains how a float is formed in memory. What is important to note here is that floats are encoded in 4 bytes, and in front, we have our tag + type. And for a double type with a value of 42.42, we will get the following:

$ cat floating_point.txt | protoc --encode=DoubleValue
  wrappers.proto | hexdump -C
00000000  09 f6 28 5c 8f c2 35 45  40         |..(\..5E@|
00000009

This is encoded in 8 bytes and the tag + type. Note that the tag + type also changed here because we are in the realm of 64-bit numbers.

Finally, we are left with sfixed32 and sfixed64. We did not mention it earlier, but fixed32 and fixed64 are unsigned numbers. This means that we cannot store negative numbers in fields with these types. sfixed32 and sfixed64 solve that. So, if we encode –42 in a sfixed32 type, we will have the following:

$ cat sfixed.txt | protoc --encode=SFixed32Value
  wrappers.proto | hexdump -C
00000000  0d d6 ff ff ff                          |.....|
00000005

This is obtained by taking the binary for 42, flipping all the bits (1’s complement), and adding one (2’s complement). Otherwise, if you serialize a positive number, you will have the same binary as the fixed32 type. Then, if we encode –42 in a field with type sfixed64, we will get the following:

$ cat sfixed.txt | protoc --encode=SFixed64Value
  wrappers.proto | hexdump -C
00000000  09 d6 ff ff ff ff ff ff  ff         |.........|
00000009

This is like the sfixed32 type, only the tag + type was changed.

To summarize, fixed integers are simple binary representations of integers that resemble how they are stored in most computers’ memory. As their name suggests, they are always serialized into the same number of bytes. For some use cases, such representations are fine; however, in most cases, we would like to reduce the number of bits that are only there as padding. And in these use cases, we will use something called varints.
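
Before moving on, here is a small Go sketch showing that the payload of a fixed32 field is nothing more than the number laid out in little-endian order. This uses the standard library for illustration; it is not what the Protobuf runtime literally calls.

package main

import (
  "encoding/binary"
  "fmt"
)

func main() {
  buf := make([]byte, 4)
  binary.LittleEndian.PutUint32(buf, 42)
  // Prints 2a000000: the same four payload bytes we saw after the 0d metadata byte.
  fmt.Printf("%x\n", buf)
}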

Varints

Now that we have seen fixed integers, let us move to another type of serialization for numbers: variable-length integers. As its name suggests, we will not get a fixed number of bytes when serializing an integer.

To be more precise, the smaller the integer, the smaller the number of bytes it will be serialized into, and the bigger the integer, the larger the number of bytes. Let us look at how the algorithm works.

In this example, let us serialize the number 300. To start, we are going to take the binary representation of that number:

100101100

With this binary, we can now split it into groups of 7 bits and pad with zeros if needed:

0000010
0101100

Now, since we lack 2 more bits to create 2 bytes, we are going to add 1 as the most significant bit (MSB) for all the groups except the first one, and we are going to add 0 as the MSB for the first group:

00000010
10101100

These MSBs are continuation bits. This means that, when we have 1, we still have 7 bits to read after, and if we have 0, this is the last group to be read. Finally, we put this number into little-endian order, and we have the following:

10101100 00000010

Or, we would have AC 02 in hexadecimal. Now that we have serialized 300 into AC 02, and keeping in mind that deserialization is the opposite of serialization, we can deserialize that data. We take our binary representation for AC 02, drop the continuation bits (MSBs), and we reverse the order of bytes. In the end, we have the following binary:

100101100

This is the same binary we started with. It equals 300.
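
A hand-rolled version of this algorithm fits in a few lines of Go. This is a sketch for illustration only; the standard library already ships the same encoding in encoding/binary (AppendUvarint and Uvarint).

package main

import "fmt"

// encodeVarint serializes v with the 7-bits-per-group, continuation-bit scheme
// described above, least significant group first.
func encodeVarint(v uint64) []byte {
  var out []byte
  for v >= 0x80 {
    out = append(out, byte(v)|0x80) // set the continuation bit (MSB)
    v >>= 7
  }
  return append(out, byte(v)) // last group: continuation bit is 0
}

// decodeVarint reverses the process: drop the continuation bits and
// reassemble the 7-bit groups.
func decodeVarint(b []byte) uint64 {
  var v uint64
  var shift uint
  for _, c := range b {
    v |= uint64(c&0x7f) << shift
    if c&0x80 == 0 {
      break
    }
    shift += 7
  }
  return v
}

func main() {
  enc := encodeVarint(300)
  fmt.Printf("% x -> %d\n", enc, decodeVarint(enc)) // ac 02 -> 300
}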

Now, in the real world, you might have larger numbers. For a quick reference on positive numbers, here is a list of the thresholds at which the number of bytes will increase:

Threshold value                Byte size
0                              0
1                              1
128                            2
16,384                         3
2,097,152                      4
268,435,456                    5
34,359,738,368                 6
4,398,046,511,104              7
562,949,953,421,312            8
72,057,594,037,927,936         9
9,223,372,036,854,775,807      9
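
The table is easy to reproduce with a small helper that counts the bytes produced by the algorithm above; varintSize is our own illustrative function, not an API of the Protobuf library. The 0 row is a special case: in proto3, a field left at its zero value is simply not serialized at all, which is why it costs 0 bytes.

package main

import "fmt"

// varintSize returns the number of bytes the varint encoding of v occupies.
func varintSize(v uint64) int {
  n := 1
  for v >= 0x80 {
    v >>= 7
    n++
  }
  return n
}

func main() {
  for _, v := range []uint64{1, 127, 128, 16383, 16384} {
    fmt.Printf("%d -> %d byte(s)\n", v, varintSize(v))
  }
  // 1 -> 1, 127 -> 1, 128 -> 2, 16383 -> 2, 16384 -> 3
}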

An astute reader might have noticed that having a varint is often beneficial, but in some cases, we might encode our values into more bytes than needed. For example, if we encode 72,057,594,037,927,936 into an int64 type, it will be serialized into 9 bytes, while with a fixed64 type, it will be encoded into 8. Furthermore, a problem coming from the encoding that we just saw is that a negative number is sign-extended to 64 bits and thus encoded as a large positive number taking 10 bytes. That begs the following question: How can we efficiently choose between the different integer types?

How to choose?

The answer is, as always, it depends. However, we can be systematic in our choices to avoid many errors. We mostly have three choices that we need to make depending on the data we want to serialize:

  • The range of numbers needed
  • The need for negative numbers
  • The data distribution

The range

By now, you might have noticed that the 32 and 64 suffixes on our types are not always about the number of bits into which our data will be serialized. For varints, this is more about the range of numbers that can be serialized. These ranges are dependent on the algorithm used for serialization.

For fixed, signed, and variable-length integers, the range of numbers is the same as the one developers are used to with 32 and 64 bits. This means that we get the following:

[-2^(NUMBER_OF_BITS - 1), 2^(NUMBER_OF_BITS - 1) - 1]

Here, NUMBER_OF_BITS is either 32 or 64 depending on the type you want to use.

For unsigned numbers (uint)—this is again like what developers are expecting—we will get the following range:

[0, 2 * 2^(NUMBER_OF_BITS - 1) - 1]

The need for negative numbers

In the case where you simply do not need negative numbers (for example, for IDs), the ideal type to use is an unsigned integer (uint32, uint64). This prevents you from encoding negative numbers, gives you twice the positive range of signed integers, and still serializes using the varint algorithm.

And another type that you will potentially work with is the one for signed integers (sint32, sint64). We won’t go into details about how to serialize them, but the algorithm transforms any negative number into a positive number (ZigZag encoding) and serializes that positive number with the varint algorithm. This is more efficient for serializing negative numbers because, instead of being serialized as a large positive number (10 bytes), they take advantage of the varint encoding. However, this is less efficient for serializing positive numbers because we now interleave the previously negative numbers with the positive numbers. This means that the same positive number may need more bytes to encode as a sint than it would as an int or uint.
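
The ZigZag transformation itself is only a couple of lines: it maps 0, -1, 1, -2, 2, … to 0, 1, 2, 3, 4, … so that small magnitudes stay small before the varint step. Here is an illustrative Go sketch of the mapping and its inverse.

package main

import "fmt"

// zigzag interleaves negative and positive numbers: 0, -1, 1, -2, ... -> 0, 1, 2, 3, ...
func zigzag(n int64) uint64 {
  return uint64((n << 1) ^ (n >> 63))
}

// unzigzag is the inverse mapping applied during deserialization.
func unzigzag(u uint64) int64 {
  return int64(u>>1) ^ -int64(u&1)
}

func main() {
  for _, n := range []int64{0, -1, 1, -42, 42} {
    fmt.Printf("%d -> %d -> %d\n", n, zigzag(n), unzigzag(zigzag(n)))
  }
  // 0 -> 0, -1 -> 1, 1 -> 2, -42 -> 83, 42 -> 84
}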

The data distribution

Finally, one thing that is worth mentioning is that encoding efficiency is highly dependent on your data distribution. You might have chosen some types depending on some assumptions, but your actual data might be different. Two common examples are choosing an int32 or int64 type because we expect to have few negative values and choosing an int64 type because we expect to have few very big numbers. Both situations might result in significant inefficiencies because, in both cases, we might get a lot of values serialized into 9 or 10 bytes.

Unfortunately, there is no way of deciding the type that will always perfectly fit the data. In this kind of situation, there is nothing better than doing experiments on real data that is representative of your whole dataset. This will give you an idea of what you are doing correctly and what you are doing wrong.

Length-delimited types

Now that we’ve seen all the types for numbers, we are left with the length-delimited types. These are the types, such as string and bytes, whose length we cannot know at compile time. Think of these as dynamic arrays.

To serialize such a dynamic structure, we simply prefix the raw data with the length of the data that follows. This means that if we have a string of length 10 and content “0123456789”, we will have the following sequence of bytes:

$ cat length-delimited.txt | protoc --encode=StringValue
  wrappers.proto | hexdump -C
00000000  0a 0a 30 31 32 33 34 35  36 37 38 39              |..0123456789|
0000000c

Here, the first 0a instance is the field tag + type, the second 0a instance is the hexadecimal representation of 10, and then we have the ASCII values for each character. To see why 0 turns into 30, you can check the ASCII manual by typing man ascii in your terminal and looking for the hexadecimal set. You should have a similar output to the following:

30  0    31  1    32  2    33  3    34  4
35  5    36  6    37  7    38  8    39  9

Here, the first number of each pair is the hexadecimal value for the second one.

Another kind of message field that will be serialized into a length-delimited type is a repeated field. A repeated field is the equivalent of a list. To write such a field, we simply add the repeated keyword before the field type. If we wanted to serialize a list of IDs, we could write the following:

repeated uint64 ids = 1;

And with this, we could store 0 or more IDs.

Similarly, these fields will be serialized with the length as a prefix. If we take the ids field and serialize the numbers from 1 to 9, we will have the following:

$ cat repeated.txt | protoc --encode=RepeatedUInt64Values
  wrappers.proto | hexdump -C
00000000  0a 09 01 02 03 04 05 06  07 08 09 |...........|
0000000b

Here, 0a is the tag + type, 09 is the length of the packed payload (9 bytes, since each value fits in a single byte), and then we have the values 1 through 9.

Important note

Repeated fields are only serialized as length-delimited types when they are storing scalar types except for strings and bytes. These repeated fields are considered packed. For complex types or user-defined types (messages), the values will be encoded in a less optimal way. Each value will be encoded separately and prefixed by the type + tag byte(s) instead of having the type + tag serialized only once.

Field tags and wire types

Up until now, you read “tag + type” multiple times and we did not really see what this means. As mentioned, the first byte(s) of every serialized field will be a combination of the field type and the field tag. Let us start by seeing what a field tag is. You surely noticed something different about the syntax of a field. Each time we define a field, we add an equals sign and then an incrementing number. Here’s an example:

uint64 id = 1;

While they look like an assignment of a value to the field, these numbers are only there to give a unique identifier to the field. These identifiers, called tags, might look insignificant, but they are the most important bit of information for serialization. They are used to tell Protobuf into which field to deserialize which data. As we saw earlier during the presentation of the different serialization algorithms, the field name is not serialized; only the type and the tag are. Thus, when deserialization kicks in, it will read a tag and know which field the following data belongs to.

Now that we know that these tags are simply identifiers, let us see how these values are encoded. Tags are simply serialized as varints but they are serialized with a wire type. A wire type is a number that is given to a group of types in Protobuf. Here is the list of wire types:

Type   Meaning            Used for
0      Varint             int32, int64, uint32, uint64, sint32, sint64, bool, enum
1      64-bit             fixed64, sfixed64, double
2      Length-delimited   string, bytes, packed repeated fields
5      32-bit             fixed32, sfixed32, float

Here, 0 is the type for varints, 1 is for 64-bit, and so on.

To combine the tag and the wire type, Protobuf uses a concept called bit packing. This is a technique that is designed to reduce the number of bits into which the data will be serialized. In our case here, the data is the field metadata (the famous tag + type). So, here is how it works. The last 3 bits of the serialized metadata are reserved for the wire type, and the rest is for the tag. If we take the first example that we mentioned in the Fixed-size numbers section, where we serialized 42 in a fixed32 field with tag 1, we had the following:

0d 2a 00 00 00

This time we are only interested in the 0d part. This is the metadata of the field. To see how this was serialized, let us turn 0d into binary (with 0 padding):

00001101

Here, we have 101 (5) for the wire type—this is the wire type for 32 bits—and we have 00001 (1) for tag 1. Now, since the tag is serialized as a varint, it means that we could have more than 1 byte for that metadata. Here’s a reference for knowing the thresholds at which the number of bytes will increase:

Field tag     Size (in bits)
1             5
16            13
2,048         21
262,144       29
33,554,432    37
536,870,911   37
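
Putting the two pieces together, the metadata value is just (tag << 3) | wire_type, which is then varint-encoded in front of the field's payload. The following sketch reproduces the metadata bytes we have seen so far; fieldKey is our own name, not something from the Protobuf library.

package main

import "fmt"

// fieldKey packs a field tag and a wire type into the value that gets
// varint-encoded before the field's payload.
func fieldKey(tag, wireType int) int {
  return tag<<3 | wireType
}

func main() {
  fmt.Printf("%02x\n", fieldKey(1, 5)) // 0d: tag 1, 32-bit (the fixed32 example)
  fmt.Printf("%02x\n", fieldKey(1, 1)) // 09: tag 1, 64-bit (the fixed64 example)
  fmt.Printf("%02x\n", fieldKey(1, 0)) // 08: tag 1, varint (the 08 7b example)
  fmt.Printf("%02x\n", fieldKey(1, 2)) // 0a: tag 1, length-delimited (the string example)
}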

This means that, since fields without values set are not serialized, we should reserve the lowest tags for the fields that are most often populated. This lowers the overhead needed to store the metadata. In general, the first 15 tags (which fit in a single metadata byte) are enough, but if you find yourself needing more, you might consider moving a group of fields into a new message with lower tags.

Common types

As of now, if you checked the companion code, you could see that we are defining a lot of “boring” types that are just wrappers around one field. It is important to note that we wrote them by hand to simply give an example of how you would inspect the serialization of certain data. Most of the time, you will be able to use already defined types that are doing the same.

Well-known types

Protobuf itself comes with a bunch of already defined types. We call them well-known types. While a lot of them are rarely useful outside of the Protobuf library itself or advanced use cases, some of them are important, and we are going to use some of them in this book.

The ones that we can understand quite easily are the wrappers. We wrote some by hand earlier. They usually start with the name of the type they are wrapping and finish with Value. Here is a list of wrappers:

  • BoolValue
  • BytesValue
  • DoubleValue
  • EnumValue
  • FloatValue
  • Int32Value
  • Int64Value
  • StringValue
  • UInt32Value
  • UInt64Value

These types might be interesting for debugging use cases such as the ones we saw earlier or just to serialize simple data such as a number, a string, and so on.

Then, there are types representing time, such as Duration and Timestamp. These two types are defined in the exact same way ([Duration | Timestamp] is not valid Protobuf syntax; it simply means that either term could take that place):

message [Duration | Timestamp] {
  // Represents seconds of UTC time since Unix epoch
  // 1970-01-01T00:00:00Z. Must be from 0001-01-01T00:00:00Z to
  // 9999-12-31T23:59:59Z inclusive.
  int64 seconds = 1;
  // Non-negative fractions of a second at nanosecond resolution.
  // Negative second values with fractions must still have
  // non-negative nanos values that count forward in time.
  // Must be from 0 to 999,999,999 inclusive.
  int32 nanos = 2;
}

However, as their names suggest, they represent different concepts. A Duration type is a span of time (the difference between a start and an end time), whereas a Timestamp type is a single point in time.
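
In Go, you rarely need to populate the seconds and nanos fields by hand; the protobuf-go module ships helper packages for these well-known types. A short sketch:

package main

import (
  "fmt"
  "time"

  "google.golang.org/protobuf/types/known/durationpb"
  "google.golang.org/protobuf/types/known/timestamppb"
)

func main() {
  ts := timestamppb.New(time.Now())     // a google.protobuf.Timestamp
  d := durationpb.New(90 * time.Second) // a google.protobuf.Duration

  // Convert back to standard library types when you need to act on them.
  fmt.Println(ts.AsTime(), d.AsDuration())
}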

Finally, one last important well-known type is FieldMask. This is a type that represents a set of fields that should be included when serializing another type. To understand this one, it might be better to give an example. Let us say that we have an API endpoint returning an account with id, username, and email. If you wanted to only get the account’s email address to prepare a list of people you want to send a promotional email to, you could use a FieldMask type to tell Protobuf to only serialize the email field. This lets us reduce the additional cost of serialization and deserialization because now we only deal with one field instead of three.
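
On the Go side, the well-known FieldMask type lives in the fieldmaskpb package; the mask simply carries the list of field paths the caller is interested in, and the server uses it to decide which fields to populate. A minimal sketch (the "email" path refers to the hypothetical account message from this example, not to the Account type we defined earlier):

package main

import (
  "fmt"

  "google.golang.org/protobuf/types/known/fieldmaskpb"
)

func main() {
  // The client asks for the email field only; the server can leave every
  // other field of the returned account unset.
  mask := &fieldmaskpb.FieldMask{Paths: []string{"email"}}
  fmt.Println(mask.GetPaths())
}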

Google common types

On top of well-known types, there are types that are defined by Google. These are defined in the googleapis/api-common-protos GitHub repository under the google/type directory and are easily usable in Golang code. I encourage you to check all the types, but I want to mention some interesting ones:

  • LatLng: A latitude/longitude pair storing the values as doubles
  • Money: An amount of money with its currency as defined by ISO 4217
  • Date: Year, Month, and Day stored as int32

Once again, go to the repository to check all the others. These types are battle-tested and in a lot of cases more optimized than trivial types that we would write. However, be aware that these types might also not be a good fit for your use cases. There is no such thing as a one-size-fits-all solution.

Services

Finally, the last construct that is important to see, and that we are going to work with throughout this book, is the service. In Protobuf, a service is a collection of RPC endpoints, and each RPC has two major parts: the first is the input of the RPC, and the second is its output. So, if we wanted to define a service for our accounts, we could have something like the following:

message GetAccountRequest {…}
message GetAccountResponse {…}
service AccountService {
  rpc GetAccount(GetAccountRequest) returns (GetAccountResponse);
  //...
}

Here, we define a message representing the request and another one representing the response, and we use these as the input and output of our GetAccount RPC. In the next chapter, we are going to cover more advanced usage of services, but right now, what is important to understand is that Protobuf defines the services but does not generate the code for them. Only gRPC will.

Protobuf’s services are here to describe a contract, and it is the job of an RPC framework to fulfill that contract on the client and server part. Notice that I wrote an RPC framework and not simply gRPC. Any RPC framework could read the information provided by Protobuf’s services and generate code out of it. The goal of Protobuf here is to be independent of any language and framework. What the application does with the serialized data is not important to Protobuf.

Finally, these services are the pillars of gRPC. As we are going to see later in this book, we will use them to make requests, and we are going to implement them on the server side to return responses. Using the defined services on the client side will let us feel like we are directly calling a function on the server. If we talk about AccountService, for example, we can make a call to GetAccount by having the following code:

res := client.GetAccount(req)

Here, client is an instance of a gRPC client, req is an instance of GetAccountRequest, and res is an instance of GetAccountResponse. In this case, it feels a little bit like we are calling GetAccount directly, even though it is implemented on the server side. However, this is gRPC's doing. It hides all the complex ceremony of serializing and deserializing objects and sending them between the client and the server.
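
With the real generated gRPC code, the call also carries a context and returns an error alongside the response. Here is a hedged sketch of what that looks like; the import path, the address, and the empty request are placeholders:

package main

import (
  "context"
  "log"

  "google.golang.org/grpc"
  "google.golang.org/grpc/credentials/insecure"

  pb "example.com/accountpb" // hypothetical package generated from AccountService
)

func main() {
  conn, err := grpc.Dial("localhost:50051", grpc.WithTransportCredentials(insecure.NewCredentials()))
  if err != nil {
    log.Fatal(err)
  }
  defer conn.Close()

  client := pb.NewAccountServiceClient(conn)

  // Feels like a local function call; gRPC performs the serialization, the
  // HTTP/2 request, and the deserialization of the response for us.
  res, err := client.GetAccount(context.Background(), &pb.GetAccountRequest{})
  if err != nil {
    log.Fatal(err)
  }
  log.Println(res)
}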

Summary

In this chapter, we saw how to write messages and services, and we saw how scalar types are serialized and deserialized. This prepared us for the rest of the book, where we are going to use this knowledge extensively.

In the next chapter, we are going to talk about gRPC, why it uses Protobuf for serialization and deserialization, and what it is doing behind the scenes, and we are going to compare it with REST and GraphQL APIs.

Quiz

  1. What is the number 32 representing in the int32 scalar type?
    1. The number of bits the serialized data will be stored in
    2. The range of numbers that can fit into the scalar type
    3. Whether the type can accept negative numbers or not
  2. What is varint encoding doing?
    1. Compressing data in such a way that a smaller number of bytes will be required for serializing data
    2. Turning every negative number into positive numbers
  3. What is ZigZag encoding doing?
    1. Compressing data in such a way that a smaller number of bytes will be required for serializing data
    2. Turning every negative number into a positive number
  4. In the following code, what is the = 1 syntax and what is it used for?
    uint64 ids = 1;
    1. This is assigning the value 1 to a field
    2. 1 is an identifier that has no other purpose than helping developers
    3. 1 is an identifier that is helping the compiler know into which field to deserialize the binary data.
  5. What is a message?
    1. An object that contains fields and represents an entity
    2. A collection of API endpoints
    3. A list of possible states
  6. What is an enum?
    1. An object that contains fields and represents an entity
    2. A collection of API endpoints
    3. A list of possible states
  7. What is a service?
    1. An object that contains fields and represents an entity
    2. A collection of API endpoints
    3. A list of possible states

Answers

  1. B
  2. A
  3. B
  4. C
  5. A
  6. C
  7. B

Key benefits

  • Discover essential guidelines to steer clear of pitfalls when designing and evolving your gRPC services
  • Develop your understanding of advanced gRPC concepts such as authentication and security
  • Put your knowledge into action as you build, test, and deploy a TODO list microservice

Description

In recent years, the popularity of microservice architecture has surged, bringing forth a new set of requirements. Among these, efficient communication between the different services takes center stage, and that's where gRPC shines. This book will take you through creating gRPC servers and clients in an efficient, secure, and scalable way. However, communication is just one aspect of microservices, so this book goes beyond that to show you how to deploy your application on Kubernetes and configure other tools that are needed for making your application more resilient. With these tools at your disposal, you’ll be ready to get started with using gRPC in a microservice architecture. In gRPC Go for Professionals, you'll explore core concepts such as message transmission and the role of Protobuf in serialization and deserialization. Through a step-by-step implementation of a TODO list API, you’ll see the different features of gRPC in action. You’ll then learn different approaches for testing your services and debugging your API endpoints. Finally, you’ll get to grips with deploying the application services via Docker images and Kubernetes.

Who is this book for?

Whether you’re interested in microservices or looking to use gRPC in your product, this book is for you. To fully benefit from its contents, you’ll need a solid grasp of Go programming and using a terminal. If you’re already familiar with gRPC, this book will help you to explore the different concepts and tools in depth.

What you will learn

  • Understand the different API endpoints that gRPC lets you write
  • Discover the essential considerations when writing your Protobuf files
  • Compile Protobuf code with protoc and Bazel for efficient development
  • Gain insights into how advanced gRPC concepts work
  • Grasp techniques for unit testing and load testing your API
  • Get to grips with deploying your microservices with Docker and Kubernetes
  • Discover tools for writing secure and efficient gRPC code


Product Details

Publication date : Jul 14, 2023
Length: 260 pages
Edition : 1st
Language : English
ISBN-13 : 9781837638840



Table of Contents

12 Chapters
Chapter 1: Networking Primer
Chapter 2: Protobuf Primer
Chapter 3: Introduction to gRPC
Chapter 4: Setting Up a Project
Chapter 5: Types of gRPC Endpoints
Chapter 6: Designing Effective APIs
Chapter 7: Out-of-the-Box Features
Chapter 8: More Essential Features
Chapter 9: Production-Grade APIs
Epilogue
Index
Other Books You May Enjoy

Customer reviews

Rating distribution: 4.5 out of 5 (6 ratings)
5 star: 50%
4 star: 50%
3 star: 0%
2 star: 0%
1 star: 0%
arsalan Aug 06, 2023
5 out of 5 stars
The book's focus on advanced gRPC ideas like authentication and security is one of its finest features. These are essential considerations when working with distributed microservices, and the authors do a great job of demystifying these difficult subjects. The thorough grasp of effective microservice security and system integrity will be provided to readers.The book stands out for its practical approach, which enables users to put their information to use. It explores core concepts such as message transmission and the role of Protobuf in serialization and deserialization. A must read.
Amazon Verified review
POE Oct 20, 2023
5 out of 5 stars
This book is intended for Go developers that want to leverage gRPC, Go’s implementation of the universal Remote Procedure Call (RPC) framework. Both a network and Protobuf primer are provided at the start of the book to help ensure the reader is prepared for the content that follows. A brief introduction to gRPC is provided, covering server, client, REST, and GraphQL.The author provides sufficient detail to help you setup a project and design an API. Performance is aptly considered with the programming examples. Several gRPC features are covered to include error handling, class, load balancing, request validation, logging, tracing, and more. Advanced topics include unit and load testing, debugging, and deploying with Docker, Kubernetes, and Envoy Proxy.
Amazon Verified review
KS Dec 04, 2023
5 out of 5 stars
Here's what I loved:1. Comprehensive and practical: The book doesn't shy away from core concepts. It delves deep into message transmission, the magic of Protobuf, and even advanced gRPC features like authentication and security. Yet, it never loses sight of practicality, offering step-by-step implementation through a API.2. Testing and deployment done right: Often neglected aspects of microservice development are given their due here. Jean meticulously explores unit and load testing techniques, ensuring your code stays rock-solid. He then takes you on a deployment journey, leveraging Docker and Kubernetes for seamless production-ready deployments.3. Beyond the code: While technical mastery is key, Jean recognizes the bigger picture. He offers valuable insights into designing and evolving gRPC services, avoiding common pitfalls, and writing secure and efficient code. This holistic approach prepares you for the real-world challenges of microservice development.This book is for you if:1. You're a Go developer eager to leverage gRPC for building microservices.2. You've dabbled in gRPC but crave a deeper understanding and practical guidance.3. You're building microservices and want to ensure their scalability, efficiency, and robustness.Overall, "gRPC Go for Professionals" is a masterclass in building production-grade microservices with gRPC and Go. It's a must-have for any developer serious about mastering this powerful technology.Bonus points:1. The book is well-written, engaging, and avoids jargon overload.2. The accompanying code examples are clear, concise, and readily applicable.3. Jean's passion for gRPC shines through, making the learning experience even more enjoyable.Highly recommended!
Amazon Verified review
Tans Aug 02, 2023
4 out of 5 stars
For software developers wishing to fully utilise gRPC and Go in the construction of effective and reliable microservices, gRPC Go for Professionals is an invaluable and thorough reference.The book's practical approach to learning is one of its best qualities. Practical, real-world examples are used to walk readers through the process of designing, implementing, and testing production-grade microservices using gRPC and Go. The offered code samples are well-organized, and they are complemented by illuminating explanations, which promotes a deeper comprehension of the fundamental mechanics at work. You'll discover various techniques for evaluating your services and troubleshooting your API endpoints. Highly advised.
Amazon Verified review
KT Nov 27, 2023
4 out of 5 stars
...and someone writes a book about it.Look - information down a wire does not, never has, and never will both a) be efficient and b) align exactly with the same information in language-aligned data structures. How you make the translations is tricky, and one should take a long hard look at the entire process before ever starting.Google engineers naturally ignored the process of thinking, and dove right in. They *did* manage to strip out all the human-readable-on-the-wire stupidity which characterizes both XML and JSON/YAML, opting for machine-readable alacrity.However, they got hamstrung by how that layout translates into Go data structures (all the while crowing about how gRPC [ and protobufs - which are a whole 'nother level of mis-engineered hell ] ) are language-agnostic. In this case, it simply means that gRPC is a small inadequate solution looking for a problem, which is equally inadequate in a lot of languages, with a lot of bags on the side to attempt to correct these deficiencies.In short, it's Sun ONC-RPC with XDR, with slight variations and a similar number of drawbacks and failures, but it's Google, so that must mean it's the Right Thing(TM).Oh, the book is reasonably good at describing gRPC, insofar as any of these books end up ever being any good at anything. If you absolutely *must* use gRPC, then I guess you might get some value from this book. Be sure the expense it though, don't pay for it out of your own pocket.
Amazon Verified review
