gRPC Go for Professionals

Protobuf Primer

Now that we understand the basic networking concepts behind gRPC, we can touch upon another pillar in the construction of your gRPC APIs: Protocol Buffers, more commonly known as Protobuf. It is an important part of the communication process because, as we saw in the previous chapter, every message is encoded into binary, and this is exactly what Protobuf does for us in gRPC. The goal of this chapter is to understand what Protobuf is and why it is needed for high-efficiency communication. Finally, we are going to look at some details concerning the serialization and deserialization of messages.

In this chapter, we’re going to cover the following main topics:

  • Protobuf is an Interface Description Language (IDL)
  • Serialization/deserialization
  • Protobuf versus JSON
  • Encoding details
  • Common types
  • Services

Prerequisites

You can find the code for this chapter at https://github.com/PacktPublishing/gRPC-Go-for-Professionals/tree/main/chapter2. In this chapter, we are going to discuss how Protocol Buffers serializes and deserializes data. While this could be done by writing code, we are going to stay away from that in order to learn how to use the protoc compiler to debug and optimize our Protobuf schemas. Thus, if you want to reproduce the examples, you will need to download the protoc compiler from the Protobuf GitHub releases page (https://github.com/protocolbuffers/protobuf/releases). The easiest way to get started is to download the binary releases, which are named with the convention protoc-${VERSION}-${OS}-${ARCHITECTURE}. Uncompress the zip file and follow the readme.txt instructions (note: we intend to use Well-Known Types later, so make sure you also install the includes). After that, you should be able to run the following command:

$ protoc --version

Finally, as always, you will be able to find the companion code in the GitHub repository under the folder for the current chapter (chapter2).

Protobuf is an IDL

Protobuf is a language. More precisely, it is an IDL. It is important to make this distinction because, as we will see in more detail later, in Protobuf we do not write any logic the way we do in a programming language; instead, we write data schemas, which are contracts to be used for serialization and fulfilled by deserialization. So, before explaining all the rules that we need to follow when writing a .proto file and going through all the details of serialization and deserialization, we first need to get a sense of what an IDL is and what the goal of such a language is.

An IDL, as we saw earlier, is an acronym for Interface Description Language, and the name contains three parts. The first part, Interface, describes a piece of code that sits between two or more applications and hides the complexity of implementation. As such, we do not make any assumptions about the hardware on which an application is running, the OS on which it runs, or the programming language in which it is written. This interface is, by design, hardware-, OS-, and language-agnostic. This is important for Protobuf and several other serialization data schemas because it lets developers write the schema once and use it across different projects.

The second part is Description, and this sits on top of the concept of Interface. Our interface describes what the two applications can expect to receive and what they are expected to send to each other. This includes describing some types and their properties, the relationships between these types, and the way these types are serialized and deserialized. As this may be a bit abstract, let us look at an example in Protobuf. If we wanted to create a type called Account that contains an ID, a username, and the rights this account has, we could write the following:

syntax = "proto3";
enum AccountRight {
  ACCOUNT_RIGHT_UNSPECIFIED = 0;
  ACCOUNT_RIGHT_READ = 1;
  ACCOUNT_RIGHT_READ_WRITE = 2;
  ACCOUNT_RIGHT_ADMIN = 3;
}
message Account {
  uint64 id = 1;
  string username = 2;
  AccountRight right = 3;
}

If we skip some of the details that are not important at this stage, we can see that we define the following:

  • An enumeration listing all the possible rights, plus an extra default variant called ACCOUNT_RIGHT_UNSPECIFIED
  • A message (equivalent to a class or struct) listing the three properties that an Account type should have

Again, without looking at the details, it is readable, and the relationship between Account and AccountRight is easy to understand.

Finally, the last part is Language. This is here to say that, as with every language—computer ones or not—we have rules that we need to follow so that another human, or a compiler, can understand our intent. In Protobuf, we write our code to please the compiler (protoc), and then it does all the heavy lifting for us. It will read our code and generate code in the language that we need for our application, and then our user code will interact with the generated code. Let us look at a simplified output of what the Account type defined previously would give in Go:

type AccountRight int32
const (
  AccountRight_ACCOUNT_RIGHT_UNSPECIFIED AccountRight = 0
  AccountRight_ACCOUNT_RIGHT_READ AccountRight = 1
  AccountRight_ACCOUNT_RIGHT_READ_WRITE AccountRight = 2
  AccountRight_ACCOUNT_RIGHT_ADMIN AccountRight = 3
)
type Account struct {
  Id uint64 `protobuf:"varint,1,…`
  Username string `protobuf:"bytes,2,…`
  Right AccountRight `protobuf:"varint,3,…`
}

In this code, there are important things to notice. Let us break this code into pieces:

type AccountRight int32
const (
  AccountRight_ACCOUNT_RIGHT_UNSPECIFIED AccountRight = 0
  AccountRight_ACCOUNT_RIGHT_READ AccountRight = 1
  AccountRight_ACCOUNT_RIGHT_READ_WRITE AccountRight = 2
  AccountRight_ACCOUNT_RIGHT_ADMIN AccountRight = 3
)

Our AccountRight enum is defined as a set of constants with values of type int32. Each enum variant’s name is prefixed with the name of the enum, and each constant has the numeric value that we set after the equals sign in the Protobuf code. Message fields have a similar number after the equals sign, called a field tag, which we will introduce later in this chapter.

Now, take a look at the following code:

type Account struct {
  Id uint64 `protobuf:"varint,1,…`
  Username string `protobuf:"bytes,2,…`
  Right AccountRight `protobuf:"varint,3,…`
}

Here, we have our Account message transpiled to a struct with the exported fields Id, Username, and Right. Each of these fields has a type converted from a Protobuf type to a Go type. In our example, the Go types and Protobuf types have the exact same names, but it is important to know that, in some cases, the types translate differently. One such example is double in Protobuf, which translates to float64 in Go. Finally, we have the field tags, referenced in the metadata following each field. Once again, their meaning will be explained later in this chapter.

So, to recapitulate, an IDL is a piece of code sitting between different applications and describing objects and their relationships by following certain defined rules. In the case of Protobuf, this IDL will be read and used to generate code in another language, and that generated code will then be used by the user code to serialize and deserialize data.

Serialization and deserialization

Serialization and deserialization are two concepts that are used in many ways and in many kinds of applications. This section is going to discuss these two concepts in the context of Protobuf. So, even if you feel confident about your understanding of these two notions, it is important to get your head straight and understand them properly. Once you do, it will be easier to deal with the Encoding details section where we are going to delve deeper into how Protobuf serializes and deserializes data under the hood.

Let us start with serialization and then let us touch upon deserialization, which is just the opposite process. The goal of serialization is to store data, generally in a more compact or readable representation, to use it later. For Protobuf, this serialization happens on the data that you set in your generated code’s objects. For example, if we set the Id, Username, and Right fields in our Account struct, this data will be what Protobuf will work on. It will turn each field into a binary representation with different algorithms depending on the field type. And after that, we use this in-memory binary to either send data over the network (with gRPC, for example) or store it in more persistent storage.

Once it is time for us to use this serialized data again, Protobuf will perform deserialization. This is the process of reading the binary created earlier and populating the data back into an object in your favorite programming language to be able to act on it. Once again, Protobuf will use different algorithms depending on the type of data to read the underlying binary and know how to set or not set each of the fields of the object in question.
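To make this concrete, here is a minimal sketch of such a round trip in Go, assuming a hypothetical accountpb package generated by protoc from our Account schema:

package main

import (
  "fmt"
  "log"

  "google.golang.org/protobuf/proto"

  accountpb "example.com/chapter2/accountpb" // hypothetical generated package
)

func main() {
  in := &accountpb.Account{
    Id:       42,
    Username: "gopher",
    Right:    accountpb.AccountRight_ACCOUNT_RIGHT_ADMIN,
  }

  // Serialization: turn the in-memory object into compact binary.
  data, err := proto.Marshal(in)
  if err != nil {
    log.Fatal(err)
  }

  // Deserialization: populate a fresh object from that binary.
  out := &accountpb.Account{}
  if err := proto.Unmarshal(data, out); err != nil {
    log.Fatal(err)
  }
  fmt.Println(out.GetUsername()) // gopher
}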

To summarize, Protobuf performs binary serialization to make data more compact than other formats such as XML or JSON. To do so, it will read data from the different fields of the generated code’s object, turn it into binary with different algorithms, and then when we finally need the data, Protobuf will read the data and populate the fields of a given object.

Protobuf versus JSON

If you’ve already worked on the backend or even the frontend, there is a 99.99 percent chance that you’ve worked with JSON. It is by far the most popular data schema out there, and there are reasons why that is the case. In this section, we are going to discuss the pros and cons of both JSON and Protobuf and explain which one is more suitable for which situation. The goal here is to be objective because, as engineers, we need to be objective to choose the right tool for the job.

As we could write chapters about the pros and cons of each technology, we are going to reduce the scope of these advantages and disadvantages to three categories. These categories are the ones that developers care the most about when developing applications, as detailed here:

  • Size of serialized data: We want to reduce the bandwidth when sending data over the network
  • Readability of the data schema and the serialized data: We want to be able to have a descriptive schema so that newcomers or users can quickly understand it, and we want to be able to visualize the data serialized for debugging or editing purposes
  • Strictness of the schema: This quickly becomes a requirement when APIs grow, and we need to ensure the correct type of data is being sent and received between different applications

Serialized data size

In serialization, the Holy Grail is, in a lot of use cases, reducing the size of your data. This is because most often, we want to send that data to another application across the network, and the lighter the payload, the faster it should arrive on the other side. In this space, Protobuf is the clear winner against JSON. This is the case because JSON serializes to text whereas Protobuf serializes to binary and thus has more room to improve how compact the serialized data is. An example of that is numbers. If you set a number to the id field in JSON, you would get something like this:

{ "id": 123 }

First, we have some boilerplate with the braces, but most importantly we have a number that takes three characters, or three bytes. In Protobuf, if we set the same value to the same field, we would get the hexadecimal shown in the following callout.

Important note

In the chapter2 folder of the companion GitHub repository, you will find the files needed to reproduce all the results in this chapter. With protoc, we will be able to display the hexadecimal representation of our serialized data. To do that, you can run the following command:

Linux/Mac: cat ${INPUT_FILE_NAME}.txt | protoc --encode=${MESSAGE_NAME} ${PROTO_FILE_NAME}.proto | hexdump -C

Windows (PowerShell): (Get-Content ${INPUT_FILE_NAME}.txt | protoc --encode=${MESSAGE_NAME} ${PROTO_FILE_NAME}.proto) -join "`n" | Format-Hex

For example:

$ cat account.txt | protoc --encode=Account account.proto | hexdump -C
00000000  08 7b                                             |.{|
00000002

Right now, these might look like magic numbers, but we are going to see in the next section how the value is encoded into two bytes. Two bytes instead of three might look negligible, but imagine this kind of difference at scale, and you would have wasted millions of bytes.

Readability

The next important thing about data schema serialization is readability. However, readability is a little bit too broad, especially in the context of Protobuf. As we saw, as opposed to JSON, Protobuf separates the schema from the serialized data. We write the schema in a .proto file and then the serialization will give us some binary. In JSON, the schema is the actual serialized data. So, to be clearer and more precise about readability, let us split readability into two parts: the readability of the schema and the readability of the serialized data.

As for the readability of the schema, this is a matter of preference, but there are a few points that make Protobuf stand out. The first one is that Protobuf can contain comments, which is nice to have for extra documentation describing requirements. JSON does not allow comments in the schema, so we must find a different way to provide documentation. Generally, it is done with GitHub wikis or other external documentation platforms. This is a problem because this kind of documentation quickly becomes outdated when the project and the team working on it get bigger. A simple oversight and your documentation no longer describes the real state of your API. With Protobuf, it is still possible to have outdated documentation, but as the documentation is closer to the code, there is more incentive and awareness to change the related comment.

The second feature that makes Protobuf more readable is that it has explicit types. JSON has types, but they are implicit. You know that a field contains a string if its value is surrounded by double quotes, a number when the value is only digits, and so on. In Protobuf, especially for numbers, we get more information out of types. If we have an int32 type, we obviously know that this is a number, but on top of that, we know that it can accept negative numbers, and we know the range of numbers that can be stored in the field. Explicit types are important not only for security (more on that later) but also for letting developers know the details of each field and letting them accurately describe their schemas to fulfill the business requirements.

For readability of the schema, I think we can agree that Protobuf is the winner here because it can be written as self-documenting code and we get explicit types for every field in objects.

As for the readability of serialized data, JSON is the clear winner here. As mentioned, JSON is both the data schema and the serialized data. What you see is what you get. Protobuf, however, serializes the data to binary, and it is way harder to read that, even if you know how Protobuf serializes and deserializes data. In the end, this is a trade-off between readability and serialized data size here. Protobuf will outperform JSON on serialized data and is way more explicit on the readability of the data schema. However, if you need human-readable data that can be edited by hand, Protobuf is not the right fit for your use case.

Schema strictness

Finally, the last category is the strictness of the schema. This is usually a nice feature to have when your team and your project scale because it ensures that the schema is correctly populated, and for a certain target language, it shortens the feedback loop for the developers.

Data that conforms to a schema is always valid because every field has an explicit type that can only contain certain values. We simply cannot pass a string to a field that was expecting a number, or a negative number to a field that was expecting a positive number. This is enforced in the generated code, with runtime checks for dynamic languages or compile-time checks for typed languages. In our case, since Go is a typed language, we will have compile-time checks.
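As a small illustration, assuming the hypothetical accountpb package generated from our earlier schema, a type mismatch never makes it past the compiler:

package main

import accountpb "example.com/chapter2/accountpb" // hypothetical generated package

func main() {
  account := &accountpb.Account{}
  account.Id = 42 // fine: Id is a uint64

  // Neither of the following lines compiles, which is exactly the point:
  // account.Id = "42" // a string cannot be assigned to a uint64 field
  // account.Id = -1   // constant -1 overflows uint64
  _ = account
}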

And finally, in typed languages, a schema shortens the feedback loop because instead of having a runtime check that might or might not trigger an error, we simply have a compilation error. This makes our software more reliable, and developers can feel confident that if they were able to compile, the data set into the object would be valid.

In pure JSON, we cannot ensure that our schema is correct at compile time. Most often, developers will add extra configuration such as JSON Schema to get this kind of assurance at runtime. This adds complexity to our project and requires every developer to be disciplined, because nothing stops them from writing code without updating the schema. In Protobuf, we do schema-driven development. The schema comes first, and then our application revolves around the generated types. Furthermore, we have assurance at compile time that the values we set are correct, and we do not need to replicate the setup across all our microservices or subprojects. In the end, we spend less time on configuration and more time thinking about our data schemas and the data encoding.

Encoding details

Up until now, we talked a lot about “algorithms”; however, we did not get too much into the specifics. In this section, we are going to see the major algorithms that are behind the serialization and deserialization processes in Protobuf. We are first going to see all the types that we can use for our fields, then with that, we are going to divide them into three categories, and finally, we are going to explain which algorithm is used for each category.

In Protobuf, types that are considered simple and that are provided out of the box are called scalar types. There are 15 such types, as listed here:

  • int32
  • int64
  • uint32
  • uint64
  • sint32
  • sint64
  • fixed32
  • fixed64
  • sfixed32
  • sfixed64
  • double
  • float
  • string
  • bytes
  • bool

Out of these 15 types, 10 are for integers (the first 10 in the list). These types might be intimidating at first, but do not worry too much about how to choose between them right now; we are going to discuss that throughout this section. The most important thing to understand right now is that two-thirds of the types are for integers, and this shows what Protobuf is good at: encoding integers.

Now that we know the scalar types, let us separate these types into three categories. However, we are not here to make simple categories such as numbers, arrays, and so on. We want to make categories that are related to the Protobuf serialization algorithms. In total, we have three: fixed-size numbers, variable-size integers (varints), and length-delimited types. Here is a table with each category populated:

Fixed-size numbers   Varints   Length-delimited types
fixed32              int32     string
fixed64              int64     bytes
sfixed32             uint32
sfixed64             uint64
double               sint32
float                sint64
                     bool
Let’s go through each now.

Fixed-size numbers

The easiest one to understand for developers who are used to typed languages is fixed-size numbers. If you worked with lower-level languages in which you tried to optimize storage space, you know that we can, on most hardware, store an integer in 32 bits (4 bytes) or in 64 bits (8 bytes). fixed32 and fixed64 are just binary representations of a normal number that you would have in languages that give you control over the storage size of your integers (for example, Go, C++, Rust, and so on). If we serialize the number 42 into a fixed32 type, we will have the following:

$ cat fixed.txt | protoc --encode=Fixed32Value wrappers.proto | hexdump -C
00000000  0d 2a 00 00 00                                    |.*...|
00000005

Here, 2a is 42, and 0d is a combination of the field tag and the type of the field (more about that later in this section). In the same manner, if we serialize 42 in a fixed64 type, we will have the following:

$ cat fixed.txt | protoc --encode=Fixed64Value wrappers.proto | hexdump -C
00000000  09 2a 00 00 00 00 00 00  00                       |.*.......|
00000009

And the only thing that changed is the combination between the type of the field and the field tag (09). This is mostly because we changed the type to 64-bit numbers.

Two other scalar types that are easy to understand are float and double. Once again, Protobuf produces the binary representation of these types. If we encode 42.42 as float, we will get the following output:

$ cat floating_point.txt | protoc --encode=FloatValue wrappers.proto | hexdump -C
00000000  0d 14 ae 29 42                                    |...)B|
00000005

In this case, this is a little bit more complicated to decode, but this is simply because float numbers are encoded differently. If you are interested in this kind of data storage, you can look at the IEEE Standard for Floating-Point Arithmetic (IEEE 754), which explains how a float is formed in memory. What is important to note here is that floats are encoded in 4 bytes, and in front, we have our tag + type. And for a double type with a value of 42.42, we will get the following:

$ cat floating_point.txt | protoc --encode=DoubleValue wrappers.proto | hexdump -C
00000000  09 f6 28 5c 8f c2 35 45  40                       |..(\..5E@|
00000009

This is encoded in 8 bytes and the tag + type. Note that the tag + type also changed here because we are in the realm of 64-bit numbers.
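Because fixed-size types are plain little-endian representations, we can reproduce the payload bytes (everything after the tag + type byte) with nothing but Go's standard library; a quick sketch:

package main

import (
  "encoding/binary"
  "fmt"
  "math"
)

func main() {
  // fixed32 with value 42: the 4 payload bytes after the 0d byte.
  buf4 := make([]byte, 4)
  binary.LittleEndian.PutUint32(buf4, 42)
  fmt.Printf("% x\n", buf4) // 2a 00 00 00

  // float with value 42.42: the IEEE 754 bits, again little-endian.
  binary.LittleEndian.PutUint32(buf4, math.Float32bits(42.42))
  fmt.Printf("% x\n", buf4) // 14 ae 29 42

  // double with value 42.42: 8 payload bytes.
  buf8 := make([]byte, 8)
  binary.LittleEndian.PutUint64(buf8, math.Float64bits(42.42))
  fmt.Printf("% x\n", buf8) // f6 28 5c 8f c2 35 45 40
}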

Finally, we are left with sfixed32 and sfixed64. We did not mention it earlier, but fixed32 and fixed64 are unsigned types, which means that we cannot store negative numbers in fields with these types. sfixed32 and sfixed64 solve that. So, if we encode -42 in an sfixed32 type, we will have the following:

$ cat sfixed.txt | protoc --encode=SFixed32Value wrappers.proto | hexdump -C
00000000  0d d6 ff ff ff                                    |.....|
00000005

This is obtained by taking the binary of 42, flipping all the bits (1’s complement), and adding one (2’s complement). If you serialize a positive number instead, you will get the same binary as with the fixed32 type. Then, if we encode -42 in a field of type sfixed64, we will get the following:

$ cat sfixed.txt | protoc --encode=SFixed64Value wrappers.proto | hexdump -C
00000000  09 d6 ff ff ff ff ff ff  ff                       |.........|
00000009

This is like the sfixed32 type, only the tag + type was changed.

To summarize, fixed integers are simple binary representations of integers that resemble how they are stored in most computers’ memory. As their name suggests, they will always be serialized into the same number of bytes. For some use cases, it is fine to use such representations; however, in most cases, we would like to reduce the number of bits that are only there for padding. In these use cases, we use something called varints.

Varints

Now that we have seen fixed integers, let us move to another type of serialization for numbers: variable-length integers. As its name suggests, we will not get a fixed number of bytes when serializing an integer.

To be more precise, the smaller the integer, the smaller the number of bytes it will be serialized into, and the bigger the integer, the larger the number of bytes. Let us look at how the algorithm works.

In this example, let us serialize the number 300. To start, we are going to take the binary representation of that number:

100101100

With this binary, we can now split it into groups of 7 bits and pad with zeros if needed:

0000010
0101100

Now, since we lack 2 more bits to create 2 bytes, we are going to add 1 as the most significant bit (MSB) for all the groups except the first one, and we are going to add 0 as the MSB for the first group:

00000010
10101100

These MSBs are continuation bits. This means that, when we have 1, we still have 7 bits to read after, and if we have 0, this is the last group to be read. Finally, we put this number into little-endian order, and we have the following:

10101100 00000010

Or, we would have AC 02 in hexadecimal. Now that we have serialized 300 into AC 02, and keeping in mind that deserialization is the opposite of serialization, we can deserialize that data. We take our binary representation for AC 02, drop the continuation bits (MSBs), and we reverse the order of bytes. In the end, we have the following binary:

100101100

This is the same binary we started with. It equals 300.
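Conveniently, Go's encoding/binary package uses the same varint scheme, so we can verify the worked example directly:

package main

import (
  "encoding/binary"
  "fmt"
)

func main() {
  buf := make([]byte, binary.MaxVarintLen64)

  // Serialize 300: low 7-bit group first, MSB used as a continuation bit.
  n := binary.PutUvarint(buf, 300)
  fmt.Printf("% x\n", buf[:n]) // ac 02

  // Deserialize: read groups until a byte with a 0 continuation bit.
  value, _ := binary.Uvarint(buf[:n])
  fmt.Println(value) // 300
}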

Now, in the real world, you might have larger numbers. For a quick reference on positive numbers, here is a list of the thresholds at which the number of bytes will increase:

Threshold value              Byte size
0                            0
1                            1
128                          2
16,384                       3
2,097,152                    4
268,435,456                  5
34,359,738,368               6
4,398,046,511,104            7
562,949,953,421,312          8
72,057,594,037,927,936       9
9,223,372,036,854,775,807    9

An astute reader might have noticed that a varint is often beneficial, but in some cases, we might encode our values into more bytes than needed. For example, if we encode 72,057,594,037,927,936 into an int64 type, it will be serialized into 9 bytes, while with a fixed64 type, it will be encoded into 8. Furthermore, a problem coming from the encoding that we just saw is that a negative number will be sign-extended into a very large positive number and thus will be encoded into 10 bytes. That raises the following question: How can we efficiently choose between the different integer types?

How to choose?

The answer is, as always, it depends. However, we can be systematic in our choices to avoid many errors. We mostly have three choices that we need to make depending on the data we want to serialize:

  • The range of numbers needed
  • The need for negative numbers
  • The data distribution

The range

By now, you might have noticed that the 32 and 64 suffixes on our types are not always about the number of bits into which our data will be serialized. For varints, this is more about the range of numbers that can be serialized. These ranges are dependent on the algorithm used for serialization.

For signed fixed-size integers (sfixed), signed integers (sint), and variable-length integers (int), the range of numbers is the same as the one developers are used to with 32-bit and 64-bit signed integers. This means that we get the following:

[-2^(NUMBER_OF_BITS – 1), 2^(NUMBER_OF_BITS – 1) – 1]

Here, NUMBER_OF_BITS is either 32 or 64 depending on the type you want to use.

For unsigned integers (uint32, uint64) and the unsigned fixed-size types (fixed32, fixed64), this is again what developers expect; we get the following range:

[0, 2^NUMBER_OF_BITS - 1]

The need for negative numbers

In the case where you simply do not need negative numbers (for example, for IDs), the ideal type to use is an unsigned integer (uint32, uint64). It prevents you from encoding negative numbers, it has twice the positive range of signed integers, and it serializes using the varint algorithm.

Another type that you will potentially work with is the one for signed integers (sint32, sint64). We won’t go into all the details of how they are serialized, but the algorithm transforms any negative number into a positive number (ZigZag encoding) and serializes that positive number with the varint algorithm. This is more efficient for serializing negative numbers because, instead of being serialized as a huge positive number (10 bytes), they take advantage of the varint encoding. However, this is less efficient for serializing positive numbers because the previously negative numbers are now interleaved with the positive ones. This means that the same positive number may be encoded into more bytes as a sint than as an int.
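For reference, here is a sketch of the ZigZag transform itself; this mapping is applied before the result is varint-encoded:

package main

import "fmt"

// zigzag maps signed integers to unsigned ones so that small absolute
// values get small encodings: 0→0, -1→1, 1→2, -2→3, 2→4, and so on.
func zigzag(n int64) uint64 {
  return uint64((n << 1) ^ (n >> 63)) // arithmetic shift replicates the sign bit
}

// unzigzag reverses the mapping.
func unzigzag(u uint64) int64 {
  return int64(u>>1) ^ -int64(u&1)
}

func main() {
  fmt.Println(zigzag(-42))           // 83: a 1-byte varint instead of 10 bytes
  fmt.Println(zigzag(42))            // 84: positive numbers are doubled
  fmt.Println(unzigzag(zigzag(-42))) // -42
}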

The data distribution

Finally, one thing that is worth mentioning is that encoding efficiency is highly dependent on your data distribution. You might have chosen some types based on certain assumptions, but your actual data might be different. Two common examples are choosing an int32 or int64 type because we expect few negative values, and choosing an int64 type because we expect few very big numbers. Both situations might result in significant inefficiencies because, in both cases, we might get a lot of values serialized into 9 or 10 bytes.

Unfortunately, there is no way of deciding the type that will always perfectly fit the data. In this kind of situation, there is nothing better than doing experiments on real data that is representative of your whole dataset. This will give you an idea of what you are doing correctly and what you are doing wrong.

Length-delimited types

Now that we’ve seen all the types for numbers, we are left with the length-delimited types. These are the types, such as string and bytes, whose length we cannot know at compile time. Think of them as dynamic arrays.

To serialize such a dynamic structure, we simply prefix the raw data with its length. This means that if we have a string of length 10 with the content “0123456789”, we will have the following sequence of bytes:

$ cat length-delimited.txt | protoc --encode=StringValue wrappers.proto | hexdump -C
00000000  0a 0a 30 31 32 33 34 35  36 37 38 39              |..0123456789|
0000000c

Here, the first 0a instance is the field tag + type, the second 0a instance is the hexadecimal representation of 10, and then we have the ASCII values for each character. To see why 0 turns into 30, you can check the ASCII manual by typing man ascii in your terminal and looking for the hexadecimal set. You should have a similar output to the following:

30  0    31  1    32  2    33  3    34  4
35  5    36  6    37  7    38  8    39  9

Here, the first number of each pair is the hexadecimal value for the second one.

Another kind of message field that will be serialized into a length-delimited type is a repeated field. A repeated field is the equivalent of a list. To write such a field, we simply add the repeated keyword before the field type. If we wanted to serialize a list of IDs, we could write the following:

repeated uint64 ids = 1;

And with this, we could store 0 or more IDs.

Similarly, these fields will be serialized with the length as a prefix. If we take the ids field and serialize the numbers from 1 to 9, we will have the following:

$ cat repeated.txt | protoc --encode=RepeatedUInt64Values wrappers.proto | hexdump -C
00000000  0a 09 01 02 03 04 05 06  07 08 09                 |...........|
0000000b

Here, 0a is the tag + type, 09 is the length of the list (9 bytes), and the rest are the varint-encoded values 1 through 9.

Important note

Repeated fields are only serialized as length-delimited types when they store scalar types, except for strings and bytes. Such repeated fields are said to be packed. For strings, bytes, and user-defined types (messages), the values are encoded in a less optimal way: each value is encoded separately and prefixed by the tag + type byte(s), instead of having the tag + type serialized only once.

Field tags and wire types

Up until now, you have read “tag + type” multiple times, and we did not really see what it means. As mentioned, the first byte(s) of every serialized field is a combination of the field type and the field tag. Let us start by seeing what a field tag is. You have surely noticed something different about the syntax of a field: each time we define one, we add an equals sign followed by an incrementing number. Here’s an example:

uint64 id = 1;

While this looks like assigning a value to the field, it is only there to give the field a unique identifier. These identifiers, called tags, might look insignificant, but they are the most important piece of information for serialization. They are used to tell Protobuf into which field to deserialize which data. As we saw earlier during the presentation of the different serialization algorithms, the field name is not serialized; only the type and the tag are. Thus, when deserialization kicks in, it sees a number and knows into which field to redirect the following data.

Now that we know that these tags are simply identifiers, let us see how they are encoded. Tags are serialized as varints, combined with a wire type. A wire type is a number that is assigned to a group of Protobuf types. Here is the list of wire types:

Type   Meaning            Used for
0      Varint             int32, int64, uint32, uint64, sint32, sint64, bool, enum
1      64-bit             fixed64, sfixed64, double
2      Length-delimited   string, bytes, packed repeated fields
5      32-bit             fixed32, sfixed32, float

Here, 0 is the type for varints, 1 is for 64-bit, and so on.

To combine the tag and the wire type, Protobuf uses a concept called bit packing. This is a technique that is designed to reduce the number of bits into which the data will be serialized. In our case here, the data is the field metadata (the famous tag + type). So, here is how it works. The last 3 bits of the serialized metadata are reserved for the wire type, and the rest is for the tag. If we take the first example that we mentioned in the Fixed-size numbers section, where we serialized 42 in a fixed32 field with tag 1, we had the following:

0d 2a 00 00 00

This time we are only interested in the 0d part. This is the metadata of the field. To see how this was serialized, let us turn 0d into binary (with 0 padding):

00001101

Here, we have 101 (5) for the wire type—this is the wire type for 32 bits—and we have 00001 (1) for tag 1. Now, since the tag is serialized as a varint, it means that we could have more than 1 byte for that metadata. Here’s a reference for knowing the thresholds at which the number of bytes will increase:

Field tag      Size (in bits)
1              5
16             13
2,048          21
262,144        29
33,554,432     37
536,870,911    37

This means that, since fields without values set will not be serialized, we should keep the lowest tags for the fields that are most often populated. This lowers the overhead needed to store the metadata. In general, tags 1 to 15, whose metadata fits into a single byte, are enough, but if you come up with a situation where you need more, you might consider moving a group of fields into a new message with lower tags.
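As a sanity check, here is a small Go sketch of the bit packing just described, rebuilding and decoding the 0d metadata byte from the fixed32 example:

package main

import "fmt"

const (
  wireVarint          = 0
  wire64Bit           = 1
  wireLengthDelimited = 2
  wire32Bit           = 5
)

// key packs a field tag and a wire type into the number that is
// varint-encoded in front of every serialized field.
func key(tag, wireType uint64) uint64 {
  return tag<<3 | wireType
}

func main() {
  fmt.Printf("%#x\n", key(1, wire32Bit)) // 0xd: the byte before 2a 00 00 00

  // Decoding goes the other way: the low 3 bits are the wire type,
  // and the remaining bits are the tag.
  k := key(1, wire32Bit)
  fmt.Println("wire type:", k&0x7, "tag:", k>>3) // wire type: 5 tag: 1
}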

Common types

As of now, if you have checked the companion code, you will have seen that we define a lot of “boring” types that are just wrappers around one field. It is important to note that we wrote them by hand simply to give an example of how you would inspect the serialization of certain data. Most of the time, you will be able to use already defined types that do the same.

Well-known types

Protobuf itself comes with a bunch of predefined types. We call them well-known types. While a lot of them are rarely useful outside of the Protobuf library itself or advanced use cases, some are important, and we are going to use a few of them in this book.

The ones that we can understand quite easily are the wrappers. We wrote some by hand earlier. They usually start with the name of the type they are wrapping and finish with Value. Here is a list of wrappers:

  • BoolValue
  • BytesValue
  • DoubleValue
  • EnumValue
  • FloatValue
  • Int32Value
  • Int64Value
  • StringValue
  • UInt32Value
  • UInt64Value

These types might be interesting for debugging use cases such as the ones we saw earlier or just to serialize simple data such as a number, a string, and so on.

Then, there are types representing time, such as Duration and Timestamp. These two types are defined in exactly the same way ([Duration | Timestamp] is not proper Protobuf syntax; it means that we could replace it with either of the two terms):

message [Duration | Timestamp] {
  // Represents seconds of UTC time since Unix epoch
  // 1970-01-01T00:00:00Z. Must be from 0001-01-01T00:00:00Z to
  // 9999-12-31T23:59:59Z inclusive.
  int64 seconds = 1;

  // Non-negative fractions of a second at nanosecond resolution.
  // Negative second values with fractions must still have
  // non-negative nanos values that count forward in time.
  // Must be from 0 to 999,999,999 inclusive.
  int32 nanos = 2;
}

However, as their names suggest, they represent different concepts. A Duration type is the difference between a start and an end time, whereas a Timestamp type is a single point in time.
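In Go, these two types ship with the protobuf runtime as the timestamppb and durationpb packages; a small sketch of constructing them:

package main

import (
  "fmt"
  "time"

  "google.golang.org/protobuf/types/known/durationpb"
  "google.golang.org/protobuf/types/known/timestamppb"
)

func main() {
  // A Timestamp is a point in time: seconds and nanos since the Unix epoch.
  ts := timestamppb.Now()
  fmt.Println(ts.GetSeconds(), ts.GetNanos())

  // A Duration is a span of time, built here from a Go time.Duration.
  d := durationpb.New(90 * time.Second)
  fmt.Println(d.GetSeconds(), d.GetNanos()) // 90 0
}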

Finally, one last important well-known type is FieldMask. This is a type that represents a set of fields that should be included when serializing another type. To understand it, it might be best to give an example. Let us say that we have an API endpoint returning an account with id, username, and email. If you wanted to get only the account’s email address, to prepare a list of people to send a promotional email to, you could use a FieldMask to tell Protobuf to serialize only the email field. This reduces the cost of serialization and deserialization because we now deal with one field instead of three.
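A minimal sketch of building such a mask in Go with the fieldmaskpb package; the email path is an assumption based on the example above, and a server would consult GetPaths to decide which fields to populate:

package main

import (
  "fmt"

  "google.golang.org/protobuf/types/known/fieldmaskpb"
)

func main() {
  // Ask for the email field only; id and username would be skipped.
  mask := &fieldmaskpb.FieldMask{Paths: []string{"email"}}
  fmt.Println(mask.GetPaths()) // [email]
}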

Google common types

On top of the well-known types, there are types defined by Google. These live in the googleapis/api-common-protos GitHub repository under the google/type directory and are easily usable in Go code. I encourage you to check all of them, but I want to mention some interesting ones:

  • LatLng: A latitude/longitude pair storing the values as doubles
  • Money: An amount of money with its currency as defined by ISO 4217
  • Date: Year, Month, and Day stored as int32

Once again, go to the repository to check out all the others. These types are battle-tested and, in many cases, more optimized than the trivial equivalents we would write ourselves. However, be aware that they might not be a good fit for your use cases either. There is no such thing as a one-size-fits-all solution.

Services

Finally, the last construct that is important to see, and that we are going to work with throughout this book, is the service. In Protobuf, a service is a collection of RPC endpoints, each of which has two major parts: the input of the RPC and its output. So, if we wanted to define a service for our accounts, we could have something like the following:

message GetAccountRequest {…}
message GetAccountResponse {…}
service AccountService {
  rpc GetAccount(GetAccountRequest) returns (GetAccountResponse);
  //...
}

Here, we define a message representing the request and another one representing the response, and we use these as the input and output of our GetAccount RPC call. In the next chapter, we are going to cover more advanced usage of services, but right now, what is important to understand is that Protobuf defines the services but does not generate the code for them. Only gRPC will.

Protobuf’s services are here to describe a contract, and it is the job of an RPC framework to fulfill that contract on the client and server sides. Notice that I wrote an RPC framework and not simply gRPC. Any RPC framework could read the information provided by Protobuf’s services and generate code out of it. The goal of Protobuf here is to be independent of any language and framework. What the application does with the serialized data is not important to Protobuf.

Finally, these services are the pillars of gRPC. As we are going to see later in this book, we will use them to make requests, and we are going to implement them on the server side to return responses. Using the defined services on the client side will let us feel like we are directly calling a function on the server. If we talk about AccountService, for example, we can make a call to GetAccount by having the following code:

res := client.GetAccount(req)

Here, client is an instance of a gRPC client, req is an instance of GetAccountRequest, and res is an instance of GetAccountResponse. It feels a little bit like we are directly calling GetAccount, which is implemented on the server side. This is the doing of gRPC: it hides all the complex ceremony of serializing and deserializing objects and shuttling them between the client and the server.
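In the actual generated Go code, the call also takes a context and returns an error; a hedged sketch, assuming a hypothetical accountpb package generated with the gRPC plugin:

package main

import (
  "context"
  "log"

  "google.golang.org/grpc"
  "google.golang.org/grpc/credentials/insecure"

  accountpb "example.com/chapter2/accountpb" // hypothetical generated package
)

func main() {
  conn, err := grpc.Dial("localhost:50051",
    grpc.WithTransportCredentials(insecure.NewCredentials()))
  if err != nil {
    log.Fatal(err)
  }
  defer conn.Close()

  client := accountpb.NewAccountServiceClient(conn)
  res, err := client.GetAccount(context.Background(), &accountpb.GetAccountRequest{})
  if err != nil {
    log.Fatal(err)
  }
  log.Println(res)
}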

Summary

In this chapter, we saw how to write messages and services, and we saw how scalar types are serialized and deserialized. This prepared us for the rest of the book, where we are going to use this knowledge extensively.

In the next chapter, we are going to talk about gRPC, why it uses Protobuf for serialization and deserialization, and what it is doing behind the scenes, and we are going to compare it with REST and GraphQL APIs.

Quiz

  1. What is the number 32 representing in the int32 scalar type?
    1. The number of bits the serialized data will be stored in
    2. The range of numbers that can fit into the scalar type
    3. Whether the type can accept negative numbers or not
  2. What is varint encoding doing?
    1. Compressing data in such a way that a smaller number of bytes will be required for serializing data
    2. Turning every negative number into positive numbers
  3. What is ZigZag encoding doing?
    1. Compressing data in such a way that a smaller number of bytes will be required for serializing data
    2. Turning every negative number into a positive number
  4. In the following code, what is the = 1 syntax and what is it used for?
    uint64 ids = 1;
    1. This is assigning the value 1 to a field
    2. 1 is an identifier that has no other purpose than helping developers
    3. 1 is an identifier that is helping the compiler know into which field to deserialize the binary data.
  5. What is a message?
    1. An object that contains fields and represents an entity
    2. A collection of API endpoints
    3. A list of possible states
  6. What is an enum?
    1. An object that contains fields and represents an entity
    2. A collection of API endpoints
    3. A list of possible states
  7. What is a service?
    1. An object that contains fields and represents an entity
    2. A collection of API endpoints
    3. A list of possible states

Answers

  1. B
  2. A
  3. B
  4. C
  5. A
  6. C
  7. B

Key benefits

  • Discover essential guidelines to steer clear of pitfalls when designing and evolving your gRPC services
  • Develop your understanding of advanced gRPC concepts such as authentication and security
  • Put your knowledge into action as you build, test, and deploy a TODO list microservice

Description

In recent years, the popularity of microservice architecture has surged, bringing forth a new set of requirements. Among these, efficient communication between the different services takes center stage, and that's where gRPC shines. This book will take you through creating gRPC servers and clients in an efficient, secure, and scalable way. However, communication is just one aspect of microservices, so this book goes beyond that to show you how to deploy your application on Kubernetes and configure other tools that are needed for making your application more resilient. With these tools at your disposal, you’ll be ready to get started with using gRPC in a microservice architecture. In gRPC Go for Professionals, you'll explore core concepts such as message transmission and the role of Protobuf in serialization and deserialization. Through a step-by-step implementation of a TODO list API, you’ll see the different features of gRPC in action. You’ll then learn different approaches for testing your services and debugging your API endpoints. Finally, you’ll get to grips with deploying the application services via Docker images and Kubernetes.

Who is this book for?

Whether you’re interested in microservices or looking to use gRPC in your product, this book is for you. To fully benefit from its contents, you’ll need a solid grasp of Go programming and using a terminal. If you’re already familiar with gRPC, this book will help you to explore the different concepts and tools in depth.

What you will learn

  • Understand the different API endpoints that gRPC lets you write
  • Discover the essential considerations when writing your Protobuf files
  • Compile Protobuf code with protoc and Bazel for efficient development
  • Gain insights into how advanced gRPC concepts work
  • Grasp techniques for unit testing and load testing your API
  • Get to grips with deploying your microservices with Docker and Kubernetes
  • Discover tools for writing secure and efficient gRPC code
Product Details

Publication date: Jul 14, 2023
Length: 260 pages
Edition: 1st
Language: English
ISBN-13: 9781837638840



Table of Contents

Chapter 1: Networking Primer
Chapter 2: Protobuf Primer
Chapter 3: Introduction to gRPC
Chapter 4: Setting Up a Project
Chapter 5: Types of gRPC Endpoints
Chapter 6: Designing Effective APIs
Chapter 7: Out-of-the-Box Features
Chapter 8: More Essential Features
Chapter 9: Production-Grade APIs
Epilogue
Index
Other Books You May Enjoy

Customer reviews

Rating: 4.5 out of 5 (6 ratings): 50% five-star, 50% four-star.

arsalan, Aug 06, 2023 (5 stars, Amazon Verified review)
The book's focus on advanced gRPC ideas like authentication and security is one of its finest features. These are essential considerations when working with distributed microservices, and the authors do a great job of demystifying these difficult subjects. Readers will gain a thorough grasp of effective microservice security and system integrity. The book stands out for its practical approach, which enables readers to put their knowledge to use. It explores core concepts such as message transmission and the role of Protobuf in serialization and deserialization. A must read.

POE, Oct 20, 2023 (5 stars, Amazon Verified review)
This book is intended for Go developers who want to leverage gRPC, Go's implementation of the universal Remote Procedure Call (RPC) framework. Both a networking and a Protobuf primer are provided at the start of the book to help ensure the reader is prepared for the content that follows. A brief introduction to gRPC is provided, covering server, client, REST, and GraphQL. The author provides sufficient detail to help you set up a project and design an API. Performance is aptly considered in the programming examples. Several gRPC features are covered, including error handling, load balancing, request validation, logging, tracing, and more. Advanced topics include unit and load testing, debugging, and deploying with Docker, Kubernetes, and Envoy Proxy.

KS, Dec 04, 2023 (5 stars, Amazon Verified review)
Here's what I loved:
1. Comprehensive and practical: The book doesn't shy away from core concepts. It delves deep into message transmission, the magic of Protobuf, and even advanced gRPC features like authentication and security. Yet it never loses sight of practicality, offering step-by-step implementation through an API.
2. Testing and deployment done right: Often neglected aspects of microservice development are given their due here. Jean meticulously explores unit and load testing techniques, ensuring your code stays rock-solid. He then takes you on a deployment journey, leveraging Docker and Kubernetes for seamless production-ready deployments.
3. Beyond the code: While technical mastery is key, Jean recognizes the bigger picture. He offers valuable insights into designing and evolving gRPC services, avoiding common pitfalls, and writing secure and efficient code. This holistic approach prepares you for the real-world challenges of microservice development.
This book is for you if:
1. You're a Go developer eager to leverage gRPC for building microservices.
2. You've dabbled in gRPC but crave a deeper understanding and practical guidance.
3. You're building microservices and want to ensure their scalability, efficiency, and robustness.
Overall, "gRPC Go for Professionals" is a masterclass in building production-grade microservices with gRPC and Go. It's a must-have for any developer serious about mastering this powerful technology.
Bonus points:
1. The book is well-written, engaging, and avoids jargon overload.
2. The accompanying code examples are clear, concise, and readily applicable.
3. Jean's passion for gRPC shines through, making the learning experience even more enjoyable.
Highly recommended!

Tans, Aug 02, 2023 (4 stars, Amazon Verified review)
For software developers wishing to fully utilise gRPC and Go in the construction of effective and reliable microservices, gRPC Go for Professionals is an invaluable and thorough reference. The book's practical approach to learning is one of its best qualities. Practical, real-world examples are used to walk readers through the process of designing, implementing, and testing production-grade microservices using gRPC and Go. The code samples are well organized and complemented by illuminating explanations, which promotes a deeper comprehension of the fundamental mechanics at work. You'll discover various techniques for evaluating your services and troubleshooting your API endpoints. Highly advised.

KT, Nov 27, 2023 (4 stars, Amazon Verified review)
...and someone writes a book about it. Look - information down a wire does not, never has, and never will both a) be efficient and b) align exactly with the same information in language-aligned data structures. How you make the translations is tricky, and one should take a long, hard look at the entire process before ever starting. Google engineers naturally ignored the process of thinking and dove right in. They *did* manage to strip out all the human-readable-on-the-wire stupidity which characterizes both XML and JSON/YAML, opting for machine-readable alacrity. However, they got hamstrung by how that layout translates into Go data structures (all the while crowing about how gRPC [and protobufs, which are a whole 'nother level of mis-engineered hell] are language-agnostic). In this case, it simply means that gRPC is a small, inadequate solution looking for a problem, which is equally inadequate in a lot of languages, with a lot of bags on the side to attempt to correct these deficiencies. In short, it's Sun ONC-RPC with XDR, with slight variations and a similar number of drawbacks and failures, but it's Google, so that must mean it's the Right Thing(TM). Oh, the book is reasonably good at describing gRPC, insofar as any of these books ever end up being any good at anything. If you absolutely *must* use gRPC, then I guess you might get some value from this book. Be sure to expense it though; don't pay for it out of your own pocket.
Get free access to Packt library with over 7500+ books and video courses for 7 days!
Start Free Trial

FAQs

What is the delivery time and cost of print book? Chevron down icon Chevron up icon

Shipping Details

USA:

'

Economy: Delivery to most addresses in the US within 10-15 business days

Premium: Trackable Delivery to most addresses in the US within 3-8 business days

UK:

Economy: Delivery to most addresses in the U.K. within 7-9 business days.
Shipments are not trackable

Premium: Trackable delivery to most addresses in the U.K. within 3-4 business days!
Add one extra business day for deliveries to Northern Ireland and Scottish Highlands and islands

EU:

Premium: Trackable delivery to most EU destinations within 4-9 business days.

Australia:

Economy: Can deliver to P. O. Boxes and private residences.
Trackable service with delivery to addresses in Australia only.
Delivery time ranges from 7-9 business days for VIC and 8-10 business days for Interstate metro
Delivery time is up to 15 business days for remote areas of WA, NT & QLD.

Premium: Delivery to addresses in Australia only
Trackable delivery to most P. O. Boxes and private residences in Australia within 4-5 days based on the distance to a destination following dispatch.

India:

Premium: Delivery to most Indian addresses within 5-6 business days

Rest of the World:

Premium: Countries in the American continent: Trackable delivery to most countries within 4-7 business days

Asia:

Premium: Delivery to most Asian addresses within 5-9 business days

Disclaimer:
All orders received before 5 PM U.K. time start printing on the next business day, so the estimated delivery times also start from the next day. Orders received after 5 PM U.K. time (in our internal systems) on a business day, or at any time on the weekend, will begin printing on the second business day after. For example, an order placed at 11 AM today will begin printing tomorrow, whereas an order placed at 9 PM tonight will begin printing the day after tomorrow.


Unfortunately, due to several restrictions, we are unable to ship to the following countries:

  1. Afghanistan
  2. American Samoa
  3. Belarus
  4. Brunei Darussalam
  5. Central African Republic
  6. The Democratic Republic of Congo
  7. Eritrea
  8. Guinea-Bissau
  9. Iran
  10. Lebanon
  11. Libyan Arab Jamahiriya
  12. Somalia
  13. Sudan
  14. Russian Federation
  15. Syrian Arab Republic
  16. Ukraine
  17. Venezuela
What is a customs duty/charge?

Customs duties are charges levied on goods when they cross international borders: a tax imposed on imported goods. These duties are charged by special authorities and bodies created by local governments and are meant to protect local industries, economies, and businesses.

Do I have to pay customs charges for my print book order?

Orders shipped to countries listed under the EU27 will not bear customs charges; these are paid by Packt as part of the order.

List of EU27 countries: www.gov.uk/eu-eea

For shipments outside of the EU27, a customs duty or localized taxes may be applicable and would be charged by the recipient country. These duties must be paid by the customer and are not included in the shipping charges on the order.

How do I know my customs duty charges?

The amount of duty payable varies greatly depending on the imported goods, the country of origin, and several other factors, such as the total invoice amount, dimensions like weight, and other criteria applicable in your country.

For example:

  • If you live in Mexico and the declared value of your ordered items is over $50, you will have to pay an additional import tax of 19% (i.e., $9.50) to the courier service in order to receive your package.
  • Whereas if you live in Turkey and the declared value of your ordered items is over €22, you will have to pay an additional import tax of 18% (i.e., €3.96) to the courier service.
How can I cancel my order?

Cancellation Policy for Published Printed Books:

You can cancel any order within 1 hour of placing it. Simply contact customercare@packt.com with your order details or payment transaction ID. If your order has already started the shipment process, we will do our best to stop it. However, if it is already on its way to you, you can contact us at customercare@packt.com once you receive it and follow the returns and refund process.

Please understand that Packt Publishing cannot provide refunds or cancel any order except for the cases described in our Return Policy (i.e., Packt Publishing agrees to replace your printed book if it arrives damaged or with a material defect); otherwise, Packt Publishing will not accept returns.

What is your returns and refunds policy?

Return Policy:

We want you to be happy with your purchase from Packtpub.com. We will not hassle you with returning print books to us. If the print book you receive from us is incorrect, damaged, doesn't work, or is unacceptably late, please contact our Customer Relations Team at customercare@packt.com with the order number and issue details as explained below:

  1. If you ordered an item (eBook, Video, or Print Book) incorrectly or accidentally, please contact the Customer Relations Team at customercare@packt.com within one hour of placing the order, and we will replace the item or refund you the item cost.
  2. If your eBook or Video file is faulty, or a fault occurs while it is being made available to you (i.e., during download), contact the Customer Relations Team at customercare@packt.com within 14 days of purchase, and they will resolve the issue for you.
  3. You will have a choice of replacement or refund for the problem items (damaged, defective, or incorrect).
  4. Once the Customer Care Team confirms that you will be refunded, you should receive the refund within 10 to 12 working days.
  5. If you are requesting a refund for only one book from a multi-item order, we will refund the appropriate single item.
  6. Where the items were shipped under a free shipping offer, there will be no shipping costs to refund.

If your printed book arrives damaged or with a material defect, contact our Customer Relations Team at customercare@packt.com within 14 days of receipt with appropriate evidence of the damage, and we will work with you to secure a replacement copy if necessary. Please note that each printed book you order from us is individually made on a print-on-demand basis by Packt's professional book-printing partner.

What tax is charged?

Currently, no tax is charged on the purchase of any print book (subject to change based on applicable laws and regulations). A localized VAT fee is charged only to our European and UK customers on the eBooks, Videos, and subscriptions that they buy. GST is charged to Indian customers for eBook and Video purchases.

What payment methods can I use?

You can pay with the following methods:

  1. Visa Debit
  2. Visa Credit
  3. MasterCard
  4. PayPal