Mastering PyTorch: Build powerful neural network architectures using advanced PyTorch 1.x features

Chapter 1: Overview of Deep Learning using PyTorch

Deep learning is a class of machine learning methods that has revolutionized the way computers/machines are used to perform cognitive tasks in real life. Based on the mathematical concept of deep neural networks, deep learning uses large amounts of data to learn non-trivial relationships between inputs and outputs in the form of complex nonlinear functions. Some of the inputs and outputs, as demonstrated in Figure 1.1, could be the following:

  • Input: An image of text; output: Text
  • Input: Text; output: A natural voice speaking the text
  • Input: A natural voice speaking the text; output: Transcribed text

And so on. Here is a figure to support the preceding explanation:

Figure 1.1 – Deep learning model examples

Deep neural networks involve a lot of mathematical computations, linear algebraic equations, complex nonlinear functions, and various optimization algorithms. To build and train a deep neural network from scratch using a programming language such as Python, we would need to write all the necessary equations, functions, and optimization schedules ourselves. Furthermore, the code would need to be written such that large amounts of data can be loaded efficiently, and training can be performed in a reasonable amount of time. This amounts to implementing several lower-level details each time we build a deep learning application.

Deep learning libraries such as Theano and TensorFlow, among various others, have been developed over the years to abstract these details out. PyTorch is one such Python-based deep learning library that can be used to build deep learning models.

TensorFlow was introduced as an open source deep learning Python (and C++) library by Google in late 2015, and it revolutionized the field of applied deep learning. In 2016, Facebook responded with its own open source deep learning library, PyTorch – the Python equivalent of Torch, an older library that was used with a scripting language called Lua. Around the same time, Microsoft released its own library – CNTK. Amidst the hot competition, PyTorch has been growing fast to become one of the most used deep learning libraries.

This book is meant to be a hands-on resource on some of the most advanced deep learning problems, how they are solved using complex deep learning architectures, and how PyTorch can be effectively used to build, train, and evaluate these complex models. While the book keeps PyTorch at the center, it also includes comprehensive coverage of some of the most recent and advanced deep learning models. The book is intended for data scientists, machine learning engineers, or researchers who have a working knowledge of Python and who, preferably, have used PyTorch before.

Due to the hands-on nature of this book, it is highly recommended to try the examples in each chapter by yourself on your computer to become proficient in writing PyTorch code. We begin with this introductory chapter and subsequently explore various deep learning problems and model architectures that will expose the various functionalities PyTorch has to offer.

This chapter will review some of the concepts behind deep learning and will provide a brief overview of the PyTorch library. We conclude this chapter with a hands-on exercise where we train a deep learning model using PyTorch.

The following topics will be covered in this chapter:

  • A refresher on deep learning
  • Exploring the PyTorch library
  • Training a neural network using PyTorch

Technical requirements

We will be using Jupyter notebooks for all of our exercises. The following Python libraries should be installed for this chapter using pip; for example, run pip install torch==1.4.0 on the command line:

jupyter==1.0.0
torch==1.4.0
torchvision==0.5.0
matplotlib==3.1.2

All code files relevant to this chapter are available at https://github.com/PacktPublishing/Mastering-PyTorch/tree/master/Chapter01.

A refresher on deep learning

Neural networks are a sub-type of machine learning methods that are inspired by the structure and function of the human brain. In neural networks, each computational unit, called a neuron by analogy, is connected to other neurons in a layered fashion. When the number of such layers is more than two, the neural network thus formed is called a deep neural network. Such models are generally called deep learning models.

Deep learning models have proven superior to other classical machine learning models because of their ability to learn highly complex relationships between input data and the output (ground truth). In recent times, deep learning has gained a lot of attention and rightly so, primarily because of the following two reasons:

  • The availability of powerful computing machines, especially in the cloud
  • The availability of huge amounts of data

Owing to Moore's law, which states that the number of transistors on a chip (and, with it, computing power) roughly doubles every two years, we are now living in a time when deep learning models with several hundreds of layers can be trained within a realistic and reasonably short amount of time. At the same time, with the exponential increase in the use of digital devices everywhere, our digital footprint has exploded, resulting in gigantic amounts of data being generated across the world every moment.

Hence, it has been possible to train deep learning models for some of the most difficult cognitive tasks that were either intractable earlier or had sub-optimal solutions through other machine learning techniques.

Deep learning, or neural networks in general, has another advantage over classical machine learning models. Usually, in a classical machine learning-based approach, feature engineering plays a crucial role in the overall performance of a trained model. However, a deep learning model does away with the need to manually craft features. With large amounts of data, deep learning models can perform very well without requiring hand-engineered features and can outperform traditional machine learning models. The following graph indicates how deep learning models can leverage large amounts of data better than classical machine learning models:

Figure 1.2 – Model performance versus dataset size

As can be seen in the graph, up to a certain dataset size, deep learning performance is not necessarily better than that of other models. However, as the dataset size starts to increase further, deep neural networks begin outperforming the non-deep learning models.

A deep learning model can be built based on various types of neural network architectures that have been developed over the years. A prime distinguishing factor between the different architectures is the type and combination of layers that are used in the neural network. Some of the well-known layers are the following:

  • Fully-connected or linear: In a fully connected layer, as shown in the following diagram, all neurons preceding this layer are connected to all neurons succeeding this layer:
Figure 1.3 – Fully connected layer

This example shows two consecutive fully connected layers with N1 and N2 neurons, respectively. Fully connected layers are a fundamental unit of many – in fact, most – deep learning classifiers.

  • Convolutional: The following diagram shows a convolutional layer, where a convolutional kernel (or filter) is convolved over the input:
Figure 1.4 – Convolutional layer

Convolutional layers are a fundamental unit of convolutional neural networks (CNNs), which are the most effective models for solving computer vision problems.

  • Recurrent: The following diagram shows a recurrent layer. While it looks similar to a fully connected layer, the key difference is the recurrent connection (marked with bold curved arrows):
Figure 1.5 – Recurrent layer

Recurrent layers have an advantage over fully connected layers in that they exhibit memorizing capabilities, which comes in handy when working with sequential data, where one needs to remember past inputs along with the present inputs.

  • DeConv (the reverse of a convolutional layer): Quite the opposite of a convolutional layer, a deconvolutional layer works as shown in the following diagram:
Figure 1.6 – Deconvolutional layer

This layer expands the input data spatially and hence is crucial in models that aim to generate or reconstruct images, for example.

  • Pooling: The following diagram shows the max-pooling layer, which is perhaps the most widely used kind of pooling layer:
Figure 1.7 – Pooling layer

This max-pooling layer pools the highest value from each 2x2-sized subsection of the input. Other forms of pooling are min-pooling and mean-pooling.

  • Dropout: The following diagram shows how dropout layers work. Essentially, in a dropout layer, some neurons are temporarily switched off (marked with X in the diagram), that is, they are disconnected from the network:
Figure 1.8 – Dropout layer

Dropout helps with model regularization, as it forces the model to function well despite the sporadic absence of certain neurons, which pushes the model to learn generalizable patterns instead of memorizing the entire training dataset.
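
As a quick, illustrative sketch (not part of the original text; the layer sizes are arbitrary), each of the layer types above has a direct counterpart class in PyTorch's torch.nn module, which we explore later in this chapter:

import torch.nn as nn

fully_connected = nn.Linear(in_features=128, out_features=64)                       # fully connected / linear layer
convolution = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3)              # convolutional layer
recurrent = nn.RNN(input_size=32, hidden_size=64)                                   # recurrent layer
deconvolution = nn.ConvTranspose2d(in_channels=16, out_channels=3, kernel_size=3)   # deconvolutional layer
pooling = nn.MaxPool2d(kernel_size=2)                                               # max-pooling layer
dropout = nn.Dropout(p=0.5)                                                         # dropout layer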

A number of well-known architectures based on the previously mentioned layers are shown in the following diagram:

Figure 1.9 – Different neural network architectures

A more exhaustive set of neural network architectures can be found here: https://www.asimovinstitute.org/neural-network-zoo/.

Besides the types of layers and how they are connected in a network, other factors such as activation functions and the optimization schedule also define the model.

Activation functions

Activation functions are crucial to neural networks as they add the non-linearity without which, no matter how many layers we add, the entire neural network would be reduced to a simple linear model. The different types of activation functions listed here are basically different nonlinear mathematical functions.

Some of the popular activation functions are as follows:

  • Sigmoid: A sigmoid (or logistic) function is expressed as follows:

    y = 1 / (1 + e^(-x))
The function is shown in graph form as follows:

Figure 1.10 – Sigmoid function

As can be seen from the graph, the sigmoid function takes in a numerical value x as input and outputs a value y in the range (0, 1).

  • TanH: TanH is expressed as follows:

    y = (e^x - e^(-x)) / (e^x + e^(-x))
The function is shown in graph form as follows:

Figure 1.11 – TanH function

Contrary to sigmoid, the output y varies from -1 to 1 in the case of the TanH activation function. Hence, this activation is useful in cases where we need both positive as well as negative outputs.

  • Rectified linear units (ReLUs): ReLUs are more recent than the previous two and are simply expressed as follows:

    y = max(0, x)
The function is shown in graph form as follows:

Figure 1.12 – ReLU function

A distinct feature of ReLU in comparison with the sigmoid and TanH activation functions is that the output keeps growing with the input whenever the input is greater than 0. This prevents the gradient of this function from diminishing to 0, as happens with the previous two activation functions. However, whenever the input is negative, both the output and the gradient will be 0.

  • Leaky ReLU: ReLUs entirely suppress any incoming negative input by outputting 0. We may, however, want to also process negative inputs for some cases. Leaky ReLUs offer the option of processing negative inputs by outputting a fraction k of the incoming negative input. This fraction k is a parameter of this activation function, which can be mathematically expressed as follows:

    y = x if x >= 0, and y = kx if x < 0
The following graph shows the input-output relationship for leaky ReLU:

Figure 1.13 – Leaky ReLU function

Activation functions are an actively evolving area of research within deep learning. It will not be possible to list all of the activation functions here but I encourage you to check out the recent developments in this domain. Many activation functions are simply nuanced modifications of the ones mentioned in this section.
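
As a small, illustrative sketch (not part of the original text), all four of these activation functions are available directly in PyTorch and can be applied element-wise to a tensor of sample inputs:

import torch
import torch.nn.functional as F

x = torch.linspace(-3.0, 3.0, steps=7)           # a few sample inputs

print(torch.sigmoid(x))                          # outputs lie in (0, 1)
print(torch.tanh(x))                             # outputs lie in (-1, 1)
print(F.relu(x))                                 # negative inputs are clamped to 0
print(F.leaky_relu(x, negative_slope=0.01))      # negative inputs are scaled by the fraction k = 0.01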

Optimization schedule

So far, we have spoken of how a neural network structure is built. In order to train a neural network, we need to adopt an optimization schedule. Like any other parameter-based machine learning model, a deep learning model is trained by tuning its parameters. The parameters are tuned through the process of backpropagation, wherein the final or output layer of the neural network yields a loss. This loss is calculated with the help of a loss function that takes in the neural network's final layer's outputs and the corresponding ground truth target values. This loss is then backpropagated to the previous layers using gradient descent and the chain rule of differentiation.

The parameters or weights at each layer are accordingly modified in order to minimize the loss. The extent of modification is determined by a coefficient, which varies from 0 to 1, also known as the learning rate. This whole procedure of updating the weights of a neural network, which we call the optimization schedule, has a significant impact on how well a model is trained. Therefore, a lot of research has been done in this area and is still ongoing. The following are a few popular optimization schedules:

  • Stochastic Gradient Descent (SGD): It updates the model parameters in the following fashion:

    β = β - α * ∇L(X, y, β)

β is the parameter of the model and X and y are the input training data and the corresponding labels respectively. L is the loss function and α is the learning rate. SGD performs this update for every training example pair (X, y). A variant of this – mini-batch gradient descent – performs updates for every k examples, where k is the batch size. Gradients are calculated altogether for the whole mini-batch. Another variant, batch gradient descent, performs parameter updates by calculating the gradient across the entire dataset.

  • Adagrad: In the previous optimization schedule, we used a single learning rate for all the parameters of the model. However, different parameters might need to be updated at different paces, especially in cases of sparse data, where some parameters are more actively involved in feature extraction than others. Adagrad introduces the idea of per-parameter updates, as shown here:

    β_i^(t+1) = β_i^t - (α / sqrt(SSG_i^t + ε)) * ∇L(β_i^t)

Here, we use the subscript i to denote the ith parameter and the superscript t to denote the time step t of the gradient descent iterations. SSG_i^t is the sum of squared gradients for the ith parameter, starting from time step 0 up to time step t. ε denotes a small value added to SSG to avoid division by zero. Dividing the global learning rate α by the square root of SSG ensures that the learning rate for frequently changing parameters lowers faster than the learning rate for rarely updated parameters.

  • Adadelta: In Adagrad, the denominator of the learning rate is a term that keeps on rising in value due to added squared terms in every time step. This causes the learning rates to decay to vanishingly small values. To tackle this problem, Adadelta introduces the idea of computing the sum of squared gradients only up to previous time steps. In fact, we can express it as a running decaying average of the past gradients:

    SSG_i^t = γ * SSG_i^(t-1) + (1 - γ) * (∇L(β_i^t))^2

γ here is the decaying factor we wish to choose for the previous sum of squared gradients. With this formulation, we ensure that the sum of squared gradients does not accumulate to a large value, thanks to the decaying average. Once SSG_i^t is defined, we can use the Adagrad equation to define the update step for Adadelta.

However, if we look closely at the Adagrad equation, the root mean squared gradient is not a dimensionless quantity and hence should ideally not be used as a coefficient for the learning rate. To resolve this, we define another running average, this time for the squared parameter updates. Let's first define the parameter update:

    Δβ_i^t = β_i^(t+1) - β_i^t
And then, similar to the running decaying average of the past gradients equation (the first equation under Adadelta), we can define the square sum of parameter updates as follows:

    SSPU_i^t = γ * SSPU_i^(t-1) + (1 - γ) * (Δβ_i^t)^2
Here, SSPU is the sum of squared parameter updates. Once we have this, we can adjust for the dimensionality problem in the Adagrad equation with the final Adadelta equation:

    β_i^(t+1) = β_i^t - (sqrt(SSPU_i^(t-1) + ε) / sqrt(SSG_i^t + ε)) * ∇L(β_i^t)

Noticeably, the final Adadelta equation doesn't require any learning rate. One can, however, still provide a learning rate as a multiplier. Hence, the only mandatory hyperparameter for this optimization schedule is the decaying factor.

  • RMSprop: We have implicitly discussed the internal workings of RMSprop while discussing Adadelta, as the two are pretty similar. The only difference is that RMSprop does not adjust for the dimensionality problem, and hence the update equation stays the same as the equation presented in the Adagrad section, wherein SSG_i^t is obtained from the first equation in the Adadelta section. This essentially means that we do need to specify both a base learning rate as well as a decaying factor in the case of RMSprop.
  • Adaptive Moment Estimation (Adam): This is another optimization schedule that calculates customized learning rates for each parameter. Just like Adadelta and RMSprop, Adam also uses the decaying average of the previous squared gradients, as demonstrated in the first equation in the Adadelta section. However, it also uses the decaying average of previous gradient values:

    SG_i^t = γ' * SG_i^(t-1) + (1 - γ') * ∇L(β_i^t)

SG and SSG are mathematically equivalent to estimating the first and second moments of the gradient respectively, hence the name of this method – adaptive moment estimation. Usually, γ and γ' are close to 1 and, in that case, the initial values for both SG and SSG might be pushed towards zero. To counteract that, these two quantities are reformulated with the help of bias correction:

    SG_i^t = SG_i^t / (1 - (γ')^t)

and

    SSG_i^t = SSG_i^t / (1 - γ^t)

Once they are defined, the parameter update is expressed as follows:

    β_i^(t+1) = β_i^t - (α / sqrt(SSG_i^t + ε)) * SG_i^t

Basically, the gradient on the extreme right-hand side of the equation is replaced by the decaying average of the gradient. Noticeably, Adam optimization involves three hyperparameters – the base learning rate, and the two decaying rates for the gradients and squared gradients. Adam is one of the most successful, if not the most successful, optimization schedules in recent times for training complex deep learning models.

So, which optimizer shall we use? It depends. If we are dealing with sparse data, then the adaptive optimizers (Adagrad, Adadelta, RMSprop, and Adam) will be advantageous because of their per-parameter learning rate updates. As mentioned earlier, with sparse data, different parameters might be updated at different paces, and hence a customized per-parameter learning rate mechanism can greatly help the model in reaching optimal solutions. SGD might also find a decent solution but will take much longer in terms of training time. Among the adaptive ones, Adagrad has the disadvantage of vanishing learning rates due to a monotonically increasing learning rate denominator.

RMSProp, Adadelta, and Adam are quite close in terms of their performance on various deep learning tasks. RMSprop is largely similar to Adadelta, except for the use of the base learning rate in RMSprop versus the use of the decaying average of previous parameter updates in Adadelta. Adam is slightly different in that it also includes the first-moment calculation of gradients and accounts for bias correction. Overall, Adam could be the optimizer to go with, all else being equal. We will use some of these optimization schedules in the exercises in this book. Feel free to switch them with another one to observe changes in the following:

  • Model training time and trajectory (convergence)
  • Final model performance
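
For reference, the following sketch (illustrative only; model is assumed to be an already-defined PyTorch model, and the hyperparameter values shown are arbitrary) shows how these optimization schedules can be instantiated through torch.optim, which is covered later in this chapter:

import torch.optim as optim

sgd = optim.SGD(model.parameters(), lr=0.01)                         # stochastic gradient descent
adagrad = optim.Adagrad(model.parameters(), lr=0.01)                 # per-parameter learning rates
adadelta = optim.Adadelta(model.parameters(), rho=0.9)               # rho is the decaying factor
rmsprop = optim.RMSprop(model.parameters(), lr=0.01, alpha=0.99)     # alpha is the decaying factor
adam = optim.Adam(model.parameters(), lr=0.001, betas=(0.9, 0.999))  # betas are the two decaying rates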

In the coming chapters, we will use many of these architectures, layers, activation functions, and optimization schedules in solving different kinds of machine learning problems with the help of PyTorch. In the example included in this chapter, we will create a convolutional neural network that contains convolutional, linear, max-pooling, and dropout layers. Log-softmax is used for the final layer and ReLU is used as the activation function for all the other layers. The model is trained using an Adadelta optimizer with a fixed learning rate of 0.5.

Exploring the PyTorch library

PyTorch is a machine learning library for Python based on the Torch library. PyTorch is extensively used as a deep learning tool both for research as well as for building industrial applications. It is primarily developed by Facebook's machine learning research labs. PyTorch competes with the other well-known deep learning library – TensorFlow, which is developed by Google. The initial difference between the two was that PyTorch was based on eager execution whereas TensorFlow was built on graph-based deferred execution, although TensorFlow now also provides an eager execution mode.

Eager execution is basically an imperative programming mode where mathematical operations are computed immediately. A deferred execution mode would have all the operations stored in a computational graph without immediate calculations and then the entire graph would be evaluated later. Eager execution is considered advantageous for reasons such as intuitive flow, easy debugging, and less scaffolding code.
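
As a minimal sketch of eager execution (not from the original text), the result of an operation is available immediately after the line that defines it, with no separate graph-compilation or session step:

import torch

a = torch.tensor([2.0, 3.0])
b = torch.tensor([4.0, 5.0])
c = a * b        # computed immediately, in an imperative fashion
print(c)         # tensor([ 8., 15.])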

PyTorch is more than just a deep learning library. With its NumPy-like syntax/interface, it provides tensor computation capabilities with strong acceleration using GPUs. But what is a tensor? Tensors are computational units, very similar to NumPy arrays, except that they can also be used on GPUs to accelerate computing.

With accelerated computing and the facility to create dynamic computational graphs, PyTorch provides a complete deep learning framework. Besides all that, it is truly Pythonic in nature, which enables PyTorch users to exploit all the features Python provides, including the extensive Python data science ecosystem.

In this section, we will take a look at some of the useful PyTorch modules that extend various functionalities helpful in loading data, building models, and specifying the optimization schedule during the training of a model. We will also expand on what a tensor is and how it is implemented with all of its attributes in PyTorch.

PyTorch modules

The PyTorch library, besides offering the computational functions as NumPy does, also offers a set of modules that enable developers to quickly design, train, and test deep learning models. The following are some of the most useful modules.

torch.nn

When building a neural network architecture, the fundamental aspects that the network is built on are the number of layers, the number of neurons in each layer, which of those are learnable, and so on. The PyTorch nn module enables users to quickly instantiate neural network architectures by defining some of these high-level aspects as opposed to having to specify all the details manually. The following is a one-layer neural network initialization without using the nn module:

import math
import torch

# we assume a 256-dimensional input and a 4-dimensional output for this 1-layer neural network
# hence, we initialize a 256x4 dimensional matrix filled with random values
weights = torch.randn(256, 4) / math.sqrt(256)
# we then ensure that the parameters of this neural network are trainable, that is, the numbers in the 256x4 matrix can be tuned with the help of backpropagation of gradients
weights.requires_grad_()
# finally, we also add the bias weights for the 4-dimensional output, and make these trainable too
bias = torch.zeros(4, requires_grad=True)

We can instead use nn.Linear(256, 4) to represent the same thing.
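
For illustration, a minimal sketch of that nn.Linear equivalent (the dummy input below is an assumption added here, not part of the original example):

import torch
import torch.nn as nn

linear_layer = nn.Linear(256, 4)   # creates the 256x4 weight matrix and the 4-dimensional bias, both trainable
x = torch.randn(1, 256)            # a dummy 256-dimensional input
y = linear_layer(x)                # forward pass; y has shape (1, 4)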

Within the torch.nn module, there is a submodule called torch.nn.functional. This submodule consists of functions, whereas the rest of the torch.nn module exposes classes. These functions include loss functions, activation functions, and neural network functions that can be used to create neural networks in a functional manner (that is, when each subsequent layer is expressed as a function of the previous layer), such as pooling, convolutional, and linear functions. An example of a loss function using the torch.nn.functional module could be the following:

import torch.nn.functional as F
loss_func = F.cross_entropy
loss = loss_func(model(X), y)

Here, X is the input, y is the target output, and model is the neural network model.

torch.optim

As we train a neural network, we backpropagate errors to tune the weights or parameters of the network – the process that we call optimization. The optim module includes all the tools and functionalities related to running various types of optimization schedules while training a deep learning model. Let's say we define an optimizer during a training session using the torch.optim module, as shown in the following snippet:

opt = optim.SGD(model.parameters(), lr=lr)

Then, we don't need to manually write the optimization step as shown here:

with torch.no_grad():
    # applying the parameter updates using stochastic gradient descent
    for param in model.parameters(): param -= param.grad * lr
    model.zero_grad()

We can simply write this instead:

opt.step()
opt.zero_grad()

Next, we will look at the torch.utils.data module.

torch.utils.data

Under the torch.utils.data module, torch provides its own Dataset and DataLoader classes, which are extremely handy due to their abstract and flexible implementations. Basically, these classes provide intuitive and useful ways of iterating and performing other such operations on tensors. Using these, we can ensure high performance due to optimized tensor computations and also have fail-safe data I/O. For example, let's say we use torch.utils.data.DataLoader as follows:

from torch.utils.data import (TensorDataset, DataLoader)
train_dataset = TensorDataset(x_train, y_train)
train_dataloader = DataLoader(train_dataset, batch_size=bs)

Then, we don't need to iterate through batches of data manually, like this:

for i in range((n - 1) // bs + 1):
    start_i = i * bs
    end_i = start_i + bs
    x_batch = x_train[start_i:end_i]
    y_batch = y_train[start_i:end_i]
    pred = model(x_batch)

We can simply write this instead:

for x_batch,y_batch in train_dataloader:
    pred = model(x_batch)

Let's now look at tensor modules.

Tensor modules

As mentioned earlier, tensors are conceptually similar to NumPy arrays. A tensor is an n-dimensional array on which we can apply mathematical functions, whose computations can be accelerated via GPUs, and which can also keep track of a computational graph and gradients, both of which are vital for deep learning. To run a tensor on a GPU, all we need to do is cast the tensor into the corresponding CUDA data type (or, equivalently, move it to a CUDA device).

Here is how we can instantiate a tensor in PyTorch:

points = torch.tensor([1.0, 4.0, 2.0, 1.0, 3.0, 5.0]) 

To fetch the first entry, simply write the following:

float(points[0])

We can also check the shape of the tensor using this:

points.shape

In PyTorch, tensors are implemented as views over a one-dimensional array of numerical data stored in contiguous chunks of memory. These arrays are called storage instances. Every PyTorch tensor has a storage attribute that can be called to output the underlying storage instance for a tensor as shown in the following example:

points = torch.tensor([[1.0, 4.0], [2.0, 1.0], [3.0, 5.0]])
points.storage()

This should output the following:

Figure 1.14 – PyTorch tensor storage

When we say a tensor is a view on the storage instance, the tensor uses the following information to implement the view:

  • Size
  • Storage
  • Offset
  • Stride

Let's look into this with the help of our previous example:

points = torch.tensor([[1.0, 4.0], [2.0, 1.0], [3.0, 5.0]])

Let's investigate what these different pieces of information mean:

points.size()

This should output the following:

Figure 1.15 – PyTorch tensor size

As we can see, size is similar to the shape attribute in NumPy, which tells us the number of elements across each dimension. The product of these numbers equals the length of the underlying storage instance (6 in this case).

As we have already examined what the storage attribute means, let's look at offset:

points.storage_offset()

This should output the following:

Figure 1.16 – PyTorch tensor storage offset 1

The offset here represents the index of the first element of the tensor in the storage array. Because the output is 0, it means that the first element of the tensor is the first element in the storage array.

Let's check this:

points[1].storage_offset()

This should output the following:

Figure 1.17 – PyTorch tensor storage offset 2

Because points[1] is [2.0, 1.0] and the storage array is [1.0, 4.0, 2.0, 1.0, 3.0, 5.0], we can see that the first element of the tensor [2.0, 1.0], that is, 2.0, is at index 2 of the storage array.

Finally, we'll look at the stride attribute:

points.stride()
Figure 1.18 – PyTorch tensor stride

As we can see, stride contains, for each dimension, the number of elements to be skipped in order to access the next element of the tensor along that dimension. So, in this case, along the first dimension, in order to access the element after the first one (1.0), we need to skip 2 elements (1.0 and 4.0) to reach the next element, 2.0. Similarly, along the second dimension, we need to skip 1 element to access the element after 1.0, that is, 4.0. Thus, using all these attributes, tensors can be derived from a contiguous one-dimensional storage array.
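
As an illustrative check (not in the original text), transposing a tensor produces a new view with swapped strides, while the underlying storage stays exactly the same:

points_t = points.t()                             # transpose: a new view, no data is copied
print(points.stride(), points_t.stride())         # (2, 1) versus (1, 2)
print(points.data_ptr() == points_t.data_ptr())   # True: both views share the same underlying data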

The data contained within tensors is of numeric type. Specifically, PyTorch offers the following data types to be contained within tensors:

  • torch.float32 or torch.float—32-bit floating-point
  • torch.float64 or torch.double—64-bit, double-precision floating-point
  • torch.float16 or torch.half—16-bit, half-precision floating-point  
  • torch.int8—Signed 8-bit integers  
  • torch.uint8—Unsigned 8-bit integers  
  • torch.int16 or torch.short—Signed 16-bit integers  
  • torch.int32 or torch.int—Signed 32-bit integers  
  • torch.int64 or torch.long—Signed 64-bit integers

An example of how we specify a certain data type to be used for a tensor is as follows:

points = torch.tensor([[1.0, 2.0], [3.0, 4.0]], dtype=torch.float32)

Besides the data type, tensors in PyTorch also need a device specification regarding where they will be stored. A device can be specified at instantiation:

points = torch.tensor([[1.0, 2.0], [3.0, 4.0]], dtype=torch.float32, device='cpu')

Or we can also create a copy of a tensor in the desired device:

points_2 = points.to(device='cuda')

As seen in the two examples, we can either allocate a tensor to a CPU (using device='cpu'), which happens by default if we do not specify a device, or we can allocate the tensor to a GPU (using device='cuda').

Note

PyTorch currently supports only GPUs that support CUDA.

When a tensor is placed on a GPU, the computations speed up, and because the tensor APIs are largely uniform across CPU- and GPU-placed tensors in PyTorch, it is quite convenient to move the same tensor across devices, perform computations, and move it back.
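
A common defensive pattern (a sketch, not from the original text) is to pick the device at runtime and fall back to the CPU when no CUDA-capable GPU is available:

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
points_dev = points.to(device)          # move (or copy) the tensor to the chosen device
result = (points_dev * 2).to('cpu')     # compute on that device, then bring the result back to the CPU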

If there are multiple devices of the same type, say more than one GPU, we can precisely locate the device we want to place the tensor in using the device index, such as the following:

points_3 = points.to(device='cuda:0')

You can read more about PyTorch-CUDA here: https://pytorch.org/docs/stable/notes/cuda.html. And you can read more generally about CUDA here: https://developer.nvidia.com/about-cuda.

Now that we have explored the PyTorch library and understood the PyTorch and Tensor modules, let's learn how to train a neural network using PyTorch.

Training a neural network using PyTorch

For this exercise, we will be using the famous MNIST dataset (available at http://yann.lecun.com/exdb/mnist/), which is a collection of images of handwritten digits, zero through nine, with corresponding labels. The MNIST dataset consists of 60,000 training samples and 10,000 test samples, where each sample is a grayscale image with 28 x 28 pixels. PyTorch also provides the MNIST dataset under its torchvision.datasets module.

In this exercise, we will use PyTorch to train a deep learning multi-class classifier on this dataset and test how the trained model performs on the test samples:

  1. For this exercise, we will need to import a few dependencies. Execute the following import statements:
    import torch
    import torch.nn as nn
    import torch.nn.functional as F
    import torch.optim as optim
    from torch.utils.data import DataLoader
    from torchvision import datasets, transforms
    import matplotlib.pyplot as plt
  2. Next, we define the model architecture as shown in the following diagram:
    Figure 1.19 – Neural network architecture

    The model consists of convolutional layers, dropout layers, as well as linear/fully connected layers, all available through the torch.nn module:

    class ConvNet(nn.Module):
        def __init__(self):
            super(ConvNet, self).__init__()
            self.cn1 = nn.Conv2d(1, 16, 3, 1)
            self.cn2 = nn.Conv2d(16, 32, 3, 1)
            self.dp1 = nn.Dropout2d(0.10)
            self.dp2 = nn.Dropout2d(0.25)
            self.fc1 = nn.Linear(4608, 64) # 4608 is basically 12 X 12 X 32
            self.fc2 = nn.Linear(64, 10)
        def forward(self, x):
            x = self.cn1(x)
            x = F.relu(x)
            x = self.cn2(x)
            x = F.relu(x)
            x = F.max_pool2d(x, 2)
            x = self.dp1(x)
            x = torch.flatten(x, 1)
            x = self.fc1(x)
            x = F.relu(x)
            x = self.dp2(x)
            x = self.fc2(x)
            op = F.log_softmax(x, dim=1)
            return op

    The __init__ function defines the core architecture of the model, that is, all the layers with the number of neurons at each layer. The forward function, as the name suggests, does a forward pass through the network. Hence, it includes all the activation functions at each layer as well as any pooling or dropout used after any layer. This function returns the final layer output, which we call the prediction of the model, and which has the same dimensions as the target output (the ground truth).
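
    As a quick sanity check (an illustrative sketch, not part of the exercise code), we can pass a dummy batch of one grayscale 28 x 28 image through the model and confirm the output dimensions:

    sanity_model = ConvNet()
    dummy_input = torch.randn(1, 1, 28, 28)   # a batch of one 1-channel, 28 x 28 image
    output = sanity_model(dummy_input)
    print(output.shape)                       # torch.Size([1, 10]), i.e., log-probabilities over the ten digit classes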

    Notice that the first convolutional layer has a 1-channel input, a 16-channel output, a kernel size of 3, and a stride of 1. The 1-channel input is essentially for the grayscale images that will be fed to the model. We decided on a kernel size of 3x3 for various reasons. Firstly, kernel sizes are usually odd numbers so that the input image pixels are symmetrically distributed around a central pixel. 1x1 would be too small because then the kernel operating on a given pixel would not have any information about the neighboring pixels. 3 comes next, but why not go further to 5, 7, or, say, even 27?

    Well, at the extreme high end, a 27x27 kernel convolving over a 28x28 image would give us very coarse-grained features. However, the most important visual features in the image are fairly local and hence it makes sense to use a small kernel that looks at a few neighboring pixels at a time, for visual patterns. 3x3 is one of the most common kernel sizes used in CNNs for solving computer vision problems.

    Note that we have two consecutive convolutional layers, both with 3x3 kernels. This, in terms of spatial coverage, is equivalent to using one convolutional layer with a 5x5 kernel. However, using multiple layers with a smaller kernel size is almost always preferred because it results in deeper networks, hence more complex learned features as well as fewer parameters due to smaller kernels.

    The number of channels in the output of a convolutional layer is usually higher than or equal to the input number of channels. Our first convolutional layer takes in one channel data and outputs 16 channels. This basically means that the layer is trying to detect 16 different kinds of information from the input image. Each of these channels is called a feature map and each of them has a dedicated kernel extracting features for them.

    We escalate the number of channels from 16 to 32 in the second convolutional layer, in an attempt to extract more kinds of features from the image. This increment in the number of channels (or image depth) is common practice in CNNs. We will read more on this under width-based CNNs in Chapter 3, Deep CNN Architectures.

    Finally, the stride of 1 makes sense, as our kernel size is just 3. Keeping a larger stride value – say, 10 – would result in the kernel skipping many pixels in the image and we don't want to do that. If, however, our kernel size was 100, we might have considered 10 as a reasonable stride value. The larger the stride, the lower the number of convolution operations but the smaller the overall field of view for the kernel.

  3. We then define the training routine, that is, the actual backpropagation step. As can be seen, the torch.optim module greatly helps in keeping this code succinct:
    def train(model, device, train_dataloader, optim, epoch):
        model.train()
        for b_i, (X, y) in enumerate(train_dataloader):
            X, y = X.to(device), y.to(device)
            optim.zero_grad()
            pred_prob = model(X)
            loss = F.nll_loss(pred_prob, y) # nll is the negative log-likelihood loss
            loss.backward()
            optim.step()
            if b_i % 10 == 0:
                print('epoch: {} [{}/{} ({:.0f}%)]\t training loss: {:.6f}'.format(
                    epoch, b_i * len(X), len(train_dataloader.dataset),
                    100. * b_i / len(train_dataloader), loss.item()))

    This iterates through the dataset in batches, makes a copy of the dataset on the given device, makes a forward pass with the retrieved data on the neural network model, computes the loss between the model prediction and the ground truth, uses the given optimizer to tune model weights, and prints training logs every 10 batches. The entire procedure done once qualifies as 1 epoch, that is, when the entire dataset has been read once.

  4. Similar to the preceding training routine, we write a test routine that can be used to evaluate the model performance on the test set:
    def test(model, device, test_dataloader):
        model.eval()
        loss = 0
        success = 0
        with torch.no_grad():
            for X, y in test_dataloader:
                X, y = X.to(device), y.to(device)
                pred_prob = model(X)
                loss += F.nll_loss(pred_prob, y, reduction='sum').item()  # loss summed across the batch
                pred = pred_prob.argmax(dim=1, keepdim=True)  # use argmax to get the most likely prediction
                success += pred.eq(y.view_as(pred)).sum().item()
        loss /= len(test_dataloader.dataset)
        print('\nTest dataset: Overall Loss: {:.4f}, Overall Accuracy: {}/{} ({:.0f}%)\n'.format(
            loss, success, len(test_dataloader.dataset),
            100. * success / len(test_dataloader.dataset)))

    Most of this function is similar to the preceding train function. The only difference is that the loss computed from the model predictions and the ground truth is not used to tune the model weights using an optimizer. Instead, the loss is used to compute the overall test error across the entire test batch.

  5. Next, we come to another critical component of this exercise, which is loading the dataset. Thanks to PyTorch's DataLoader module, we can set up the dataset loading mechanism in a few lines of code:
    # The mean and standard deviation values are the mean and standard deviation of all pixel values of all images in the training dataset
    train_dataloader = torch.utils.data.DataLoader(
        datasets.MNIST('../data', train=True, download=True,
                       transform=transforms.Compose([
                           transforms.ToTensor(),
                           transforms.Normalize((0.1302,), (0.3069,))])), # train_X.mean()/256. and train_X.std()/256.
        batch_size=32, shuffle=True)
    test_dataloader = torch.utils.data.DataLoader(
        datasets.MNIST('../data', train=False, 
                       transform=transforms.Compose([
                           transforms.ToTensor(),
                           transforms.Normalize((0.1302,), (0.3069,)) 
                       ])),
        batch_size=500, shuffle=False)

    As you can see, we set batch_size to 32, which is a fairly common choice. Usually, there is a trade-off in deciding the batch size. A very small batch size can lead to slow training due to frequent gradient calculations and can produce extremely noisy gradients. Very large batch sizes can, on the other hand, also slow down training due to the long waiting time to calculate each gradient. It is usually not worth waiting a long time for a single gradient update; it is more advisable to make frequent, less precise gradient updates, as this will eventually lead the model to a better set of learned parameters.

    For both the training and test dataset, we specify the local storage location we want to save the dataset to, and the batch size, which determines the number of data instances that constitute one pass of a training and test run. We also specify that we want to randomly shuffle training data instances to ensure a uniform distribution of data samples across batches. Finally, we also normalize the dataset to a normal distribution with a specified mean and standard deviation.
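
    If you want to verify the normalization statistics yourself, a hypothetical (and memory-hungry) sketch, not part of the exercise, could recompute them from the raw training images scaled to [0, 1] by ToTensor:

    raw_train = datasets.MNIST('../data', train=True, download=True, transform=transforms.ToTensor())
    all_pixels = torch.stack([img for img, _ in raw_train])   # shape: (60000, 1, 28, 28)
    print(all_pixels.mean().item(), all_pixels.std().item())  # roughly 0.13 and 0.31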

  6. We defined the training routine earlier. Now is the time to actually define the optimizer and device we will use to run the model training, as follows:
    torch.manual_seed(0)
    device = torch.device("cpu")
    model = ConvNet()
    optimizer = optim.Adadelta(model.parameters(), lr=0.5)

    We define the device for this exercise as cpu. We also set a seed to avoid unknown randomness and ensure repeatability. We will use Adadelta as the optimizer for this exercise with a learning rate of 0.5. While discussing optimization schedules earlier in the chapter, we mentioned that Adadelta could be a good choice if we are dealing with sparse data. And this is a case of sparse data, because not all pixels in the image are informative. Having said that, I encourage you to try out other optimizers, such as Adam, on this same problem to see how it affects the training process and model performance.
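
    For instance, a hedged sketch of swapping in Adam (the learning rate shown is an arbitrary, commonly used value, not one from the exercise):

    # optimizer = optim.Adadelta(model.parameters(), lr=0.5)   # the optimizer used in this exercise
    optimizer = optim.Adam(model.parameters(), lr=0.001)       # an alternative to experiment with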

  7. And then we start the actual process of training the model for k number of epochs, and we also keep testing the model at the end of each training epoch:
    for epoch in range(1, 3):
        train(model, device, train_dataloader, optimizer, epoch)
        test(model, device, test_dataloader)

    For demonstration purposes, we will run the training for only two epochs. The output will be as follows:

    Figure 1.20 – Training logs

  8. Now that we have trained a model, with a reasonable test set performance, we can also manually check whether the model inference on a sample image is correct:
    test_samples = enumerate(test_dataloader)
    b_i, (sample_data, sample_targets) = next(test_samples)
    plt.imshow(sample_data[0][0], cmap='gray', interpolation='none')

    The output will be as follows:

Figure 1.21 – Sample handwritten image

And now we run the model inference for this image and compare it with the ground truth:

     print(f"Model prediction is : {model(sample_data).data.max(1)[1][0]}")
print(f"Ground truth is : {sample_targets[0]}")

Note that, for predictions, we first calculate the class with the maximum probability using the max function along axis=1. The max function outputs two tensors – the maximum (log-)probability for every sample in sample_data and the corresponding class label for each sample. Hence, we choose the second tensor using index [1]. We further select the first class label by using index [0] to look at only the first sample under sample_data. The output will be as follows:

Figure 1.22 – PyTorch model prediction

This appears to be the correct prediction. The forward pass of the neural network done using model() produces log-probabilities (since the final layer is a log-softmax layer). Hence, we use the max function to output the class with the maximum (log-)probability.
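
As an optional, illustrative check (not part of the original exercise), exponentiating the log-probabilities recovers proper probabilities that sum to 1, and taking argmax over them yields the same predicted class:

log_probs = model(sample_data[:1])          # log-probabilities for the first test sample
probs = torch.exp(log_probs)                # convert log-probabilities back to probabilities
print(probs.sum().item())                   # approximately 1.0
print(probs.argmax(dim=1).item())           # same class index as the prediction above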

Note

The code pattern for this exercise is derived from the official PyTorch examples repository, which can be found here: https://github.com/pytorch/examples/tree/master/mnist.

Summary

In this chapter, we refreshed deep learning concepts such as layers, activation functions, and optimization schedules and how they contribute towards building varied deep learning architectures. We explored the PyTorch deep learning library, including some of the important modules, such as torch.nn, torch.optim, and torch.utils.data, as well as tensor modules.

We then ran a hands-on exercise on training a deep learning model from scratch. We built a CNN for our exercise using PyTorch modules. We also wrote relevant PyTorch code to load the dataset, train and evaluate the model, and finally, make predictions from the trained model.

In the next chapter, we will explore a slightly more complex model architecture that involves multiple sub-models and use this type of hybrid model to tackle the real-world task of describing an image using natural text. Using PyTorch, we will implement such a system and generate captions for unseen images.


Key benefits

  • Understand how to use PyTorch 1.x to build advanced neural network models
  • Learn to perform a wide range of tasks by implementing deep learning algorithms and techniques
  • Gain expertise in domains such as computer vision, NLP, Deep RL, Explainable AI, and much more

Description

Deep learning is driving the AI revolution, and PyTorch is making it easier than ever before for anyone to build deep learning applications. This PyTorch book will help you uncover expert techniques to get the most out of your data and build complex neural network models. The book starts with a quick overview of PyTorch and explores using convolutional neural network (CNN) architectures for image classification. You'll then work with recurrent neural network (RNN) architectures and transformers for sentiment analysis. As you advance, you'll apply deep learning across different domains, such as music, text, and image generation using generative models and explore the world of generative adversarial networks (GANs). You'll not only build and train your own deep reinforcement learning models in PyTorch but also deploy PyTorch models to production using expert tips and techniques. Finally, you'll get to grips with training large models efficiently in a distributed manner, searching neural architectures effectively with AutoML, and rapidly prototyping models using PyTorch and fast.ai. By the end of this PyTorch book, you'll be able to perform complex deep learning tasks using PyTorch to build smart artificial intelligence models.

Who is this book for?

This book is for data scientists, machine learning researchers, and deep learning practitioners looking to implement advanced deep learning paradigms using PyTorch 1.x. Working knowledge of deep learning with Python programming is required.

What you will learn

  • Implement text and music generating models using PyTorch
  • Build a deep Q-network (DQN) model in PyTorch
  • Export universal PyTorch models using Open Neural Network Exchange (ONNX)
  • Become well-versed with rapid prototyping using PyTorch with fast.ai
  • Perform neural architecture search effectively using AutoML
  • Easily interpret machine learning (ML) models written in PyTorch using Captum
  • Design ResNets, LSTMs, Transformers, and more using PyTorch
  • Find out how to use PyTorch for distributed training using the torch.distributed API

Product Details

Publication date: Feb 12, 2021
Length: 450 pages
Edition: 1st
Language: English
ISBN-13: 9781789614381

Table of Contents

19 Chapters
Section 1: PyTorch Overview
Chapter 1: Overview of Deep Learning using PyTorch
Chapter 2: Combining CNNs and LSTMs
Section 2: Working with Advanced Neural Network Architectures
Chapter 3: Deep CNN Architectures
Chapter 4: Deep Recurrent Model Architectures
Chapter 5: Hybrid Advanced Models
Section 3: Generative Models and Deep Reinforcement Learning
Chapter 6: Music and Text Generation with PyTorch
Chapter 7: Neural Style Transfer
Chapter 8: Deep Convolutional GANs
Chapter 9: Deep Reinforcement Learning
Section 4: PyTorch in Production Systems
Chapter 10: Operationalizing PyTorch Models into Production
Chapter 11: Distributed Training
Chapter 12: PyTorch and AutoML
Chapter 13: PyTorch and Explainable AI
Chapter 14: Rapid Prototyping with PyTorch
Other Books You May Enjoy

Customer reviews

Top Reviews
Rating distribution
Full star icon Full star icon Full star icon Full star icon Half star icon 4.8
(43 Ratings)
5 star 90.7%
4 star 4.7%
3 star 0%
2 star 0%
1 star 4.7%
Filter icon Filter
Top Reviews

Filter reviews by




Amazon Customer Feb 19, 2021
Full star icon Full star icon Full star icon Full star icon Full star icon 5
Great Book, A very detailed introduction on Pytorch. Well explained concepts. Fantastic book to be introduced into Machine learning and Pytorch
Amazon Verified review Amazon
Nivedita Jha Feb 15, 2021
Full star icon Full star icon Full star icon Full star icon Full star icon 5
Amazing book! A good read for beginners covered the basic details in very crisp and clear with good examples. The very good summary allows you to understand the key aspects of the discipline(s) without daunting complexity.I've found this book very comprehensive and it shows the efforts that have been put in by the writer.The content of this book deserves 5 stars. I especially appreciate the author for writing such a great book that will help us to understand and learn the basic concepts of PyTorch and mastering in it.In short , Great book, up to date, engaging and covers a lot of topics clearly.
Amazon Verified review Amazon
shashank sagar jha Feb 22, 2021
Full star icon Full star icon Full star icon Full star icon Full star icon 5
Best book.. Superb clarity.. Best part is the examples provided in this book. Almost all important topics are covered in depth.
Amazon Verified review Amazon
deepak Feb 23, 2021
Full star icon Full star icon Full star icon Full star icon Full star icon 5
I did not work with PyTorch before and although the book states the basic knowledge of PyTorch as a pre-requisite, I was still able to learn from scratch thanks to the easy progression of topics. The book does cover a lot of ground in my opinion ranging from model architectures, to applications such as music generation and to model deployment and other such engineering considerations. The second half of the book helps get a feel of how deep learning works in practice in real-world applications. I especially liked the inclusion of a jupyter notebook based exercise in each and every chapter. Definitely worth a read.
Amazon Verified review Amazon
Shalini Jha Feb 15, 2021
5 stars
This book is a holy grail for beginners who want to learn PyTorch at home. I appreciate the author covering each and every concept of PyTorch in the simplest yet crispest manner. The content is described from scratch, capturing every detail, from the very basics up to an implementable level. The book has a lot of examples to cover and explain the concepts. I would recommend this book to anyone who wants to learn PyTorch at home. It keeps the reader engrossed and involved; kudos to the author for such a pleasant composition.
Amazon Verified review

FAQs

What is the delivery time and cost of print books?

Shipping Details

USA:

Economy: Delivery to most addresses in the US within 10-15 business days.

Premium: Trackable delivery to most addresses in the US within 3-8 business days.

UK:

Economy: Delivery to most addresses in the UK within 7-9 business days.
Shipments are not trackable.

Premium: Trackable delivery to most addresses in the UK within 3-4 business days.
Add one extra business day for deliveries to Northern Ireland and the Scottish Highlands and islands.

EU:

Premium: Trackable delivery to most EU destinations within 4-9 business days.

Australia:

Economy: Can deliver to P. O. Boxes and private residences.
Trackable service with delivery to addresses in Australia only.
Delivery time ranges from 7 to 9 business days for VIC and 8 to 10 business days for interstate metro areas.
Delivery time is up to 15 business days for remote areas of WA, NT & QLD.

Premium: Delivery to addresses in Australia only.
Trackable delivery to most P. O. Boxes and private residences in Australia within 4-5 days of dispatch, depending on the distance to the destination.

India:

Premium: Delivery to most Indian addresses within 5-6 business days.

Rest of the World:

Premium: Countries in the American continent: Trackable delivery to most countries within 4-7 business days.

Asia:

Premium: Delivery to most Asian addresses within 5-9 business days.

Disclaimer:
All orders received before 5 PM UK time start printing on the next business day, so the estimated delivery times also start from the next day. Orders received after 5 PM UK time (in our internal systems) on a business day, or at any time over the weekend, will begin printing on the business day after next. For example, an order placed at 11 AM today will begin printing tomorrow, whereas an order placed at 9 PM tonight will begin printing the day after tomorrow.


Unfortunately, due to several restrictions, we are unable to ship to the following countries:

  1. Afghanistan
  2. American Samoa
  3. Belarus
  4. Brunei Darussalam
  5. Central African Republic
  6. The Democratic Republic of Congo
  7. Eritrea
  8. Guinea-Bissau
  9. Iran
  10. Lebanon
  11. Libyan Arab Jamahiriya
  12. Somalia
  13. Sudan
  14. Russian Federation
  15. Syrian Arab Republic
  16. Ukraine
  17. Venezuela
What is a customs duty/charge?

Customs duties are charges levied on goods when they cross international borders; they are a tax imposed on imported goods. These duties are charged by special authorities and bodies created by local governments and are meant to protect local industries, economies, and businesses.

Do I have to pay customs charges for the print book order?

Orders shipped to the countries listed under EU27 will not bear customs charges; these are paid by Packt as part of the order.

List of EU27 countries: www.gov.uk/eu-eea

For shipments to recipient countries outside the EU27, a customs duty or localized taxes may be applicable. These are charged by the recipient country, must be paid by the customer, and are not included in the shipping charges on the order.

How do I know my customs duty charges?

The amount of duty payable varies greatly depending on the imported goods, the country of origin, and several other factors, such as the total invoice amount, dimensions such as weight, and other criteria applicable in your country.

For example:

  • If you live in Mexico and the declared value of your ordered items is over $50, then in order to receive the package you will have to pay an additional import tax of 19%, which would be $9.50, to the courier service.
  • Whereas if you live in Turkey and the declared value of your ordered items is over €22, then in order to receive the package you will have to pay an additional import tax of 18%, which would be €3.96, to the courier service (a minimal calculation sketch follows this list).
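As a rough illustration only (a minimal sketch, not Packt's or any carrier's actual billing logic), the figures quoted above follow from multiplying the declared value by the import tax rate; the estimate_import_duty helper and the values passed to it are assumptions taken solely from the two examples:

    # Minimal sketch: approximate import duty as declared value x import tax rate.
    # The rates and values below are only the illustrative figures quoted above;
    # actual charges depend on the destination country's customs rules.

    def estimate_import_duty(declared_value: float, tax_rate: float) -> float:
        """Return the estimated customs duty for a declared value and tax rate."""
        return round(declared_value * tax_rate, 2)

    print(estimate_import_duty(50.0, 0.19))  # Mexico example: 9.5 (i.e. $9.50)
    print(estimate_import_duty(22.0, 0.18))  # Turkey example: 3.96 (i.e. EUR 3.96)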
How can I cancel my order?

Cancellation Policy for Published Printed Books:

You can cancel any order within 1 hour of placing it. Simply contact customercare@packt.com with your order details or payment transaction ID. If your order has already started the shipment process, we will do our best to stop it. However, if it is already on its way to you, then once you receive it you can contact us at customercare@packt.com using the returns and refunds process.

Please understand that Packt Publishing cannot provide refunds or cancel any order except for the cases described in our Return Policy (i.e. where Packt Publishing agrees to replace your printed book because it arrived damaged or with a material defect); otherwise, Packt Publishing will not accept returns.

What is your returns and refunds policy?

Return Policy:

We want you to be happy with your purchase from Packtpub.com. We will not hassle you with returning print books to us. If the print book you receive from us is incorrect, damaged, doesn't work, or is unacceptably late, please contact the Customer Relations Team at customercare@packt.com with the order number and issue details, as explained below:

  1. If you ordered an eBook, video, or print book incorrectly or accidentally, please contact the Customer Relations Team at customercare@packt.com within one hour of placing the order and we will replace it or refund you the item cost.
  2. If your eBook or video file is faulty, or a fault occurs while the eBook or video is being made available to you (i.e. during download), contact the Customer Relations Team at customercare@packt.com within 14 days of purchase and they will resolve the issue for you.
  3. You will have a choice of a replacement or a refund for the problem items (damaged, defective, or incorrect).
  4. Once the Customer Care Team confirms that you will be refunded, you should receive the refund within 10 to 12 working days.
  5. If you are requesting a refund for only one book from a multi-item order, we will refund you for the appropriate single item.
  6. Where the items were shipped under a free shipping offer, there will be no shipping costs to refund.

On the off chance your printed book arrives damaged or with a material defect, contact our Customer Relations Team at customercare@packt.com within 14 days of receiving the book, with appropriate evidence of the damage, and we will work with you to secure a replacement copy if necessary. Please note that each printed book you order from us is individually made on a print-on-demand basis by Packt's professional book-printing partner.

What tax is charged?

Currently, no tax is charged on the purchase of any print book (subject to change based on laws and regulations). A localized VAT fee is charged only to our European and UK customers on the eBooks, videos, and subscriptions that they buy. GST is charged to Indian customers for eBook and video purchases.

What payment methods can I use?

You can pay with the following card types:

  1. Visa Debit
  2. Visa Credit
  3. MasterCard
  4. PayPal