Java Deep Learning Essentials: Unlocking the next generation of predictive power


Chapter 1. Deep Learning Overview

Artificial Intelligence (AI) is a term you might see more often these days. AI has become a hot topic not only in academia, but also in the business world. Large tech companies such as Google and Facebook have been actively buying AI-related start-ups, and mergers and acquisitions in the AI space have been especially active, with big money flowing into AI. The Japanese IT and mobile carrier company SoftBank released a robot called Pepper, which recognizes human emotions, in June 2014, and a year later began selling Pepper to general consumers. This is, without a doubt, a good sign for the field of AI.

The idea of AI has been with us for decades. So, why has AI suddenly become such a hot field? One of the factors driving the recent AI movement, and a term almost always used alongside the word AI, is deep learning. After deep learning made a vivid debut and its technological capabilities began to grow exponentially, people started to think that AI would finally become a reality. It sounds like deep learning is definitely something we need to know about. So, what exactly is it?

To answer these questions, in this chapter we'll look at why and how AI has become popular by following its history and fields of study. The topics covered will be:

  • The former approaches and techniques of AI
  • An introduction to machine learning and a look at how it has evolved into deep learning
  • An introduction to deep learning and some recent use cases

If you already know what deep learning is, or if you would like to learn about specific deep learning algorithms and implementation techniques right away, you can skip this chapter and jump directly to Chapter 2, Algorithms for Machine Learning – Preparing for Deep Learning.

Although deep learning is an innovative technique, it is not actually that complicated. It is rather surprisingly simple. Reading through this book, you will see how brilliant it is. I sincerely hope that this book will contribute to your understanding of deep learning and thus to your research and business.

Transition of AI

So, why is deep learning in the spotlight now? You might raise this question, especially if you are already familiar with machine learning, because deep learning is not that different from any other machine learning algorithm (don't worry if you don't know this yet, as we'll go through it later in the book). In fact, we can say that deep learning is an adaptation of neural networks, a machine learning algorithm that mimics the structure of the human brain. However, what deep learning can achieve is much more significant and quite different from any other machine learning algorithm, including neural networks. If you see what processes and research deep learning has gone through, you will have a better understanding of deep learning itself. So, let's go through the transition of AI. You can skim through this part while sipping your coffee.

Definition of AI

All of a sudden, AI has become a hot topic in the world; however, as it turns out, actual AI doesn't exist yet. Of course, research is making progress towards creating actual AI, but it will take more time to achieve it. Like it or not, the human brain, which we might call "intelligence," is structured in an extremely complicated way, and it cannot easily be replicated.

But wait a moment: we see many advertisements for products labeled by AI or using AI everywhere. Are they fraudulent? Not exactly. You might see phrases like recommendation system by AI or products driven by AI, but the word AI used here doesn't carry the strict meaning of AI. Strictly speaking, the word AI is being used in a much broader sense. The research into AI and the AI techniques accumulated in the past have achieved only some parts of AI, but people now use the word AI for those parts too.

Let's look at a few examples. Roughly divided, there are three different categories of things generally recognized as AI:

  • Simple, repetitive machine movements that a human has programmed beforehand. For example, high-speed industrial robots that simply repeat the same fixed set of operations.
  • Searching for or guessing answers to a given assignment by following rules set by a human. For example, the iRobot Roomba can clean a room by following its shape, which it works out by bumping into obstacles.
  • Providing an answer for unknown data by finding measurable regularities in existing data. For example, a product recommendation system based on a user's purchase history, or the distribution of banner ads across ad networks, falls under this category.

People use the word AI for all of these categories and, needless to say, new technology that utilizes deep learning is also called AI. Yet these technologies differ both in structure and in what they can do. So, which of them should we specifically call AI? Unfortunately, people have different opinions on that question and the answer cannot be settled objectively. Academically, the terms strong AI and weak AI have been used to distinguish the levels that a machine can achieve. However, in this book, to avoid confusion, AI is used to mean (not yet achieved) human-like intelligence that is hard to distinguish from an actual human brain. The field of AI is developing drastically, and the possibility of AI becoming a reality is far higher when driven by deep learning. This field is booming now more than ever before in history. How long this boom will continue depends on future research.

AI booms in the past

AI has suddenly become a hot topic recently; however, this is not the first AI boom. Looking back at the past, research into AI has been conducted for decades, with repeated cycles of activity and stagnation. The recent boom is the third one. Therefore, some people actually think that this time, too, it's just another short-lived boom.

However, the latest boom differs from the past booms in one significant way. Yes, that is deep learning. Deep learning has achieved what the past techniques could not. What is that? Simply put, a machine is able to discover the relevant features in the given data by itself, and learn from them. With this achievement, we can see a real possibility of AI becoming a reality, because until now a machine couldn't understand a new concept by itself; with the past techniques created in the AI field, a human needed to define and input the features in advance.

It doesn't look like a huge difference if you just read this fact, but there's a world of difference. There was a long path to travel before reaching the stage where a machine could derive features by itself. People were finally able to take a big step forward when a machine could obtain intelligence driven by deep learning. So, what's the big difference between the past techniques and deep learning? Let's briefly look back at the history of the AI field to get a better sense of the difference.

The first AI boom came in the late 1950s. Back then, mainstream research focused on developing search programs based on fixed rules, which were, needless to say, human-defined. The search was, simply put, a matter of dividing problems into cases. In this kind of search, if we wanted a machine to perform any process, we had to write out every possible pattern it might need for that process. A machine can calculate much faster than a human can; it doesn't matter how enormous the number of patterns is, a machine can handle them. It will keep searching, a million times if needed, and eventually find the best answer. However, even if a machine can calculate at high speed, if it just searches for an answer randomly and blindly it will take a massive amount of time. Yes, don't forget that constraint: time. Therefore, further studies were conducted on how to make the search more efficient. The most popular search methods among these studies were depth-first search (DFS) and breadth-first search (BFS).

The hypothesis was this: out of every possible pattern you can think of, search for the most efficient path and make the best possible choice within a realistic time frame; by doing so, you should get the best answer each time. Based on this hypothesis, two algorithms for searching or traversing tree and graph data structures were developed: DFS and BFS. Both start at the root of a graph or tree; DFS explores as far as possible along each branch before backtracking, whereas BFS explores all the neighboring nodes first before moving on to the next level of neighbors. Here are example diagrams that show the difference between DFS and BFS:

[Diagram: example trees showing the order in which DFS and BFS visit nodes]
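
To make the two traversal orders concrete, here is a minimal, self-contained Java sketch; the small adjacency-list tree is made up purely for illustration (for a general graph with cycles you would also track visited nodes):

```java
import java.util.*;

public class TraversalDemo {
    // A small illustrative tree with node 0 at the root, stored as an adjacency list.
    static Map<Integer, List<Integer>> children = Map.of(
            0, List.of(1, 2),
            1, List.of(3, 4),
            2, List.of(5, 6));

    // DFS: go as deep as possible along each branch before backtracking.
    static void dfs(int node) {
        System.out.print(node + " ");
        for (int next : children.getOrDefault(node, List.of())) {
            dfs(next);
        }
    }

    // BFS: visit every node at the current depth before moving one level deeper.
    static void bfs(int start) {
        Deque<Integer> queue = new ArrayDeque<>(List.of(start));
        while (!queue.isEmpty()) {
            int node = queue.poll();
            System.out.print(node + " ");
            queue.addAll(children.getOrDefault(node, List.of()));
        }
    }

    public static void main(String[] args) {
        dfs(0);            // prints 0 1 3 4 2 5 6
        System.out.println();
        bfs(0);            // prints 0 1 2 3 4 5 6
    }
}
```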

These search algorithms achieved certain results in specific fields, especially in games like chess and shogi. Board games are one of the areas in which a machine excels. If it is given massive amounts of win/lose patterns, past game data, and all the permitted moves of each piece in advance, a machine can evaluate the board position and decide on the best possible next move from a very large number of patterns.

For those of you who are interested in this field, let's look at how a machine plays chess in a little more detail. Say a machine makes the first move as white, and there are 20 possible moves for both white and black at each turn. Remember the tree-like model in the preceding diagram. From the top of the tree at the start of the game, there are 20 branches underneath, representing white's possible first moves. Under each of these 20 branches there are another 20 branches for black's possible replies, and so on. In this case, the tree has 20 x 20 = 400 branches after black's first move, depending on how white moves, 400 x 20 = 8,000 branches after white's second move, 8,000 x 20 = 160,000 branches after black's second move, and so on; feel free to continue the calculation if you like.

A machine generates this tree and evaluates every possible board position in these branches, deciding on the best move within seconds. How deep it goes, that is, how many levels of the tree it generates and evaluates, is limited by the speed of the machine. Of course, each piece's different movements also have to be considered and embedded in the program, so a chess program is not as simple as just described, but we won't go into further detail in this book. As you can see, it's not surprising that a machine can beat a human at chess: a machine can evaluate and calculate massive numbers of patterns in a much shorter time than a human could. It's old news that a machine has beaten a chess champion. Because of stories like this, people expected that AI would soon become a reality.
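
A full chess engine is far beyond this chapter, but the core idea of generating a game tree and evaluating every position can be sketched on a toy game. The following Java example runs an exhaustive minimax-style search over a deliberately tiny game (players alternately take 1 or 2 stones, and whoever takes the last stone wins); the game is chosen only to keep the sketch short, and real programs limit the search depth because the tree explodes exactly as the 20-way branching above suggests:

```java
public class MiniGameTree {
    // Toy game: players alternately remove 1 or 2 stones; taking the last stone wins.
    // Returns the best achievable outcome for the player to move: +1 = win, -1 = loss.
    static int minimax(int stonesLeft) {
        if (stonesLeft == 0) return -1;                 // previous player took the last stone: we lose
        int best = -1;
        for (int take = 1; take <= Math.min(2, stonesLeft); take++) {
            // The opponent's best result is our worst, so negate the child's value.
            best = Math.max(best, -minimax(stonesLeft - take));
        }
        return best;
    }

    public static void main(String[] args) {
        for (int n = 1; n <= 10; n++) {
            String outcome = minimax(n) > 0 ? "win" : "lose";
            System.out.println(n + " stones: the player to move should " + outcome);
        }
    }
}
```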

Unfortunately, reality is not that easy. We soon found a big wall in front of us, preventing us from applying this search approach to the real world. Reality is, as you know, complicated. A machine is good at processing things at high speed based on a given set of rules, but it cannot work out by itself how to act and which rules to apply when it is only given a task. Whenever humans act, they unconsciously evaluate and discard the many things (patterns) that are not relevant to them, and make a choice from among millions of possibilities in the real world. A machine cannot make these unconscious judgments the way humans can. If we try to create a machine that can appropriately handle a phenomenon in the real world, we can imagine two possibilities:

  • A machine tries to accomplish its task or purpose without taking into account secondarily occurring incidents and possibilities
  • A machine tries to accomplish its task or purpose without taking into account irrelevant incidents and possibilities

Both of these machines would still freeze, lost in processing, before accomplishing their purpose when given a task by a human; in particular, the latter machine would freeze before even taking its first action. This is because such elements are almost infinite, and a machine can't sort through them within a realistic time if it tries to consider all of these infinite patterns. This issue is recognized as one of the major challenges in the AI field and is called the frame problem.

A machine can achieve great success in the field of chess or shogi because the search space, the space within which the machine has to operate, is limited (set within a certain frame) in advance. In the real world, by contrast, you can't write out every possible pattern, so you can't define what the best solution is; and even if you force yourself to limit the number of patterns or to define an optimal solution, the enormous amount of calculation required means you can't get a result within a practical time frame. After all, the research at that time only made a machine follow detailed rules set by a human. As such, although this search method could succeed in specific areas, it was far from achieving actual AI. Therefore, the first AI boom cooled down rapidly amid disappointment.

The first AI boom was swept away; however, research into AI quietly continued. The second AI boom came in the 1980s. This time, the movement of so-called Knowledge Representation (KR) was booming. KR aims to describe knowledge in a form that a machine can easily understand. If all the knowledge in the world were integrated into a machine and the machine could understand this knowledge, it should be able to provide the right answer even when given a complex task. Based on this assumption, various methods were developed for designing knowledge so that a machine could understand it better. For example, the semantic web, which adds structured markup to web pages, is one approach that was designed to make information easier for a machine to understand. An example of how the semantic web is described with KR is shown here:

[Diagram: an example of semantic web markup described with knowledge representation]

Making a machine gain knowledge is not about a human one-sidedly ordering the machine what to do, but rather about the machine being able to respond to what humans ask and give an answer. A simple example of how this is applied in the real world is positive-negative analysis, one of the topics of sentiment analysis. If you input data that defines a positive or negative tone for every word in a sentence (called a dictionary) into a machine beforehand, the machine can compare a sentence against the dictionary to determine whether the sentence is positive or negative.

This technique is used for the positive-negative analysis of posts or comments on social networks or blogs. If you ask a machine, "Is the reaction to this blog post positive or negative?", it analyzes the comments based on its knowledge (the dictionary) and replies. Compared with the first AI boom, where a machine only followed rules that humans set, the second AI boom showed some progress.
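
As a rough illustration of this dictionary-based positive-negative analysis, here is a minimal Java sketch; the dictionary words and their scores are invented for the example, and a real system would use a far larger, curated dictionary:

```java
import java.util.Map;

public class PositiveNegativeDemo {
    // A tiny hand-made "dictionary": each word is assigned a positive or negative tone.
    static Map<String, Integer> dictionary = Map.of(
            "great", 1, "helpful", 1, "love", 1,
            "boring", -1, "confusing", -1, "waste", -1);

    // Sum the tone of every known word; the sign of the total decides the label.
    static String classify(String comment) {
        int score = 0;
        for (String word : comment.toLowerCase().split("\\W+")) {
            score += dictionary.getOrDefault(word, 0);
        }
        return score > 0 ? "positive" : score < 0 ? "negative" : "neutral";
    }

    public static void main(String[] args) {
        System.out.println(classify("Great post, very helpful!"));       // positive
        System.out.println(classify("Boring and confusing, a waste."));  // negative
    }
}
```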

The idea was that by integrating knowledge into a machine, the machine would become all-knowing. This idea itself is not a bad route towards AI; however, there were two high walls in the way of achieving it. First, as you may have noticed, inputting all real-world knowledge requires an almost infinite amount of work. Nowadays the Internet is widely used and we can obtain enormous amounts of open data from the Web, but back then it wasn't realistic to collect millions of pieces of data, analyze them, and input that knowledge into a machine. In fact, this work of building a database of all the world's knowledge has continued and is known as Cyc (http://www.cyc.com/). Cyc's ultimate purpose is to build an inference engine on top of this database of knowledge, called a knowledge base. Here is an example of KR from the Cyc project:

[Diagram: an example of knowledge representation from the Cyc project]

Second, a machine doesn't understand the actual meaning of the knowledge. Even if the knowledge is structured and systematized, a machine handles it only as symbols and never grasps the underlying concepts. After all, the knowledge is input by a human, and what the machine does is just compare data and assume meaning based on the dictionary. For example, if you know the concepts "apple" and "green" and are taught "green apple = apple + green," you immediately understand that a green apple is a green-colored apple, whereas a machine can't. This is called the symbol grounding problem and, alongside the frame problem, is considered one of the biggest problems in the AI field.

The idea was not bad, and it did advance AI; however, this approach alone cannot achieve AI in reality, because it does not create genuine understanding. Thus, the second AI boom quietly cooled down and, as expectations of AI faded, the number of people talking about AI decreased. When asked "Will we really ever achieve AI?", more and more people answered "no."

Machine learning evolves

While people struggled to establish a method for achieving AI, a completely different approach was steadily maturing into a general-purpose technology. That approach is called machine learning. You have probably heard the name if you have even dabbled in data mining. Machine learning is a powerful tool compared to past AI approaches, which simply searched or made assumptions based on knowledge given by a human, as mentioned earlier in this chapter. Until machine learning, a machine could only search for an answer within the data that had already been input; the focus was on how quickly a machine could pull knowledge related to a question out of its stored knowledge. Hence, a machine could quickly reply to a question it already knew, but got stuck when it faced a question it didn't know.

In machine learning, on the other hand, the machine is literally learning. A machine can cope with unknown questions based on the knowledge it has learned. So how was a machine able to learn, you ask? What exactly does learning mean here? Simply put, learning is when a machine can divide a problem into "yes" or "no." We'll go through this in more detail in the next chapter, but for now we can say that machine learning is a method of pattern recognition.

We could say that, ultimately, every question in the world can be replaced by questions that can be answered with yes or no. For example, the question "What color do you like?" can be treated as almost the same as asking "Do you like red? Do you like green? Do you like blue? Do you like yellow?..." In machine learning, using its ability to calculate and to process data at high speed, a machine works through a substantial amount of training data, replaces complex questions with yes/no questions, and finds the regularities that determine which data is yes and which is no (in other words, it learns). Then, with that learning, the machine predicts whether newly given data is yes or no and provides an answer. To sum up, machine learning can give an answer by recognizing and sorting out patterns in the data provided, and then classifying new, unknown data into the most appropriate pattern (that is, predicting).
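
The following tiny Java snippet only illustrates the idea of rephrasing a multi-way question as a series of yes/no questions; the colors and the "favorite" answer are hypothetical:

```java
import java.util.List;

public class YesNoDecomposition {
    public static void main(String[] args) {
        // "What color do you like?" rephrased as a series of yes/no questions.
        List<String> colors = List.of("red", "green", "blue", "yellow");
        String favorite = "blue";                        // hypothetical answer we want to recover
        for (String color : colors) {
            boolean answer = color.equals(favorite);     // each question is answerable with yes or no
            System.out.println("Do you like " + color + "? " + (answer ? "yes" : "no"));
        }
    }
}
```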

In fact, this approach is not doing something especially difficult. Humans also unconsciously classify data into patterns. For example, if you meet a man/woman who's perfectly your type at a party, you might be desperate to know whether the man/woman in front of you has similar feelings towards you. In your head, you would compare his/her way of talking, looks, expressions, or gestures to past experience (that is, data) and assume whether you will go on a date! This is the same as a presumption based on pattern recognition.

Machine learning is a method for performing this pattern recognition mechanically, by a machine rather than by a human. So how can a machine recognize patterns and classify them? The standard for classification in machine learning is prediction based on a mathematical formulation known as a probabilistic statistical model. This approach has been studied through various mathematical models.

Learning, in other words, is tuning the parameters of a model; once the learning is done, you have a model with adjusted parameters. The machine then categorizes unknown data into the most likely pattern (that is, the pattern that fits best). Categorizing data mathematically has great merit: while it is almost impossible for a human to process multi-dimensional or multi-patterned data, machine learning can perform the categorization with essentially the same numerical formulas; the machine just needs to handle vectors or matrices with more dimensions. (Internally, when it classifies in many dimensions, the boundary is not a line or a curve but a hyperplane.)
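
Here is a minimal sketch of what such a learned model looks like in code: the parameters define a hyperplane, and classification is just checking which side of it a data point falls on, whatever the number of dimensions. The parameter values below are hypothetical stand-ins for what learning would produce:

```java
public class HyperplaneDemo {
    // A learned model is just a weight vector w and a bias b: the hyperplane w.x + b = 0.
    // The same formula works whether x has 3 dimensions or 100.
    static boolean classify(double[] w, double b, double[] x) {
        double sum = b;
        for (int i = 0; i < w.length; i++) {
            sum += w[i] * x[i];
        }
        return sum > 0;    // "yes" on one side of the hyperplane, "no" on the other
    }

    public static void main(String[] args) {
        // Hypothetical parameters obtained after tuning on training data.
        double[] w = {0.8, -0.5, 0.3};
        double b = -0.1;
        System.out.println(classify(w, b, new double[]{1.0, 0.2, 0.5}));  // true  -> "yes"
        System.out.println(classify(w, b, new double[]{0.1, 1.5, 0.0}));  // false -> "no"
    }
}
```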

Until this approach was developed, machines were helpless when responding to unknown data without a human's help, but with machine learning, machines became capable of responding to data that humans themselves cannot process. Researchers were excited by the possibilities of machine learning and jumped at the opportunity to improve the method. The concept of machine learning itself has a long history, but for a long time researchers couldn't do much research or prove its usefulness due to the lack of available data. Recently, however, a great deal of open data has become available online, and researchers can easily experiment with their algorithms using it. This is how the third AI boom came about. The environment surrounding machine learning also gave its progress a boost: machine learning needs a massive amount of data before it can recognize patterns correctly, as well as the capability to process that data. The more data and the more types of patterns it handles, the greater the amount of data and the number of calculations required. Hence, obviously, past technology couldn't have coped with machine learning.

However, times have moved on, and the processing capability of machines has improved enormously. In addition, the Web has developed, the Internet has spread across the world, and open data has increased. With this development, anyone who can pull data from the Web can try data mining, and the environment is in place for anyone to casually study machine learning. The Web is a treasure trove of text data. By making good use of this text data in machine learning, we have seen great progress, especially in statistical natural language processing. Machine learning has also made outstanding achievements in image recognition and voice recognition, and researchers have been working to find the methods with the best precision.

Machine learning is also utilized in various parts of the business world. In natural language processing, the predictive conversion in an input method editor (IME) may come to mind. Image recognition, voice recognition, image search, and voice search in search engines are good examples as well. Of course, it's not limited to these fields. Machine learning is applied to a wide range of areas, from marketing, such as predicting sales of specific products, optimizing advertisements, or designing store shelves and floor space based on predicted human behavior, to predicting movements of the financial market. It can be said that the most-used method of data mining in the business world today is machine learning. Yes, machine learning is that powerful. At present, when you hear the word "AI," it usually just refers to a process carried out by machine learning.

What even machine learning cannot do

A machine learns by gathering data and predicting an answer. Indeed, machine learning is very useful. Thanks to machine learning, questions that are difficult for a human to solve within a realistic time frame (such as categorization using a 100-dimensional hyperplane!) are easy for a machine. Recently, "big data" has become a buzzword and, by the way, analyzing this big data is mainly done with machine learning too.

Unfortunately, however, even machine learning cannot create AI. From the perspective of "can it actually achieve AI?", machine learning has a major weak point. There is one big difference between the learning process of a machine and that of a human. You might have noticed the difference already, but let's take a look. Machine learning is a technique of pattern classification and prediction based on input data. If so, what exactly is that input data? Can it use just any data? Of course it can't. It obviously can't predict correctly from irrelevant data. For a machine to learn correctly, it needs appropriate data, and this is where a problem occurs: a machine is not able to sort out by itself which data is appropriate and which is not. Machine learning can find a pattern only if it has the right data, and no matter how easy or difficult a question is, it's humans who need to find that right data.

Let's think about this question: "Is the object in front of you a human or a cat?" For a human, the answer is all too obvious; it's not difficult at all to distinguish them. Now, let's do the same thing with machine learning. First, we need to prepare data in a format the machine can read; in other words, we need to prepare image data of humans and cats respectively. That isn't anything special. The problem is the next step. You might want to simply feed in the image data as it is, but this doesn't work. As mentioned earlier, a machine can't work out what to learn from the data by itself. The things a machine should learn need to be extracted from the original image data and created by a human. In this example, we might need input data that captures the differences, such as face color, the positions of facial parts, and the facial outlines of a human and a cat. These values, which humans have to find and provide as inputs, are called features.

Machine learning can't do feature engineering, and this is its weakest point. Features are, in effect, the variables in a machine learning model. Because these values describe the object quantitatively, a machine can handle pattern recognition appropriately. In other words, how you define the features makes a huge difference to the precision of prediction. Potentially, there are two types of limitation in machine learning:

  • An algorithm only works well on data that follows the same distribution as the training data; with data that has a different distribution, in many cases the learned model does not generalize well.
  • Even a well-trained model lacks the ability to make smart meta-decisions. Therefore, in most cases, machine learning is only very successful within a narrow domain.

Let's look at a simple example so that you can see how features have a big influence on the prediction precision of a model. Imagine there is a company that wants to promote an asset management package whose contents depend on the amount of a customer's assets. The company would like to recommend an appropriate product, but since it can't ask such a personal question directly, it needs to predict how many assets a customer might have and prepare accordingly. In this case, which attributes of potential customers should we use as features? We could consider many factors, such as height, weight, age, and address, but clearly age or place of residence seem more relevant than height or weight. You probably won't get a good result if you run machine learning based on height or weight: the prediction is based on irrelevant data, so it's little better than a random guess.
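
Here is a small, hypothetical Java sketch of what this choice looks like in practice: the human decides which attributes of a customer record become features, and the model only ever sees the resulting vector. All fields, values, and the encoding below are invented for illustration:

```java
public class FeatureChoiceDemo {
    // A hypothetical customer record; every value here is made up.
    static class Customer {
        double heightCm = 172, weightKg = 68;
        int age = 47;
        String district = "suburban";
    }

    // Feature engineering is a human decision: here we pick age and residence as features,
    // and deliberately leave out height and weight, which are unlikely to relate to assets.
    static double[] toFeatures(Customer c) {
        double districtScore = c.district.equals("suburban") ? 1.0 : 0.0;  // crude hand-made encoding
        return new double[]{c.age, districtScore};
    }

    public static void main(String[] args) {
        double[] x = toFeatures(new Customer());
        System.out.println(java.util.Arrays.toString(x));   // the vector a model would actually see
    }
}
```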

As such, machine learning can provide an appropriate answer to a question only after it is given appropriate features. Unfortunately, the machine can't judge for itself what the appropriate features are, and the precision of machine learning depends on this feature engineering!

Machine learning has various methods, but the problem of being unable to do feature engineering cuts across all of them. Various methods have been developed and people compete over their precision rates, but once precision has reached a certain level, whether a machine learning method is good or bad comes down to how good a set of features one can find. This is no longer a difference between algorithms, but rather a matter of human intuition or taste, or of fine-tuning parameters, and it can hardly be called innovative. Many methods have been developed but, in the end, the hardest part, thinking up the best features, still has to be done by a human.

Things dividing a machine and human

We have gone through three problems: the frame problem, the symbol grounding problem, and feature engineering. None of these problems trouble humans at all. So why can't a machine handle them? Let's review the three problems again. If you think about them carefully, you will find that all three come down to the same issue in the end:

  • The frame problem: a machine can't recognize which knowledge it should use when it is assigned a task
  • The symbol grounding problem: a machine can't understand the concept behind knowledge because it only recognizes knowledge as symbols
  • The feature engineering problem in machine learning: a machine can't work out by itself what the features of objects are

These problems can be solved only if a machine can sort out for itself which features of things and phenomena it should focus on and what information it should use. This, after all, is the biggest difference between a machine and a human. Every object in this world has its own inherent features. Humans are good at catching these features. Is this by experience or by instinct? Either way, humans know features and, based on these features, humans can understand a thing as a "concept."

Now, let's briefly explain what a concept is. First of all, as a premise, keep in mind that every single thing in this world is constituted by a pair: a symbol representation and the symbol's content. For example, if you don't know the word "cat" and you see a cat while walking down the street, does that mean you can't recognize the cat? No, of course not. You know it exists, and if you see another cat shortly afterwards, you will recognize it as "a similar thing to what I saw earlier." Later, someone tells you "that is called a cat," or you look it up yourself, and for the first time you connect the thing and the word.

The word cat is the symbol representation, and the concept you recognize as a cat is the symbol content. You can see that these are two sides of the same coin. (Interestingly, there is no necessary link between the two sides: there is no inherent reason to write cat as C-A-T or to pronounce it that way. Even so, within our system of understanding, the link feels inevitable; if people hear "cat," we all imagine the same thing.) The concept is, in other words, the symbol content. These two sides have technical terms: the former is called the signifiant and the latter the signifié, and the pair of the two is called a signe. (These are French terms; in English they are signifier, signified, and sign, respectively.) We could say that what divides a machine from a human is whether it can obtain the signifié by itself or not.

What would happen if a machine could find the notable features in given data by itself? As for the frame problem, if a machine could extract the notable features from the given data and perform knowledge representation on its own, it would no longer freeze while working out which knowledge to pick up. As for the symbol grounding problem, if a machine could find the features by itself and understand concepts from those features, it could understand the symbols it is given.

Needless to say, the feature engineering problem in machine learning would also be solved. If a machine can obtain appropriate knowledge by itself according to a situation or purpose, rather than using knowledge prepared for a fixed situation, we can solve the various problems we have faced in achieving AI. Now, a method by which a machine can find the important feature values in given data is close to being accomplished. Yes, finally: this is deep learning. In the next section, I'll explain deep learning, which is considered the biggest breakthrough in the more than 50 years of AI history.

AI and deep learning

Machine learning, the spark of the third AI boom, is very useful and powerful as a data mining method; however, even with machine learning it appeared that the path towards achieving AI was closed. Finding features is a human's job, and that is a big wall preventing machine learning from reaching AI. It looked as though the third AI boom would come to an end as well. Surprisingly enough, however, the boom never ended; on the contrary, a new wave has risen. What triggered this wave is deep learning.

With the advent of deep learning, at least in the fields of image recognition and voice recognition, a machine became able to work out by itself what should be treated as a feature from the input data, rather than having a human decide. A machine that could previously handle a symbol only as a notation has become able to obtain concepts.

[Diagram: correspondence between the AI booms up to now and the research fields of AI]

Deep learning actually first appeared quite a while ago, back in 2006. Professor Hinton at the University of Toronto in Canada, and others, published a paper (https://www.cs.toronto.edu/~hinton/absps/fastnc.pdf). In this paper, a method called deep belief nets (DBN) was presented, which is an extension of neural networks, a method of machine learning. DBN was tested using the MNIST database, the standard database for comparing the precision and accuracy of image recognition methods. This database contains 70,000 28 x 28 pixel images of handwritten digits from 0 to 9 (60,000 for training and 10,000 for testing).

They then constructed a prediction model from the training data and measured its accuracy by whether the machine could correctly answer which digit from 0 to 9 was written in each test case. Although this paper presented results with considerably higher precision than conventional methods, it didn't attract much attention at the time, perhaps because it was compared with other general methods of machine learning.
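
Measuring accuracy on a test set like MNIST's is conceptually simple: compare the model's predicted digit with the true label for each test image and take the fraction that match. A minimal sketch with made-up predicted and true labels:

```java
public class AccuracyDemo {
    // Hypothetical predicted and true digit labels for a handful of test images.
    static int[] predicted = {7, 2, 1, 0, 4, 1, 4, 9, 6, 9};
    static int[] actual    = {7, 2, 1, 0, 4, 1, 4, 9, 5, 9};

    public static void main(String[] args) {
        int correct = 0;
        for (int i = 0; i < actual.length; i++) {
            if (predicted[i] == actual[i]) correct++;
        }
        // Accuracy = correct predictions / total test cases (here 9/10 = 0.9).
        System.out.println("accuracy = " + (double) correct / actual.length);
    }
}
```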

Then, a while later in 2012, the whole AI research world was shocked by one method. At the world competition for image recognition, the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), a method using deep learning called SuperVision (strictly speaking, that's the name of the team), developed by Professor Hinton and others from the University of Toronto, won the competition. It far surpassed the other competitors with formidable precision. In this competition, the task was for a machine to automatically determine whether an image showed a cat, a dog, a bird, a car, a boat, and so on. 10 million images were provided as learning data and 150,000 images were used for the test. In this test, each method competes to return the lowest error rate (that is, the highest accuracy).

Let's look at the following table that shows the result of the competition:

Rank | Team name   | Error
1    | SuperVision | 0.15315
2    | SuperVision | 0.16422
3    | ISI         | 0.26172
4    | ISI         | 0.26602
5    | ISI         | 0.26646
6    | ISI         | 0.26952
7    | OXFORD_VGG  | 0.26979
8    | XRCE/INRIA  | 0.27058

You can see that the difference in error rate between SuperVision and the next team, ISI, is more than 10 percentage points, whereas the teams below that are separated from each other by less than 1 percentage point. Now you can see how greatly SuperVision outshone the others in precision. Moreover, surprisingly, this was the first time SuperVision had entered ILSVRC; in other words, image recognition was not their usual field. Until SuperVision (deep learning) appeared, the standard approach in the field of image recognition was machine learning and, as mentioned earlier, the features needed for machine learning had to be designed by humans. Researchers iterated on feature designs based on human intuition and experience, fine-tuning parameters over and over, which in the end might improve precision by just 0.1%. The main question in the research and in the competition before deep learning arrived was who could come up with the best feature engineering. So researchers must have been truly surprised when deep learning suddenly showed up out of the blue.

There was one other major event that spread deep learning across the world. It happened in 2012, the same year the world was shocked by SuperVision at ILSVRC, when Google announced that, using the deep learning algorithm it had proposed, a machine could automatically detect cats from YouTube videos used as learning data. The details of this algorithm are explained at http://googleblog.blogspot.com/2012/06/using-large-scale-brain-simulations-for.html. This algorithm extracted 10 million images from YouTube videos and used them as input data. Now, remember: in machine learning, a human has to extract feature values from images and prepare the data. In deep learning, on the other hand, the original images can be used as inputs as they are. This shows that the machine itself comes to find features automatically from the training data. In this research, the machine learned the concept of a cat. (Only the cat story is famous, but the research was also done with human images, and it worked: the machine learned what a human looks like!) The following image from the research illustrates what deep learning considers the characteristics of a cat, after being trained on still frames from unlabeled YouTube videos:

[Image: the network's learned representation of a cat, from Google's research]

These two big events showed the world the power of deep learning and triggered the boom that is still accelerating today.

Following the development of the method that can recognize a cat, Google conducted another experiment in which a machine draws a picture by utilizing deep learning. This method is called Inceptionism (http://googleresearch.blogspot.ch/2015/06/inceptionism-going-deeper-into-neural.html). As described in the article, in this method the network is asked:

"Whatever you see there, I want more of it!". This creates a feedback loop: if a cloud looks a little bit like a bird, the network will make it look more like a bird. This in turn will make the network recognize the bird even more strongly on the next pass and so forth, until a highly detailed bird appears, seemingly out of nowhere.

While neural networks in machine learning are usually used to detect patterns in order to identify an image, Inceptionism does the opposite. As you can see from the following examples of Inceptionism, the resulting images look odd, like something out of a nightmare:

[Image: example images generated by Inceptionism]

Or rather, they might look artistic. The tool that enables anyone to try Inceptionism is publicly available on GitHub under the name Deep Dream (https://github.com/google/deepdream). Example implementations are available on that page; you can try them if you can write Python code.

Well, nothing seems to stop deep learning from gaining momentum, but questions remain: what exactly is innovative about deep learning? What special mechanism dramatically increased its precision? Surprisingly, in terms of the algorithm there actually isn't a huge difference. As mentioned briefly, deep learning is an application of neural networks, a machine learning algorithm that imitates the structure of the human brain; nevertheless, a couple of techniques adopted within it changed everything. The representative ones are pretraining and dropout (together with activation functions). These are also keywords for the implementation chapters, so please remember them.

To begin with, what does the deep in deep learning refer to? As you probably know, the human brain is a circuit structure, and that structure is really complicated: it is made up of intricate circuits piled up in many layers. On the other hand, when the neural network algorithm first appeared, its structure was quite simple. It was a simplified model of the human brain, and the network had only a few layers; hence, the patterns it could recognize were extremely limited. So everyone wondered, "Can we just stack up networks like the human brain and make the model more complex?" Of course, this approach had been tried. Unfortunately, the result was that precision was actually lower when networks were simply piled up than when they were kept shallow. Indeed, stacked networks faced various issues that didn't occur with a simple network. Why was this? Well, in a human brain, a signal runs through different parts of the circuit depending on what you perceive, and you can distinguish various things based on the patterns of which parts of the circuit are stimulated.

To reproduce this mechanism, the neural network algorithm represents the strength of the network's connections with numerical weights. This is a great way to do it, but a problem soon arises. If a network is simple, weights can be properly allocated from the training data and the network can recognize and classify patterns well. However, once a network gets complicated, the connections become too dense and it becomes difficult to create meaningful differences among the weights; in short, the network cannot divide data into patterns properly. Also, a neural network builds a proper model through a mechanism that feeds the errors occurring during training back through the whole network. Again, if the network is simple the feedback can be propagated properly, but if the network has many layers, a problem occurs in which the error fades away before it reaches the whole network; just imagine that error being stretched out and diluted.
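
A rough numerical illustration of this diluted error signal: if the error fed back from the output is repeatedly multiplied by the derivative of a sigmoid activation (at most 0.25) and by a typical weight (0.5 is used here purely as a stand-in), it shrinks towards zero after only a handful of layers. This is a simplified caricature of the vanishing error problem, not a full backpropagation implementation:

```java
public class VanishingErrorDemo {
    // Derivative of the sigmoid activation; its maximum value is only 0.25 (at x = 0).
    static double sigmoidDerivative(double x) {
        double s = 1.0 / (1.0 + Math.exp(-x));
        return s * (1 - s);
    }

    public static void main(String[] args) {
        double errorSignal = 1.0;       // error fed back from the output layer
        for (int layer = 1; layer <= 10; layer++) {
            // Each layer the error passes through multiplies it by another small factor.
            errorSignal *= sigmoidDerivative(0.0) * 0.5;   // 0.5 stands in for a typical weight
            System.out.printf("after layer %2d: %.10f%n", layer, errorSignal);
        }
        // After ten layers the signal is about (0.125)^10, far too small to adjust the early layers.
    }
}
```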

The hope that things would go well if the network was simply built with a more complicated structure ended in disappointing failure. The concept of the algorithm itself was splendid, but it couldn't be called a good algorithm by any standard; that was the world's assessment. Deep learning succeeded in making a network multi-layered, that is, making a network "deep," and the key to its success was to make each layer learn in stages. The previous approach treated the whole multi-layered network as one gigantic neural network and trained it as one, which caused the problems mentioned earlier.

Hence, deep learning took the approach of making each layer learn in advance. This is literally known as pretraining. In pretraining, learning starts from the lower layers, in order. The data learned in a lower layer is then treated as input data for the next layer. In this way, machines become able to learn step by step, capturing low-level features in the lower layers and gradually learning higher-level features. For example, when learning what a cat is, the first layer captures outlines, the next layer the shapes of eyes and noses, the next layer whole faces, the next layer finer facial details, and so on. Similarly, we could say that humans take the same learning steps, as they grasp the whole picture first and see the detailed features later. Because each layer learns in stages, the feedback of learning errors can also be handled properly within each layer, and this leads to an improvement in precision. There are also specific techniques for how each layer learns, but these will be introduced later on.
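
The following is a minimal, self-contained sketch of this greedy layer-wise idea, using a plain autoencoder for each layer and random made-up data. The actual pretraining algorithms used in deep learning (restricted Boltzmann machines, denoising autoencoders) are covered later in this book, so treat this only as an outline of the "train one layer, feed its output to the next" loop:

```java
import java.util.Random;

public class GreedyPretrainingSketch {
    // One layer pretrained as a tiny autoencoder: encode with W/b, decode with V/c,
    // and adjust all parameters by gradient descent to reconstruct the input.
    static class AutoencoderLayer {
        final int in, hidden;
        final double[][] W, V;          // W: hidden x in (encode), V: in x hidden (decode)
        final double[] b, c;            // hidden bias, reconstruction bias
        final Random rnd = new Random(0);

        AutoencoderLayer(int in, int hidden) {
            this.in = in; this.hidden = hidden;
            W = new double[hidden][in]; V = new double[in][hidden];
            b = new double[hidden]; c = new double[in];
            for (double[] row : W) for (int j = 0; j < in; j++) row[j] = rnd.nextGaussian() * 0.1;
            for (double[] row : V) for (int j = 0; j < hidden; j++) row[j] = rnd.nextGaussian() * 0.1;
        }

        static double sigmoid(double x) { return 1.0 / (1.0 + Math.exp(-x)); }

        double[] encode(double[] x) {
            double[] h = new double[hidden];
            for (int i = 0; i < hidden; i++) {
                double s = b[i];
                for (int j = 0; j < in; j++) s += W[i][j] * x[j];
                h[i] = sigmoid(s);
            }
            return h;
        }

        double[] decode(double[] h) {
            double[] y = new double[in];
            for (int j = 0; j < in; j++) {
                double s = c[j];
                for (int i = 0; i < hidden; i++) s += V[j][i] * h[i];
                y[j] = sigmoid(s);
            }
            return y;
        }

        // One gradient-descent step on the squared reconstruction error for a single example.
        void pretrainStep(double[] x, double lr) {
            double[] h = encode(x), y = decode(h);
            double[] dy = new double[in], dh = new double[hidden];
            for (int j = 0; j < in; j++) dy[j] = (y[j] - x[j]) * y[j] * (1 - y[j]);
            for (int i = 0; i < hidden; i++) {
                double s = 0;
                for (int j = 0; j < in; j++) s += V[j][i] * dy[j];
                dh[i] = s * h[i] * (1 - h[i]);
            }
            for (int j = 0; j < in; j++) {
                for (int i = 0; i < hidden; i++) V[j][i] -= lr * dy[j] * h[i];
                c[j] -= lr * dy[j];
            }
            for (int i = 0; i < hidden; i++) {
                for (int j = 0; j < in; j++) W[i][j] -= lr * dh[i] * x[j];
                b[i] -= lr * dh[i];
            }
        }
    }

    public static void main(String[] args) {
        Random rnd = new Random(1);
        double[][] data = new double[50][8];                // made-up training data
        for (double[] row : data) for (int j = 0; j < 8; j++) row[j] = rnd.nextDouble();

        int[] layerSizes = {8, 6, 4};                       // input -> first hidden -> second hidden
        double[][] current = data;
        for (int l = 0; l + 1 < layerSizes.length; l++) {
            AutoencoderLayer layer = new AutoencoderLayer(layerSizes[l], layerSizes[l + 1]);
            for (int epoch = 0; epoch < 200; epoch++) {     // pretrain this layer in isolation
                for (double[] x : current) layer.pretrainStep(x, 0.1);
            }
            double[][] next = new double[current.length][];
            for (int n = 0; n < current.length; n++) next[n] = layer.encode(current[n]);
            current = next;                                 // the layer's output feeds the next layer
            System.out.println("pretrained layer " + (l + 1));
        }
        // 'current' now holds the higher-level representation; supervised fine-tuning would follow.
    }
}
```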

We also mentioned that the network became too dense. The method that mitigates this density problem is called dropout. Networks with dropout learn while randomly cutting some of the connections between units in the network. Dropout physically makes the network sparse. Which connections are cut is random, so a different network is formed at each learning step. At first glance you might doubt that this would work, but it contributes greatly to improving precision and, as a result, increases the robustness of the network. The circuits of the human brain also have parts that react, or do not react, depending on what is being perceived, and dropout seems to imitate this mechanism well. By embedding dropout in the algorithm, the adjustment of the network weights works well.
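
Here is a minimal sketch of how dropout can be applied to a layer's activations, using the common "inverted dropout" formulation: drop each unit with probability p during training and rescale the survivors so nothing special is needed at test time. The activation values are made up for illustration:

```java
import java.util.Random;

public class DropoutSketch {
    // Inverted dropout: during training, randomly zero each hidden unit with probability p
    // and scale the survivors so the expected activation stays the same; at test time, do nothing.
    static double[] dropout(double[] activations, double p, Random rnd, boolean training) {
        if (!training) return activations.clone();
        double[] out = new double[activations.length];
        for (int i = 0; i < activations.length; i++) {
            out[i] = rnd.nextDouble() < p ? 0.0 : activations[i] / (1.0 - p);
        }
        return out;
    }

    public static void main(String[] args) {
        double[] hidden = {0.7, 0.2, 0.9, 0.4, 0.6};        // hypothetical hidden-layer activations
        System.out.println(java.util.Arrays.toString(dropout(hidden, 0.5, new Random(42), true)));
        // A different random subset of units is dropped at every training step,
        // so effectively a different (sparser) network is trained each time.
    }
}
```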

Deep learning has seen great success in various fields; however, of course, deep learning has a drawback too. As the name "deep learning" suggests, the learning in this method is very deep, which means the steps to complete the learning take a long time, and the amount of calculation involved tends to be enormous. In fact, the previously mentioned cat recognition experiment by Google took three days to run on 1,000 computers. To put it another way, although the idea of deep learning itself could have been conceived with past techniques, it couldn't have been implemented: the method could not have appeared until machines with large-scale processing capacity and massive amounts of data became easily available.

As we keep saying, deep learning is just the first step towards a machine obtaining human-like knowledge. Nobody knows what kind of innovation will happen in the future, but we can predict to some extent how much a computer's performance will improve. Moore's law is used to make this prediction. The performance of the integrated circuits that drive a computer's progress is indicated by the number of transistors they contain, and according to Moore's law this number doubles roughly every one and a half years. In fact, the number of transistors in computer CPUs has been increasing in line with Moore's law. Compared to the world's first microprocessor, the Intel® 4004 processor, which had on the order of 10^3 (a thousand) transistors, the recent 2015 version, the 5th Generation Intel® Core™ Processor, has on the order of 10^9 (a billion)! If this pace of improvement continues, the number of transistors will exceed ten billion, which is more than the number of cells in the human cerebrum.

Based on Moore's law, it is said that further in the future, in 2045, we will reach a critical point called the Technical Singularity, beyond which technological change will be so rapid that humans will no longer be able to forecast it. By that time, a machine is expected to be able to produce recursively self-improving intelligence. In other words, in about 30 years, AI will be ready. What will the world be like then…

[Figure: the history of Moore's law. The number of transistors in Intel processors has increased smoothly in line with Moore's law.]

The world-famous professor Stephen Hawking said in an interview with the BBC (http://www.bbc.com/news/technology-30290540):

"The development of full artificial intelligence could spell the end of the human race."

Will deep learning turn out to be a kind of black magic? Indeed, the progress of technology has sometimes caused tragedy. Achieving AI is still far in the future, yet we should be careful and thoughtful as we work on deep learning.

Summary

In this chapter, you learned how techniques in the field of AI have evolved towards deep learning. We now know that there were two booms in AI before and that we are now in the third boom. Search and traversal algorithms such as DFS and BFS were developed in the first boom. In the second boom, study focused on how knowledge could be represented with symbols that a machine could easily understand.

Although those booms faded away, the techniques developed during them built up much useful knowledge in the AI field. The third boom began with machine learning algorithms for pattern recognition and classification based on probabilistic statistical models. Machine learning has brought great progress in various fields, but it is not enough to realize true AI because a human still needs to tell the machine what the features of the objects to be classified are; this required step is called feature engineering. Then deep learning came along, based on one particular machine learning algorithm, namely neural networks. With deep learning, a machine can automatically learn what the features of objects are, and for this reason deep learning is recognized as a very innovative technique. Studies of deep learning are becoming more and more active, and new technologies are invented every day. Some of the latest technologies are introduced in the last chapter of this book, Chapter 8, What's Next?, for reference.

Deep learning is often thought to be very complicated, but the truth is that it's not. As mentioned, deep learning is an evolution of machine learning, and deep learning itself is very simple yet elegant. We'll look at machine learning algorithms in more detail in the next chapter. With a good understanding of machine learning, you will easily acquire the essence of deep learning.


Key benefits

  • Go beyond the theory and put Deep Learning into practice with Java
  • Find out how to build a range of Deep Learning algorithms using leading frameworks including DL4J, Theano, and Caffe
  • Whether you’re a data scientist or Java developer, dive in and find out how to tackle Deep Learning

Description

AI and Deep Learning are transforming the way we understand software, making computers more intelligent than we could even imagine just a decade ago. Deep Learning algorithms are being used across a broad range of industries – as the fundamental driver of AI, being able to tackle Deep Learning is going to be a vital and valuable skill not only within the tech world but also for the wider global economy that depends upon knowledge and insight for growth and success. It’s something that’s moving beyond the realm of data science – if you’re a Java developer, this book gives you a great opportunity to expand your skillset. Starting with an introduction to basic machine learning algorithms, to give you a solid foundation, Deep Learning with Java takes you further into this vital world of stunning predictive insights and remarkable machine intelligence. Once you’ve got to grips with the fundamental mathematical principles, you’ll start exploring neural networks and identify how to tackle challenges in large networks using advanced algorithms. You will learn how to use the DL4J library and apply Deep Learning to a range of real-world use cases. Featuring further guidance and insights to help you solve challenging problems in image processing, speech recognition, and language modeling, this book will make you rethink what you can do with Java, showing you how to use it for truly cutting-edge predictive insights. As a bonus, you’ll also be able to get to grips with Theano and Caffe, two of the most important tools in Deep Learning today. By the end of the book, you’ll be ready to tackle Deep Learning with Java. Wherever you’ve come from – whether you’re a data scientist or Java developer – you will become a part of the Deep Learning revolution!

Who is this book for?

This book is intended for data scientists and Java developers who want to dive into the exciting world of deep learning. It would also be good for machine learning users who intend to leverage deep learning in their projects, working within a big data environment.

What you will learn

  • Get a practical deep dive into machine learning and deep learning algorithms
  • Implement machine learning algorithms related to deep learning
  • Explore neural networks using some of the most popular Deep Learning frameworks
  • Dive into Deep Belief Nets and Stacked Denoising Autoencoders algorithms
  • Discover more deep learning algorithms with Dropout and Convolutional Neural Networks
  • Gain an insight into the deep learning library DL4J and its practical uses
  • Get to know practical strategies for using deep learning algorithms and libraries in the real world
  • Explore deep learning further with Theano and Caffe
Product Details

Publication date : May 30, 2016
Length: 254 pages
Edition : 1st
Language : English
ISBN-13 : 9781785282195



Table of Contents (9 Chapters)

1. Deep Learning Overview
2. Algorithms for Machine Learning – Preparing for Deep Learning
3. Deep Belief Nets and Stacked Denoising Autoencoders
4. Dropout and Convolutional Neural Networks
5. Exploring Java Deep Learning Libraries – DL4J, ND4J, and More
6. Approaches to Practical Applications – Recurrent Neural Networks and More
7. Other Important Deep Learning Libraries
8. What's Next?
Index

Customer reviews

Rating distribution: 3.6 out of 5 (11 Ratings)
5 star 54.5%
4 star 9.1%
3 star 0%
2 star 18.2%
1 star 18.2%

Top Reviews

Luigi Cardarella Nov 10, 2016
5 stars
A nice book that explains deep learning really well, and finally some examples with Dl4j. Really good. I highly recommend it to anyone who wants to develop in, or simply explore, this fantastic world of deep learning with Java.
Amazon Verified review
Sujit Pal Jun 17, 2016
5 stars
I thought this was a very well-written book on Deep Learning (DL). Java is (in my opinion) not the best language for teaching algorithms, but the example code is very readable. Like many DL books, the book focuses a lot on basic concepts and the math derivations behind them, so in that sense it is relatively undifferentiated from these books - however, it is the only one that does so in Java. This is the only book I have read that has extensive coverage of pre-training (Deep Belief Networks, Restricted Boltzmann Machines, Denoising Autoencoders (DA), and Stacked DAs). Other "standard" networks such as Multilayer Perceptrons, Convolutional Neural Networks and Recurrent Neural Networks are also covered, about as well as other books I have read. The author provides good intuition around ideas such as dropout and learning rate adjustments. I bought the book because I wanted a quick intro to the DeepLearning4j framework - unfortunately the book has only one chapter dedicated to that with a fairly basic example. However, one can use it as a template and refer to the (very informative) DL4j website for more information. Overall, I think it is a good resource for Java programmers who want to learn Deep Learning.
Amazon Verified review
Xidong Wu May 16, 2017
5 stars
As a software engineer, I have read several theoretical books to try to understand the concepts of deep learning. I found it was really difficult for me to truly grasp concepts such as DBN and CNN. After going through this book and reading the provided code line by line, I can declare that I am a deep learning expert now.
Amazon Verified review
Kindle Customer May 26, 2017
5 stars
Good examples and starting points. Provides a good history and multiple language examples. Recommend as a starting point. Useful resources.
Amazon Verified review
Amazon Customer Jun 23, 2016
5 stars
I'm at mid-level for deep learning and was looking for a book that would take me through the whole picture, from theory to implementation. I think this is what I needed. For the next step, I want a book which goes into further implementations and coding.
Amazon Verified review

FAQs

What is the delivery time and cost of print books?

Shipping Details

USA:

Economy: Delivery to most addresses in the US within 10-15 business days

Premium: Trackable Delivery to most addresses in the US within 3-8 business days

UK:

Economy: Delivery to most addresses in the U.K. within 7-9 business days.
Shipments are not trackable

Premium: Trackable delivery to most addresses in the U.K. within 3-4 business days!
Add one extra business day for deliveries to Northern Ireland and Scottish Highlands and islands

EU:

Premium: Trackable delivery to most EU destinations within 4-9 business days.

Australia:

Economy: Can deliver to P. O. Boxes and private residences.
Trackable service with delivery to addresses in Australia only.
Delivery time ranges from 7-9 business days for VIC and 8-10 business days for interstate metro areas.
Delivery time is up to 15 business days for remote areas of WA, NT & QLD.

Premium: Delivery to addresses in Australia only
Trackable delivery to most P. O. Boxes and private residences in Australia within 4-5 days based on the distance to a destination following dispatch.

India:

Premium: Delivery to most Indian addresses within 5-6 business days

Rest of the World:

Premium: Countries in the American continent: Trackable delivery to most countries within 4-7 business days

Asia:

Premium: Delivery to most Asian addresses within 5-9 business days

Disclaimer:
All orders received before 5 PM UK time start printing on the next business day, so the estimated delivery times also start from the next day. Orders received (in our internal systems) after 5 PM UK time on a business day, or at any time on the weekend, begin printing on the second business day after the order. For example, an order placed at 11 AM today will begin printing tomorrow, whereas an order placed at 9 PM tonight will begin printing the day after tomorrow.


Unfortunately, due to several restrictions, we are unable to ship to the following countries:

  1. Afghanistan
  2. American Samoa
  3. Belarus
  4. Brunei Darussalam
  5. Central African Republic
  6. The Democratic Republic of Congo
  7. Eritrea
  8. Guinea-Bissau
  9. Iran
  10. Lebanon
  11. Libyan Arab Jamahiriya
  12. Somalia
  13. Sudan
  14. Russian Federation
  15. Syrian Arab Republic
  16. Ukraine
  17. Venezuela
What is customs duty/charge?

Customs duties are charges levied on goods when they cross international borders. They are a tax imposed on imported goods, charged by special authorities and bodies created by local governments, and are meant to protect local industries, economies, and businesses.

Do I have to pay customs charges for the print book order?

Orders shipped to countries listed under the EU27 will not bear customs charges; these are paid by Packt as part of the order.

List of EU27 countries: www.gov.uk/eu-eea

For shipments to countries outside the EU27, customs duty or localized taxes may apply. These are charged by the recipient country, must be paid by the customer, and are not included in the shipping charges on the order.

How do I know my customs duty charges?

The amount of duty payable varies greatly depending on the imported goods, the country of origin, and several other factors, such as the total invoice amount, dimensions such as weight, and other criteria applicable in your country.

For example:

  • If you live in Mexico and the declared value of your ordered items is over $50, you will have to pay an additional import tax of 19% ($9.50) to the courier service in order to receive your package.
  • If you live in Turkey and the declared value of your ordered items is over €22, you will have to pay an additional import tax of 18% (€3.96) to the courier service in order to receive your package.
How can I cancel my order?

Cancellation Policy for Published Printed Books:

You can cancel any order within 1 hour of placing it. Simply contact customercare@packt.com with your order details or payment transaction ID. If your order has already started the shipment process, we will do our best to stop it. However, if it is already on its way to you, you can contact us at customercare@packt.com when you receive it and use the returns and refunds process.

Please understand that Packt Publishing cannot provide refunds or cancel any order except for the cases described in our Return Policy (i.e. where Packt Publishing agrees to replace your printed book because it arrived damaged or with a material defect); otherwise, Packt Publishing will not accept returns.

What is your returns and refunds policy?

Return Policy:

We want you to be happy with your purchase from Packtpub.com. We will not hassle you with returning print books to us. If the print book you receive from us is incorrect, damaged, doesn't work, or is unacceptably late, please contact the Customer Relations Team at customercare@packt.com with the order number and issue details as explained below:

  1. If you ordered an eBook, Video, or Print Book incorrectly or accidentally, please contact the Customer Relations Team at customercare@packt.com within one hour of placing the order and we will replace or refund you the item cost.
  2. If your eBook or Video file is faulty, or a fault occurs while the eBook or Video is being made available to you (i.e. during download), contact the Customer Relations Team at customercare@packt.com within 14 days of purchase and they will be able to resolve the issue for you.
  3. You will have a choice of replacement or refund for the problem items (damaged, defective, or incorrect).
  4. Once the Customer Care Team confirms that you will be refunded, you should receive the refund within 10 to 12 working days.
  5. If you are requesting a refund for only one book from a multi-item order, we will refund the appropriate single item.
  6. Where the items were shipped under a free shipping offer, there will be no shipping costs to refund.

On the off chance your printed book arrives damaged or with a material defect, contact our Customer Relations Team at customercare@packt.com within 14 days of receipt of the book with appropriate evidence of the damage, and we will work with you to secure a replacement copy if necessary. Please note that each printed book you order from us is individually made by Packt's professional book-printing partner on a print-on-demand basis.

What tax is charged?

Currently, no tax is charged on the purchase of any print book (subject to change based on the laws and regulations). A localized VAT fee is charged only to our European and UK customers on eBooks, Video and subscriptions that they buy. GST is charged to Indian customers for eBooks and video purchases.

What payment methods can I use?

You can pay with the following card types:

  1. Visa Debit
  2. Visa Credit
  3. MasterCard
  4. PayPal