Hands-On Deep Learning with TensorFlow: Uncover what is underneath your data!

eBook: AU$14.99 (discounted from AU$42.99)
Paperback: AU$53.99

Hands-On Deep Learning with TensorFlow

Chapter 1. Getting Started

TensorFlow is a new machine learning and graph computation library recently released by Google. Its Python interface lets you express common models elegantly, while its compiled backend keeps computation fast.

Let's take a glimpse at the techniques you'll learn and the models you'll build as you apply TensorFlow.

Installing TensorFlow

In this section, you will learn what TensorFlow is, how to install it, and how to build simple models and run simple computations. Further, you will learn how to build a logistic regression model for classification, and we'll introduce a machine learning problem along the way to help us learn TensorFlow.

We're going to learn what kind of library TensorFlow is and install it on our own Linux machine, or a free instance of CoCalc if you don't have access to a Linux machine.

TensorFlow – main page

First, what is TensorFlow? TensorFlow is a new machine learning library put out by Google. It is designed to be very easy to use and is very fast. If you go to the TensorFlow website, tensorflow.org, you will have access to a wealth of information about what TensorFlow is and how to use it. We'll be referring to this often, particularly the documentation.

TensorFlow – the installation page

Before we get started with TensorFlow, note that you need to install it, as it probably doesn't come preinstalled on your operating system. So, if you go to the Install tab on the TensorFlow web page, click on Installing TensorFlow on Ubuntu, and then click on "native" pip, you will learn how to install TensorFlow.

TensorFlow – the installation page

Installing TensorFlow is very challenging, even for experienced system administrators, so I highly recommend using something like the pip installation; alternatively, if you're familiar with Docker, use the Docker installation. You can install TensorFlow from source, but this can be very difficult. We will install TensorFlow using a precompiled binary called a wheel file, which you can install using Python's pip module installer.

Installing via pip

For the pip installation, you have the option of using either a Python 2 or Python 3 version. Also, you can choose between the CPU and GPU version. If your computer has a powerful graphics card, the GPU version may be for you.

Installing via pip

However, you need to check that your graphics card is compatible with TensorFlow. If it's not, it's fine; everything in this series can be done with just the CPU version.

Note

We can install TensorFlow by using the pip install tensorflow command (based on your CPU or GPU support and pip version), as shown in the preceding screenshot.
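
For reference, the simplest forms of that command look like the following (a sketch: tensorflow and tensorflow-gpu were the package names for these 1.x releases, and you may need pip or pip3 to match your Python version):

# CPU-only version
sudo pip3 install tensorflow
# GPU version (requires a CUDA-capable graphics card and NVIDIA's drivers)
sudo pip3 install tensorflow-gpu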

So, if you copy the following line for TensorFlow, you can install it as well:

# Python 3.4 installation
sudo pip3 install --upgrade \
https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.2.1-cp34-cp34m-linux_x86_64.whl

If you don't have Python 3.4, which this wheel file calls for, that's okay. You can probably still use the same wheel file. Let's take a look at how to do this for Python 3.5. First, you just need to download the wheel file directly, by either putting the following URL in your browser or using a command-line program, such as wget, as we're doing here:

wget https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.2.1-cp34-cp34m-linux_x86_64.whl

The file is small, so the download should finish quickly.

Now all you need to do is change the name of the file from cp34, which stands for Python 3.4, to whichever version of Python 3 you're using. In this case, we'll change it to a version using Python 3.5, so we'll change 4 to 5:

mv tensorflow-1.2.1-cp34-cp34m-linux_x86_64.whl tensorflow-1.2.1-cp35-cp35m-linux_x86_64.whl

Now you can install TensorFlow for Python 3.5 simply by pointing pip3 install at the renamed wheel file:

sudo pip3 install ./tensorflow-1.2.1-cp35-cp35m-linux_x86_64.whl

We can see this works just fine. Now you've installed TensorFlow.
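
A quick way to confirm that the installation worked is to import the library and print its version; this is a minimal check, and the version string should match the wheel you installed:

python3 -c "import tensorflow as tf; print(tf.__version__)"
# Should print 1.2.1 for the wheel used above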

Installing via pip

If your installation somehow becomes corrupted later, you can always jump back to this section to remind yourself of the steps involved in the installation.

Installing via CoCalc

If you don't have administrative or installation rights on your computer but still want to try TensorFlow, you can run TensorFlow over the web in a CoCalc instance. If you go to https://cocalc.com/ and create a new account, you can create a new project, which gives you a sort of virtual machine to play around with. Conveniently, TensorFlow is already installed in the Anaconda 3 kernel.

Installing via CoCalc

Let's create a new project called TensorFlow. Click on +Create new project…, enter a title for your project, and click on Create Project. Now we can go into our project by clicking on the title. It will take a couple of seconds to load.

Installing via CoCalc

Click on +New to create a new file. Here, we'll create a Jupyter notebook:

Installing via CoCalc

Jupyter is a convenient way to interact with IPython and the primary means of using CoCalc for these computations. It may take a few seconds to load.

When you get to the interface shown in the following screenshot, the first thing you need to do is change the kernel to Anaconda Python 3 by going to Kernel | Change kernel… | Python 3 (Anaconda):

Installing via CoCalc

This will give you the proper dependencies to use TensorFlow. It may take a few seconds for the kernel to change. Once you are connected to the new kernel, you can type import tensorflow in the cell and go to Cell | Run Cells to check whether it works:

Installing via CoCalc

If your Jupyter notebook takes a long time to load, you can instead create a Terminal in CoCalc using the button shown in the following screenshot:

Installing via CoCalc

Once there, type anaconda3 to switch environments, then type ipython3 to launch an interactive Python session, as shown in the following screenshot:

Installing via CoCalc

You can easily work here, although you won't be able to visualize the output. Type import tensorflow in the Terminal and off you go.
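
The same minimal check as before confirms the library loads; for example, at the ipython3 prompt (the version printed will depend on what CoCalc's Anaconda environment ships):

import tensorflow as tf
print(tf.__version__)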

So far in this section, you've learned what TensorFlow is and how to install it, either locally or on a virtual machine on the web. Now we're ready to explore simple computations in TensorFlow.

Simple computations

First, we're going to take a look at the tensor object type. Then we'll see how TensorFlow defines computations as graphs. Finally, we'll run the graphs with sessions, showing how to substitute intermediate values.

Defining scalars and tensors

The first thing you need to do is download the source code pack for this book and open the simple.py file. You can either copy and paste lines from this file into your Python session or CoCalc, or type them in directly yourself. First, let's import tensorflow as tf; this is a convenient way to refer to it in Python. You'll want to hold your constant numbers in tf.constant calls. For example, let's do a = tf.constant(1) and b = tf.constant(2):

import tensorflow as tf
# You can create constants in TF to hold specific values
a = tf.constant(1)
b = tf.constant(2)

Of course, you can add and multiply these to get other values, namely c and d:

# Of course you can add, multiply, and compute on these as you like
c = a + b
d = a * b

TensorFlow numbers are stored in tensors, a fancy term for multidimensional arrays. If you pass a Python list to TensorFlow, it does the right thing and converts it into an appropriately dimensioned tensor. You can see this illustrated in the following code:

# TF numbers are stored in "tensors", a fancy term for multidimensional arrays. If you pass TF a Python list, it can convert it
V1 = tf.constant([1., 2.])   # Vector, 1-dimensional
V2 = tf.constant([3., 4.])   # Vector, 1-dimensional
M = tf.constant([[1., 2.]])             # Matrix, 2d
N = tf.constant([[1., 2.],[3.,4.]])     # Matrix, 2d
K = tf.constant([[[1., 2.],[3.,4.]]])   # Tensor, 3d+

The V1 vector, a one-dimensional tensor, is passed as a Python list of [1., 2.]. The dots here just force Python to store the numbers as floating-point values rather than integers. The V2 vector is another Python list, [3., 4.]. The M variable is a two-dimensional matrix made from a list of lists in Python, creating a two-dimensional tensor in TensorFlow. The N variable is also a two-dimensional matrix; note that this one actually has multiple rows in it. Finally, K is a true tensor, containing three dimensions. Note that the outermost dimension contains just one entry, a single two-by-two box.
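
If you'd like to confirm these dimensions yourself, you can print each tensor's inferred static shape with get_shape(); this is a quick check that only inspects the graph and doesn't require a session:

# Inspect the shapes TensorFlow inferred from the Python lists
print(V1.get_shape())   # (2,)
print(M.get_shape())    # (1, 2)
print(N.get_shape())    # (2, 2)
print(K.get_shape())    # (1, 2, 2)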

Don't worry if this terminology is a bit confusing. Whenever you see a strange new variable, you can jump back to this point to understand what it might be.

Computations on tensors

You can also do simple things, such as add tensors together:

V3 = V1 + V2

Alternatively, you can multiply them element-wise, so each common position is multiplied together:

# Operations are element-wise by default
M2 = M * M

For true matrix multiplication, however, you need to use tf.matmul, passing in your two tensors as arguments:

NN = tf.matmul(N,N)

Doing computation

Everything so far has just specified the TensorFlow graph; we haven't yet computed anything. To do this, we need to start a session in which the computations will take place. The following code creates a new session:

sess = tf.Session()

Once you have a session open, calling sess.run(NN) will evaluate the given expression and return an array. We can easily capture the result in a variable, as follows:

output = sess.run(NN)
print("NN is:")
print(output)

If you run this cell now, you should see the correct tensor array for the NN output on the screen:

Doing computation
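
As a sanity check, you can work the product out by hand: with N equal to [[1., 2.], [3., 4.]], the first row of NN is [1*1 + 2*3, 1*2 + 2*4] and the second is [3*1 + 4*3, 3*2 + 4*4], so the printed array should be the following (exact float formatting varies by NumPy version):

NN is:
[[  7.  10.]
 [ 15.  22.]]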

When you're done using your session, it's good to close it, just like you would close a file handle:

# Remember to close your session when you're done using it
sess.close()

For interactive work, we can use tf.InteractiveSession() like so:

sess = tf.InteractiveSession()

You can then easily compute the value of any node. For example, entering the following code and running the cell will output the value of M2:

# Now we can compute any node
print("M2 is:")
print(M2.eval())

Variable tensors

Of course, not all our numbers are constant. To update weights in a neural network, for example, we need to use tf.Variable to create the appropriate object:

W = tf.Variable(0, name="weight")

Note that variables in TensorFlow are not initialized automatically. To do so, we need to use a special call, namely tf.global_variables_initializer(), and then run that call with sess.run():

init_op = tf.global_variables_initializer()
sess.run(init_op)

This puts a value into each variable; in this case, it stores a value of 0 in the W variable. Let's verify that W holds that value:

print("W is:")
print(W.eval())

You should see an output value for W of 0 in your cell:

Variable tensors

Let's see what happens when you add a to it:

W += a
print("W after adding a:")
print(W.eval())

Recall that a is 1, so you get the expected value of 1 here:

Variable tensors

Let's add a again, just to make sure we can increment and that it's truly a variable:

W += a
print("W after adding a:")
print(W.eval())

Now you should see that W is holding 2, as we have incremented it twice with a:

Variable tensors

Viewing and substituting intermediate values

You can return or supply arbitrary nodes when doing a TensorFlow computation. Let's define a new node but also return another node at the same time in a fetch call. First, let's define our new node E, as shown here:

E = d + b # 1*2 + 2 = 4

Let's take a look at what E starts as:

print("E as defined:")
print(E.eval())

You should see that, as expected, E equals 4. Now let's see how we can pass in multiple nodes, E and d, to return multiple values from a sess.run call:

# Let's see what d was at the same time
print("E and d:")
print(sess.run([E,d]))

You should see multiple values, namely 4 and 2, returned in your output:

Viewing and substituting intermediate values

Now suppose we want to use a different intermediate value, say for debugging purposes. We can use feed_dict to supply a custom value to a node anywhere in our computation when returning a value. Let's do that now with d equals 4 instead of 2:

# Use a custom d by specifying a dictionary
print("E with custom d=4:")
print(sess.run(E, feed_dict = {d:4.}))

Remember that E equals d + b and the values of d and b are both 2. Although we've inserted a new value of 4 for d, you should see that the value of E will now be output as 6:

Viewing and substituting intermediate values

You have now learned how to do core computations with TensorFlow tensors. It's time to take the next step forward by building a logistic regression model.

Logistic regression model building

Okay, let's get started with building a real machine learning model. First, we'll see the proposed machine learning problem: font classification. Then, we'll review a simple algorithm for classification, called logistic regression. Finally, we'll implement logistic regression in TensorFlow.

Introducing the font classification dataset

Before we jump in, let's load all the necessary modules:

import tensorflow as tf
import numpy as np

If you're copying and pasting to IPython, make sure your autoindent property is set to OFF:

%autoindent

The tqdm module is optional; it just shows nice progress bars:

try:
    from tqdm import tqdm
except ImportError:
    def tqdm(x, *args, **kwargs):
        return x

Next, we'll set a seed of 0, just to get consistent data splitting from run to run:

# Set random seed
np.random.seed(0)

In this book, we've provided a dataset of images of characters in five fonts. For convenience, these are stored in a compressed NumPy file (data_with_labels.npz), which can be found in the download package of this book. You can easily load these into Python with numpy.load:

# Load data
data = np.load('data_with_labels.npz')
train = data['arr_0']/255.
labels = data['arr_1']

The train variable here holds the actual pixel values, scaled from 0 to 1, and labels holds the font label for each image; it'll be 0, 1, 2, 3, or 4, as there are five fonts in total. You can print out a few of these values using the following code:

# Look at some data
print(train[0])
print(labels[0])

However, that's not very instructive, as most of the values are zeroes and only the central part of the image contains data:

Introducing the font classification dataset
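
A more compact sanity check is to print the array shapes rather than the raw pixels (the exact image count depends on the dataset, but the layout should look like this):

# Check the overall array shapes and the label values
print(train.shape)        # (number_of_images, 36, 36)
print(labels.shape)       # (number_of_images,)
print(np.unique(labels))  # the five font classes, 0 through 4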

If you have Matplotlib installed, now is a good place to import it. We'll use plt.ion() to automatically bring up figures when needed:

# If you have matplotlib installed
import matplotlib.pyplot as plt
plt.ion()

Here are some example images of characters from each font:

Introducing the font classification dataset

Yeah, they're pretty flashy. In the dataset, each image is represented as a 36 x 36 two-dimensional matrix of pixel darkness values. The 0 value represents a white pixel, while 255 represents a black pixel. Everything in between is a shade of gray. Here's the code to display these fonts on your own machine:

# Let's look at a subplot of one of A in each font
f, plts = plt.subplots(5, sharex=True)
c = 91
for i in range(5):
    plts[i].pcolor(train[c + i * 558],
                   cmap=plt.cm.gray_r)

If your plot appears really wide, you can easily resize the window just using your mouse. It's often much more work to resize it ahead of time in Python if you're simply plotting interactively. Our goal is to decide which font an image belongs to, given that we have many other labeled images of the fonts. To expand the dataset and help avoid overfitting, we have also jittered each character around in the 36 x 36 area, giving us nine times as many data points.

It may be helpful to come back to this after working with later models. It's important to keep the original data in mind, no matter how advanced the final model is.

Logistic regression

If you're familiar with linear regression, you're halfway toward understanding logistic regression. Basically, we're going to assign a weight to each pixel in the image, then take the weighted sum of those pixels (beta for weights and X for pixels). This will give us a score for that image being a particular font. Every font will have its own set of weights, as they will value pixels differently. To convert these scores into proper probabilities (represented by Y), we will use what's called the softmax function, which forces each probability to lie between 0 and 1 and all of them to sum to exactly 1, as illustrated next. Whichever class has the greatest probability for a particular image is the one we will classify the image into.

You can read more about the theory of logistic regression in most statistical modeling textbooks. Here is its formula:

$$ Y_c = \frac{e^{X\beta_c}}{\sum_{c'=1}^{5} e^{X\beta_{c'}}} $$

One good reference that focuses on applications is William H. Greene's Econometric Analysis (Pearson, 2012).

Getting data ready

Implementing logistic regression is pretty easy in TensorFlow and will serve as scaffolding for more complex machine learning algorithms. First, we need to convert our integer labels into a one-hot format. This means, instead of labeling an image with font class 2, we transform the label into [0, 0, 1, 0, 0]. That is, we put a 1 at index two (note that zero-based counting is common in computer science) and a 0 for every other class. Here's the code for our to_onehot function:

def to_onehot(labels,nclasses = 5):
    '''
    Convert labels to "one-hot" format.
    >>> a = [0,1,2,3]
    >>> to_onehot(a,5)
    array([[ 1.,  0.,  0.,  0.,  0.],
           [ 0.,  1.,  0.,  0.,  0.],
           [ 0.,  0.,  1.,  0.,  0.],
           [ 0.,  0.,  0.,  1.,  0.]])
    '''
    outlabels = np.zeros((len(labels),nclasses))
    for i,l in enumerate(labels):
        outlabels[i,l] = 1
    return outlabels

With this done, we can go ahead and call the function:

onehot = to_onehot(labels)
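
As an aside, NumPy can build the same matrix in a single vectorized expression by indexing rows of an identity matrix; here is a one-line sketch, assuming labels holds integer class values:

# Equivalent vectorized one-hot encoding
onehot_alt = np.eye(5)[labels.astype(int)]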

For the pixels, we don't really want a matrix in this case, so we'll flatten the 36 x 36 numbers into a one-dimensional vector of length 1,296, but this will come a little bit later. Also, recall that we've rescaled the pixel values of 0-255 so that they fall between 0 and 1.
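
For reference, that flattening is a single reshape call, the same one used in the training loop later (the -1 tells NumPy to infer the number of images):

# Flatten each 36 x 36 image into a length-1,296 vector
pixels_flat = train.reshape([-1, 1296])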

Okay, our final piece of preparation is to split our dataset into training and validation sets. This will help us catch overfitting later on. The training set will help us determine the weights in our logistic regression model, and the validation set will just be used to confirm that those weights are reasonably correct on new data:

# Split data into training and validation
indices = np.random.permutation(train.shape[0])
valid_cnt = int(train.shape[0] * 0.1)
test_idx, training_idx = indices[:valid_cnt],\
                         indices[valid_cnt:]
test, train = train[test_idx,:],\
              train[training_idx,:]
onehot_test, onehot_train = onehot[test_idx,:],\
                        onehot[training_idx,:]

Building a TensorFlow model

Okay, let's kick off the TensorFlow code by creating an interactive session:

sess = tf.InteractiveSession()

With this, we've started our first model in TensorFlow.

We're going to use a placeholder variable for x, which represents our input images. This is just to tell TensorFlow that we will supply the value for this node via feed_dict later on:

# These will be inputs
## Input pixels, flattened
x = tf.placeholder("float", [None, 1296])

Also, note that we can specify the shape of this tensor, and here we have used None as one of the sizes. The None size allows us to send an arbitrary number of data points into the algorithm at once for batch processing. We'll use the variable y_ likewise to hold our known labels to be used for training later on:

## Known labels
y_ = tf.placeholder("float", [None,5])

To perform logistic regression, we need a set of weights (W). In fact, we need 1,296 weights for each of the five font classes, which will give us our shape. Note that we also want to include an extra weight for each class as a bias (b). This is the same as adding an extra input variable that always takes the value 1:

# Variables
W = tf.Variable(tf.zeros([1296,5]))
b = tf.Variable(tf.zeros([5]))

With all these TensorFlow variables floating around, we need to make sure they get initialized. Let's do that now:

# Just initialize
sess.run(tf.global_variables_initializer())

Good job! You've got everything prepared. Now you can implement the softmax formula to compute probabilities. Because we set up our weights and input very carefully, TensorFlow makes this task very easy with just a call to tf.matmul and tf.nn.softmax:

# Define model
y = tf.nn.softmax(tf.matmul(x,W) + b)

That's it! You've implemented an entire machine learning classifier in TensorFlow. Nice work. But where do we get the values for the weights? Let's take a look at using TensorFlow to train the model.

Logistic regression training

First, you'll learn about the loss function for our machine learning classifier and implement it in TensorFlow. Then, we'll quickly train the model by evaluating the right TensorFlow node. Finally, we'll verify that our model is reasonably accurate and the weights make sense.

Developing the loss function

Optimizing our model really means minimizing how wrong we are. With our labels in one-hot style, it's easy to compare them with the class probabilities predicted by the model. The categorical cross-entropy function is a formal way to measure this. While the exact statistics are beyond the scope of this book, you can think of it as punishing the model more for less accurate predictions. To compute it, we multiply our one-hot real labels element-wise with the natural log of the predicted probabilities, then sum these values and negate them. Conveniently, TensorFlow already includes this function as tf.nn.softmax_cross_entropy_with_logits(), and we can just call that:

# Climb on cross-entropy
cross_entropy = tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits(
        logits = y + 1e-50, labels = y_))

Note that we are adding a small value of 1e-50 here to avoid numerical instability problems. (Strictly speaking, tf.nn.softmax_cross_entropy_with_logits() expects unnormalized logits, such as tf.matmul(x, W) + b, rather than softmax outputs; passing y here applies the softmax twice, which still trains but softens the gradients.)
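
In equation form, the quantity being averaged is the standard categorical cross-entropy, writing t for the one-hot labels held in y_ and p for the predicted probabilities in y:

$$ \text{cross\_entropy} = -\frac{1}{n} \sum_{i=1}^{n} \sum_{c=1}^{5} t_{i,c} \, \ln p_{i,c} $$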

Training the model

TensorFlow is convenient in that it provides built-in optimizers to take advantage of the loss function we just wrote. Gradient descent is a common choice and will slowly nudge our weights toward better results. This is the node that will update our weights:

# How we train
train_step = tf.train.GradientDescentOptimizer(
                0.02).minimize(cross_entropy)

Before we actually start training, we should specify a few more nodes to assess how well the model does:

# Define accuracy
correct_prediction = tf.equal(tf.argmax(y,1),
                     tf.argmax(y_,1))
accuracy = tf.reduce_mean(tf.cast(
           correct_prediction, "float"))

The correct_prediction node is 1 if our model assigns the highest probability to the correct class, and 0 otherwise. The accuracy variable averages these predictions over the available data, giving us an overall sense of how well the model did.

When training in machine learning, we often want to use the same data point multiple times to squeeze all the information out. Each pass through the entire training data is called an epoch. Here, we're going to save both the training and validation accuracy every 10 epochs:

# Actually train
epochs = 1000
train_acc = np.zeros(epochs//10)
test_acc = np.zeros(epochs//10)
for i in tqdm(range(epochs)):
    # Record summary data, and the accuracy
    if i % 10 == 0:
        # Check accuracy on train set
        A = accuracy.eval(feed_dict={
            x: train.reshape([-1,1296]),
            y_: onehot_train})
        train_acc[i//10] = A
        # And now the validation set
        A = accuracy.eval(feed_dict={
            x: test.reshape([-1,1296]),
            y_: onehot_test})
        test_acc[i//10] = A
    train_step.run(feed_dict={
        x: train.reshape([-1,1296]),
        y_: onehot_train})

Note that we use feed_dict to pass in different types of data to get different output values. Finally, train_step.run updates the model every iteration. This should only take a few minutes on a typical computer, much less if you're using a GPU, and a bit more on an underpowered machine.

You just trained a model with TensorFlow; awesome!

Evaluating the model accuracy

After 1,000 epochs, let's take a look at the model. If you have Matplotlib installed, you can view the accuracies in a graphical plot; if not, you can still look at the numbers. For the final results, use the following code:

# Notice that accuracy flattens out
print(train_acc[-1])
print(test_acc[-1])

If you do have Matplotlib installed, you can use the following code to display the plot:

# Plot the accuracy curves
plt.figure(figsize=(6,6))
plt.plot(train_acc,'bo')
plt.plot(test_acc,'rx')

You should see something like the following plot (note that we used some random initialization, so it might not be exactly the same):

Evaluating the model accuracy

It seems like the validation accuracy flattens out after about 400-500 iterations; beyond this, our model may either be overfitting or not learning much more. Also, even though the final accuracy of about 40 percent might seem poor, recall that, with five classes, a totally random guess would only have 20 percent accuracy. With this limited dataset, the simple model is doing all it can.

It's also often helpful to look at computed weights. These can give you a clue as to what the model thinks is important. Let's plot them by pixel position for a given class:

# Look at a subplot of the weights for each font
f, plts = plt.subplots(5, sharex=True)
for i in range(5):
    plts[i].pcolor(W.eval()[:,i].reshape([36,36]))

This should give you a result similar to the following (again, if the plot comes out very wide, you can squeeze in the window size to square it up):

Evaluating the model accuracy

We can see that, in some of the weight maps, the weights near the interior of the image are important, while the weights on the outside are essentially zero. This makes sense, since none of the font characters reach the corners of the images.

Again, note that your final results might look a little different due to random initialization effects. Always feel free to experiment and change the parameters of the model; that's how you'll learn new things.

Summary

In this chapter, we installed TensorFlow and took our first steps with basic computations. We then jumped into a machine learning problem, successfully building a decent model with just logistic regression and a few lines of TensorFlow code.

In the next chapter, we'll see TensorFlow in its prime with deep neural networks.


Key benefits

  • Explore various possibilities with deep learning and gain amazing insights from data using Google's brainchild, TensorFlow
  • Want to learn what more can be done with deep learning? Explore various neural networks with the help of this comprehensive guide
  • A concept-rich, advanced guide to deep learning that will give you the background to innovate in your environment

Description

Dan Van Boxel’s Deep Learning with TensorFlow is based on Dan’s best-selling TensorFlow video course. With deep learning going mainstream, making sense of data and getting accurate results using deep networks is possible. Dan Van Boxel will be your guide to exploring the possibilities with deep learning; he will enable you to understand data like never before. With the efficiency and simplicity of TensorFlow, you will be able to process your data and gain insights that will change how you look at data. With Dan’s guidance, you will dig deeper into the hidden layers of abstraction using raw data. Dan then shows you various complex algorithms for deep learning and various examples that use these deep neural networks. You will also learn how to train your machine to craft new features to make sense of deeper layers of data. In this book, Dan shares his knowledge across topics such as logistic regression, convolutional neural networks, recurrent neural networks, training deep networks, and high-level interfaces. With the help of novel practical examples, you will become an ace at advanced multilayer networks, image recognition, and beyond.

Who is this book for?

If you are a data scientist who performs machine learning on a regular basis, is familiar with deep neural networks, and now wants to gain expertise in working with convolutional neural networks, then this book is for you. Some familiarity with C++ or Python is assumed.

What you will learn

  • Set up your computing environment and install TensorFlow
  • Build simple TensorFlow graphs for everyday computations
  • Apply logistic regression for classification with TensorFlow
  • Design and train a multilayer neural network with TensorFlow
  • Intuitively understand convolutional neural networks for image recognition
  • Bootstrap a neural network from simple to more accurate models
  • See how to use TensorFlow with other types of networks
  • Program networks with SciKit-Flow, a high-level interface to TensorFlow

Product Details

Publication date: Jul 31, 2017
Length: 174 pages
Edition: 1st
Language: English
ISBN-13: 9781787282773





Table of Contents

1. Getting Started
2. Deep Neural Networks
3. Convolutional Neural Networks
4. Introducing Recurrent Neural Networks
5. Wrapping Up
Index

Customer reviews

Average rating: 2.8 out of 5 (4 ratings)
5 star: 0% | 4 star: 25% | 3 star: 50% | 2 star: 0% | 1 star: 25%

Amazon buyer, Sep 28, 2018 (4/5, Amazon verified review): Practical approach.

Maxwell B. Anselm, May 11, 2018 (3/5, Amazon verified review): If you've done some reading about machine learning and already know the gist of how neural networks work, this book will get you up to speed with some simple, practical examples. It's very light and handwavy on the theory, so I wouldn't recommend it to someone who is completely fresh to the topic. But I liked that it took a very hands-on approach with the code examples, explaining every line and generally doing things "the hard way" so that you actually felt like you were in control of the networks you set up.

I followed along using the latest TensorFlow libraries available on Arch Linux and I was surprised that the examples were already a little out of date. I got quite a few deprecation warnings and one example was straight up broken because of outdated syntax, but it was easy to figure out how to fix everything.

I happen to know that the author originally presented the material as a video series and this book was transcribed by a third party and... unfortunately, it shows. There are some weird wordings that I can only assume were incorrectly transcribed, and some of the text refers to code examples that must be downloaded separately as if they're right there in the text. But again, nothing got in the way of understanding the material.

Dimitri Shvorob, Mar 11, 2018 (3/5, Amazon verified review): Wishing to learn about TensorFlow, I decided to survey TF books available from Amazon, and pick one or two for further study. I excluded self-published offerings, and ended up with this longish list, dominated by Packt titles:

  • "Machine Learning with TensorFlow" by Shukla, Manning, 2018-02, 272 pp, $43
  • "Mastering TensorFlow 1.x" by Fandango, Packt, 2018-01, 474 pp, $35
  • "Pro Deep Learning with TensorFlow" by Pattanayak, Apress, 2017-12, 398 pp, $37
  • "TensorFlow 1.x Deep Learning Cookbook" by Gulli and Kapoor, Packt, 2017-12, 536 pp, $32
  • "Neural Network Programming with TensorFlow" by Ghotra and Dua, Packt, 2017-11, 274 pp, $40
  • "Predictive Analytics with TensorFlow" by Karim, Packt, 2017-11, 522 pp, $50
  • "Machine Learning with TensorFlow 1.x" by Hua and Azeem, Packt, 2017-11, 304 pp, $39
  • "Learning TensorFlow" by Hope and Resheff, O'Reilly, 2017-08, 242 pp, $25
  • "Hands-On Deep Learning with TensorFlow" by Van Boxel, Packt, 2017-07, 174 pp, $35
  • "Deep Learning with TensorFlow" by Zaccone, Karim and Menshawy, Packt, 2017-04, 320 pp, $50
  • "TensorFlow Machine Learning Cookbook" by McClure, Packt, 2017-02, 370 pp, $30
  • "Building Machine Learning Projects with TensorFlow" by Bonnin, Packt, 2016-11, 291 pp, $35
  • "Getting Started with TensorFlow" by Zaccone, Packt, 2016-07, 180 pp, $35

I reviewed the doc on tensorflow.org, including the doc for older releases, then started looking at books. One week later, I am still not done, but some options can already be discarded.

"Hands-On Deep Learning with TensorFlow" is one of them. The book is the thinnest of the bunch; with just 174 Packt pages, equivalent to under 100 "regular" ones, to play with, it cannot really be a TensorFlow reference, only a (sketchy) TensorFlow introduction. In this case, page count is kept down by (with one exception) focusing on a single problem, MNIST character recognition. Despite a recent release date, the book does not cover the higher-level APIs of Estimators and Datasets, and adopts the "old school", low-level approach. It is really not bad, and does add value to the doc (for release 1.3 or so) and online treatments of "TensorFlow vs. MNIST", but the truth is, for $35, you can find something more substantial. Consider "Hands-On Deep Learning with TensorFlow" if you see it on sale.

Lipin Pius, Feb 28, 2018 (1/5, Amazon verified review): Please don't buy this book if you're looking for some good learning material.
