TensorFlow 1.x Deep Learning Cookbook

Using a data flow graph

TensorFlow includes TensorBoard, which provides a graphical view of the computation graph. This makes it convenient to understand, debug, and optimize complex neural network programs. TensorBoard can also display quantitative metrics about the execution of the network. It reads TensorFlow event files, which contain the summary data you generate while running the TensorFlow session.

How to do it...

  1. The first step in using TensorBoard is to identify which OPs' summaries you would like to record. In the case of DNNs, it is customary to track how the loss term (objective function) varies with time; with an adaptive learning rate, the learning rate itself also varies with time. We can get the summary of a term with the help of the tf.summary.scalar OP. Suppose the variable loss defines the error term and we want to know how it varies with time; we can do this as follows:
loss = tf... 
tf.summary.scalar('loss', loss)
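For concreteness, here is a minimal hypothetical stand-in for the elided loss definition above; the placeholders y_true and y_pred are illustrative names, not part of the recipe:

import tensorflow as tf

# Illustrative placeholders standing in for a real model's output and labels
y_true = tf.placeholder(tf.float32, shape=[None, 1], name='y_true')
y_pred = tf.placeholder(tf.float32, shape=[None, 1], name='y_pred')

# Any scalar tensor can be summarized; here, a mean-squared-error loss
loss = tf.reduce_mean(tf.square(y_pred - y_true))
tf.summary.scalar('loss', loss)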
  2. You can also visualize the distribution of gradients, weights, or even the output of a particular layer using tf.summary.histogram:
output_tensor = tf.matmul(input_tensor, weights) + biases
tf.summary.histogram('output', output_tensor)
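To record gradient distributions as well, one approach (a sketch, not from the recipe, assuming the loss tensor defined earlier and a plain gradient-descent optimizer) is to summarize each gradient returned by compute_gradients:

# Sketch: continues from the earlier snippets, reusing the loss tensor
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)
grads_and_vars = optimizer.compute_gradients(loss)
for grad, var in grads_and_vars:
    if grad is not None:  # variables unused by the loss have no gradient
        tf.summary.histogram(var.op.name + '/gradient', grad)
train_op = optimizer.apply_gradients(grads_and_vars)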
  3. The summaries are generated during the session. Instead of executing every summary operation individually, you can define the tf.summary.merge_all OP in the computation graph to collect all the summaries in a single run, for example:
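A minimal sketch (the variable name merged is arbitrary):

merged = tf.summary.merge_all()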
  4. The generated summaries then need to be written to an event file using tf.summary.FileWriter:
writer = tf.summary.FileWriter('summary_dir', sess.graph)
  5. This writes all the summaries and the graph to the 'summary_dir' directory. To record summaries as training progresses, evaluate the merged summary OP inside the session and pass the result to the writer's add_summary method; the combined sketch after these steps shows this.
  6. Now, to visualize the summaries, you need to invoke TensorBoard from the command line:
tensorboard --logdir=summary_dir 
  7. Next, open your browser and type the address http://localhost:6006/ (or the link you received after running the TensorBoard command).
  8. You will see a page with several tabs along the top. The Graphs tab will display the computation graph.
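Putting all the steps together, the following is a minimal self-contained sketch; the toy linear model, feed values, and step count are illustrative assumptions, not code from the recipe:

import numpy as np
import tensorflow as tf

# Toy linear model y = w*x + b, used only to have something to summarize
x = tf.placeholder(tf.float32, shape=[None, 1], name='x')
y_true = tf.placeholder(tf.float32, shape=[None, 1], name='y_true')
w = tf.Variable(tf.random_normal([1, 1]), name='weights')
b = tf.Variable(tf.zeros([1]), name='biases')
y_pred = tf.matmul(x, w) + b

loss = tf.reduce_mean(tf.square(y_pred - y_true))
tf.summary.scalar('loss', loss)          # step 1: scalar summary
tf.summary.histogram('output', y_pred)   # step 2: histogram summary

train_op = tf.train.GradientDescentOptimizer(0.1).minimize(loss)
merged = tf.summary.merge_all()          # step 3: merge all summaries

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    writer = tf.summary.FileWriter('summary_dir', sess.graph)  # step 4
    for step in range(100):
        xs = np.random.rand(32, 1).astype(np.float32)
        ys = 3.0 * xs + 1.0              # synthetic targets
        summary, _ = sess.run([merged, train_op],
                              feed_dict={x: xs, y_true: ys})
        writer.add_summary(summary, step)  # step 5: one event per step
    writer.close()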