TensorFlow 1.x Deep Learning Cookbook

You're reading from TensorFlow 1.x Deep Learning Cookbook: Over 90 unique recipes to solve artificial intelligence-driven problems with Python

Product type: Paperback
Published: Dec 2017
Publisher: Packt
ISBN-13: 9781788293594
Length: 536 pages
Edition: 1st Edition
Authors (2): Dr. Amita Kapoor, Antonio Gulli
Table of Contents (15)

Preface
1. TensorFlow - An Introduction
2. Regression
3. Neural Networks - Perceptron
4. Convolutional Neural Networks
5. Advanced Convolutional Neural Networks
6. Recurrent Neural Networks
7. Unsupervised Learning
8. Autoencoders
9. Reinforcement Learning
10. Mobile Computation
11. Generative Models and CapsNet
12. Distributed TensorFlow and Cloud Deep Learning
13. Learning to Learn with AutoML (Meta-Learning)
14. TensorFlow Processing Units

Using XLA to enhance computational performance

Accelerated Linear Algebra (XLA) is a domain-specific compiler for linear algebra. According to https://www.tensorflow.org/performance/xla/, it is still experimental and is used to optimize TensorFlow computations. It can improve execution speed, memory usage, and portability on server and mobile platforms. XLA offers two modes of operation: JIT (Just In Time) compilation and AoT (Ahead of Time) compilation. Using XLA, you can produce platform-specific binaries (for a large number of platforms such as x64, ARM, and so on) that are optimized for both memory and speed.

Getting ready

At present, XLA is not included in the binary distributions of TensorFlow, so you need to build it from source. Building TensorFlow from source requires some familiarity with LLVM and Bazel in addition to TensorFlow itself. TensorFlow.org supports building from source only on macOS and Ubuntu. The steps needed to build TensorFlow from source are as follows (https://www.tensorflow.org/install/install_sources):

  1. Determine which TensorFlow you want to install: TensorFlow with CPU support only, or TensorFlow with GPU support.
  2. Clone the TensorFlow repository:
git clone https://github.com/tensorflow/tensorflow 
cd tensorflow
git checkout Branch # where Branch is the desired branch
  3. Install the following dependencies:
    • Bazel
    • TensorFlow Python dependencies
    • For the GPU version, the NVIDIA packages that support TensorFlow
  4. Configure the installation. In this step, the configure script asks you to choose options such as XLA, CUDA support, Verbs, and so on:
./configure 
  5. Next, build the pip package builder with bazel. For the CPU-only version, use:
bazel build --config=opt //tensorflow/tools/pip_package:build_pip_package
  6. If you have a compatible GPU device and you want GPU support, use:
bazel build --config=opt --config=cuda //tensorflow/tools/pip_package:build_pip_package
  7. On a successful run, you will get a script, build_pip_package. Run this script as follows to build the whl file:
bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg

  8. Install the pip package (the exact wheel filename depends on your branch and Python version):
sudo pip install /tmp/tensorflow_pkg/tensorflow-1.1.0-py2-none-any.whl

Now you are ready to go.
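To verify the build, you can run the standard hello-world check from the TensorFlow install guide (a minimal sketch):

import tensorflow as tf

hello = tf.constant('Hello, TensorFlow!')
with tf.Session() as sess:
    print(sess.run(hello))  # prints Hello, TensorFlow!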

How to do it...

TensorFlow represents computations as TensorFlow graphs. With the help of XLA, it is possible to run TensorFlow graphs on new kinds of devices.

  1. JIT Compilation: To turn on JIT compilation at the session level:
# Config to turn on JIT compilation
config = tf.ConfigProto()
config.graph_options.optimizer_options.global_jit_level = tf.OptimizerOptions.ON_1

sess = tf.Session(config=config)
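A complete, runnable version of the same idea (a minimal sketch; the matrix size is arbitrary):

import tensorflow as tf

# Turn on JIT compilation for the whole session
config = tf.ConfigProto()
config.graph_options.optimizer_options.global_jit_level = tf.OptimizerOptions.ON_1

x = tf.random_normal([1024, 1024])
y = tf.matmul(x, x)

with tf.Session(config=config) as sess:
    print(sess.run(y)[0, :5])  # the matmul is eligible for XLA clustering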
  2. To turn on JIT compilation manually, use the experimental JIT scope:
import numpy as np
import tensorflow as tf

jit_scope = tf.contrib.compiler.jit.experimental_jit_scope

x = tf.placeholder(np.float32)
with jit_scope():
    y = tf.add(x, x)  # The "add" will be compiled with XLA.
  3. We can also run computations via XLA by placing the operator on a specific XLA device, XLA_CPU or XLA_GPU:
with tf.device("/job:localhost/replica:0/task:0/device:XLA_GPU:0"):
    output = tf.add(input1, input2)
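A complete version of the same idea (a sketch; input1 and input2 are placeholders introduced here for illustration, and allow_soft_placement lets TensorFlow fall back to an ordinary device if no XLA_GPU device is available):

import numpy as np
import tensorflow as tf

input1 = tf.placeholder(tf.float32, [10])
input2 = tf.placeholder(tf.float32, [10])
with tf.device("/device:XLA_GPU:0"):
    output = tf.add(input1, input2)

config = tf.ConfigProto(allow_soft_placement=True)
with tf.Session(config=config) as sess:
    print(sess.run(output, feed_dict={input1: np.ones(10),
                                      input2: np.ones(10)}))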

AoT Compilation: Here, we use tfcompile as a standalone tool to convert TensorFlow graphs into executable code for different (for example, mobile) devices.

TensorFlow.org says the following about tfcompile:

tfcompile takes a subgraph, identified by the TensorFlow concepts of feeds and fetches, and generates a function that implements that subgraph. The feeds are the input arguments for the function, and the fetches are the output arguments for the function. All inputs must be fully specified by the feeds; the resulting pruned subgraph cannot contain placeholder or variable nodes. It is common to specify all placeholders and variables as feeds, which ensures the resulting subgraph no longer contains these nodes. The generated function is packaged as a cc_library, with a header file exporting the function signature, and an object file containing the implementation. The user writes code to invoke the generated function as appropriate.
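The tfcompile tool itself is driven from a Bazel build (see the link below), but the requirement that the pruned subgraph contain no placeholder or variable nodes can be prepared from Python by freezing variables into constants. A minimal sketch, assuming a toy graph with one variable (the names x, w, and y are illustrative):

import tensorflow as tf
from tensorflow.python.framework import graph_util

x = tf.placeholder(tf.float32, [1, 4], name='x')  # feed
w = tf.Variable(tf.ones([4, 2]), name='w')
y = tf.matmul(x, w, name='y')                     # fetch

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # Fold the variable into a constant so the subgraph handed to
    # tfcompile contains no variable nodes.
    frozen = graph_util.convert_variables_to_constants(
        sess, sess.graph_def, output_node_names=['y'])
    tf.train.write_graph(frozen, '/tmp', 'frozen.pb', as_text=False)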

For the detailed steps, you can refer to https://www.tensorflow.org/performance/xla/tfcompile.
