TensorFlow 1.x Deep Learning Cookbook

Invoking CPU/GPU devices

TensorFlow supports both CPUs and GPUs, as well as distributed computation, so we can use TensorFlow across multiple devices in one or more computer systems. TensorFlow names the supported devices "/device:CPU:0" (or "/cpu:0") for the CPU and "/device:GPU:N" (or "/gpu:N") for the Nth GPU device.
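
If you are not sure which devices are available on your machine, the device_lib utility that ships with TensorFlow 1.x can list them; the following is a minimal sketch:

from tensorflow.python.client import device_lib

# Print the name and type of every device TensorFlow can see,
# for example "/device:CPU:0" or "/device:GPU:0".
for dev in device_lib.list_local_devices():
    print(dev.name, dev.device_type)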

As mentioned earlier, GPUs are much faster than CPUs for parallel workloads because they have many small cores. However, using a GPU is not always faster for every type of computation: the overhead of launching kernels and moving data between the CPU and the GPU can outweigh the gains from parallel computation. To deal with this, TensorFlow lets you place a computation on a particular device. By default, if both a CPU and a GPU are present, TensorFlow gives priority to the GPU.
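
If you instead want everything to stay on the CPU even when a GPU is present, one option (a minimal sketch, not part of this recipe) is to hide the GPUs from the session through the device_count field of tf.ConfigProto:

import tensorflow as tf

# device_count limits how many devices of each type the session may use;
# setting 'GPU' to 0 hides all GPUs, so operations default to the CPU.
config = tf.ConfigProto(device_count={'GPU': 0})
sess = tf.Session(config=config)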

How to do it...

TensorFlow represents devices as strings. Here, we will show how to manually assign a device for matrix multiplication in TensorFlow. To verify that TensorFlow is indeed using the specified device (CPU or GPU), we create the session with the log_device_placement flag set to True, namely config=tf.ConfigProto(log_device_placement=True):

  1. If you are not sure about the device and want TensorFlow to choose an existing, supported device for you, set the allow_soft_placement flag to True:
config=tf.ConfigProto(allow_soft_placement=True, log_device_placement=True) 
  2. Manually select the CPU for the operation:
import tensorflow as tf

with tf.device('/cpu:0'):
    # Build the graph on the CPU: a random 50x50 matrix and its square.
    rand_t = tf.random_uniform([50, 50], 0, 10, dtype=tf.float32, seed=0)
    a = tf.Variable(rand_t)
    b = tf.Variable(rand_t)
    c = tf.matmul(a, b)
    init = tf.global_variables_initializer()

sess = tf.Session(config=config)  # config defined in step 1
sess.run(init)
print(sess.run(c))
  3. We get the device placement log as output; we can see that all the devices, in this case, are '/cpu:0'.

  4. Manually select a single GPU for the operation:
with tf.device('/gpu:0'):
    rand_t = tf.random_uniform([50, 50], 0, 10, dtype=tf.float32, seed=0)
    a = tf.Variable(rand_t)
    b = tf.Variable(rand_t)
    c = tf.matmul(a, b)
    init = tf.global_variables_initializer()

sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
sess.run(init)
print(sess.run(c))
  5. The output now changes accordingly: the '/cpu:0' after each operation is replaced by '/gpu:0'.
  6. Manually select multiple GPUs:
c = []
for d in ['/gpu:1', '/gpu:2']:
    with tf.device(d):
        rand_t = tf.random_uniform([50, 50], 0, 10, dtype=tf.float32, seed=0)
        a = tf.Variable(rand_t)
        b = tf.Variable(rand_t)
        c.append(tf.matmul(a, b))
init = tf.global_variables_initializer()

sess = tf.Session(config=tf.ConfigProto(allow_soft_placement=True, log_device_placement=True))
sess.run(init)
print(sess.run(c))
sess.close()
  7. In this case, if the system has three GPU devices, the first set of multiplications will be carried out on '/gpu:1' and the second set on '/gpu:2'.
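
A common follow-up, sketched here under the assumption that the two GPUs above are available and that c and init come from step 6, is to gather the per-GPU results on the CPU, for example by summing them with tf.add_n:

with tf.device('/cpu:0'):
    total = tf.add_n(c)   # element-wise sum of the two matmul results

sess = tf.Session(config=tf.ConfigProto(allow_soft_placement=True,
                                        log_device_placement=True))
sess.run(init)
print(sess.run(total))
sess.close()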

How it works...

The tf.device() argument selects the device (CPU or GPU), and the with block scopes that selection: all the variables, constants, and operations defined within the with block use the device selected in tf.device(). Session configuration is controlled using tf.ConfigProto. Setting the allow_soft_placement flag tells TensorFlow to automatically choose an available device if the specified device is not available, and setting log_device_placement makes TensorFlow print log messages describing how operations are allocated to devices while the session executes.
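
As an illustration (a minimal sketch, assuming the machine has no '/gpu:3'), soft placement lets the following code run even though the requested device does not exist, because TensorFlow falls back to an available device and logs where each operation ended up:

import tensorflow as tf

# Request a GPU index that may not exist on this machine.
with tf.device('/gpu:3'):
    x = tf.constant([[1.0, 2.0], [3.0, 4.0]])
    y = tf.matmul(x, x)

# Without allow_soft_placement=True, running this graph would fail because
# the requested device cannot be satisfied; with it, TensorFlow picks an
# available device and log_device_placement prints the final placement.
config = tf.ConfigProto(allow_soft_placement=True, log_device_placement=True)
with tf.Session(config=config) as sess:
    print(sess.run(y))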
