Working with constants, variables, and placeholders

TensorFlow, in the simplest terms, provides a library to define and perform different mathematical operations on tensors. A tensor is basically an n-dimensional array. All types of data, that is, scalars, vectors, and matrices, are special types of tensors:

| Types of data | Tensor | Shape |
| --- | --- | --- |
| Scalar | 0-D tensor | [] |
| Vector | 1-D tensor | [D0] |
| Matrix | 2-D tensor | [D0, D1] |
| Tensors | N-D tensor | [D0, D1, ..., Dn-1] |
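
To make the table concrete, here is a minimal sketch (not part of the recipe code) that builds a tensor of each rank with tf.constant() and inspects its shape:

import tensorflow as tf

scalar = tf.constant(3)                  # 0-D tensor, shape []
vector = tf.constant([1, 2, 3])          # 1-D tensor, shape [3]
matrix = tf.constant([[1, 2], [3, 4]])   # 2-D tensor, shape [2, 2]
print(scalar.shape, vector.shape, matrix.shape)   # (), (3,), (2, 2)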

TensorFlow supports three types of tensors:

  • Constants
  • Variables
  • Placeholders

Constants: Constants are tensors whose values cannot be changed.

Variables: We use variable tensors when the values require updating within a session. For example, in the case of neural networks, the weights need to be updated during the training session, which is achieved by declaring weights as variables. The variables need to be explicitly initialized before use. Another important thing to note is that constants are stored in the computation graph definition; they are loaded every time the graph is loaded. In other words, they are memory expensive. Variables, on the other hand, are stored separately; they can exist on the parameter server.

Placeholders: These are used to feed values into a TensorFlow graph. They are used along with feed_dict to feed the data, normally to supply new training examples while training a neural network. We assign a value to a placeholder while running the graph in a session. They allow us to create our operations and build the computation graph without requiring the data. An important point to note is that placeholders do not contain any data, so there is no need to initialize them.
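
To see the three types side by side, here is a minimal sketch (not from the recipe itself) that combines a constant, a variable, and a placeholder in one small graph:

import tensorflow as tf

c = tf.constant(2.0)              # constant: value fixed in the graph definition
w = tf.Variable(3.0)              # variable: must be initialized before use
x = tf.placeholder(tf.float32)    # placeholder: value supplied at run time

y = c * w * x

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())   # initialize all variables
    print(sess.run(y, feed_dict={x: 4.0}))         # prints 24.0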

How to do it...

Let's start with constants:

  1. We can declare a scalar constant:
t_1 = tf.constant(4)   
  2. A constant vector of shape [3] can be declared as follows:
t_2 = tf.constant([4, 3, 2]) 
  3. To create a tensor with all elements set to zero, we use tf.zeros(). This statement creates a zero matrix of shape [M,N] with dtype (int32, float32, and so on):
tf.zeros([M,N],tf.dtype)   

Let's take an example:

zero_t = tf.zeros([2,3],tf.int32) 
# Results in a 2×3 array of zeros: [[0 0 0], [0 0 0]]
  4. We can also create tensor constants of the same shape as an existing NumPy array or tensor constant as follows:
tf.zeros_like(t_2) 
# Creates a zero tensor of the same shape as t_2
tf.ones_like(t_2)
# Creates a ones tensor of the same shape as t_2
  5. We can create a tensor with all elements set to one; here, we create a ones matrix of shape [M,N]:
tf.ones([M,N],tf.dtype) 

Let's take an example:

ones_t = tf.ones([2,3],tf.int32) 
# Results in a 2×3 array of ones: [[1 1 1], [1 1 1]]

Let's proceed to sequences:

  6. We can generate a sequence of num evenly spaced values, starting at start and ending at stop:
tf.linspace(start, stop, num) 
Consecutive values differ by (stop-start)/(num-1). Let's take an example:
range_t = tf.linspace(2.0,5.0,5) 
# We get: [ 2. 2.75 3.5 4.25 5. ]
  7. Generate a sequence of numbers starting from start (default=0), incremented by delta (default=1), up to, but not including, limit:
tf.range(start,limit,delta) 

Here is an example:

range_t = tf.range(10) 
# Result: [0 1 2 3 4 5 6 7 8 9]

TensorFlow allows random tensors with different distributions to be created:

  8. To create random values from a normal distribution of shape [M,N], with mean (default=0.0), standard deviation (default=1.0), and a seed, we can use the following:
t_random = tf.random_normal([2,3], mean=2.0, stddev=4, seed=12) 

# Result: [[ 0.25347459 5.37990952 1.95276058], [-1.53760314 1.2588985 2.84780669]]
  9. To create random values from a truncated normal distribution of shape [M,N], with mean (default=0.0), standard deviation (default=1.0), and a seed, we can use the following:
t_random = tf.truncated_normal([1,5], stddev=2, seed=12) 
# Result: [[-0.8732627 1.68995488 -0.02361972 -1.76880157 -3.87749004]]
  10. To create random values from a uniform distribution of shape [M,N], in the range [minval (default=0), maxval), with a seed, do as follows:
t_random = tf.random_uniform([2,3], maxval=4, seed=12) 

# Result: [[ 2.54461002 3.69636583 2.70510912], [ 2.00850058 3.84459829 3.54268885]]
  11. To randomly crop a given tensor to a specified size, do as follows:
tf.random_crop(t_random, [2,5],seed=12) 

Here, t_random is an already defined tensor. This will result in a [2,5] tensor randomly cropped from tensor t_random.

Many times we need to present the training samples in random order; we can use tf.random_shuffle() to randomly shuffle a tensor along its first dimension. If t_random is the tensor we want to shuffle, then we use the following:

tf.random_shuffle(t_random) 
  12. Randomly generated tensors are affected by the value of the initial seed. To obtain the same random numbers in multiple runs or sessions, the seed should be set to a constant value. When there is a large number of random tensors in use, we can set the seed for all randomly generated tensors using tf.set_random_seed(); the following command sets the seed for random tensors in all sessions to 54:
tf.set_random_seed(54)  
The seed can only take an integer value.
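
As a quick check of this behavior, the following minimal sketch (an illustration, not from the book) runs the same random op in two separate sessions; with the graph-level seed set, both print the same values:

tf.set_random_seed(54)
r = tf.random_normal([2])
with tf.Session() as sess1:
    print(sess1.run(r))
with tf.Session() as sess2:
    print(sess2.run(r))   # same values as in the first session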

Let's now turn to variables:

  13. They are created using the Variable class. The definition of a variable also includes the constant/random values from which it should be initialized. In the following code, we create two different tensor variables, t_a and t_b. Both will be initialized from a random uniform distribution of shape [50, 50], with minval=0 and maxval=10:
rand_t = tf.random_uniform([50,50], 0, 10, seed=0) 
t_a = tf.Variable(rand_t)
t_b = tf.Variable(rand_t)
Variables are often used to represent weights and biases in a neural network.
  14. In the following code, we define two variables, weights and bias. The weights variable is randomly initialized using a normal distribution with mean zero and standard deviation two; the size of weights is 100×100. The bias consists of 100 elements, each initialized to zero. Here we have also used the optional argument name to give a name to the variable defined in the computational graph:
weights = tf.Variable(tf.random_normal([100,100],stddev=2)) 
bias = tf.Variable(tf.zeros([100]), name='biases')
  15. In all the preceding examples, the source of initialization for the variables is some constant. We can also specify that a variable is to be initialized from another variable; the following statement will initialize weight2 from the weights defined earlier:
weight2=tf.Variable(weights.initialized_value(), name='w2') 
  16. The definition of a variable specifies how it is to be initialized, but we must still explicitly initialize all declared variables. In the definition of the computational graph, we do this by declaring an initialization operation object:
initial_op = tf.global_variables_initializer() 
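
The initialization operation takes effect only when it is run inside a session; here is a minimal sketch using the variables declared above:

with tf.Session() as sess:
    sess.run(initial_op)     # runs the initializers of all declared variables
    print(sess.run(t_a))     # t_a now holds its initial random values
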
  17. Each variable can also be initialized separately using tf.Variable.initializer while running the graph:
bias = tf.Variable(tf.zeros([100,100]))
with tf.Session() as sess:
    sess.run(bias.initializer)
  18. Saving variables: We can save the variables using the Saver class. To do this, we define a saver operation object:
saver = tf.train.Saver()  
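
As a brief illustration (the checkpoint path here is hypothetical, not from the book), the saver object can then write the current variable values to disk and restore them in a later session:

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    saver.save(sess, "/tmp/model.ckpt")      # writes the variable values to a checkpoint

with tf.Session() as sess:
    saver.restore(sess, "/tmp/model.ckpt")   # restores them; no initializer needed
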
  19. After constants and variables, we come to the most important element: placeholders. They are used to feed data to the graph. We can define a placeholder using the following:
tf.placeholder(dtype, shape=None, name=None) 
  20. dtype specifies the data type of the placeholder and must be specified while declaring it. Here, we define a placeholder for x and calculate y = 2 * x, using feed_dict to feed a random 4×5 matrix:
x = tf.placeholder("float")
y = 2 * x
data = tf.random_uniform([4,5], 0, 10)
with tf.Session() as sess:
    x_data = sess.run(data)
    print(sess.run(y, feed_dict={x: x_data}))
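
The shape argument of a placeholder is optional; a common pattern, sketched below under the assumption of five-column input, is to leave the first dimension as None so that the same placeholder accepts batches of any size:

import numpy as np

x = tf.placeholder(tf.float32, shape=[None, 5])   # any number of rows, 5 columns
y = 2 * x
with tf.Session() as sess:
    print(sess.run(y, feed_dict={x: np.ones((3, 5))}))   # feed a 3×5 batch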

How it works...

All constants, variables, and placeholders will be defined in the computation graph section of the code. If we use the print statement in the definition section, we will only get information about the type of tensor, and not its value.

To find out the value, we need to create a session and explicitly use the run command, with the desired tensors as fetches:

print(sess.run(t_1))  
# Will print the value of t_1 defined in step 1
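
For instance, printing the tensor before running it shows only its metadata (the exact name may vary); this minimal sketch contrasts the two:

t_1 = tf.constant(4)
print(t_1)                 # Tensor("Const:0", shape=(), dtype=int32), metadata only
with tf.Session() as sess:
    print(sess.run(t_1))   # 4, the actual value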

There's more...

Very often, we will need constant tensor objects of a large size; in this case, to optimize memory, it is better to declare them as variables with the trainable flag set to False:

t_large = tf.Variable(large_array, trainable = False) 

TensorFlow was designed to work seamlessly with NumPy, hence all the TensorFlow data types are based on those of NumPy. Using tf.convert_to_tensor(), we can convert a given value to a tensor type and use it with TensorFlow functions and operators. This function accepts NumPy arrays, Python lists, and Python scalars, and allows interoperability with Tensor objects.
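
A short sketch of the conversion (not from the book's code):

import numpy as np

t_np = tf.convert_to_tensor(np.array([1.0, 2.0, 3.0]), dtype=tf.float64)  # NumPy array
t_list = tf.convert_to_tensor([1, 2, 3])                                  # Python list
t_scalar = tf.convert_to_tensor(5.0)                                      # Python scalar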

The following table lists some of the common TensorFlow supported data types (taken from TensorFlow.org):

| Data type | TensorFlow type |
| --- | --- |
| DT_FLOAT | tf.float32 |
| DT_DOUBLE | tf.float64 |
| DT_INT8 | tf.int8 |
| DT_UINT8 | tf.uint8 |
| DT_STRING | tf.string |
| DT_BOOL | tf.bool |
| DT_COMPLEX64 | tf.complex64 |
| DT_QINT32 | tf.qint32 |
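
These types can be passed to the creation operations seen earlier; for example (a small sketch, not from the recipe):

f = tf.constant(4, dtype=tf.float64)         # DT_DOUBLE
z = tf.zeros([2, 3], dtype=tf.int8)          # DT_INT8
s = tf.constant("hello", dtype=tf.string)    # DT_STRING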

Note that unlike Python/NumPy sequences, TensorFlow sequences are not iterable. Try the following code:

for i in tf.range(10): print(i)

You will get an error:

#TypeError("'Tensor' object is not iterable.")
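
To iterate over the actual values, one option (a sketch, not from the book) is to evaluate the tensor first; sess.run() returns a NumPy array, which is iterable:

with tf.Session() as sess:
    for i in sess.run(tf.range(10)):   # evaluate first, then iterate over the result
        print(i)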