Working with matrices

Understanding how TensorFlow works with matrices is very important for understanding the flow of data through computational graphs.

It is worth emphasizing the importance of matrices in machine learning (and mathematics in general). Most machine learning algorithms are computationally expressed as matrix operations. This book does not cover the mathematical background on matrix properties and matrix algebra (linear algebra), so the reader is strongly encouraged to learn enough about matrices to be comfortable with matrix algebra.

Getting ready

Many algorithms depend on matrix operations. TensorFlow gives us easy-to-use operations to perform such matrix calculations. For all of the following examples, we first import NumPy and create a graph session by running the following code:

import tensorflow as tf 
import numpy as np 
sess = tf.Session() 
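If you are working with a TensorFlow 2.x installation instead, the 1.x session-style API used throughout this recipe is still available through the compatibility module. The following is only a minimal sketch of that alternative setup, not part of the original recipe:

import tensorflow.compat.v1 as tf 
import numpy as np 
tf.disable_eager_execution() 
sess = tf.Session() 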

How to do it...

We will proceed with the recipe as follows:

  1. Creating matrices: We can create two-dimensional matrices from NumPy arrays or nested lists, as we described in the Creating and using tensors recipe. We can also use the tensor creation functions and specify a two-dimensional shape for functions such as zeros(), ones(), truncated_normal(), and so on. TensorFlow also allows us to create a diagonal matrix from a one-dimensional array or list with the diag() function, as follows:
identity_matrix = tf.diag([1.0, 1.0, 1.0]) 
A = tf.truncated_normal([2, 3]) 
B = tf.fill([2,3], 5.0) 
C = tf.random_uniform([3,2]) 
D = tf.convert_to_tensor(np.array([[1., 2., 3.],[-3., -7., -1.],[0., 5., -2.]])) 
print(sess.run(identity_matrix)) 
[[ 1.  0.  0.] 
 [ 0.  1.  0.] 
 [ 0.  0.  1.]] 
print(sess.run(A)) 
[[ 0.96751703  0.11397751 -0.3438891 ] 
 [-0.10132604 -0.8432678   0.29810596]] 
print(sess.run(B)) 
[[ 5.  5.  5.] 
 [ 5.  5.  5.]] 
print(sess.run(C)) 
[[ 0.33184157  0.08907614] 
 [ 0.53189191  0.67605299] 
 [ 0.95889051 0.67061249]] 
print(sess.run(D)) 
[[ 1.  2.  3.] 
 [-3. -7. -1.] 
 [ 0.  5. -2.]] 
Note that if we were to run sess.run(C) again, the random values would be re-sampled and we would end up with different random values.
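If we want the random values to stay fixed across runs, one option (an illustrative sketch, not part of the original recipe) is to store the random tensor in a variable and initialize it once; subsequent calls to run() then return the same values:

C_fixed = tf.Variable(tf.random_uniform([3, 2])) 
sess.run(tf.global_variables_initializer()) 
print(sess.run(C_fixed))   # same values on every subsequent call 
print(sess.run(C_fixed)) 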
  2. Addition, subtraction, and multiplication: To add or subtract matrices of the same dimensions, we can use the standard + and - operators, as follows:
print(sess.run(A+B)) 
[[ 4.61596632  5.39771316  4.4325695 ] 
 [ 3.26702736  5.14477345  4.98265553]] 
print(sess.run(B-B)) 
[[ 0.  0.  0.] 
 [ 0.  0.  0.]] 
For matrix multiplication, we use the matmul() function:
print(sess.run(tf.matmul(B, identity_matrix))) 
[[ 5.  5.  5.] 
 [ 5.  5.  5.]] 

It is important to note that the matmul() function has arguments that specify whether to transpose the inputs before multiplication and whether each matrix is sparse.
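For example, the transpose_b argument lets us transpose the second input on the fly instead of materializing the transposed matrix first. Using the 2x3 matrices A and B defined earlier, the following sketch multiplies A by the transpose of B to produce a 2x2 result:

print(sess.run(tf.matmul(A, B, transpose_b=True)))   # B is transposed before the multiplication 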

Note that matrix division is not explicitly defined. While many define matrix division as multiplication by the inverse, it is fundamentally different from real-number division.
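Also note that the / operator (and tf.divide()) between two tensors performs element-wise division rather than any form of matrix division; a quick sketch:

print(sess.run(B / B)) 
[[ 1.  1.  1.] 
 [ 1.  1.  1.]] 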
  3. The transpose: Transpose a matrix (flip the columns and rows) as follows:
print(sess.run(tf.transpose(C))) 
[[ 0.67124544  0.26766731  0.99068872] 
 [ 0.25006068  0.86560275  0.58411312]] 

Again, it is worth mentioning that re-running the operation re-samples the random values, so we get different values than before.
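For tensors with more than two dimensions, tf.transpose() also accepts a perm argument that specifies the new ordering of the dimensions. The following is a small illustrative sketch (the tensor E is defined here only for this example):

E = tf.zeros([2, 3, 4]) 
print(sess.run(tf.shape(tf.transpose(E, perm=[1, 0, 2]))))   # swaps the first two dimensions 
[3 2 4] 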

  4. Determinant: To calculate the determinant, use the following:
print(sess.run(tf.matrix_determinant(D))) 
-38.0 
  5. Inverse: To find the inverse of a square matrix, see the following:
print(sess.run(tf.matrix_inverse(D))) 
[[-0.5        -0.5        -0.5       ] 
 [ 0.15789474  0.05263158  0.21052632] 
 [ 0.39473684  0.13157895  0.02631579]] 
The inverse method is based on the Cholesky decomposition only if the matrix is symmetric positive definite. If the matrix is not symmetric positive definite, then it is based on the LU decomposition.
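A quick way to sanity-check the inverse (an illustrative sketch, not part of the original recipe) is to multiply it back against the original matrix, which should yield the identity up to floating-point error. When the goal is to solve a linear system, tf.matrix_solve() does so directly without forming the inverse (the right-hand side b below is just an example):

print(sess.run(tf.matmul(tf.matrix_inverse(D), D)))   # approximately the 3x3 identity 
b = tf.constant([[1.0], [2.0], [3.0]], dtype=tf.float64) 
print(sess.run(tf.matrix_solve(D, b)))                # solves D x = b 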
  6. Decompositions: For the Cholesky decomposition, use the following:
print(sess.run(tf.cholesky(identity_matrix))) 
[[ 1.  0.  0.] 
 [ 0.  1.  0.] 
 [ 0.  0.  1.]] 
  7. Eigenvalues and eigenvectors: For eigenvalues and eigenvectors, use the following code:
print(sess.run(tf.self_adjoint_eig(D))) 
[[-10.65907521  -0.22750691   2.88658212] 
 [  0.21749542   0.63250104  -0.74339638] 
 [  0.84526515   0.2587998    0.46749277] 
 [ -0.4880805    0.73004459   0.47834331]] 

Note that the self_adjoint_eig() function outputs the eigenvalues in the first row and the eigenvectors in the remaining rows. In mathematics, this is known as the eigendecomposition of a matrix.
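As a sanity check (an illustrative sketch using a small symmetric matrix defined only for this example), the eigenvalues and eigenvectors returned by self_adjoint_eig() can be used to reconstruct the original matrix:

S = tf.constant([[2.0, 1.0], [1.0, 2.0]]) 
eigenvalues, eigenvectors = sess.run(tf.self_adjoint_eig(S)) 
print(eigenvalues)                                           # [1. 3.] 
print(np.dot(np.dot(eigenvectors, np.diag(eigenvalues)), eigenvectors.T))   # recovers S 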

How it works...

TensorFlow provides all the tools for us to get started with numerical computations and add such computations to our graphs. This notation might seem quite heavy for simple matrix operations. Remember that we are adding these operations to the graph and telling TensorFlow which tensors to run through those operations. While this might seem verbose now, it helps us understand the notation used in later chapters, when this way of computation will make it easier to accomplish our goals.
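For example, we can chain several of the operations above into a single expression and ask the session to evaluate only the final tensor; TensorFlow then runs exactly the operations needed to produce it. A small illustrative sketch using the matrix D defined earlier:

print(sess.run(tf.matmul(tf.matrix_inverse(D), tf.transpose(D)))) 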
