Machine Learning Using TensorFlow Cookbook

You're reading from Machine Learning Using TensorFlow Cookbook: Create powerful machine learning algorithms with TensorFlow

Product type: Paperback
Published in: Feb 2021
Publisher: Packt
ISBN-13: 9781800208865
Length: 416 pages
Edition: 1st Edition
Authors (3): Konrad Banachewicz, Luca Massaron, Alexia Audevart

Table of Contents (15 chapters)

Preface
1. Getting Started with TensorFlow 2.x
2. The TensorFlow Way
3. Keras
4. Linear Regression
5. Boosted Trees
6. Neural Networks
7. Predicting with Tabular Data
8. Convolutional Neural Networks
9. Recurrent Neural Networks
10. Transformers
11. Reinforcement Learning with TensorFlow and TF-Agents
12. Taking TensorFlow to Production
13. Other Books You May Enjoy
14. Index

Layering nested operations

In this recipe, we'll learn how to put multiple operations to work together; knowing how to chain operations is important because chained operations form the layered computations executed by our network. We will multiply our input data by two matrices and then perform an addition. The input consists of two matrices, fed in as a three-dimensional NumPy array.

This is another easy-peasy recipe to give you ideas about how to code in TensorFlow using common constructs such as functions or classes, improving readability and code modularity. Even if the final product is a neural network, we're still writing a computer program, and we should abide by programming best practices.

Getting ready

As usual, we just need to import TensorFlow and NumPy, as follows:

import tensorflow as tf
import numpy as np 

We're now ready to move forward with our recipe.

How to do it...

We will feed in two NumPy arrays of size 3 x 5. We will multiply each matrix by a constant of size 5 x 1, which will result in a matrix of size 3 x 1. We will then multiply this by a 1 x 1 constant, resulting in a 3 x 1 matrix again. Finally, we add a 1 x 1 constant, which broadcasts across the 3 x 1 result, as follows:

  1. First, we create the data to feed in and the corresponding variable:
    my_array = np.array([[1., 3., 5., 7., 9.], 
                         [-2., 0., 2., 4., 6.], 
                         [-6., -3., 0., 3., 6.]]) 
    x_vals = np.array([my_array, my_array + 1])
    x_data = tf.Variable(x_vals, dtype=tf.float32)
    
  2. Next, we create the constants that we will use for matrix multiplication and addition:
    m1 = tf.constant([[1.], [0.], [-1.], [2.], [4.]]) 
    m2 = tf.constant([[2.]]) 
    a1 = tf.constant([[10.]]) 
    
  3. Now, we declare the operations to be eagerly executed. As good practice, we create functions that execute the operations we need:
    def prod1(a, b):
        return tf.matmul(a, b)
    def prod2(a, b):
        return tf.matmul(a, b) 
    def add1(a, b):
        return tf.add(a, b)
    
  4. Finally, we nest our functions and display the result:
    result = add1(prod2(prod1(x_data, m1), m2), a1)
    print(result.numpy()) 
    [[ 102.] 
     [  66.] 
     [  58.]] 
    [[ 114.] 
     [  78.] 
     [  70.]] 
    
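As a quick sanity check (a minimal sketch using plain NumPy, not part of the original recipe), we can reproduce the same nested computation outside TensorFlow and confirm the printed values:

```python
import numpy as np

# Same data and constants as in the recipe
my_array = np.array([[1., 3., 5., 7., 9.],
                     [-2., 0., 2., 4., 6.],
                     [-6., -3., 0., 3., 6.]])
x_vals = np.array([my_array, my_array + 1])     # shape (2, 3, 5)
m1 = np.array([[1.], [0.], [-1.], [2.], [4.]])  # shape (5, 1)

# (2, 3, 5) @ (5, 1) -> (2, 3, 1); then scale by 2 and shift by 10
check = x_vals @ m1 * 2. + 10.
print(check[0].ravel())  # [102. 66. 58.]
print(check[1].ravel())  # [114. 78. 70.]
```

Because NumPy's `@` operator batches the matrix multiplication over the leading dimension, both 3 x 5 matrices pass through the same chain of operations in one call.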

Using functions (and classes, as we will cover shortly) will help you write clearer code, which makes debugging more effective and allows for easier maintenance and reuse.

How it works...

Thanks to eager execution, there's no longer a need to resort to the "kitchen sink" programming style, meaning that you put almost everything in the global scope of the program (see https://stackoverflow.com/questions/33779296/what-is-exact-meaning-of-kitchen-sink-in-programming), which was so common when using TensorFlow 1.x. You can now adopt either a functional programming style or an object-oriented one, such as the one we present in this brief example, where you can arrange all your operations and computations in a more logical and understandable way:

class Operations:  
    def __init__(self, a):
        self.result = a
    def apply(self, func, b):
        self.result = func(self.result, b)
        return self
        
operation = (Operations(a=x_data)
             .apply(prod1, b=m1)
             .apply(prod2, b=m2)
             .apply(add1, b=a1))
print(operation.result.numpy())

Classes can help you organize your code and reuse it better than functions, thanks to class inheritance.
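To illustrate that inheritance point with a minimal sketch (the subclass name and its `scale` method are hypothetical, introduced here only for demonstration), a child class reuses `apply` for free while adding its own chainable step:

```python
import tensorflow as tf

class Operations:
    def __init__(self, a):
        self.result = a
    def apply(self, func, b):
        self.result = func(self.result, b)
        return self

# Hypothetical subclass: inherits apply() unchanged and adds a scaling step
class ScalingOperations(Operations):
    def scale(self, factor):
        self.result = self.result * factor
        return self

x = tf.constant([[1.], [2.]])
out = (ScalingOperations(a=x)
       .apply(tf.add, b=tf.constant([[1.]]))  # [[2.], [3.]]
       .scale(10.))                           # [[20.], [30.]]
print(out.result.numpy())
```

Every method returns `self`, so inherited and new steps chain interchangeably in one fluent expression.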

There's more...

In all the examples in this recipe, we've had to declare the data shape and know the output shape of the operations before running the data through them. This is not always the case: there may be a dimension or two that we don't know beforehand, or that can vary during data processing. To account for this, we designate the dimension or dimensions that can vary (or are unknown) as None.

For example, to initialize a variable with an unknown number of rows, we write the following lines; we can then assign it values with any number of rows:

v = tf.Variable(initial_value=tf.random.normal(shape=(1, 5)),
                shape=tf.TensorShape((None, 5)))
v.assign(tf.random.normal(shape=(10, 5)))

It is fine for matrix multiplication to have flexible rows because that won't affect the arrangement of our operations. This will come in handy in later chapters when we are feeding data in multiple batches of varying batch sizes.
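The batch-feeding idea can be sketched as follows (a minimal example reusing m1 from this recipe; the specific batch sizes are arbitrary):

```python
import tensorflow as tf

# Variable whose row count is left flexible via None
v = tf.Variable(initial_value=tf.zeros(shape=(1, 5)),
                shape=tf.TensorShape((None, 5)))
m1 = tf.constant([[1.], [0.], [-1.], [2.], [4.]])

shapes = []
for batch_size in (3, 7):
    v.assign(tf.ones(shape=(batch_size, 5)))
    out = tf.matmul(v, m1)          # row count flows through unchanged
    shapes.append(tuple(out.shape))
print(shapes)  # [(3, 1), (7, 1)]
```

The same `tf.matmul` handles both batches because only the inner dimension (5) needs to match the constant.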

While using None as a dimension allows us to work with variably sized dimensions, I always recommend being as explicit as possible when filling out dimensions. If the size of our data is known in advance, then we should explicitly declare that size. I recommend limiting the use of None to the batch dimension of the data (that is, however many data points we are computing on at once).
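Conversely, a fully specified shape gives you an early error when the data doesn't match; this small sketch (assuming TensorFlow 2.x validation behavior) shows the rejection:

```python
import tensorflow as tf

w = tf.Variable(tf.zeros((3, 5)))  # fully specified shape (3, 5)
try:
    w.assign(tf.zeros((10, 5)))    # row count differs: rejected
except ValueError as err:
    print("assignment rejected:", err)
```

Catching a shape mismatch at assignment time, rather than deep inside a chain of operations, is exactly what being explicit about dimensions buys you.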
