Operations on tensors

We have seen how to create a computation graph composed of symbolic variables and operations, and how to compile the resulting expression for evaluation or as a function, on either CPU or GPU.

Since tensors are central to deep learning, Theano provides many operators to work with them. Most operators found in scientific computing libraries for numerical arrays, such as NumPy, have their equivalent in Theano under a similar name, to be familiar to NumPy users. But unlike NumPy, expressions written with Theano can be compiled for either CPU or GPU.

This is the case, for example, for the tensor creation operators:

  • The T.zeros(), T.ones(), and T.eye() operators take a shape tuple as input
  • T.zeros_like(), T.ones_like(), and T.identity_like() use the shape of their tensor argument
  • T.arange(), T.mgrid(), and T.ogrid() are used for range and mesh grid arrays

Let's have a look in the Python shell:

>>> a = T.zeros((2,3))

>>> a.eval()
array([[ 0.,  0.,  0.],
       [ 0.,  0.,  0.]])

>>> b = T.identity_like(a)

>>> b.eval()
array([[ 1.,  0.,  0.],
       [ 0.,  1.,  0.]])

>>> c = T.arange(10)

>>> c.eval()
array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
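
T.mgrid is worth a quick sketch as well; it follows NumPy's mgrid slicing syntax, and the outputs shown here assume the NumPy-like integer behavior:

>>> g = T.mgrid[0:3, 0:2]

>>> g[0].eval()
array([[0, 0],
       [1, 1],
       [2, 2]])

>>> g[1].eval()
array([[0, 1],
       [0, 1],
       [0, 1]])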

Information such as the number of dimensions, ndim, and the type, dtype, is defined at tensor creation and cannot be modified later:

>>> c.ndim
1

>>> c.dtype
'int64'

>>> c.type
TensorType(int64, vector)

Some other information, such as shape, is evaluated by the computation graph:

>>> a = T.matrix()

>>> a.shape
Shape.0

>>> a.shape.eval({a: [[1, 2], [1, 3]]})
array([2, 2])

>>> shape_fct = theano.function([a],a.shape)

>>> shape_fct([[1, 2], [1, 3]])
array([2, 2])

>>> n = T.iscalar()

>>> c = T.arange(n)

>>> c.shape.eval({n:10})
array([10])

Dimension manipulation operators

The first type of operator on tensors is for dimension manipulation. These operators take a tensor as input and return a new tensor:

Operator | Description
--- | ---
T.reshape | Reshape the tensor
T.fill | Fill the array with the same value
T.flatten | Return all elements in a 1-dimensional tensor (vector)
T.dimshuffle | Change the order of the dimensions, much like NumPy's transpose method; the main difference is that it can also be used to add or remove broadcastable dimensions (of length 1). See the sketch at the end of this section.
T.squeeze | Reshape by removing dimensions equal to 1
T.transpose | Transpose
T.swapaxes | Swap dimensions
T.sort, T.argsort | Sort the tensor, or return the indices of the sorted order

For example, the reshape operation's output represents a new tensor, containing the same elements in the same order but in a different shape:

>>> a = T.arange(10)

>>> b = T.reshape( a, (5,2) )

>>> b.eval()
array([[0, 1],
       [2, 3], 
       [4, 5],
       [6, 7],
       [8, 9]])

The operators can be chained:

>>> T.arange(10).reshape((5,2))[::-1].T.eval()
array([[8, 6, 4, 2, 0],
       [9, 7, 5, 3, 1]])

Notice the use of Python's traditional [::-1] slice indexing, and .T as a shorthand for T.transpose.
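
Since T.dimshuffle is the least NumPy-like of these operators, here is a minimal sketch of permuting axes and adding a broadcastable dimension (dimshuffle is used as a method on the tensor variable):

>>> a = T.arange(6).reshape((2,3))

>>> a.dimshuffle(1, 0).shape.eval()
array([3, 2])

>>> a.dimshuffle(0, 'x', 1).shape.eval()
array([2, 1, 3])

The 'x' entry inserts a new broadcastable dimension of length 1, which is handy for preparing tensors for broadcasting.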

Elementwise operators

The second type of operation on multi-dimensional arrays is the elementwise operator.

The first category of elementwise operations takes two input tensors of the same dimensions and applies a function, f, elementwise, meaning to all pairs of elements with the same coordinates in their respective tensors: f([a,b], [c,d]) = [f(a,c), f(b,d)]. For example, here's multiplication:

>>> a, b = T.matrices('a', 'b')

>>> z = a * b

>>> z.eval({a:numpy.ones((2,2)).astype(theano.config.floatX), b:numpy.diag((3,3)).astype(theano.config.floatX)})
array([[ 3.,  0.],
       [ 0.,  3.]])

The same multiplication can be written as follows:

>>> z = T.mul(a, b)

T.add and T.mul accept an arbitrary number of inputs:

>>> z = T.mul(a, b, a, b)

Some elementwise operators accept only one input tensor (f([a,b]) = [f(a), f(b)]):

>>> a = T.matrix()

>>> z = a ** 2 

>>> z.eval({a:numpy.diag((3,3)).astype(theano.config.floatX)})
array([[ 9.,  0.], 
       [ 0.,  9.]])

Lastly, I would like to introduce the mechanism of broadcasting. When the input tensors do not have the same number of dimensions, the missing dimensions are broadcast: the tensor is repeated along those dimensions to match the shape of the other tensor. For example, when combining a multi-dimensional tensor with a scalar (0-dimensional) tensor, the scalar is repeated in an array of the same shape as the multi-dimensional tensor, so that the final shapes match and the elementwise operation can be applied: f([a,b], c) = [f(a,c), f(b,c)]:

>>> a = T.matrix()

>>> b = T.scalar()

>>> z = a * b

>>> z.eval({a:numpy.diag((3,3)).astype(theano.config.floatX),b:3})
array([[ 6.,  0.],
       [ 0.,  6.]])
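
Broadcasting is not limited to scalars: a vector combined with a matrix is repeated along the matrix's rows, following the same NumPy-style rules. A short sketch, reusing a from above and assuming floatX is float32:

>>> r = T.vector()

>>> z = a + r

>>> z.eval({a:numpy.zeros((2,2)).astype(theano.config.floatX), r:numpy.array([1,2]).astype(theano.config.floatX)})
array([[ 1.,  2.],
       [ 1.,  2.]], dtype=float32)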

Here is a list of elementwise operations:

Operator | Other form | Description
--- | --- | ---
T.add, T.sub, T.mul, T.truediv | +, -, *, / | Add, subtract, multiply, divide
T.pow, T.sqrt | ** | Power, square root
T.exp, T.log |  | Exponential, logarithm
T.cos, T.sin, T.tan |  | Cosine, sine, tangent
T.cosh, T.sinh, T.tanh |  | Hyperbolic trigonometric functions
T.intdiv, T.mod | //, % | Integer division, modulus
T.floor, T.ceil, T.round |  | Rounding operators
T.sgn |  | Sign
T.and_, T.xor, T.or_, T.invert | &, ^, \|, ~ | Bitwise operators
T.gt, T.lt, T.ge, T.le | >, <, >=, <= | Comparison operators
T.eq, T.neq, T.isclose |  | Equality, inequality, or closeness within a tolerance
T.isnan |  | Comparison with NaN (not a number)
T.abs_ |  | Absolute value
T.minimum, T.maximum |  | Elementwise minimum and maximum
T.clip |  | Clip the values between a minimum and a maximum
T.switch |  | Switch
T.cast |  | Tensor type casting

The elementwise operators always return an array with the same size as the input array. T.switch and T.clip accept three inputs.
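
For instance, T.clip bounds every element between a minimum and a maximum value; a short sketch, assuming floatX is float32:

>>> a = T.vector()

>>> T.clip(a, 0, 1).eval({a:numpy.array([-3, 0.5, 2]).astype(theano.config.floatX)})
array([ 0. ,  0.5,  1. ], dtype=float32)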

In particular, T.switch will perform the traditional switch operator elementwise:

>>> cond = T.vector('cond')

>>> x,y = T.vectors('x','y')

>>> z = T.switch(cond, x, y)

>>> z.eval({ cond:[1,0], x:[10,10], y:[3,2] })
array([ 10.,   2.], dtype=float32)

At each position where the cond tensor is true, the result takes the x value; otherwise, where it is false, it takes the y value.

For the T.switch operator, there is a specific equivalent, ifelse, that takes a scalar condition instead of a tensor condition. It is not an elementwise operation, though, and it supports lazy evaluation (not all elements are computed if the answer is known before the end):

>>> from theano.ifelse import ifelse

>>> z = ifelse(1, 5, 4)

>>> z.eval()
array(5, dtype=int8)

Reduction operators

Another type of operation on tensors is the reduction, which in most cases reduces all elements to a scalar value; to compute the output, all the elements of the tensor have to be scanned:

Operator | Description
--- | ---
T.max, T.argmax, T.max_and_argmax | Maximum, index of the maximum
T.min, T.argmin | Minimum, index of the minimum
T.sum, T.prod | Sum or product of the elements
T.mean, T.var, T.std | Mean, variance, and standard deviation
T.all, T.any | AND and OR operations across all elements
T.ptp | Range of the elements (maximum minus minimum)

These operations are also available row-wise or column-wise by specifying an axis, the dimension along which the reduction is performed:

>>> a = T.matrix('a')

>>> T.max(a).eval({a:[[1,2],[3,4]]})
array(4.0, dtype=float32)

>>> T.max(a,axis=0).eval({a:[[1,2],[3,4]]})
array([ 3.,  4.], dtype=float32)

>>> T.max(a,axis=1).eval({a:[[1,2],[3,4]]})
array([ 2.,  4.], dtype=float32)
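
The other reductions follow the same pattern; for example, the column-wise sum and the overall mean on the same matrix (a quick sketch, assuming floatX is float32):

>>> T.sum(a, axis=0).eval({a:[[1,2],[3,4]]})
array([ 4.,  6.], dtype=float32)

>>> T.mean(a).eval({a:[[1,2],[3,4]]})
array(2.5, dtype=float32)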

Linear algebra operators

A third category of operations is the linear algebra operators, such as matrix multiplication:

$C = A B$, with $C_{ij} = \sum_k A_{ik} B_{kj}$

also called the inner product in the case of vectors:

$\langle a, b \rangle = \sum_i a_i b_i$

Operator | Description
--- | ---
T.dot | Matrix multiplication/inner product
T.outer | Outer product

Generalized versions of these operators exist (T.tensordot, which lets you specify the axes), as well as batched versions (T.batched_dot, T.batched_tensordot).
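
As a quick illustration, T.dot computes the inner product of two vectors and the matrix product of two matrices (a short sketch, assuming floatX is float32):

>>> x, y = T.vectors('x', 'y')

>>> T.dot(x, y).eval({x:[1,2], y:[3,4]})
array(11.0, dtype=float32)

>>> m = T.matrix('m')

>>> T.dot(m, m).eval({m:[[1,1],[0,1]]})
array([[ 1.,  2.],
       [ 0.,  1.]], dtype=float32)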

Lastly, a few remaining operators can be very useful, although they do not belong to any of the previous categories: T.concatenate concatenates the tensors along the specified dimension, T.stack stacks the input tensors along a new dimension, and T.stacklists stacks nested lists of tensors into a single tensor:

>>> a = T.arange(10).reshape((5,2))

>>> b = a[::-1]

>>> b.eval()
array([[8, 9],
       [6, 7],
       [4, 5],
       [2, 3],
       [0, 1]])
>>> a.eval()
array([[0, 1],
       [2, 3],
       [4, 5],
       [6, 7],
       [8, 9]])
>>> T.concatenate([a,b]).eval()
array([[0, 1],
       [2, 3],
       [4, 5],
       [6, 7],
       [8, 9],
       [8, 9],
       [6, 7],
       [4, 5],
       [2, 3],
       [0, 1]])
>>> T.concatenate([a,b],axis=1).eval()
array([[0, 1, 8, 9],
       [2, 3, 6, 7],
       [4, 5, 4, 5],
       [6, 7, 2, 3],
       [8, 9, 0, 1]])

>>> T.stack([a,b]).eval()
array([[[0, 1],
        [2, 3],
        [4, 5],
        [6, 7],
        [8, 9]],
       [[8, 9],
        [6, 7],
        [4, 5],
        [2, 3],
        [0, 1]]])

An equivalent of the NumPy expressions a[5:] = 5 and a[5:] += 5 exists in the form of two functions, T.set_subtensor and T.inc_subtensor:

>>> a.eval()
array([[0, 1],
       [2, 3],
       [4, 5],
       [6, 7],
       [8, 9]])

>>> T.set_subtensor(a[3:], [-1,-1]).eval()
array([[ 0,  1],
       [ 2,  3],
       [ 4,  5],
       [-1, -1],
       [-1, -1]])

>>> T.inc_subtensor(a[3:], [-1,-1]).eval()
array([[0, 1],
       [2, 3],
       [4, 5],
       [5, 6],
       [7, 8]])

Unlike NumPy's syntax, the original tensor is not modified; instead, a new symbolic variable is created to represent the result of the modification. The original variable a therefore still refers to the original values, while the returned variable represents the updated ones, and the rest of the computation should use this new variable.
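
To make this concrete, here is a short sketch continuing the example above, assigning the result to a new variable and checking that a is left untouched:

>>> a2 = T.set_subtensor(a[3:], [-1,-1])

>>> a2.eval()
array([[ 0,  1],
       [ 2,  3],
       [ 4,  5],
       [-1, -1],
       [-1, -1]])

>>> a.eval()
array([[0, 1],
       [2, 3],
       [4, 5],
       [6, 7],
       [8, 9]])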
