Tensors
In Python, some scientific libraries such as NumPy provide multi-dimensional arrays. Theano doesn't replace NumPy, but it works in concert with it. In particular, NumPy is used for the initialization of tensors.
To perform the same computation on CPU and GPU alike, variables are symbolic: they are represented by the tensor class, an abstraction, and writing numerical expressions consists of building a computation graph of variable nodes and apply nodes. Depending on the platform on which the computation graph will be compiled, tensors are replaced by either of the following:
- A TensorType variable, which has to be on CPU
- A GpuArrayType variable, which has to be on GPU
That way, the code can be written independently of the platform where it will be executed.
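As a quick check of this abstraction (a minimal sketch; the output assumes the default CPU backend with floatX set to float32), the type attached to a variable can be inspected directly:
>>> import theano.tensor as T
>>> x = T.matrix('x')
>>> x.type
TensorType(float32, matrix)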
Here are a few tensor objects:
Object class | Number of dimensions | Example
---|---|---
scalar | 0-dimensional array | 1, 2.5
vector | 1-dimensional array | [0,3,20]
matrix | 2-dimensional array | [[2,3],[1,5]]
tensor3 | 3-dimensional array | [[[2,3],[1,5]],[[1,2],[3,4]]]
Playing with these Theano objects in the Python shell gives us a better idea:
>>> import theano.tensor as T
>>> T.scalar()
<TensorType(float32, scalar)>
>>> T.iscalar()
<TensorType(int32, scalar)>
>>> T.fscalar()
<TensorType(float32, scalar)>
>>> T.dscalar()
<TensorType(float64, scalar)>
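The higher-dimensional classes from the previous table work the same way (a quick sketch; the outputs again assume floatX set to float32):
>>> T.vector()
<TensorType(float32, vector)>
>>> T.matrix()
<TensorType(float32, matrix)>
>>> T.tensor3()
<TensorType(float32, 3D)>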
With i, l, f, or d in front of the object name, you instantiate a tensor of a given type: int32, int64, float32, or float64. For real-valued (floating point) data, it is advised to use the direct form T.scalar() instead of the f or d variants, since the direct form will use your current configuration for floats:
>>> theano.config.floatX = 'float64'
>>> T.scalar()
<TensorType(float64, scalar)>
>>> T.fscalar()
<TensorType(float32, scalar)>
>>> theano.config.floatX = 'float32'
>>> T.scalar()
<TensorType(float32, scalar)>
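The same configuration can also be provided before Theano is first imported, through the THEANO_FLAGS environment variable or the .theanorc file. Here is a minimal sketch; note that the flag takes effect only if it is set before the first import of theano in the process:
>>> import os
>>> os.environ['THEANO_FLAGS'] = 'floatX=float64'
>>> import theano.tensor as T
>>> T.scalar()
<TensorType(float64, scalar)>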
Symbolic variables do either of the following:
- Play the role of placeholders, as a starting point to build your graph of numerical operations (such as addition, multiplication): they receive the flow of the incoming data during the evaluation once the graph has been compiled
- Represent intermediate or output results
Symbolic variables and operations are both part of a computation graph that will be compiled either on CPU or GPU for fast execution. Let's write our first computation graph consisting of a simple addition:
>>> x = T.matrix('x')
>>> y = T.matrix('y')
>>> z = x + y
>>> theano.pp(z)
'(x + y)'
>>> z.eval({x: [[1, 2], [1, 3]], y: [[1, 0], [3, 4]]})
array([[ 2.,  2.],
       [ 4.,  7.]], dtype=float32)
First, two symbolic variables, or variable nodes, are created with the names x and y, and an addition operation, an apply node, is applied between both of them to create a new symbolic variable, z, in the computation graph.
The pretty print function, pp, prints the expression represented by Theano symbolic variables. The eval method evaluates the value of the output variable, z, when the first two variables, x and y, are initialized with two numerical 2-dimensional arrays.
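While eval is convenient for quick checks, a graph is usually compiled once with theano.function and the resulting callable is reused. A minimal sketch, reusing x, y, and z from above:
>>> f = theano.function([x, y], z)
>>> f([[1, 2], [1, 3]], [[1, 0], [3, 4]])
array([[ 2.,  2.],
       [ 4.,  7.]], dtype=float32)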
The following example shows the difference between the variables x and y, and their names x and y:
>>> a = T.matrix()
>>> b = T.matrix()
>>> theano.pp(a + b)
'(<TensorType(float32, matrix)> + <TensorType(float32, matrix)>)'
Without names, it is more complicated to trace the nodes in a large graph. When printing the computation graph, names significantly help diagnose problems, while variables are only used to handle the objects in the graph:
>>> x = T.matrix('x')
>>> x = x + x
>>> theano.pp(x)
'(x + x)'
Here, the original symbolic variable, named x, does not change and stays part of the computation graph. x + x creates a new symbolic variable that we assign to the Python variable x.
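To make this concrete, a small sketch: keeping a second Python reference to the original variable shows that both still exist as distinct nodes:
>>> x = T.matrix('x')
>>> x_orig = x
>>> x = x + x
>>> x is x_orig
False
>>> theano.pp(x)
'(x + x)'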
Note also that the plural form initializes multiple named tensors at the same time:
>>> x, y, z = T.matrices('x', 'y', 'z')
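Each of the returned variables carries the corresponding name, which can be verified (a quick sketch) through the name attribute:
>>> x.name, y.name, z.name
('x', 'y', 'z')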
Now, let's have a look at the different functions to display the graph.