Tensors

In Python, some scientific libraries such as NumPy provide multi-dimensional arrays. Theano doesn't replace NumPy; the two work in concert, and NumPy is used for the initialization of tensors.
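
For instance, a NumPy array can supply the concrete values that flow through a symbolic expression. Here is a minimal sketch (the eval method used to trigger the evaluation is introduced later in this section, and the float32 output assumes the default configuration):

>>> import numpy as np
>>> import theano
>>> import theano.tensor as T
>>> data = np.ones((2, 2), dtype=theano.config.floatX)  # NumPy initializes the values
>>> x = T.matrix('x')  # Theano declares the symbolic placeholder
>>> (x + 1).eval({x: data})
array([[ 2.,  2.],
       [ 2.,  2.]], dtype=float32)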

To perform the same computation on both CPU and GPU, variables are symbolic: they are represented by the tensor class, an abstraction, and writing numerical expressions consists of building a computation graph of variable nodes and apply nodes. Depending on the platform on which the computation graph will be compiled, tensors are replaced by either of the following:

  • A TensorType variable, which has to be on CPU
  • A GpuArrayType variable, which has to be on GPU

That way, code can be written independently of the platform on which it will be executed.
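
In practice, the target platform is selected through Theano's configuration at launch time, so the same unmodified script runs on either device. For example (assuming a working GPU backend is installed; script.py is a placeholder name):

THEANO_FLAGS=device=cpu python script.py
THEANO_FLAGS=device=cuda0,floatX=float32 python script.py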

Here are a few tensor objects:

Object class             Number of dimensions    Example
theano.tensor.scalar     0-dimensional array     1, 2.5
theano.tensor.vector     1-dimensional array     [0, 3, 20]
theano.tensor.matrix     2-dimensional array     [[2, 3], [1, 5]]
theano.tensor.tensor3    3-dimensional array     [[[2, 3], [1, 5]], [[1, 2], [3, 4]]]

Playing with these Theano objects in the Python shell gives us a better idea:

>>> import theano
>>> import theano.tensor as T

>>> T.scalar()
<TensorType(float32, scalar)>

>>> T.iscalar()
<TensorType(int32, scalar)>

>>> T.fscalar()
<TensorType(float32, scalar)>

>>> T.dscalar()
<TensorType(float64, scalar)>

With i, l, f, or d in front of the object name, you create a tensor of a given type: int32, int64, float32, or float64. For real-valued (floating-point) data, it is advised to use the plain form T.scalar() instead of the f or d variants, since the plain form follows your current configuration for floats:

>>> theano.config.floatX = 'float64'

>>> T.scalar()
<TensorType(float64, scalar)>

>>> T.fscalar()
<TensorType(float32, scalar)>

>>> theano.config.floatX = 'float32'

>>> T.scalar()
<TensorType(float32, scalar)>
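
The same type prefixes apply to the other tensor classes as well; for example (with floatX still set to float32):

>>> T.imatrix()
<TensorType(int32, matrix)>
>>> T.dvector()
<TensorType(float64, vector)>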

Symbolic variables do either of the following:

  • Play the role of placeholders, serving as a starting point to build your graph of numerical operations (such as addition or multiplication); they receive the flow of incoming data during evaluation, once the graph has been compiled
  • Represent intermediate or output results

Symbolic variables and operations are both part of a computation graph that will be compiled either on CPU or GPU for fast execution. Let's write our first computation graph consisting of a simple addition:

>>> x = T.matrix('x')

>>> y = T.matrix('y')

>>> z = x + y

>>> theano.pp(z)
'(x + y)'

>>> z.eval({x: [[1, 2], [1, 3]], y: [[1, 0], [3, 4]]})
array([[ 2.,  2.],
       [ 4.,  7.]], dtype=float32)

First, two symbolic variables, or variable nodes, are created, with the names x and y, and an addition operation, an apply node, is applied between both of them to create a new symbolic variable, z, in the computation graph.

The pretty-print function, pp, prints the expression represented by a Theano symbolic variable. The eval method computes the value of the output variable, z, once the first two variables, x and y, are initialized with two numerical 2-dimensional arrays.
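
The eval method is a convenience for quick experiments; the usual workflow compiles the graph once into a reusable callable. As a quick preview (theano.function is presented in detail later in the book):

>>> f = theano.function([x, y], z)
>>> f([[1, 2], [1, 3]], [[1, 0], [3, 4]])
array([[ 2.,  2.],
       [ 4.,  7.]], dtype=float32)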

The following example shows the difference between the variables x and y, and their names x and y:

>>> a = T.matrix()

>>> b = T.matrix()

>>> theano.pp(a + b)
'(<TensorType(float32, matrix)> + <TensorType(float32, matrix)>)'

Without names, it is more complicated to trace nodes in a large graph. When printing a computation graph, names significantly help diagnose problems, whereas the Python variables merely serve as handles on the graph objects:

>>> x = T.matrix('x')

>>> x = x + x

>>> theano.pp(x)
'(x + x)'

Here, the original symbolic variable, named x, does not change and stays part of the computation graph. x + x creates a new symbolic variable, which we then bind to the Python variable x.
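
A small check makes the distinction concrete: the new variable's apply node still references the original node. This sketch relies on the variable's owner and inputs graph attributes:

>>> a = T.matrix('a')
>>> b = a + a
>>> b.owner.inputs[0] is a
True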

Note also that the plural form initializes multiple tensors, together with their names, in a single call:

>>> x, y, z = T.matrices('x', 'y', 'z')
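
These variables behave exactly like individually declared ones; for instance:

>>> theano.pp(x + y + z)
'((x + y) + z)'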

Now, let's have a look at the different functions to display the graph.
