Deep Learning with Theano

Perform large-scale numerical and scientific computations efficiently

Product type: Paperback
Published: July 2017
Publisher: Packt
ISBN-13: 9781786465825
Length: 300 pages
Edition: 1st
Author: Christopher Bourez

Table of Contents (15 chapters)

Preface
1. Theano Basics
2. Classifying Handwritten Digits with a Feedforward Network
3. Encoding Word into Vector
4. Generating Text with a Recurrent Neural Net
5. Analyzing Sentiment with a Bidirectional LSTM
6. Locating with Spatial Transformer Networks
7. Classifying Images with Residual Networks
8. Translating and Explaining with Encoding-Decoding Networks
9. Selecting Relevant Inputs or Memories with the Mechanism of Attention
10. Predicting Time Sequences with Advanced RNN
11. Learning from the Environment with Reinforcement
12. Learning Features with Unsupervised Generative Networks
13. Extending Deep Learning with Theano
Index

What this book covers

Chapter 1, Theano Basics, helps the reader learn the main concepts of Theano, in order to write code that can compile on different hardware architectures and automatically optimize complex mathematical objective functions.
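
To give a flavor of what this looks like (a minimal sketch, not taken from the book), a symbolic expression is first declared, then compiled into an optimized callable:

```python
import theano
import theano.tensor as T

# Declare symbolic variables and build an expression graph
x = T.dscalar('x')
y = T.dscalar('y')
z = x ** 2 + y

# Compile the graph into a callable, optimized for the current hardware
f = theano.function(inputs=[x, y], outputs=z)

print(f(2.0, 1.0))  # 5.0
```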

Chapter 2, Classifying Handwritten Digits with a Feedforward Network, introduces a simple, well-known, historical example that provided the first proof of the superiority of deep learning algorithms. The initial problem was to recognize handwritten digits.

Chapter 3, Encoding Word into Vector, addresses one of the main challenges with neural nets: connecting real-world data, in particular categorical and discrete data, to the input of a neural net. This chapter presents an example of how to build an embedding space through training with Theano.

Such embeddings are very useful in machine translation, robotics, image captioning, and so on, because they translate real-world data into arrays of vectors that can be processed by neural nets.
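
As a rough illustration (a sketch with hypothetical sizes, not the book's actual model), an embedding in Theano is simply a shared matrix indexed by word ids:

```python
import numpy as np
import theano
import theano.tensor as T

vocab_size, emb_dim = 10000, 100  # hypothetical sizes

# Shared embedding matrix, updated during training
W = theano.shared(
    np.random.uniform(-0.1, 0.1, (vocab_size, emb_dim))
      .astype(theano.config.floatX),
    name='embeddings')

idxs = T.ivector('word_ids')   # a batch of word indices
vectors = W[idxs]              # each index selects one row (one word vector)

lookup = theano.function([idxs], vectors)
```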

Chapter 4, Generating Text with a Recurrent Neural Net, introduces recurrency in neural nets with a simple practical example: generating text.

Recurrent neural nets (RNNs) are a popular topic in deep learning, enabling more possibilities for sequence prediction, sequence generation, machine translation, and connected objects. Natural language processing (NLP) is a second field of interest that has driven the research for new machine learning techniques.
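
The core of recurrency in Theano is the scan operator, which applies a step function along a sequence. Here is a minimal sketch of a vanilla RNN step, with hypothetical sizes and no output layer:

```python
import numpy as np
import theano
import theano.tensor as T

floatX = theano.config.floatX
n_in, n_hidden = 50, 100  # hypothetical sizes

W = theano.shared(np.asarray(np.random.randn(n_in, n_hidden) * 0.01, dtype=floatX))
U = theano.shared(np.asarray(np.random.randn(n_hidden, n_hidden) * 0.01, dtype=floatX))

x_seq = T.matrix('x_seq')  # shape: (timesteps, n_in)

def step(x_t, h_prev):
    # The new hidden state depends on the current input and the previous state
    return T.tanh(T.dot(x_t, W) + T.dot(h_prev, U))

h_seq, _ = theano.scan(step,
                       sequences=x_seq,
                       outputs_info=T.zeros((n_hidden,)))

hidden_states = theano.function([x_seq], h_seq)
```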

Chapter 5, Analyzing Sentiment with a Bidirectional LSTM, applies embeddings and recurrent layers to a new natural language processing task, sentiment analysis. It acts as a kind of validation of the prior chapters.

At the same time, it demonstrates an alternative way to build neural nets on Theano, with the higher-level library Keras.
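
For a taste of that higher-level style (a hedged sketch with made-up layer sizes, not the book's exact model), a bidirectional LSTM sentiment classifier in Keras can be as short as:

```python
from keras.models import Sequential
from keras.layers import Embedding, LSTM, Bidirectional, Dense

model = Sequential()
model.add(Embedding(input_dim=10000, output_dim=100))  # word ids -> vectors
model.add(Bidirectional(LSTM(64)))                     # reads the sequence both ways
model.add(Dense(1, activation='sigmoid'))              # positive/negative sentiment

model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
```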

Chapter 6, Locating with Spatial Transformer Networks, applies recurrency to images in order to read multiple digits on a page at once. This time, we take the opportunity to rewrite the classification network for handwritten digit images, as well as our recurrent models, with the help of Lasagne, a library of built-in modules for deep learning with Theano.

The Lasagne library helps design neural networks and experiment with them faster. With its help, we'll address object localization, a common computer vision challenge, using Spatial Transformer modules to improve our classification scores.
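
As a hint of the style (a minimal sketch with hypothetical layer sizes), a small classifier in Lasagne is declared layer by layer and then compiled through Theano:

```python
import theano
import theano.tensor as T
import lasagne
from lasagne.layers import InputLayer, DenseLayer, get_output

x = T.matrix('x')
l_in = InputLayer(shape=(None, 784), input_var=x)   # e.g. flattened 28x28 digits
l_hid = DenseLayer(l_in, num_units=256,
                   nonlinearity=lasagne.nonlinearities.rectify)
l_out = DenseLayer(l_hid, num_units=10,
                   nonlinearity=lasagne.nonlinearities.softmax)

predict = theano.function([x], get_output(l_out))
```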

Chapter 7, Classifying Images with Residual Networks, classifies any type of image at the best accuracy. In the meantime, to build more complex nets with ease, we introduce Lasagne, a library based on the Theano framework with many already-implemented components that help implement neural nets for Theano faster.

Chapter 8, Translating and Explaining with Encoding-Decoding Networks, presents encoding-decoding techniques. Applied to text, these techniques are heavily used in machine translation and simple chatbot systems. Applied to images, they serve scene segmentation and object localization. Finally, image captioning is a mixed case, encoding images and decoding them to text.

This chapter goes one step further with a very popular high-level library, Keras, which simplifies the development of neural nets with Theano even more.

Chapter 9, Selecting Relevant Inputs or Memories with the Mechanism of Attention, shows how, to solve more complicated tasks, the machine learning world has been looking for a higher level of intelligence, inspired by nature: reasoning, attention, and memory. In this chapter, the reader will discover memory networks applied to a main goal of artificial intelligence for natural language processing (NLP): language understanding.

Chapter 10, Predicting Time Sequences with Advanced RNN, covers time sequences, an important field where machine learning has been used heavily. This chapter explores advanced techniques with recurrent neural networks (RNNs) to obtain state-of-the-art results.

Chapter 11, Learning from the Environment with Reinforcement, covers reinforcement learning, the vast area of machine learning that consists of training an agent to behave in an environment (such as a video game) so as to optimize a quantity (maximizing the game score), by performing certain actions in the environment (pressing buttons on the controller) and observing what happens.

The new paradigm of reinforcement learning opens a completely new path for designing algorithms and interactions between computers and the real world.
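
To make the loop concrete (a generic tabular Q-learning sketch, not the book's code), the agent updates its value estimates from each (state, action, reward, next state) observation:

```python
import numpy as np

n_states, n_actions = 16, 4   # hypothetical discrete environment
alpha, gamma = 0.1, 0.99      # learning rate and discount factor

Q = np.zeros((n_states, n_actions))

def q_update(state, action, reward, next_state):
    # Move Q(s, a) toward the observed reward plus the discounted
    # best value achievable from the next state
    target = reward + gamma * Q[next_state].max()
    Q[state, action] += alpha * (target - Q[state, action])
```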

Chapter 12, Learning Features with Unsupervised Generative Networks, covers unsupervised learning, which consists of new training algorithms that do not require the data to be labeled. These algorithms try to infer the hidden labels from the data, called the factors, and, for some of them, to generate new synthetic data.

Unsupervised training is very useful in many cases: when no labeling exists, when labeling the data with humans is too expensive, or when the dataset is too small and feature engineering would overfit the data. In this last case, extra amounts of unlabeled data train better features as a basis for supervised learning.

Chapter 13, Extending Deep Learning with Theano, extends the set of possibilities in deep learning with Theano. It addresses the ways to create new operators for the computation graph, either in Python for simplicity or in C to overcome the Python overhead, and either for the CPU or for the GPU. It also introduces the basic concepts of parallel programming for the GPU. Lastly, we open the field of general intelligence, building on the first skills developed in this book to develop new skills, gradually, improving one step further.
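
For a sense of the Python path (a minimal toy sketch in the style of Theano's documentation), a new operator only needs make_node and perform:

```python
import theano
import theano.tensor as T
from theano.gof import Op, Apply

class DoubleOp(Op):
    """Toy operator that multiplies its input by two."""
    __props__ = ()

    def make_node(self, x):
        x = T.as_tensor_variable(x)
        # The output has the same type as the input
        return Apply(self, [x], [x.type()])

    def perform(self, node, inputs, output_storage):
        (x,) = inputs
        output_storage[0][0] = 2 * x

x = T.matrix('x')
f = theano.function([x], DoubleOp()(x))
```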

Why Theano?

Investing time and development effort in Theano is very valuable, and to understand why, it is important to explain that Theano belongs among the best deep learning technologies and is also much more than a deep learning library. Three reasons make Theano a good choice of investment:

  • Its performance is comparable to that of other numerical or deep learning libraries
  • It comes with a rich Python ecosystem
  • It enables you to evaluate any function constrained by data, given a model, leaving you the freedom to compile a solution to any optimization problem

Let us first focus on the performance of the technology itself. The most popular libraries in deep learning are Theano (for Python), Torch (for Lua), TensorFlow (for Python), and Caffe (for C++, with a Python wrapper). There have been lots of benchmarks comparing deep learning technologies.

In Bastien et al. 2012 (Theano: new features and speed improvements, Frédéric Bastien, Pascal Lamblin, Razvan Pascanu, James Bergstra, Ian Goodfellow, Arnaud Bergeron, Nicolas Bouchard, David Warde-Farley, Yoshua Bengio, Nov 2012), Theano made significant progress in speed, but the comparison on different tasks does not point to a clear winner among the challenged technologies. Bahrampour et al. 2016 (Comparative Study of Deep Learning Software Frameworks, Soheil Bahrampour, Naveen Ramakrishnan, Lukas Schott, Mohak Shah, March 2016) concludes that:

  • For GPU-based deployment of trained convolutional and fully connected networks, Torch is best suited, followed by Theano
  • For GPU-based training of convolutional and fully connected networks, Theano is fastest for small networks and Torch is fastest for larger networks
  • For GPU-based training and deployment of recurrent networks (LSTM), Theano results in the best performance
  • For CPU-based training and deployment of any tested deep network architecture, Torch performs the best, followed by Theano

These results are confirmed by the open-source rnn-benchmarks (https://github.com/glample/rnn-benchmarks), where, for training (forward + backward passes), Theano outperforms Torch and TensorFlow. Theano also crushes Torch and TensorFlow for smaller batch sizes with larger numbers of hidden units. For bigger batch sizes and hidden layer sizes, the differences are smaller, since they rely more on the performance of CUDA, the underlying NVIDIA graphics library common to all frameworks. Lastly, in the up-to-date soumith benchmarks (https://github.com/soumith/convnet-benchmarks), the fftconv in Theano performs best on CPU, while the best-performing convolution implementations on GPU, cuda-convnet2 and fbfft, are extensions of CUDA, the underlying library. These results should convince the reader that, although results are mixed, Theano plays a leading role in the speed competition.

The second reason to prefer Theano over Torch is that it comes with a rich ecosystem, benefiting from the Python ecosystem but also from the large number of libraries that have been developed for Theano. This book will present two of them, Lasagne and Keras. Theano and Torch are the most extensible frameworks, both in terms of supporting various deep architectures and in terms of supported libraries. Lastly, Theano does not have a reputation for being complex to debug, contrary to other deep learning libraries.

The third point makes Theano an incomparable tool for the computer scientist, because it is not specific to deep learning. Although Theano presents the same methods for deep learning as other libraries, its underlying principles are very different: Theano compiles the computation graph for the target architecture. This compilation step is what makes Theano special, and it is best described as a mathematical expression compiler designed with machine learning in mind. Symbolic differentiation is one of the most useful features Theano offers for implementing non-standard deep architectures. Therefore, Theano is able to address a much larger range of numerical problems, and can be used to find the solution that minimizes any problem expressed with a differentiable loss or energy function, given an existing dataset.
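
For instance (a minimal sketch), Theano can symbolically differentiate any expression it compiles, which is what gradient-based optimization relies on:

```python
import theano
import theano.tensor as T

x = T.dscalar('x')
loss = x ** 2 + 3 * x            # any differentiable expression

grad = T.grad(loss, x)           # symbolic derivative: 2x + 3

f = theano.function([x], [loss, grad])
print(f(1.0))                    # [array(4.0), array(5.0)]
```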
