Deep Learning with Theano

You're reading from Deep Learning with Theano: Perform large-scale numerical and scientific computations efficiently

Product type: Paperback
Published: Jul 2017
Publisher: Packt
ISBN-13: 9781786465825
Length: 300 pages
Edition: 1st Edition
Author: Christopher Bourez

Table of Contents (15 chapters)

Preface
1. Theano Basics
2. Classifying Handwritten Digits with a Feedforward Network
3. Encoding Word into Vector
4. Generating Text with a Recurrent Neural Net
5. Analyzing Sentiment with a Bidirectional LSTM
6. Locating with Spatial Transformer Networks
7. Classifying Images with Residual Networks
8. Translating and Explaining with Encoding – decoding Networks
9. Selecting Relevant Inputs or Memories with the Mechanism of Attention
10. Predicting Times Sequences with Advanced RNN
11. Learning from the Environment with Reinforcement
12. Learning Features with Unsupervised Generative Networks
13. Extending Deep Learning with Theano
Index

Memory networks


Answering questions or solving problems given a few facts or a story has led to the design of a new type of network: the memory network. In this case, the facts or the story are embedded into a memory bank, as if they were inputs. To solve tasks that require the facts to be ordered, or transitions between the facts to be drawn, memory networks use a recurrent reasoning process that runs in multiple steps, or hops, over the memory banks.

First, the query or question q is converted into a constant input embedding:

u = B q

Then, at each step t of the reasoning, the facts X required to answer the question are embedded into two memory banks, where the embedding coefficients are a function of the time step:

m_i^{(t)} = A^{(t)} x_i \qquad c_i^{(t)} = C^{(t)} x_i

The attention weights are computed by matching the question embedding against the first memory bank:

p_i^{(t)} = \mathrm{softmax}\left( u^{(t)\top} m_i^{(t)} \right)

And the output of the memory, selected with the attention, is read from the second bank:

o^{(t)} = \sum_i p_i^{(t)} c_i^{(t)}
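
To make the read step concrete, here is a minimal NumPy sketch of a single attention read over the two memory banks; all shapes, names, and the random toy values are illustrative assumptions, not the book's code:

import numpy as np

def softmax(z):
    e = np.exp(z - z.max())   # subtract the max for numerical stability
    return e / e.sum()

n, d = 10, 32                 # toy sizes: n facts in memory, embedding dimension d
u = np.random.randn(d)        # question embedding u^(t)
m = np.random.randn(n, d)     # input memory bank, one row per fact
c = np.random.randn(n, d)     # output memory bank, one row per fact

p = softmax(m @ u)            # attention weights: match of the question with each fact
o = p @ c                     # memory output: attention-weighted sum over the second bank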

The output at each reasoning time step is then combined with an identity connection, as seen previously, to improve the efficiency of the recurrence:

u^{(t+1)} = u^{(t)} + o^{(t)}

A linear layer and a classification softmax layer are added on top of the last step's state u^{(T)} to predict the answer:

\hat{a} = \mathrm{softmax}\left( W u^{(T)} \right)
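
Putting the steps together, the following is a minimal NumPy sketch of a full multi-hop forward pass; the bag-of-words fact encoding, the matrix names (B, A[t], C[t], W), and all sizes are illustrative assumptions rather than the book's implementation:

import numpy as np

def softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def memory_network_forward(q, X, B, A, C, W, hops=3):
    # q: (V,) bag-of-words question; X: (n, V) bag-of-words facts.
    # B: (d, V) question embedding; A[t], C[t]: (d, V) per-hop memory embeddings;
    # W: (n_answers, d) final classification layer.
    u = B @ q                        # constant input embedding of the question
    for t in range(hops):
        m = X @ A[t].T               # input memory bank  m_i^(t) = A^(t) x_i
        c = X @ C[t].T               # output memory bank c_i^(t) = C^(t) x_i
        p = softmax(m @ u)           # attention weights over the facts
        o = p @ c                    # memory output selected with the attention
        u = u + o                    # identity connection between hops
    return softmax(W @ u)            # distribution over candidate answers

# Toy usage with random parameters
V, d, n, hops, n_answers = 50, 20, 8, 3, 10
rng = np.random.default_rng(0)
q, X = rng.random(V), rng.random((n, V))
B = 0.1 * rng.standard_normal((d, V))
A = [0.1 * rng.standard_normal((d, V)) for _ in range(hops)]
C = [0.1 * rng.standard_normal((d, V)) for _ in range(hops)]
W = 0.1 * rng.standard_normal((n_answers, d))
print(memory_network_forward(q, X, B, A, C, W, hops))  # 10 answer probabilities

Since every operation in this pass is differentiable, training backpropagates a cross-entropy loss on the predicted answer distribution through all the hops at once.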

Episodic memory with dynamic memory...
