Deep Learning with TensorFlow - Second Edition

Product type: Book
Published: March 2018
Publisher: Packt
ISBN-13: 9781788831109
Pages: 484
Edition: 2nd
Authors (2): Giancarlo Zaccone, Md. Rezaul Karim
Table of Contents (15 chapters)

Deep Learning with TensorFlow - Second Edition
Contributors
Preface
Other Books You May Enjoy
1. Getting Started with Deep Learning
2. A First Look at TensorFlow
3. Feed-Forward Neural Networks with TensorFlow
4. Convolutional Neural Networks
5. Optimizing TensorFlow Autoencoders
6. Recurrent Neural Networks
7. Heterogeneous and Distributed Computing
8. Advanced TensorFlow Programming
9. Recommendation Systems Using Factorization Machines
10. Reinforcement Learning
Index

Working principles of RNNs


In this section, we will first provide some context for RNNs. We will then look at some potential drawbacks of the classical RNN. Finally, we will see an improved variant of the RNN, called LSTM, that addresses these drawbacks.

Human beings do not start thinking from scratch. The human mind has a so-called persistence of memory: the ability to associate past information with what it perceives in the present. Traditional neural networks, by contrast, ignore past events. Take a movie scene classifier as an example: there is no way for a traditional neural network to use earlier scenes to classify the current one. RNNs were developed to try to solve this problem.

Figure 1: RNNs have loops

In contrast to conventional neural networks, an RNN is a network with a loop that allows information to persist. In the preceding diagram, at some time t, the network A receives the input x_t and outputs a value h_t; the loop passes information from one step of the network to the next. So, in the preceding figure, we think of an RNN as multiple copies of...
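Concretely, the loop in Figure 1 implements the classic recurrence h_t = tanh(W_xh · x_t + W_hh · h_{t-1} + b), in which the same weight matrices are reused at every time step. The following is a minimal NumPy sketch of that recurrence, not the book's own code; the names (rnn_step, W_xh, W_hh) and the dimensions are illustrative assumptions.

import numpy as np

def rnn_step(x_t, h_prev, W_xh, W_hh, b):
    # One vanilla RNN step: mix the current input with the previous
    # hidden state, then squash the result through tanh.
    return np.tanh(x_t @ W_xh + h_prev @ W_hh + b)

# Illustrative sizes: 8-dimensional inputs, 16-dimensional hidden state.
rng = np.random.default_rng(0)
input_dim, hidden_dim, seq_len = 8, 16, 5

W_xh = rng.normal(scale=0.1, size=(input_dim, hidden_dim))   # input-to-hidden weights
W_hh = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))  # hidden-to-hidden weights (the loop)
b = np.zeros(hidden_dim)

h = np.zeros(hidden_dim)  # initial hidden state
for x_t in rng.normal(size=(seq_len, input_dim)):
    h = rnn_step(x_t, h, W_xh, W_hh, b)  # same weights reused at every step

Unrolling this loop over the sequence yields one copy of the same cell per time step, each passing its hidden state h to the next, which is the "multiple copies" view the figure alludes to.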
