Keras Deep Learning Cookbook

Authors: Rajdeep Dua, Sujit Pal, Manpreet Singh Ghotra

Optimization with AdaDelta


AdaDelta addresses the problem of the ever-decreasing learning rate in AdaGrad. In AdaGrad, the effective learning rate is the base learning rate divided by the square root of the sum of all past squared gradients. At each step another squared gradient is added to that sum, so the denominator grows monotonically and the learning rate keeps shrinking. Instead of summing over all past gradients, AdaDelta accumulates them over a sliding window, which keeps the denominator from growing without bound.
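
To make the shrinking step size concrete, here is a minimal NumPy sketch of the per-parameter AdaGrad update (the function name, hyperparameter values, and the toy loop are illustrative, not code from this recipe):

import numpy as np

def adagrad_step(param, grad, accum, lr=0.01, eps=1e-8):
    # The squared gradient is added to the accumulator at every step,
    # so the denominator below only ever grows and the effective step
    # size shrinks toward zero.
    accum += grad ** 2
    param -= lr * grad / (np.sqrt(accum) + eps)
    return param, accum

param, accum = np.zeros(3), np.zeros(3)
for _ in range(100):
    param, accum = adagrad_step(param, np.ones(3), accum)
# With a constant gradient of 1, accum reaches 100 here, so the 100th
# update is roughly ten times smaller than the first one.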


AdaDelta is an extension of AdaGrad that seeks to reduce its aggressive, monotonically decreasing learning rate. Instead of accumulating all past squared gradients, AdaDelta restricts the window of accumulated past gradients to some fixed size, w.

Instead of inefficiently storing w past squared gradients, the sum is recursively defined as a decaying average of all past squared gradients. The running average, E[g²]_t, at time step t then depends only on the previous average and the current gradient, weighted by a decay fraction, γ, similar to the momentum term:
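
In symbols, with g_t denoting the gradient at time step t:

E[g²]_t = γ · E[g²]_{t−1} + (1 − γ) · g²_t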

where E[g²]_t is the decaying average of the squared gradients...
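
The same idea in a short sketch: a decaying-average accumulator in NumPy, followed by how the optimizer is typically selected in Keras. The model architecture and hyperparameter values below are illustrative placeholders, not code from this recipe; rho plays the role of the decay fraction γ.

import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import Adadelta

def update_running_avg(avg_sq_grad, grad, gamma=0.95):
    # Decaying average of squared gradients: old information fades
    # geometrically instead of accumulating forever as in AdaGrad.
    return gamma * avg_sq_grad + (1.0 - gamma) * grad ** 2

# Selecting Adadelta for a toy classifier; rho is the decay fraction
# and epsilon guards against division by zero in the update.
model = Sequential()
model.add(Dense(64, activation='relu', input_shape=(20,)))
model.add(Dense(10, activation='softmax'))
model.compile(optimizer=Adadelta(lr=1.0, rho=0.95, epsilon=1e-6),
              loss='categorical_crossentropy',
              metrics=['accuracy'])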
