Keras Deep Learning Cookbook

Optimization with Adam


SGD, in contrast to batch gradient descent, performs a parameter update for each individual training example x(i) and its label y(i):

Θ = Θ − η·∇Θ J(Θ; x(i), y(i))
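To make the per-example update concrete, here is a minimal NumPy sketch (not taken from the book) of one SGD epoch on a toy least-squares problem; the data, model, and learning rate are illustrative assumptions only:

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                        # 100 toy examples, 3 features
y = X @ np.array([1.5, -2.0, 0.5]) + 0.1 * rng.normal(size=100)

theta = np.zeros(3)                                  # parameters Θ
eta = 0.01                                           # learning rate η

for x_i, y_i in zip(X, y):                           # one epoch: update after every (x(i), y(i)) pair
    grad = 2 * (x_i @ theta - y_i) * x_i             # gradient of the squared error on a single example
    theta -= eta * grad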

Adaptive Moment Estimation (Adam) computes adaptive learning rates for each parameter. Like AdaDelta and RMSprop, Adam keeps an exponentially decaying average of past squared gradients; in addition, it keeps an exponentially decaying average of past gradients, similar to momentum. Adam works well in practice and is one of the most widely used optimization methods today.
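As a usage sketch (not a recipe from this book), Adam can be selected in Keras by passing a keras.optimizers.Adam instance to model.compile; the model architecture and hyperparameter values below are illustrative assumptions, with beta_1 and beta_2 corresponding to the decay rates β1 and β2 in the formulas that follow:

from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import Adam

model = Sequential([
    Dense(64, activation='relu', input_dim=20),      # illustrative layer sizes and input dimension
    Dense(1, activation='sigmoid'),
])

model.compile(optimizer=Adam(lr=0.001, beta_1=0.9, beta_2=0.999),
              loss='binary_crossentropy',
              metrics=['accuracy'])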

Adam therefore stores an exponentially decaying average of past gradients, m_t, alongside the decaying average of past squared gradients, v_t. Intuitively, Adam behaves like a heavy ball with friction rolling down the error surface, which helps it settle into flat minima. The decaying averages m_t and v_t are computed with the following formulas:

m_t = β1·m_(t−1) + (1 − β1)·g_t

v_t = β2·v_(t−1) + (1 − β2)·g_t²

m_t and v_t are estimates of the first moment (the mean) and the second moment (the uncentered variance) of the gradients, respectively.
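To tie the formulas together, the following is a minimal NumPy sketch (not the book's code) of a single Adam parameter update; the bias-corrected estimates and the default hyperparameter values (η = 0.001, β1 = 0.9, β2 = 0.999, ε = 1e-8) follow the standard Adam algorithm rather than anything shown in this excerpt:

import numpy as np

def adam_step(theta, grad, m, v, t, eta=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    # One Adam update for parameters theta, given the gradient at step t (t starts at 1).
    m = beta1 * m + (1 - beta1) * grad          # m_t: decaying average of past gradients
    v = beta2 * v + (1 - beta2) * grad ** 2     # v_t: decaying average of past squared gradients
    m_hat = m / (1 - beta1 ** t)                # bias-corrected first-moment estimate
    v_hat = v / (1 - beta2 ** t)                # bias-corrected second-moment estimate
    theta = theta - eta * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v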
