Hands-On Deep Learning with Go
A practical guide to building and implementing neural network models using Go

By Darrell Chua and Gareth Seneque
Published by Packt, Aug 2019 · 1st Edition · 242 pages · ISBN-13 9781789340990
Table of Contents

Preface
1. Section 1: Deep Learning in Go, Neural Networks, and How to Train Them
2. Introduction to Deep Learning in Go
3. What Is a Neural Network and How Do I Train One?
4. Beyond Basic Neural Networks - Autoencoders and RBMs
5. CUDA - GPU-Accelerated Training
6. Section 2: Implementing Deep Neural Network Architectures
7. Next Word Prediction with Recurrent Neural Networks
8. Object Recognition with Convolutional Neural Networks
9. Maze Solving with Deep Q-Networks
10. Generative Models with Variational Autoencoders
11. Section 3: Pipeline, Deployment, and Beyond!
12. Building a Deep Learning Pipeline
13. Scaling Deployment
14. Other Books You May Enjoy

Advanced gradient descent algorithms

Now that we have an understanding of SGD and backpropagation, let's look at a number of advanced optimization methods that build on SGD and offer a concrete advantage, usually an improvement in training time (the time it takes to minimize the cost function to the point where our network converges).
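As a baseline for what follows, recall the plain SGD update that these methods extend (a standard formulation, with the notation assumed here for illustration: theta for the network's parameters, eta for the learning rate, and L for the cost function):

    \theta_{t+1} = \theta_t - \eta \, \nabla_\theta L(\theta_t)

The methods below keep this gradient step but carry extra state between iterations.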

These improved methods introduce a general notion of velocity as an optimization parameter. Quoting Wibisono and Wilson, from the opening of their paper on accelerated methods in optimization:

"In convex optimization, there is an acceleration phenomenon in which we can boost the convergence rate of certain gradient-based algorithms."

In brief, a number of these advanced algorithms rely on a similar principle: carried by their momentum, they can pass through local optima quickly. That momentum is, essentially, a moving...
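To make the velocity idea concrete, here is a minimal sketch in plain Go (no deep learning library; the toy quadratic loss f(w) = (w - 3)^2, the variable names, and the hyperparameter values are all illustrative assumptions, not the book's own code) of gradient descent with a momentum term:

    // Minimal sketch of gradient descent with momentum on a toy
    // one-dimensional loss f(w) = (w - 3)^2, whose gradient is 2*(w - 3).
    // All names and values here are illustrative assumptions.
    package main

    import "fmt"

    func main() {
        w := 0.0        // parameter to optimize
        velocity := 0.0 // accumulated "momentum" state
        lr := 0.1       // learning rate (step size)
        mu := 0.9       // momentum coefficient

        for i := 0; i < 200; i++ {
            grad := 2 * (w - 3) // gradient of (w - 3)^2 at the current w
            // The velocity blends the previous update direction with the
            // new gradient, so consecutive steps in the same direction
            // compound rather than starting from scratch each iteration.
            velocity = mu*velocity - lr*grad
            w += velocity
        }
        fmt.Printf("w after training: %.4f (optimum is 3)\n", w)
    }

Because the velocity is an exponentially decaying accumulation of past gradients, the optimizer keeps some of its speed even where the local gradient is small, which is what lets it coast through flat regions and shallow local optima; the coefficient mu controls how long past gradients persist.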
