Applied Deep Learning and Computer Vision for Self-Driving Cars

Build autonomous vehicles using deep neural networks and behavior-cloning techniques

Product type: Paperback
Published: Aug 2020
Publisher: Packt
ISBN-13: 9781838646301
Length: 332 pages
Edition: 1st
Authors (3): Dr. S. Senthamilarasu, Balu Nair, Sumit Ranjan

Table of Contents (18)

Preface
1. Section 1: Deep Learning Foundation and SDC Basics
2. The Foundation of Self-Driving Cars
3. Dive Deep into Deep Neural Networks
4. Implementing a Deep Learning Model Using Keras
5. Section 2: Deep Learning and Computer Vision Techniques for SDC
6. Computer Vision for Self-Driving Cars
7. Finding Road Markings Using OpenCV
8. Improving the Image Classifier with CNN
9. Road Sign Detection Using Deep Learning
10. Section 3: Semantic Segmentation for Self-Driving Cars
11. The Principles and Foundations of Semantic Segmentation
12. Implementing Semantic Segmentation
13. Section 4: Advanced Implementations
14. Behavioral Cloning Using Deep Learning
15. Vehicle Detection Using OpenCV and Deep Learning
16. Next Steps
17. Other Books You May Enjoy

Optimizers

Optimizers define how a neural network learns: they determine how the network's parameter values are adjusted during training so that the loss function reaches its lowest value.
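
In Keras (which this book turns to in Chapter 4), the optimizer is selected when compiling a model. The tiny model below is a hypothetical illustration of where that choice is made, not an example from the book:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

# Hypothetical one-neuron model, used only to show where the optimizer fits in.
model = Sequential([Dense(1, input_shape=(1,))])

# 'sgd' selects stochastic gradient descent; the optimizer governs how the
# weights are updated to reduce the loss at each training step.
model.compile(optimizer='sgd', loss='mean_squared_error')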

Gradient descent is an optimization algorithm for finding the minimum of a function; in our case, the minimum value of the cost function, which is exactly what we want. To find a local minimum, we repeatedly take steps proportional to the negative of the gradient of the cost function at the current point.
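
Written as a formula (a sketch using symbols the text has not defined: $w$ for a weight, $J$ for the cost function, and $\eta$, the learning rate, for the step size), each iteration performs the update

$$ w \leftarrow w - \eta \, \nabla J(w) $$

so the weight moves in the direction in which the cost decreases fastest.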

Let's go through a very simple example in one dimension, shown in the following plot:

Fig 2.17: Gradient descent

On the y axis, we have the cost (the output of the cost function), and on the x axis, we have the particular weight we are trying to choose (initialized to a random value). The weight that minimizes the cost function sits at the bottom of the parabola, and that is the parameter value we want to reach: we have to drive the cost function down to its minimum. Finding the minimum is really...
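
To make this concrete, here is a minimal one-dimensional sketch of gradient descent; the quadratic cost J(w) = (w - 3)^2 and all the numbers in it are assumptions chosen only to mirror the parabola in the plot:

def cost(w):
    return (w - 3) ** 2          # assumed parabolic cost, minimum at w = 3

def gradient(w):
    return 2 * (w - 3)           # derivative dJ/dw

w = 10.0                         # random starting weight
learning_rate = 0.1              # step-size hyperparameter (eta)

for step in range(50):
    w -= learning_rate * gradient(w)   # step against the gradient

print(round(w, 4), round(cost(w), 6)) # w converges toward 3, the cost minimum

Each iteration subtracts a fraction of the gradient, so the weight slides down the parabola toward its lowest point.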
