Introduction to SDCs

The following is an image of an SDC by Waymo undergoing testing in Los Altos, California: 

Fig 1.1: A Google SDC

You can check out the image at https://en.wikipedia.org/wiki/File:Waymo_Chrysler_Pacifica_in_Los_Altos,_2017.jpg.

The idea of the autonomous car has existed for decades, but progress accelerated from 2002 onward, when the Defense Advanced Research Projects Agency (DARPA) announced the first of its grand challenges, the DARPA Grand Challenge (2004), which would forever change the world's perception of what autonomous robots can do. The first event was held in 2004, and DARPA offered a one-million-dollar prize to any team that could build an autonomous vehicle able to navigate 142 miles through the Mojave Desert. Although only a few teams got off the start line and none finished the course (Carnegie Mellon's Red Team traveled the farthest, covering only about 7 miles), it was clear that driving without any human aid was indeed possible. In the second DARPA Grand Challenge, in 2005, five of the 23 teams smashed expectations and completed the course without any human intervention at all. Stanford's vehicle, Stanley, won the challenge, followed by Carnegie Mellon's Sandstorm. With this, the era of driverless cars had arrived.

Later, the 2007 installment, called the DARPA Urban Challenge, invited universities to show off their autonomous vehicles on a mock urban course shared with vehicles driven by professional drivers. This time, after a harrowing 30-minute delay caused by a jumbotron screen blocking their vehicle's GPS reception, the Carnegie Mellon team came out on top, while Stanford's vehicle, Junior, came second.

Collectively, these three grand challenges were truly a watershed moment in the development of SDCs, changing the way the public (and, more importantly, the technology and automotive industries) thought about the feasibility of full vehicular autonomy. It was now clear that a massive new market was opening up, and the race was on. Google quickly brought in the team leads from both Carnegie Mellon and Stanford (Chris Urmson and Mike Montemerlo, respectively) to push their designs onto public roads. By 2010, Google's SDC had logged over 140,000 miles in California, and the company later wrote in a blog post that it was confident SDCs could help cut the number of traffic deaths by as much as half (What we're driving at, by Sebastian Thrun: https://googleblog.blogspot.com/2010/10/what-were-driving-at.html).

According to a report by the World Health Organization, more than 1.35 million lives are lost every year in road traffic accidents, while 20-50 million more people suffer non-fatal injuries (https://www.who.int/health-topics/road-safety#tab=tab_1).

As per a study released by the Virginia Tech Transportation Institute (VTTI) and the National Highway Traffic Safety Administration (NHTSA), 80% of car accidents involve some form of driver distraction (https://seriousaccidents.com/legal-advice/top-causes-of-car-accidents/driver-distractions/). SDCs could therefore become a useful and safe way for society as a whole to reduce these accidents. To propose a path for an intelligent car to follow, we need several software applications that process data using artificial intelligence (AI).

Google succeeded in creating the world's first autonomous car 2 years ago (at the time of writing). The problem with Google's car was its expensive 3D LIDAR, which costs about $75,000.

The 3D LIDAR is used to identify the environment, as well as to build a high-resolution 3D map of the surroundings.

One solution to this cost is to use multiple, cheaper cameras mounted on the car; the images they capture can be used to recognize the lane lines on the road, as well as the car's position in real time.
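
To make the camera-based idea concrete, the following is a minimal sketch of lane-line detection using OpenCV's Canny edge detector and probabilistic Hough transform. The file name and all threshold values here are illustrative assumptions rather than a final pipeline; the Finding Road Markings Using OpenCV chapter builds the full version.

```python
import cv2
import numpy as np

# Minimal lane-line sketch: edge detection followed by a Hough line transform.
# The file name and every threshold below are illustrative assumptions,
# not the values used in the book's actual lane-finding pipeline.

def detect_lane_lines(image_path="road_frame.jpg"):
    frame = cv2.imread(image_path)
    if frame is None:
        raise FileNotFoundError(f"Could not read {image_path}")

    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)        # suppress sensor noise
    edges = cv2.Canny(blurred, 50, 150)                # keep strong gradients only

    # Probabilistic Hough transform returns candidate segments (x1, y1, x2, y2)
    lines = cv2.HoughLinesP(edges, rho=2, theta=np.pi / 180, threshold=100,
                            minLineLength=40, maxLineGap=5)

    if lines is not None:
        for x1, y1, x2, y2 in lines[:, 0]:
            cv2.line(frame, (x1, y1), (x2, y2), (0, 255, 0), 3)  # overlay lines
    return frame

annotated = detect_lane_lines()
cv2.imwrite("road_frame_lanes.jpg", annotated)
```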

In addition, driverless cars can reduce the distance between cars, thereby easing the load on roads and reducing the number of traffic jams. Furthermore, they greatly reduce the scope for human error while driving and allow people with disabilities to travel long distances.

A machine driver will not make the mistakes a human does; it can calculate the distance between cars very accurately. Parking will be spaced more efficiently, and the fuel consumption of cars will be optimized. 

A driverless car is a vehicle equipped with sensors and cameras to detect its environment, and it can navigate (almost) without any real-time input from a human. Many companies are investing billions of dollars to advance this toward an accessible reality. A world where AI takes control of driving has never been closer.

Nowadays, self-driving car engineers are exploring several different approaches to developing an autonomous system. The most successful and widely used among them are as follows:

  • The robotics approach
  • The deep learning approach

In practice, both the robotics and deep learning approaches are being actively pursued by SDC developers and engineers.

The robotics approach works by fusing the output of a set of sensors to analyze the vehicle's environment directly and prompt it to navigate accordingly. For many years, self-driving automotive engineers have been working on and improving robotics approaches. More recently, however, engineering teams have started developing autonomous vehicles using the deep learning approach.

Deep neural networks enable SDCs to learn how to drive by imitating the behavior of human drivers.
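
As a rough illustration of this behavior-cloning idea, here is a minimal Keras sketch (Keras is the framework used throughout this book) in which a small convolutional network regresses a steering angle from a camera frame. The layer sizes, input shape, and random stand-in data are illustrative assumptions, not the model built in later chapters.

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, Flatten, Dense

# Toy behavior-cloning model: a small CNN mapping a camera frame to one
# steering angle. Layer sizes and the input shape are illustrative assumptions.
model = Sequential([
    Conv2D(24, (5, 5), strides=(2, 2), activation="relu", input_shape=(66, 200, 3)),
    Conv2D(36, (5, 5), strides=(2, 2), activation="relu"),
    Conv2D(48, (5, 5), strides=(2, 2), activation="relu"),
    Flatten(),
    Dense(100, activation="relu"),
    Dense(1),  # predicted steering angle (regression, so no activation)
])
model.compile(optimizer="adam", loss="mse")

# Imitation: camera images are the inputs, and the human driver's recorded
# steering angles are the targets. Random data stands in for a driving log here.
images = np.random.rand(32, 66, 200, 3).astype("float32")
steering_angles = np.random.uniform(-1.0, 1.0, size=(32, 1)).astype("float32")
model.fit(images, steering_angles, epochs=1, batch_size=8)
```

Training on recordings of a human driver in this way is what the book later refers to as behavioral cloning.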

The five core components of SDCs are computer vision, sensor fusion, localization, path planning, and control of the vehicle.

In the following diagram, we can see the five core components of SDCs:

Fig 1.2: The five core components of SDCs

Let's take a brief look at these core components:

  • Computer vision can be considered the eyes of the SDC, and it helps the car figure out what the world around it looks like.
  • Sensor fusion combines the data from various sensors, such as RADAR, LIDAR, and LASER, to gain a deeper understanding of the surroundings.
  • Once the car has a deeper understanding of what the world around it looks like, it needs to know where it is in that world; localization provides this.
  • After understanding what the world looks like and where the car is in it, we need to decide where it should go, so we use path planning to chart the course of travel; path planning produces the trajectory the car will execute.
  • Finally, control handles turning the steering wheel, changing the car's gears, and applying the brakes. (A minimal code sketch of how these five components fit together follows this list.)
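
The following is a minimal, schematic Python sketch of how these five components could be wired into a single driving loop. Every class and method name here is a hypothetical placeholder chosen purely to illustrate the data flow; none of them come from this book or from any particular library.

```python
# Schematic driving loop connecting the five core components.
# All names below are hypothetical placeholders for illustration only.

class Perception:            # computer vision: "what does the world look like?"
    def detect(self, camera_frame):
        return {"lanes": [], "vehicles": [], "signs": []}

class SensorFusion:          # combine camera detections with RADAR/LIDAR data
    def fuse(self, detections, radar_points, lidar_points):
        return {"objects": detections["vehicles"], "free_space": lidar_points}

class Localization:          # "where are we in the world?"
    def locate(self, fused_world, gps_fix):
        return {"x": gps_fix[0], "y": gps_fix[1], "heading": 0.0}

class PathPlanner:           # "where do we want to go?" -> trajectory to execute
    def plan(self, pose, fused_world, destination):
        return [pose, destination]      # a (trivial) list of waypoints

class Controller:            # steering, gears/throttle, and brakes
    def actuate(self, trajectory, pose):
        return {"steering": 0.0, "throttle": 0.2, "brake": 0.0}

def drive_one_step(camera_frame, radar_points, lidar_points, gps_fix, destination):
    detections = Perception().detect(camera_frame)
    world = SensorFusion().fuse(detections, radar_points, lidar_points)
    pose = Localization().locate(world, gps_fix)
    trajectory = PathPlanner().plan(pose, world, destination)
    return Controller().actuate(trajectory, pose)

print(drive_one_step(camera_frame=None, radar_points=[], lidar_points=[],
                     gps_fix=(0.0, 0.0), destination={"x": 10.0, "y": 0.0}))
```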

Getting the car to autonomously follow the path you want requires a lot of effort, but researchers have made this possible with the help of advanced systems engineering. Details regarding systems engineering will be provided later in this chapter.
