Hands-On Vision and Behavior for Self-Driving Cars: Explore visual perception, lane detection, and object classification with Python 3 and OpenCV 4


What do you get with eBook?

  • Instant access to your Digital eBook purchase
  • Download this book in EPUB and PDF formats
  • Access this title in our online reader with advanced features
  • DRM FREE - Read whenever, wherever and however you want
  • AI Assistant (beta) to help accelerate your learning


Hands-On Vision and Behavior for Self-Driving Cars

Chapter 1: OpenCV Basics and Camera Calibration

This chapter is an introduction to OpenCV and to how to use it in the initial phases of a self-driving car pipeline, to ingest a video stream and prepare it for the next phases. We will discuss the characteristics of a camera from the point of view of a self-driving car and how to improve the quality of what we get out of it. We will also study how to manipulate videos, and we will try one of the most famous features of OpenCV, object detection, which we will use to detect pedestrians.

This chapter will help you build a solid foundation in using OpenCV and NumPy, which will be very useful later.

In this chapter, we will cover the following topics:

  • OpenCV and NumPy basics
  • Reading, manipulating, and saving images
  • Reading, manipulating, and saving videos
  • Manipulating images
  • How to detect pedestrians with HOG
  • Characteristics of a camera
  • How to perform camera calibration

Technical requirements

For the instructions and code in this chapter, you need the following:

  • Python 3.7
  • The opencv-python module
  • The NumPy module

The code for the chapter can be found here:

https://github.com/PacktPublishing/Hands-On-Vision-and-Behavior-for-Self-Driving-Cars/tree/master/Chapter1

The Code in Action videos for this chapter can be found here:

https://bit.ly/2TdfsL7

Introduction to OpenCV and NumPy

OpenCV is a computer vision and machine learning library that has been developed for more than 20 years and provides an impressive number of functionalities. Despite some inconsistencies in the API, its simplicity and the remarkable number of algorithms implemented make it an extremely popular library and an excellent choice for many situations.

OpenCV is written in C++, but there are bindings for Python, Java, and Android.

In this book, we will focus on OpenCV for Python, with all the code tested using OpenCV 4.2.

OpenCV in Python is provided by opencv-python, which can be installed using the following command:

pip install opencv-python

OpenCV can take advantage of hardware acceleration, but to get the best performance, you might need to build it from the source code, with different flags than the default, to optimize it for your target hardware.

OpenCV and NumPy

The Python bindings use NumPy, which increases their flexibility and makes them compatible with many other libraries. As an OpenCV image is a NumPy array, you can use normal NumPy operations to get information about the image. A good understanding of NumPy can improve performance and reduce the length of your code.

Let's dive right in with some quick examples of what you can do with NumPy in OpenCV.
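For instance, ordinary NumPy operations work directly on an image. Here is a minimal sketch using a synthetic grayscale image, built with NumPy calls that are covered in more detail in the following pages:

```python
import numpy as np

# A synthetic 50x50 grayscale "image": top half 100, bottom half 200
img = np.full([50, 50], 100, dtype=np.uint8)
img[25:50] = 200

# Plain NumPy operations give information about the image
print(img.mean(), img.min(), img.max())  # 150.0 100 200
```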

Image size

The size of the image can be retrieved using the shape attribute:

print("Image size: ", image.shape)

For a grayscale image of 50x50, image.shape would contain the tuple (50, 50), while for an RGB image, it would contain (50, 50, 3).

False friends

In NumPy, the attribute size is the total number of elements in the array; for a 50x50 gray image, it would be 2,500, while for the same image in RGB, it would be 7,500. It's the shape attribute that contains the size of the image – (50, 50) and (50, 50, 3), respectively.
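A short sketch makes the distinction between shape and size concrete:

```python
import numpy as np

gray = np.zeros([50, 50], dtype=np.uint8)     # grayscale image
rgb = np.zeros([50, 50, 3], dtype=np.uint8)   # RGB image

print(gray.shape, gray.size)  # (50, 50) 2500
print(rgb.shape, rgb.size)    # (50, 50, 3) 7500
```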

Grayscale images

Grayscale images are represented by a two-dimensional NumPy array. The first index affects the rows (the y coordinate) and the second index the columns (the x coordinate). Both coordinates have their origin in the top-left corner of the image: y grows downward and x grows to the right.

It is possible to create a black image using np.zeros(), which initializes all the pixels to 0:

black = np.zeros([100,100],dtype=np.uint8)  # Creates a black image

The previous code creates a grayscale image with size (100, 100), composed of 10,000 unsigned bytes (dtype=np.uint8).

To create an image with pixels with a different value than 0, you can use the full() method:

white = np.full([50, 50], 255, dtype=np.uint8)

To change the color of all the pixels at once, it's possible to use the [:] notation:

img[:] = 64        # Change the pixels color to dark gray

To affect only some rows, it is enough to provide a range of rows in the first index:

img[10:20] = 192   # Paints 10 rows with light gray

The previous code changes the color of rows 10-20, including row 10, but excluding row 20.

The same mechanism works for columns; you just need to specify the range in the second index. To instruct NumPy to include a full index, we use the [:] notation that we already encountered:

img[:, 10:20] = 64 # Paints 10 columns with dark gray

You can also combine operations on rows and columns, selecting a rectangular area:

img[90:100, 90:100] = 0  # Paints a 10x10 area with black

It is, of course, possible to operate on a single pixel, as you would do on a normal array:

img[50, 50] = 0  # Paints one pixel with black

It is possible to use NumPy to select a part of an image, also called the Region Of Interest (ROI). For example, the following code copies a 10x10 ROI from the position (90, 90) to the position (80, 80):

roi = img[90:100, 90:100]
img[80:90, 80:90] = roi 

The following is the result of the previous operations:

Figure 1.1 – Some manipulation of images using NumPy slicing

To make a copy of an image, you can simply use the copy() method:

image2 = image.copy()

RGB images

RGB images differ from grayscale because they are three-dimensional, with the third index representing the three channels. Please note that OpenCV stores the images in BGR format, not RGB, so channel 0 is blue, channel 1 is green, and channel 2 is red.

Important note

OpenCV stores the images as BGR, not RGB. In the rest of the book, when talking about RGB images, it will only mean that it is a 24-bit color image, but the internal representation will usually be BGR.
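Because the channel order is just the layout of the third axis, you can inspect or reorder channels with plain NumPy indexing. The following sketch builds a red image in BGR layout and views it in RGB order by reversing the last axis, which is useful when passing OpenCV images to libraries that expect RGB, such as Matplotlib:

```python
import numpy as np

# A 2x2 red image in OpenCV's BGR layout: channel 2 (red) set to 255
bgr = np.zeros([2, 2, 3], dtype=np.uint8)
bgr[:, :, 2] = 255

# Reversing the last axis turns BGR into RGB
rgb = bgr[:, :, ::-1]
print(bgr[0, 0])  # channel order B, G, R: blue and green are 0, red is 255
print(rgb[0, 0])  # channel order R, G, B: red is now the first channel
```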

To create an RGB image, we need to provide three sizes:

rgb = np.zeros([100, 100, 3],dtype=np.uint8)  

If you ran the same code previously used on the grayscale image against the new RGB image (skipping the third index), you would get the same result. This is because NumPy would apply the same value to all three channels, which results in a shade of gray.

To select a color, it is enough to provide the third index:

rgb[:, :, 2] = 255       # Makes the image red

In NumPy, it is also possible to select rows, columns, or channels that are not contiguous. You can do this by simply providing a tuple with the required indexes. To make the image magenta, you need to set the blue and red channels to 255, which can be achieved with the following code:

rgb[:, :, (0, 2)] = 255  # Makes the image magenta

You can convert an RGB image into grayscale using cvtColor():

gray = cv2.cvtColor(original, cv2.COLOR_BGR2GRAY)

Working with image files

OpenCV provides a very simple way to load images, using imread():

import cv2
image = cv2.imread('test.jpg')

To show the image, you can use imshow(), which accepts two parameters:

  • The name to write on the caption of the window that will show the image
  • The image to be shown

Unfortunately, its behavior is counterintuitive, as it will not show an image unless it is followed by a call to waitKey():

cv2.imshow("Image", image)
cv2.waitKey(0)

The call to waitKey() after imshow() will have two effects:

  • It will actually allow OpenCV to show the image provided to imshow().
  • It will wait for the specified number of milliseconds, or until a key is pressed; if the number of milliseconds passed is <= 0, it will wait indefinitely.

An image can be saved to disk using the imwrite() function, which accepts three parameters:

  • The name of the file
  • The image
  • An optional format-dependent parameter

cv2.imwrite("out.jpg", image)

Sometimes, it can be very useful to combine multiple pictures by putting them next to each other. Some examples in this book will use this feature extensively to compare images.

OpenCV provides two methods for this purpose: hconcat() to concatenate the pictures horizontally and vconcat() to concatenate them vertically, both accepting as a parameter a list of images. Take the following example:

black = np.zeros([50, 50], dtype=np.uint8)
white = np.full([50, 50], 255, dtype=np.uint8)
cv2.imwrite("horizontal.jpg", cv2.hconcat([white, black]))
cv2.imwrite("vertical.jpg", cv2.vconcat([white, black]))

Here's the result:

Figure 1.2 – Horizontal concatenation with hconcat() and vertical concatenation with vconcat()

We could use these two methods to create a chequerboard pattern:

row1 = cv2.hconcat([white, black])
row2 = cv2.hconcat([black, white])
cv2.imwrite("chess.jpg", cv2.vconcat([row1, row2]))

You will see the following chequerboard:

Figure 1.3 – A chequerboard pattern created using hconcat() in combination with vconcat()

After having worked with images, it's time we work with videos.


Key benefits

  • Explore the building blocks of the visual perception system in self-driving cars
  • Identify objects and lanes to define the boundary of driving surfaces using open-source tools like OpenCV and Python
  • Improve the object detection and classification capabilities of systems with the help of neural networks

Description

The visual perception capabilities of a self-driving car are powered by computer vision. The work relating to self-driving cars can be broadly classified into three components: robotics, computer vision, and machine learning. This book provides existing computer vision engineers and developers with the unique opportunity to be associated with this booming field. You will learn about computer vision, deep learning, and depth perception applied to driverless cars. The book provides a structured and thorough introduction, as making a real self-driving car is a huge cross-functional effort. As you progress, you will cover relevant cases with working code, before going on to understand how to use OpenCV, TensorFlow, and Keras to analyze video streaming from car cameras. Later, you will learn how to interpret and make the most of lidars (light detection and ranging) to identify obstacles and localize your position. You'll even be able to tackle core challenges in self-driving cars such as finding lanes, detecting pedestrians and crossing lights, performing semantic segmentation, and writing a PID controller. By the end of this book, you'll be equipped with the skills you need to write code for a self-driving car running in a driverless car simulator, and be able to tackle various challenges faced by autonomous car engineers.

Who is this book for?

This book is for software engineers who are interested in learning about technologies that drive the autonomous car revolution. Although basic knowledge of computer vision and Python programming is required, prior knowledge of advanced deep learning and how to use sensors (lidar) is not needed.

What you will learn

  • Understand how to perform camera calibration
  • Become well-versed with how lane detection works in self-driving cars using OpenCV
  • Explore behavioral cloning by self-driving in a video-game simulator
  • Get to grips with using lidars
  • Discover how to configure the controls for autonomous vehicles
  • Use object detection and semantic segmentation to locate lanes, cars, and pedestrians
  • Write a PID controller to control a self-driving car running in a simulator

Product Details

Publication date : Oct 23, 2020
Length: 374 pages
Edition : 1st
Language : English
ISBN-13 : 9781800201934




Frequently bought together

Hands-On Vision and Behavior for Self-Driving Cars: €41.99
Modern Computer Vision with PyTorch: €49.99
Applied Deep Learning and Computer Vision for Self-Driving Cars: €37.99
Total: €129.97

Table of Contents

16 Chapters

Section 1: OpenCV and Sensors and Signals
Chapter 1: OpenCV Basics and Camera Calibration
Chapter 2: Understanding and Working with Signals
Chapter 3: Lane Detection
Section 2: Improving How the Self-Driving Car Works with Deep Learning and Neural Networks
Chapter 4: Deep Learning with Neural Networks
Chapter 5: Deep Learning Workflow
Chapter 6: Improving Your Neural Network
Chapter 7: Detecting Pedestrians and Traffic Lights
Chapter 8: Behavioral Cloning
Chapter 9: Semantic Segmentation
Section 3: Mapping and Controls
Chapter 10: Steering, Throttle, and Brake Control
Chapter 11: Mapping Our Environments
Assessments
Other Books You May Enjoy

Customer reviews

Rating distribution
3.3 out of 5
(3 Ratings)
5 star 0%
4 star 66.7%
3 star 0%
2 star 33.3%
1 star 0%
Shane Lee Jan 09, 2021
4 out of 5 stars
What I like:
1. This book covers very well and in detail the use of open source libraries and tools, OpenCV and NumPy, especially the coding and development environment setup.
2. This book uses pictures to illustrate a topic whenever possible; this is very helpful in the domain of perception and camera-based image processing.

What I don't like:
1. It is short on mathematical fundamentals. In my view, every domain and chapter covering a goal of self-driving technology should begin with the corresponding mathematical equations or models before jumping to the use of an open source library or coding. This would really help readers embrace an understanding of the technologies.
2. On the mapping and SLAM part, it lacks some of the real scenarios and problems that need to be solved in order to benefit self-driving.

What I would like to see:
1. It would be great to extend the full details of using open source tools to cover a balance of 3 pillars: 1) real-time object detection and perception, 2) real-time absolute positioning with GNSS, and 3) HD mapping. Even though today no one covers these 3 pillars strongly and tightly enough to render a robust self-driving engine yet, it is essential to have before we reach the all-weather, all-time, all-scenario self-driving world.

Amazon Verified review
Amazon Customer Jan 11, 2021
4 out of 5 stars
What I like: This book is an introductory manual to self-driving cars. It basically explains what is what, like the module on vision, which has the basic pipeline for predicting lanes. If you are working on a similar problem, it's better to have this book handy so that you can figure out what to do next. All the other modules are helpful as well, like the one that explains signals and how to read them. I found this particular section very useful; as a controls engineer myself, I find myself working with different types of communication interfaces like CAN, SPI, etc. This book can also be used as a reference for the self-driving car course by Udacity.

What I would like to see: more in-depth mathematical proofs and applications for different concepts.

Amazon Verified review
John Smith Aug 25, 2022
2 out of 5 stars
Consider this line of code from chapter 1 about camera calibration:

corners = cv2.cornerSubPix(img_src, corners, (11, 11), (-1, -1), (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001))

What is (11, 11)? What is (-1, -1)? There is absolutely no explanation, and you have to consult the OpenCV documentation all the time. When I buy a book like this, I expect the author to provide an explanation for all those arguments so that I can save time. It seems the OpenCV documentation is more complete and easier to follow.

Amazon Verified review

FAQs

How do I buy and download an eBook?

Where there is an eBook version of a title available, you can buy it from the book details for that title. Add either the standalone eBook or the eBook and print book bundle to your shopping cart. Your eBook will show in your cart as a product on its own. After completing checkout and payment in the normal way, you will receive your receipt on the screen containing a link to a personalised PDF download file. This link will remain active for 30 days. You can download backup copies of the file by logging in to your account at any time.

If you already have Adobe reader installed, then clicking on the link will download and open the PDF file directly. If you don't, then save the PDF file on your machine and download the Reader to view it.

Please Note: Packt eBooks are non-returnable and non-refundable.

Packt eBook and Licensing When you buy an eBook from Packt Publishing, completing your purchase means you accept the terms of our licence agreement. Please read the full text of the agreement. In it we have tried to balance the need for the ebook to be usable for you the reader with our needs to protect the rights of us as Publishers and of our authors. In summary, the agreement says:

  • You may make copies of your eBook for your own use onto any machine
  • You may not pass copies of the eBook on to anyone else
How can I make a purchase on your website?

If you want to purchase a video course, eBook, or Bundle (Print+eBook), please follow the steps below:

  1. Register on our website using your email address and a password.
  2. Search for the title by name or ISBN using the search option.
  3. Select the title you want to purchase.
  4. Choose the format you wish to purchase the title in; if you order the Print Book, you get a free eBook copy of the same title. 
  5. Proceed with the checkout process (payment can be made using Credit Card, Debit Card, or PayPal).
Where can I access support around an eBook?
  • If you experience a problem with using or installing Adobe Reader, then contact Adobe directly.
  • To view the errata for the book, see www.packtpub.com/support and view the pages for the title you have.
  • To view your account details or to download a new copy of the book go to www.packtpub.com/account
  • To contact us directly if a problem is not resolved, use www.packtpub.com/contact-us
What eBook formats does Packt support?

Our eBooks are currently available in a variety of formats such as PDF and ePub. In the future, this may well change with trends and developments in technology, but please note that our PDFs are not in the Adobe eBook Reader format, which has greater restrictions on security.

You will need to use Adobe Reader v9 or later in order to read Packt's PDF eBooks.

What are the benefits of eBooks?
  • You can get the information you need immediately
  • You can easily take them with you on a laptop
  • You can download them an unlimited number of times
  • You can print them out
  • They are copy-paste enabled
  • They are searchable
  • There is no password protection
  • They are lower in price than print
  • They save resources and space
What is an eBook?

Packt eBooks are a complete electronic version of the print edition, available in PDF and ePub formats. Every piece of content down to the page numbering is the same. Because we save the costs of printing and shipping the book to you, we are able to offer eBooks at a lower cost than print editions.

When you have purchased an eBook, simply log in to your account and click on the link in Your Download Area. We recommend saving the file to your hard drive before opening it.

For optimal viewing of our eBooks, we recommend you download and install the free Adobe Reader version 9.