Mastering Computer Vision with TensorFlow 2.x

Build advanced computer vision applications using machine learning and deep learning techniques

Product type: Paperback
Published: May 2020
Publisher: Packt
ISBN-13: 9781838827069
Length: 430 pages
Edition: 1st
Author: Krishnendu Kar
Table of Contents

Preface
Section 1: Introduction to Computer Vision and Neural Networks
Chapter 1: Computer Vision and TensorFlow Fundamentals
Chapter 2: Content Recognition Using Local Binary Patterns
Chapter 3: Facial Detection Using OpenCV and CNN
Chapter 4: Deep Learning on Images
Section 2: Advanced Concepts of Computer Vision with TensorFlow
Chapter 5: Neural Network Architecture and Models
Chapter 6: Visual Search Using Transfer Learning
Chapter 7: Object Detection Using YOLO
Chapter 8: Semantic Segmentation and Neural Style Transfer
Section 3: Advanced Implementation of Computer Vision with TensorFlow
Chapter 9: Action Recognition Using Multitask Deep Learning
Chapter 10: Object Detection Using R-CNN, SSD, and R-FCN
Section 4: TensorFlow Implementation at the Edge and on the Cloud
Chapter 11: Deep Learning on Edge Devices with CPU/GPU Optimization
Chapter 12: Cloud Computing Platform for Computer Vision
Other Books You May Enjoy

What this book covers

Chapter 1, Computer Vision and TensorFlow Fundamentals, discusses the foundational concepts of computer vision and TensorFlow to prepare you for the later, more advanced chapters of this book. We will look at how to perform image hashing and filtering. Then, we will learn about various methods of feature extraction and image retrieval. Moving on, we will learn about contour-based object detection, the histogram of oriented gradients (HOG), and various feature matching methods. Finally, we will take an overview of the high-level TensorFlow software and its different components and subsystems. The chapter provides many hands-on coding exercises for object detection, image filtering, and feature matching.
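To give a flavor of the feature matching exercises in this chapter, here is a minimal OpenCV sketch (illustrative, not the book's code) that matches ORB descriptors between two images; the image file names are placeholders.

```python
import cv2

# Load two grayscale images to compare (file names are placeholders).
img1 = cv2.imread("query.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("scene.jpg", cv2.IMREAD_GRAYSCALE)

# Detect ORB keypoints and compute binary descriptors for each image.
orb = cv2.ORB_create(nfeatures=500)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Brute-force matching with Hamming distance, sorted so the best matches come first.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

# Draw the 20 strongest matches for visual inspection.
vis = cv2.drawMatches(img1, kp1, img2, kp2, matches[:20], None)
cv2.imwrite("matches.jpg", vis)
```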

Chapter 2, Content Recognition Using Local Binary Patterns, discusses the local binary pattern (LBP) feature descriptor and its histogram for classifying textured and non-textured images. You will learn to tune LBP parameters and calculate the histogram difference between LBPs to match identical patterns between images. The chapter provides two coding exercises: one for matching flooring patterns and the other for matching face color with foundation color.
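The core idea of LBP matching can be sketched in a few lines. The example below (an illustration, assuming scikit-image is available; the file names and parameter values are placeholders) computes a uniform LBP histogram for two images and compares them with a chi-squared distance.

```python
import numpy as np
from skimage.io import imread
from skimage.feature import local_binary_pattern

# LBP parameters to tune: number of sampling points and radius.
POINTS, RADIUS = 8, 1

def lbp_histogram(path):
    """Normalized LBP histogram for one image (paths below are placeholders)."""
    gray = imread(path, as_gray=True)
    lbp = local_binary_pattern(gray, POINTS, RADIUS, method="uniform")
    n_bins = POINTS + 2  # the 'uniform' method yields P + 2 distinct codes
    hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)
    return hist

# Compare two patterns with a chi-squared distance; smaller means a closer match.
h1, h2 = lbp_histogram("floor_a.jpg"), lbp_histogram("floor_b.jpg")
chi_squared = 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + 1e-10))
print(f"Chi-squared distance: {chi_squared:.4f}")
```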

Chapter 3, Facial Detection Using OpenCV and CNNs, starts with Viola-Jones face and key-feature detection and moves on to the advanced concepts of neural-network-based facial key-point detection and facial expression recognition. The chapter ends by looking at the advanced concept of 3D face detection. It provides two coding exercises: one for OpenCV-based face detection using a webcam, and the other a CNN-based end-to-end pipeline for facial key-point detection. The end-to-end pipeline consists of collecting facial images by cropping faces from webcam frames, annotating the key points in each face image, ingesting the data into a CNN, building and training the model, and finally evaluating the trained key-point model against face images.
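As a small taste of the first exercise, the following sketch (illustrative, not the book's code) runs the Viola-Jones detector on webcam frames using OpenCV's bundled Haar cascade; the detection parameters are values you would tune for your own camera.

```python
import cv2

# Load OpenCV's bundled Haar cascade for frontal faces (the Viola-Jones detector).
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
face_cascade = cv2.CascadeClassifier(cascade_path)

cap = cv2.VideoCapture(0)  # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Detect faces at multiple scales; tune scaleFactor/minNeighbors as needed.
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("Faces", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```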

Chapter 4, Deep Learning on Images, delves into how edge detection is used to create convolution operations over a volume and how different convolution parameters, such as filter size, dimensions, and operation type, affect the convolution volume. This chapter gives you a very detailed overview of how a neural network sees an image and how it uses that visualization to classify images. The chapter provides a TensorFlow Keras-based coding exercise to construct a neural network and visualize an image as it passes through the network's different layers. You will then compare the model's accuracy and visualizations to those of an advanced network such as VGG16 or Inception.
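The layer-by-layer visualization idea can be sketched as follows (a minimal illustration, not the book's exact model): build a small Keras CNN, then wrap it in a second model whose outputs are every layer's activations, so you can inspect how an image is transformed at each stage. The layer sizes and the random input image are placeholders.

```python
import tensorflow as tf
import numpy as np

# A small CNN; the layer sizes here are illustrative, not the book's exact model.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(64, 64, 3)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# An "activation model" that returns the output of every layer, so we can see
# how a single image changes as it moves through the network.
activation_model = tf.keras.Model(
    inputs=model.inputs,
    outputs=[layer.output for layer in model.layers],
)

image = np.random.rand(1, 64, 64, 3).astype("float32")  # stand-in for a real image
activations = activation_model.predict(image)
for layer, act in zip(model.layers, activations):
    print(layer.name, act.shape)
```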

Chapter 5, Neural Network Architecture and Models, explores different neural network architectures and models. This will give you an understanding of how the concepts learned in the first and fourth chapters are applied in various scenarios by changing the parameters for the convolution, pooling, activation, fully connected, and softmax layers. Hopefully, with these exercises, you will develop an understanding of a range of neural network models, which will give you a solid foundation as a computer vision engineer.
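A quick, hands-on way to start comparing architectures is simply to instantiate a few of them and look at their depth and parameter counts; the sketch below (illustrative only, with weights left uninitialized to avoid downloads) uses models available in tf.keras.applications.

```python
import tensorflow as tf

# Instantiate a few well-known architectures without downloading weights,
# purely to compare their depth and parameter counts. The model choice is illustrative.
for build in (tf.keras.applications.VGG16,
              tf.keras.applications.ResNet50,
              tf.keras.applications.InceptionV3):
    model = build(weights=None)
    print(f"{model.name}: {len(model.layers)} layers, {model.count_params():,} parameters")
```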

Chapter 6, Visual Search Using Transfer Learning, is where you will use TensorFlow to feed data into models and develop visual search methods for real-life situations. You will learn how to input images and their categories into a TensorFlow model using the Keras data generator and the TensorFlow tf.data API, and then cut a portion of a pretrained model and add your own layers at the end to develop your own classifier. The idea behind these exercises is to learn how to code in TensorFlow for the neural network models you learned about in the fourth and fifth chapters.
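The transfer learning pattern described here looks roughly like the sketch below (a minimal illustration, not the book's code): a Keras data generator streams labelled images from folders, a pretrained backbone is "cut" at its convolutional base and frozen, and a small classifier head is added on top. The directory path and head sizes are placeholders.

```python
import tensorflow as tf
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Keras data generator streaming images and labels from class-named folders
# (the directory path is a placeholder).
train_gen = ImageDataGenerator(rescale=1.0 / 255).flow_from_directory(
    "data/train", target_size=(224, 224), batch_size=32, class_mode="categorical")

# Cut a pretrained backbone at its convolutional base, freeze it,
# and add a small classifier head of our own.
base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                   input_shape=(224, 224, 3))
base.trainable = False

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(train_gen.num_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.fit(train_gen, epochs=5)
```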

Chapter 7, Object Detection Using YOLO, introduces two fast, single-stage object detection methods: You Only Look Once (YOLO) and RetinaNet. In this chapter, you will learn about the different YOLO models, find out how to change their configuration parameters, and make inferences with them. You will also learn how to process your own images to train a custom YOLO v3 model using Darknet.
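One common way to run inference with a Darknet-trained YOLO model is through OpenCV's DNN module, sketched below as an illustration (not the book's code). The cfg/weights/image file names are placeholders for files you would download or train yourself, and 416x416 is just a typical configuration value.

```python
import cv2
import numpy as np

# Load a Darknet YOLOv3 model; the cfg/weights file names are placeholders.
net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")

image = cv2.imread("street.jpg")
# YOLO expects a square, normalized input blob; 416x416 is a common config value.
blob = cv2.dnn.blobFromImage(image, 1 / 255.0, (416, 416), swapRB=True, crop=False)
net.setInput(blob)
outputs = net.forward(net.getUnconnectedOutLayersNames())

# Each detection row is [cx, cy, w, h, objectness, class scores...].
for out in outputs:
    for det in out:
        scores = det[5:]
        class_id = int(np.argmax(scores))
        if scores[class_id] > 0.5:
            print("class", class_id, "confidence", float(scores[class_id]))
```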

Chapter 8, Semantic Segmentation and Neural Style Transfer, discusses how deep neural networks are used to segment images into spatial regions, generate artificial images, and transfer styles from one image to another. We will perform a hands-on exercise for semantic segmentation using TensorFlow DeepLab and write TensorFlow code for neural style transfer in Google Colab. We will also generate artificial images using a DCGAN and perform image inpainting using OpenCV.
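At the heart of neural style transfer is the Gram matrix, which captures the channel-wise correlations of a layer's feature maps. The sketch below (a self-contained illustration using random tensors in place of real VGG feature maps) shows how a style loss is typically computed from it.

```python
import tensorflow as tf

def gram_matrix(feature_maps):
    """Style representation: channel-wise correlations of a layer's feature maps,
    averaged over spatial positions. Input shape: (batch, height, width, channels)."""
    result = tf.linalg.einsum("bijc,bijd->bcd", feature_maps, feature_maps)
    shape = tf.shape(feature_maps)
    num_positions = tf.cast(shape[1] * shape[2], tf.float32)
    return result / num_positions

# Illustrative check with random "feature maps" standing in for VGG activations.
style_features = tf.random.normal((1, 32, 32, 64))
generated_features = tf.random.normal((1, 32, 32, 64))
style_loss = tf.reduce_mean(
    tf.square(gram_matrix(style_features) - gram_matrix(generated_features)))
print(float(style_loss))
```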

Chapter 9, Action Recognition Using Multitask Deep Learning, explains how to develop multitask neural network models for the recognition of actions, such as the movement of a hand, mouth, head, or leg, to detect the type of action using a vision-based system. This will then be supplemented with a deep neural network model using cell phone accelerometer data to validate the action.
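A minimal stand-in for this vision-plus-accelerometer approach is a Keras functional model with two input branches whose features are fused before the action prediction, as sketched below. The branch sizes, the number of actions, and the 50-sample accelerometer window are all placeholder values, not the book's architecture.

```python
import tensorflow as tf

NUM_ACTIONS = 5  # placeholder

# Vision branch: a small CNN over a single frame.
frame_in = tf.keras.Input(shape=(64, 64, 3), name="frame")
x = tf.keras.layers.Conv2D(16, 3, activation="relu")(frame_in)
x = tf.keras.layers.GlobalAveragePooling2D()(x)

# Accelerometer branch: 1D convolution over a window of (x, y, z) readings.
accel_in = tf.keras.Input(shape=(50, 3), name="accelerometer")
a = tf.keras.layers.Conv1D(16, 5, activation="relu")(accel_in)
a = tf.keras.layers.GlobalAveragePooling1D()(a)

# Fuse both signals and predict the action class.
merged = tf.keras.layers.concatenate([x, a])
action = tf.keras.layers.Dense(NUM_ACTIONS, activation="softmax", name="action")(merged)

model = tf.keras.Model(inputs=[frame_in, accel_in], outputs=action)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```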

Chapter 10, Object Detection Using R-CNN, SSD, and R-FCN, marks the beginning of an end-to-end (E2E) object detection framework, developing a solid foundation in data ingestion and training pipelines followed by model development. Here, you will gain a deep insight into various object detection models, such as R-CNN, single-shot detector (SSD), region-based fully convolutional networks (R-FCNs), and Mask R-CNN, and perform hands-on exercises using Google Cloud and Google Colab notebooks. We will also carry out a detailed exercise on how to train on your own custom images to develop an object detection model using the TensorFlow Object Detection API. We will end the chapter with a detailed overview of various object tracking methods and a hands-on exercise using Google Colab notebooks.
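Once a detector has been trained and exported with the TensorFlow Object Detection API, inference follows the pattern sketched below (an illustration; the saved_model path is a placeholder, and the zero image stands in for a real photo).

```python
import tensorflow as tf
import numpy as np

# Load an exported TensorFlow Object Detection API model (path is a placeholder
# for a saved_model directory exported after training or downloaded from a model zoo).
detect_fn = tf.saved_model.load("exported_model/saved_model")

# The API expects a uint8 batch tensor of shape [1, height, width, 3].
image = np.zeros((1, 320, 320, 3), dtype=np.uint8)  # stand-in for a real image
detections = detect_fn(tf.constant(image))

# Standard output keys include detection_boxes, detection_scores, detection_classes.
boxes = detections["detection_boxes"][0].numpy()
scores = detections["detection_scores"][0].numpy()
for box, score in zip(boxes, scores):
    if score > 0.5:
        print(box, float(score))
```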

Chapter 11, Deep Learning on Edge Devices with CPU/GPU Optimization, discusses how to take the trained model and deploy it on edge devices and production systems, resulting in a complete end-to-end TensorFlow object detection model implementation. In particular, TensorFlow models are developed, converted, and optimized using TensorFlow Lite and the Intel Open Visual Inference and Neural Network Optimization (OpenVINO) toolkit, and deployed to Raspberry Pi, Android, and iPhone. Although this chapter focuses mainly on object detection on Raspberry Pi, Android, and iPhone, the approach discussed can be extended to image classification, style transfer, and action recognition for any edge device under consideration.
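The TensorFlow Lite conversion step typically looks like the sketch below (illustrative; the MobileNetV2 model and output file name are placeholders for your own trained model).

```python
import tensorflow as tf

# Convert a trained Keras model to TensorFlow Lite for edge deployment.
# The model here is a placeholder; in practice you would load your own trained
# classifier or detector instead.
model = tf.keras.applications.MobileNetV2(weights=None)

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # enable post-training quantization
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```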

Chapter 12, Cloud Computing Platform for Computer Vision, discusses how to package your application for training and deployment on Google Cloud Platform (GCP), Amazon Web Services (AWS), and the Microsoft Azure cloud platform. You will learn how to prepare your data, upload it to cloud data storage, and monitor the training. You will also learn how to send an image or an image vector to the cloud platform for analysis and get a JSON response back. This chapter discusses running both a single application and distributed TensorFlow on a compute engine. After training is complete, this chapter discusses how to evaluate your model and integrate it into your application to operate at scale.
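The "send an image, get JSON back" step generally follows the pattern below. This is a generic sketch: the endpoint URL, auth token, and payload schema are placeholders, since each platform (GCP, AWS, Azure) defines its own request format.

```python
import base64
import json
import requests

# Hypothetical prediction endpoint and token; replace with your platform's values.
ENDPOINT = "https://example.com/v1/models/my-detector:predict"
API_TOKEN = "YOUR_TOKEN"

# Encode the image as base64 so it can travel inside a JSON request body.
with open("test.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

payload = {"instances": [{"image_bytes": {"b64": image_b64}}]}
response = requests.post(
    ENDPOINT,
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    json=payload,
)
print(json.dumps(response.json(), indent=2))
```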
