Building a tracking vision system for moving objects
In this section, we build a simple tracking vision system. We have already learned how to detect a face in a still image; now we will detect faces in live video.
The idea is simple. Instead of a still image, the source is a frame captured from a camera. After calling read() on the VideoCapture object, we pass each frame into face_cascade.detectMultiScale(). Then we show the result in a display window. That's it.
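As a minimal sketch of this change, the snippet below opens the default camera, reads a single frame, and runs the same cascade detector on it (the cascade file name is the one used in the earlier still-image example); the full tracking loop shown next simply repeats this read-detect-show cycle:

import cv2

# Load the same Haar cascade used for still-image detection
face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')

# Open the default camera (device index 0)
cap = cv2.VideoCapture(0)

# Read one frame instead of loading an image file
ret, frame = cap.read()
if ret:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)
    print('faces found:', len(faces))

cap.release()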
For the implementation, type the following script:
import numpy as np
import cv2

# Load the pre-trained Haar cascade for frontal faces
face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')

# Open the default camera
cap = cv2.VideoCapture(0)

while True:
    # Capture frame-by-frame
    ret, frame = cap.read()

    # The cascade works on grayscale images
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)

    # Draw a rectangle around every detected face
    for (x, y, w, h) in faces:
        img = cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 255), 2)

    # Show the annotated frame; press 'q' to quit
    cv2.imshow('face tracking', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
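The listing is truncated at the break statement. When the loop exits after pressing q, it is good practice to release the camera and close the window; a typical ending, assuming nothing else follows the loop, is:

# Release the camera and close the display window when done
cap.release()
cv2.destroyAllWindows()

Run the script and a window titled 'face tracking' opens with the live camera feed; detected faces are framed by yellow (BGR (0, 255, 255)) rectangles, and pressing q quits.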