3D face recognition involves measuring the geometry of rigid features in the face. The 3D data is typically acquired with a time-of-flight sensor, a range camera, or by capturing multiple images of the face from a 360-degree sweep around the subject. A conventional 2D camera projects 3D space onto a 2D image plane, which is why depth sensing is one of the fundamental challenges of computer vision. Time-of-flight depth estimation measures the time a light pulse takes to travel from the light source to the object and back to the camera; the light source and the image acquisition are synchronized so that this round-trip time can be converted into a distance. Time-of-flight sensors can estimate full depth frames in real time, but their main drawback is low spatial resolution.
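To make the round-trip relationship concrete, here is a minimal sketch of pulse-based time-of-flight depth recovery. The function name `tof_depth` and the example timing values are illustrative assumptions rather than any specific device's API; real sensors usually expose phase shifts or gated intensities rather than raw round-trip times.

```python
# Minimal sketch of pulsed time-of-flight depth estimation.
# Assumption: the sensor reports the round-trip time of each light pulse
# per pixel; real devices typically expose phase or gated intensity instead.

import numpy as np

SPEED_OF_LIGHT = 299_792_458.0  # metres per second


def tof_depth(round_trip_time_s: np.ndarray) -> np.ndarray:
    """Convert per-pixel round-trip times (seconds) into depth (metres).

    The pulse covers the camera-to-object distance twice, so the
    one-way depth is half of the total distance travelled.
    """
    return SPEED_OF_LIGHT * round_trip_time_s / 2.0


if __name__ == "__main__":
    # A 2x2 toy "depth frame": round-trip times of a few nanoseconds
    # correspond to objects roughly 0.5-1.5 m from the sensor.
    times = np.array([[3.3e-9, 5.0e-9],
                      [6.7e-9, 10.0e-9]])
    print(tof_depth(times))  # ~[[0.49, 0.75], [1.00, 1.50]] metres
```

Because every pixel needs its own timing measurement, depth frames produced this way generally have far fewer pixels than a conventional RGB image, which is one way the spatial-resolution limitation noted above shows up in practice.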
3D face recognition can be broken down into the following three segments:

- Overview of hardware design...