Pose-Tolerant Face Recognition
Main Author: | |
---|---|
Format: | Others |
Published: | Research Showcase @ CMU, 2013 |
Subjects: | |
Online Access: | http://repository.cmu.edu/dissertations/244 http://repository.cmu.edu/cgi/viewcontent.cgi?article=1240&context=dissertations |
Summary: | Automatic face recognition performance has been steadily improving over years of active research; however, it remains significantly affected by a number of external factors such as illumination, pose, expression, occlusion and resolution that can severely alter the appearance of a face and negatively impact recognition scores. The focus of this thesis is the pose problem, which remains largely overlooked in most real-world applications. Specifically, we focus on one-to-one matching scenarios where a query face image of a random pose is matched against a set of “mugshot-style” near-frontal gallery images. We argue that in this scenario, a 3D face-modeling geometric approach is essential in tackling the pose problem. For this purpose, we utilize a recent technique for efficient synthesis of 3D face models called the 3D General Elastic Model (3DGEM). It solved the pose synthesis problem from a single frontal image, but could not solve the pose correction problem because of missing face data due to self-occlusion. In this thesis, we extend the formulation of 3DGEM and cast this task as an occlusion-removal problem. We propose a sparse feature extraction approach using subspace modeling and ℓ1-minimization to find a representation of the geometrically 3D-corrected faces that we show is stable under varying pose and resolution. We then show how pose tolerance can be achieved either in the feature space or in the reconstructed image space. We present two different algorithms that capitalize on the robustness of the sparse features extracted from the pose-corrected faces to achieve high matching rates that are minimally impacted by the variation in pose. We also demonstrate high verification rates upon matching non-frontal to non-frontal faces. Furthermore, we show that our pose-correction framework lends itself very conveniently to the task of super-resolution. By building a multiresolution subspace, we apply the same sparse feature extraction technique to achieve single-image super-resolution with high magnification rates. We discuss how our layered framework can potentially solve both pose and resolution problems in a unified and systematic manner. The modularity of our framework also keeps it flexible, upgradable and expandable to handle other external factors such as illumination or expressions. We run extensive tests on the MPIE dataset to validate our findings. |
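The core computational step described in the summary, recovering a sparse code for a pose-corrected face over a subspace of training faces via ℓ1-minimization and using it to fill in self-occluded pixels, can be sketched as follows. This is a minimal illustration under stated assumptions, not the thesis code: the ISTA solver, the function names (`solve_sparse_code`, `soft_threshold`), and the synthetic data are choices made for the example.

```python
# Minimal sketch (not the thesis code): l1-regularized sparse coding of a
# pose-corrected face over a subspace of training faces, with self-occluded
# pixels treated as missing data. Solver and names are illustrative.
import numpy as np


def soft_threshold(x, t):
    """Element-wise soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)


def solve_sparse_code(D, y, mask, lam=0.1, n_iter=500):
    """ISTA for  min_a  0.5 * ||M (D a - y)||_2^2 + lam * ||a||_1,
    where M keeps only the visible (non-occluded) pixels.

    D    : (n_pixels, n_atoms) subspace / dictionary of training faces
    y    : (n_pixels,) vectorized, geometrically pose-corrected query face
    mask : (n_pixels,) boolean, True where a pixel is visible
    """
    Dm, ym = D[mask], y[mask]
    L = np.linalg.norm(Dm, 2) ** 2      # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = Dm.T @ (Dm @ a - ym)     # gradient of the data-fit term
        a = soft_threshold(a - grad / L, lam / L)
    return a


# Usage on synthetic data: reconstruct the occluded region from the sparse code.
rng = np.random.default_rng(0)
D = rng.standard_normal((4096, 300))                  # e.g. 64x64 training faces
a_true = rng.standard_normal(300) * (rng.random(300) < 0.05)
y = D @ a_true                                        # synthetic query face
mask = rng.random(4096) > 0.3                         # ~30% of pixels occluded
a = solve_sparse_code(D, y, mask)
y_full = D @ a                                        # occlusion-free reconstruction
```

The recovered coefficient vector can itself serve as the pose-tolerant feature for matching, and applying the same coding step to a coupled low/high-resolution subspace pair corresponds to the single-image super-resolution variant mentioned in the summary.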