Single View Reconstruction for Human Face and Motion with Priors
Single view reconstruction is fundamentally an under-constrained problem. We aim to develop new approaches to model human face and motion with model priors that restrict the space of possible solutions. First, we develop a novel approach to recover the 3D shape from a single view image under challen...
Main Author: | Wang, Xianwang |
---|---|
Format: | Others |
Published: | UKnowledge, 2010 |
Subjects: | 3D reconstruction; marker-less motion capture; face modeling and analysis; motion modeling and analysis; non-linear dimensionality reduction; Computer Sciences |
Online Access: | http://uknowledge.uky.edu/gradschool_diss/62 http://uknowledge.uky.edu/cgi/viewcontent.cgi?article=1061&context=gradschool_diss |
id |
ndltd-uky.edu-oai-uknowledge.uky.edu-gradschool_diss-1061 |
record_format |
oai_dc |
spelling |
ndltd-uky.edu-oai-uknowledge.uky.edu-gradschool_diss-10612015-04-11T05:00:49Z Single View Reconstruction for Human Face and Motion with Priors Wang, Xianwang Single view reconstruction is fundamentally an under-constrained problem. We aim to develop new approaches to model human face and motion with model priors that restrict the space of possible solutions. First, we develop a novel approach to recover the 3D shape from a single view image under challenging conditions, such as large variations in illumination and pose. The problem is addressed by employing the techniques of non-linear manifold embedding and alignment. Specifically, the local image models for each patch of facial images and the local surface models for each patch of 3D shape are learned using a non-linear dimensionality reduction technique, and the correspondences between these local models are then learned by a manifold alignment method. Local models successfully remove the dependency on large training databases for human face modeling. By combining the local shapes, the global shape of a face can be reconstructed directly from a single linear system of equations via least squares. Unfortunately, this learning-based approach cannot be successfully applied to the problem of human motion modeling due to the internal and external variations in single view video-based marker-less motion capture. Therefore, we introduce a new model-based approach for capturing human motion using a stream of depth images from a single depth sensor. While a depth sensor provides metric 3D information, using a single sensor, instead of a camera array, results in a view-dependent and incomplete measurement of object motion. We develop a novel two-stage template fitting algorithm that is invariant to subject size and viewpoint variations, and robust to occlusions.
Starting from a known pose, our algorithm first estimates a body configuration through temporal registration, which is used to search the template motion database for a best match. The best-match body configuration and its corresponding surface mesh model are deformed to fit the input depth map, filling in the parts that are occluded from the input and compensating for differences in pose and body size between the input image and the template. Our approach does not require any markers, user interaction, or appearance-based tracking. Experiments show that our approaches can achieve good modeling results for human face and motion, and are capable of dealing with a variety of challenges in single view reconstruction, e.g., occlusion. 2010-01-01T08:00:00Z text application/pdf http://uknowledge.uky.edu/gradschool_diss/62 http://uknowledge.uky.edu/cgi/viewcontent.cgi?article=1061&context=gradschool_diss University of Kentucky Doctoral Dissertations UKnowledge 3D reconstruction marker-less motion capture face modeling and analysis motion modeling and analysis non-linear dimensionality reduction Computer Sciences |
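The local-to-global step described in the abstract — combining overlapping local patch shapes into one global face shape by solving a single linear system via least squares — can be sketched as follows. The function name, the per-patch index/depth representation, and the dense system assembly are illustrative assumptions for this note, not the dissertation's actual implementation.

```python
import numpy as np

def combine_local_shapes(patch_indices, local_depths, n_vertices):
    """Recover a global depth vector z from overlapping local patch estimates.

    Each patch contributes constraints z[i] ~= d for its vertices; stacking
    all constraints yields one linear system solved in the least-squares
    sense, so overlapping vertices are reconciled automatically.
    """
    rows, rhs = [], []
    for idx, depths in zip(patch_indices, local_depths):
        for i, d in zip(idx, depths):
            row = np.zeros(n_vertices)
            row[i] = 1.0          # constraint selects one global vertex
            rows.append(row)
            rhs.append(d)
    A = np.vstack(rows)
    b = np.asarray(rhs)
    z, *_ = np.linalg.lstsq(A, b, rcond=None)
    return z
```

Where two patches disagree on a shared vertex, the least-squares solution averages their estimates, which is the usual behavior one would want when stitching local reconstructions.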
collection |
NDLTD |
format |
Others |
sources |
NDLTD |
topic |
3D reconstruction marker-less motion capture face modeling and analysis motion modeling and analysis non-linear dimensionality reduction Computer Sciences |
spellingShingle |
3D reconstruction marker-less motion capture face modeling and analysis motion modeling and analysis non-linear dimensionality reduction Computer Sciences Wang, Xianwang Single View Reconstruction for Human Face and Motion with Priors |
description |
Single view reconstruction is fundamentally an under-constrained problem. We aim to develop new approaches to model human face and motion with model priors that restrict the space of possible solutions. First, we develop a novel approach to recover the 3D shape from a single view image under challenging conditions, such as large variations in illumination and pose. The problem is addressed by employing the techniques of non-linear manifold embedding and alignment. Specifically, the local image models for each patch of facial images and the local surface models for each patch of 3D shape are learned using a non-linear dimensionality reduction technique, and the correspondences between these local models are then learned by a manifold alignment method. Local models successfully remove the dependency on large training databases for human face modeling. By combining the local shapes, the global shape of a face can be reconstructed directly from a single linear system of equations via least squares.
Unfortunately, this learning-based approach cannot be successfully applied to the problem of human motion modeling due to the internal and external variations in single view video-based marker-less motion capture. Therefore, we introduce a new model-based approach for capturing human motion using a stream of depth images from a single depth sensor. While a depth sensor provides metric 3D information, using a single sensor, instead of a camera array, results in a view-dependent and incomplete measurement of object motion. We develop a novel two-stage template fitting algorithm that is invariant to subject size and viewpoint variations, and robust to occlusions. Starting from a known pose, our algorithm first estimates a body configuration through temporal registration, which is used to search the template motion database for a best match. The best-match body configuration and its corresponding surface mesh model are deformed to fit the input depth map, filling in the parts that are occluded from the input and compensating for differences in pose and body size between the input image and the template. Our approach does not require any markers, user interaction, or appearance-based tracking.
Experiments show that our approaches can achieve good modeling results for human face and motion, and are capable of dealing with a variety of challenges in single view reconstruction, e.g., occlusion. |
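The two-stage template fitting pipeline in the description — temporal registration to a rough body configuration, nearest-match lookup in a template motion database, then deformation of the matched mesh to the observed depth — can be outlined as a minimal sketch. Here `register` and `deform` are hypothetical callables standing in for the dissertation's temporal registration and mesh deformation steps, and the Euclidean nearest-neighbour pose search is a simplification of the database matching.

```python
import numpy as np

def nearest_template(pose, database):
    """Stage 1 (sketch): find the closest (pose, mesh) entry in the
    template motion database for a roughly registered body configuration."""
    dists = [np.linalg.norm(pose - p) for p, _ in database]
    return database[int(np.argmin(dists))]

def fit_frame(prev_pose, depth_points, database, register, deform):
    """Process one depth frame of the two-stage pipeline.

    register: temporal registration from the previous pose to the new frame
    deform:   non-rigid fit of the matched template mesh to the depth points
              (occluded regions simply keep the template's shape)
    Both are caller-supplied stand-ins for the real steps.
    """
    rough_pose = register(prev_pose, depth_points)      # stage 1a
    best_pose, best_mesh = nearest_template(rough_pose, database)  # stage 1b
    return deform(best_mesh, depth_points), best_pose   # stage 2
```

Keeping the registration and deformation steps as injected callables makes the control flow of the two stages explicit without committing to any particular mesh or sensor representation.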
author |
Wang, Xianwang |
author_facet |
Wang, Xianwang |
author_sort |
Wang, Xianwang |
title |
Single View Reconstruction for Human Face and Motion with Priors |
title_short |
Single View Reconstruction for Human Face and Motion with Priors |
title_full |
Single View Reconstruction for Human Face and Motion with Priors |
title_fullStr |
Single View Reconstruction for Human Face and Motion with Priors |
title_full_unstemmed |
Single View Reconstruction for Human Face and Motion with Priors |
title_sort |
single view reconstruction for human face and motion with priors |
publisher |
UKnowledge |
publishDate |
2010 |
url |
http://uknowledge.uky.edu/gradschool_diss/62 http://uknowledge.uky.edu/cgi/viewcontent.cgi?article=1061&context=gradschool_diss |
work_keys_str_mv |
AT wangxianwang singleviewreconstructionforhumanfaceandmotionwithpriors |
_version_ |
1716800466874007552 |