Summary: | Master's thesis === Tamkang University === Master's Program, Department of Computer Science and Information Engineering === 104 === In this study, we propose a system for transferring the style of portrait lighting. Given a test image taken under a particular lighting style, the system automatically transfers its lighting condition into another lighting style. Generally speaking, lighting estimation is difficult, since the light reflected from facial skin depends on the position of the light source and the three-dimensional facial geometry. The proposed system, however, requires only two-dimensional image information. First, a pseudo database is created to establish a correlation between two different lighting conditions; the estimation process is thereby treated as a domain transfer problem. To distinguish the lighting geometry of the two lighting domains while preserving personal characteristics, an AdaBoost-based approach is adopted to extract discriminative features. The lighting style transformation is then synthesized in a two-step manner: the lighting layer is synthesized first, followed by the detailed textures. In particular, these two synthesis steps are formulated as graph models, with the extracted features embedded in the models as constraints. The proposed framework was evaluated on the Kelco, AR, and Yale B face databases. According to our results and analysis, the proposed feature extraction step significantly improves the final synthesized results. Compared with previous works, the proposed framework is less sensitive to the appearance diversity of the training examples and can be applied to test subjects that are less similar to the training database.
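
The AdaBoost-based feature extraction step described above can be pictured roughly as follows. This is a minimal sketch under assumptions not stated in the abstract (64x64 grayscale face images, scikit-learn's AdaBoostClassifier with its default decision-stump weak learners, and random placeholder data); it only illustrates the idea of ranking image positions that best separate two lighting domains, not the thesis's actual feature extractor.

```python
# Minimal sketch (not the thesis's implementation): AdaBoost over decision
# stumps used to rank pixel positions that discriminate two lighting domains.
# Image size, sample counts, and data below are illustrative assumptions.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

H, W = 64, 64            # assumed face-image resolution
n_per_domain = 200       # assumed number of training portraits per domain

# Placeholder data: rows are flattened grayscale face images taken under
# lighting style A (label 0) and lighting style B (label 1).
rng = np.random.default_rng(0)
domain_a = rng.random((n_per_domain, H * W))
domain_b = rng.random((n_per_domain, H * W))
X = np.vstack([domain_a, domain_b])
y = np.concatenate([np.zeros(n_per_domain), np.ones(n_per_domain)])

# AdaBoost with default decision-stump weak learners; each stump thresholds
# a single pixel, so the ensemble effectively selects the pixel positions
# most useful for telling the two lighting styles apart.
booster = AdaBoostClassifier(n_estimators=100, random_state=0)
booster.fit(X, y)

# Keep the top-k positions by importance as the "discriminative features"
# that would later constrain the graph-based synthesis steps.
k = 50
discriminative_idx = np.argsort(booster.feature_importances_)[::-1][:k]
rows, cols = np.unravel_index(discriminative_idx, (H, W))
print(list(zip(rows[:5], cols[:5])))  # a few selected (row, col) positions
```

In this reading, the selected positions act as hard constraints in the two graph-model synthesis steps, so that the transferred lighting layer and detail textures stay consistent with the regions that carry the subject's identity; the exact constraint formulation is the thesis's own and is not reproduced here.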