Robust Visual Tracking Using Co-inference Fusion for Mixed Sequence


Bibliographic Details
Main Authors: Hsiao-Tzu Chen, 陳筱慈
Other Authors: Chih-Wei Tang
Format: Others
Language: zh-TW
Published: 2014
Online Access: http://ndltd.ncl.edu.tw/handle/96621652535890188650
Summary: Master's thesis === National Central University === Department of Communication Engineering === 102 === In visual tracking, mixed images cannot be avoided since the captured scene may contain specular reflections. Few previous methods tackle this important problem, so this thesis proposes a novel robust visual tracking method using co-inference fusion for mixed sequences. Based on a particle filter framework with a motion-compensated motion model, the method adopts co-inference to fuse two types of color measurements of the target. Although both measurements are observed from the same mixed image, one of them is built on a non-reflection mask constructed from the illumination image. The proposed scheme applies reflection separation to derive an illumination image and a reflection image from each mixed image before tracking. To satisfy the time-invariance assumption on the reflection image, camera motion is compensated on each mixed image before reflection separation. Finally, the weight of each particle is individually optimized using maximum likelihood. Experimental results show that the proposed scheme effectively improves tracking accuracy on mixed sequences with camera motion.
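The abstract describes fusing two color measurements per particle, one from the raw mixed image and one restricted by a non-reflection mask, and weighting particles by likelihood. The following is a minimal sketch of that fusion step only, not the thesis's actual implementation: the function names, the Gaussian likelihood form, and the multiplicative fusion rule are all illustrative assumptions, and the inputs are stand-in color-histogram distances.

```python
import math

def likelihood(distance, sigma=0.1):
    """Gaussian likelihood from a color-histogram distance (assumed form)."""
    return math.exp(-distance ** 2 / (2 * sigma ** 2))

def fuse_weights(dist_mixed, dist_masked, sigma=0.1):
    """Fuse two measurements per particle and normalize the weights.

    dist_mixed  -- distances computed on the raw mixed image
    dist_masked -- distances computed under the non-reflection mask
    """
    weights = [likelihood(dm, sigma) * likelihood(dk, sigma)
               for dm, dk in zip(dist_mixed, dist_masked)]
    total = sum(weights)
    return [w / total for w in weights]

# Toy example with 3 particles: the first agrees best with both measurements,
# so it receives the largest fused weight.
w = fuse_weights([0.2, 0.5, 0.9], [0.1, 0.6, 0.8])
```

A multiplicative fusion like this treats the two measurements as conditionally independent given the particle state; the thesis's co-inference scheme may weight or combine them differently.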