Abstract: A robust object tracking algorithm based on online discriminative appearance modeling is proposed in this paper. In contrast with traditional trackers, whose computations cover the whole target region and are therefore easily polluted by similar background pixels, we divide the target into a number of patches and take the most discriminative one as the tracking basis. Considering both photometric and spatial information, we construct a discriminative target model on this patch. A likelihood map is then obtained by comparing the target model with candidate regions, and the mean shift procedure is applied to it for mode seeking. Finally, we update the target model to adapt to appearance variations. Experimental results on a number of challenging video sequences confirm that the proposed method outperforms related state-of-the-art trackers.
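The abstract outlines a pipeline of likelihood-map construction, mean shift mode seeking, and model updating, but gives no implementation details. Below is a minimal sketch, assuming a precomputed per-pixel likelihood map; the patch selection, the discriminative target model, and the likelihood computation are hypothetical placeholders, and only a uniform-kernel mean shift step and a simple linear model update (with an assumed learning rate) are illustrated.

```python
import numpy as np

def mean_shift(likelihood, center, bandwidth=16, max_iter=20, eps=0.5):
    """Shift `center` (x, y) toward the local mode of `likelihood` (H x W)."""
    h, w = likelihood.shape
    ys, xs = np.mgrid[0:h, 0:w]
    for _ in range(max_iter):
        cx, cy = center
        # Uniform kernel: weights are the likelihood values inside a circular window.
        mask = (xs - cx) ** 2 + (ys - cy) ** 2 <= bandwidth ** 2
        weights = likelihood * mask
        total = weights.sum()
        if total <= 0:
            break  # no support inside the window; keep the current estimate
        new_center = ((weights * xs).sum() / total, (weights * ys).sum() / total)
        shift = np.hypot(new_center[0] - cx, new_center[1] - cy)
        center = new_center
        if shift < eps:
            break  # converged to a mode of the likelihood map
    return center

def update_model(model, observation, rate=0.1):
    """Blend the latest observation into the target model (assumed update rule)."""
    return (1.0 - rate) * model + rate * observation

# Usage with synthetic data: a Gaussian bump standing in for the likelihood map.
h, w = 120, 160
ys, xs = np.mgrid[0:h, 0:w]
likelihood = np.exp(-((xs - 100) ** 2 + (ys - 60) ** 2) / (2 * 15.0 ** 2))
print(mean_shift(likelihood, center=(80.0, 40.0)))  # converges near (100, 60)
```

The uniform kernel and the fixed blending rate are simplifications chosen for brevity; the paper's actual kernel, similarity measure, and update schedule may differ.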