Summary: | Motion estimation is vital in many computer vision applications. Most existing methods require feature correspondences of high quality and in large quantity, and may fail on images with little texture. In this paper, a photometric alignment method is proposed to obtain better motion estimation results. Because photometric constraints are usually limited by the underlying illumination or color consistency assumption, a new generalized content-preserving warp (GCPW) framework is designed to perform photometric alignment beyond color consistency. Like the conventional content-preserving warp (CPW), GCPW is a mesh-based framework, but it extends CPW by attaching a local color transformation model to every mesh quad, which expresses the color transformation from the source image to the target image within that quad. The motion-related mesh vertices and the color-related mapping parameters are optimized jointly in GCPW to obtain more robust motion estimates. Evaluation on tens of videos shows that the proposed method achieves more accurate motion estimation and, more importantly, is robust to significant color variation. In addition, this paper explores the performance of GCPW in two popular computer vision applications: image stitching and video stabilization. Experimental results demonstrate GCPW's effectiveness on the challenging scenes typical of these two applications.
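As a rough illustration of the joint formulation described above (not the authors' exact energy), assuming an affine per-quad color model $(A_q, b_q)$ and the standard CPW data and similarity terms, the objective could take a form like:

```latex
\begin{equation}
E\big(\hat{V}, \{A_q, b_q\}\big) \;=\;
\sum_{q}\sum_{p \in q}
\Big\| A_q\, I_s\!\big(w_{\hat{V}}(p)\big) + b_q - I_t(p) \Big\|^2
\;+\; \lambda_d\, E_{\text{data}}(\hat{V})
\;+\; \lambda_s\, E_{\text{similarity}}(\hat{V}),
\end{equation}
```

where $\hat{V}$ denotes the deformed mesh vertices, $w_{\hat{V}}$ is the warp they induce, $I_s$ and $I_t$ are the source and target images, $(A_q, b_q)$ is the hypothetical per-quad color mapping, and $E_{\text{data}}$, $E_{\text{similarity}}$ stand for the usual CPW feature-alignment and shape-preserving terms. The specific color model and regularization weights here are assumptions for exposition only.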