Wide Angle Virtual View Synthesis Using Two-by-Two Matrix Kinect V2

Bibliographic Details
Main Authors: Chang, Huan-Rui, 張桓睿
Other Authors: Hang, Hsueh-Ming
Format: Others
Language:en_US
Published: 2017
Online Access:http://ndltd.ncl.edu.tw/handle/69022361210755264898
Description
Summary:Master's Thesis === National Chiao Tung University === Institute of Electronics === 105 === Virtual view synthesis uses the color images and depth maps captured by multiple cameras to synthesize a virtual view image. A conventional view synthesis system typically arranges two or more cameras along a baseline, captures the scene, and synthesizes a virtual view that lies on that baseline. In many applications, however, we may want to synthesize a virtual view beyond the baseline, which is called wide-angle view synthesis. Wide-angle view synthesis must cope with many small cracks and large disocclusion regions in the synthesized virtual view. In this thesis, we adopt a two-by-two array of Kinect v2 cameras to build a wide-angle view synthesis system targeted at practical, real-world scenes. The thesis makes two key contributions. The first is solving the synchronization problem among multiple Kinects during capture: without an external sync signal, the four Kinects do not capture the scene at exactly the same time instant. We implement a clock adjustment scheme, based on PC clock synchronization software, to solve this problem. The second contribution is a multi-view blending algorithm that removes the erroneous pixels caused by texture-depth misalignment and texture-texture misalignment, yielding a clear improvement in synthesized image quality. The proposed algorithm consists of three parts. The first part picks the dominant reference view to synthesize the corresponding virtual-image pixels. The second part filters out the noise caused by texture-depth and texture-texture misalignment. The third part fills the remaining holes by choosing the best-matched color pixels from the four warped virtual views. Finally, we examine the quality of the synthesized views for various camera tilt and zoom-in/out cases. The proposed multi-view blending algorithm achieves good subjective image quality in these real-world cases.
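Since the four Kinects lack an external sync signal, one practical consequence of PC-clock synchronization is that each capture machine can timestamp its frames against a shared clock, and the system can then select the best-aligned set of four frames. The following is a minimal sketch of that selection step only; the function names, anchoring on camera 0, and the assumption of already-synchronized timestamps are illustrative, not the thesis's actual implementation.

```python
# Illustrative sketch: given one sorted timestamp list per Kinect (seconds,
# assumed already aligned by PC clock synchronization software), pick one
# frame per camera so that the chosen timestamps are as close as possible.
from bisect import bisect_left

def closest_index(ts, t):
    """Index of the timestamp in sorted list `ts` closest to time `t`."""
    i = bisect_left(ts, t)
    candidates = [j for j in (i - 1, i) if 0 <= j < len(ts)]
    return min(candidates, key=lambda j: abs(ts[j] - t))

def best_aligned_frames(streams):
    """streams: list of sorted timestamp lists, one per Kinect.
    Returns (indices, spread): one frame index per stream that minimizes
    the max-min spread of the selected timestamps, anchored on stream 0."""
    best = None
    for i, t in enumerate(streams[0]):
        idx = [i] + [closest_index(s, t) for s in streams[1:]]
        chosen = [s[j] for s, j in zip(streams, idx)]
        spread = max(chosen) - min(chosen)
        if best is None or spread < best[1]:
            best = (idx, spread)
    return best
```

With four streams running at roughly 30 fps, this keeps the residual misalignment bounded by the remaining clock offset plus at most half a frame interval per camera.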
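The three-part blending rule described in the abstract can be illustrated per pixel: keep only the candidate samples near the dominant (front-most) depth, reject colors that disagree with the consensus (misalignment noise), and combine what survives. This is a hedged sketch under assumed grayscale colors, simple median-based rejection, and made-up thresholds (`DEPTH_TOL`, `COLOR_TOL`); the thesis's actual dominant-view selection and matching criteria are not specified here.

```python
# Hypothetical per-pixel sketch of a three-stage multi-view blending rule.
# `samples` holds, for one virtual-view pixel, one entry per warped view:
# (gray_color, depth, weight), or None where that view has a hole.

DEPTH_TOL = 5.0    # samples much farther than the nearest depth are occluded
COLOR_TOL = 30.0   # samples far from the median color count as misalignment

def blend_pixel(samples):
    """Return the blended gray value, or None if the pixel remains a hole."""
    valid = [s for s in samples if s is not None]
    if not valid:
        return None                       # leave the hole for later filling
    # Stage 1: keep only samples consistent with the dominant (nearest) depth.
    z_min = min(d for _, d, _ in valid)
    front = [s for s in valid if s[1] - z_min <= DEPTH_TOL]
    # Stage 2: reject colors far from the median (texture-depth /
    # texture-texture misalignment noise).
    colors = sorted(c for c, _, _ in front)
    median = colors[len(colors) // 2]
    kept = [s for s in front if abs(s[0] - median) <= COLOR_TOL]
    if not kept:
        kept = front
    # Stage 3: weighted average of the surviving candidates.
    wsum = sum(w for _, _, w in kept)
    return sum(c * w for c, _, w in kept) / wsum
```

In this toy rule, a pixel where all four warped views are holes returns None, which in a full pipeline would be passed to an inpainting stage.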