Summary: | Master's thesis === National Chi Nan University === Department of Computer Science and Information Engineering === 91 === Owing to their computational simplicity and excellent performance in close-range depth estimation, depth from defocusing (DFD) and depth from focusing (DFF) have long been regarded as two important visual modules. It has been reported repeatedly in the literature that the estimation accuracy of a DFF module is better than that of a DFD module. Most researchers believe that DFF outperforms DFD because DFF uses many more images than DFD does. However, empirical data show that DFD is less accurate than DFF even when DFD uses about the same number of images as DFF. In this work, we overturn the general belief that DFD is inherently less accurate than DFF. Based on our theoretical error analysis, we have found a way to greatly improve the accuracy of a DFD module.
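For context, DFD methods rest on the standard thin-lens imaging model; the abstract does not state it explicitly, and the notation below is the conventional one rather than the thesis's own. For a lens of focal length $f$ and aperture diameter $D$, with the sensor at distance $v$ behind the lens, a point at object distance $u$ produces a blur circle of diameter
\[ b = D\,v\,\left|\frac{1}{f} - \frac{1}{u} - \frac{1}{v}\right|, \]
so an estimate of the blur $b$ (or of a Gaussian blur parameter proportional to it by a camera constant) can be inverted to recover the depth $u$ once the camera parameters are calibrated.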
In this thesis, the Cram{\'e}r-Rao lower bound of the image-blur estimation error is derived analytically. According to the derived error bound, conditions that lead to accurate DFD estimation are identified. It is shown that traditional DFD methods using only two images may be problematic. To obtain accurate depth estimates from DFD, one has to acquire a sequence of images focused at different distances. Furthermore, we propose to integrate the DFF module into the DFD module by using the DFF results to discard severely defocused images and to match suitable images into pairs for computing DFD. Experimental results on real images show that the proposed method outperforms both the DFF-alone and the DFD-alone approaches; the accuracy of the integrated system is about 0.4\% at an object distance of about 850 mm. However, since the aforementioned method is developed under the constant-depth-image-block assumption (which is also adopted in most DFD methods), the depth estimation accuracy deteriorates when the depth map varies sharply. To tackle this problem, we propose to model the space-variant image-blur parameter with the Cai-Wang wavelet bases to improve the DFD accuracy. Computer simulation results show that the space-variant approach is also very promising.
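The DFF-guided pairing step can be pictured with a minimal Python sketch, assuming a focal stack of grayscale images; the focus measure (variance of the Laplacian), the number of retained frames, and the adjacent-pairing rule are illustrative choices, not the thesis's actual algorithm.

import numpy as np
from scipy.ndimage import laplace

def focus_measure(img):
    # Variance of the Laplacian: a common sharpness score used in DFF.
    return laplace(img.astype(np.float64)).var()

def select_and_pair(stack, keep=4):
    # stack: sequence of 2-D grayscale images focused at different distances.
    # keep : number of best-focused frames to retain (illustrative choice).
    scores = np.array([focus_measure(img) for img in stack])
    best = np.sort(np.argsort(scores)[::-1][:keep])   # sharpest frames, in focal order
    # Pair adjacent retained frames; each pair would feed a two-image DFD estimator.
    pairs = [(best[i], best[i + 1]) for i in range(len(best) - 1)]
    return best, pairs

The point of the sketch is only the division of labor described in the abstract: the DFF-style sharpness ranking removes severely defocused frames before any DFD computation, so the DFD estimator only ever sees image pairs whose blur difference is informative.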
Keywords: Depth from Focusing (DFF), Depth from Defocusing (DFD), Cram{\'e}r-Rao Lower Bound, Optimal Camera Parameter, Cai-Wang Wavelet Transform, Space-Variant
|