The Study of Automatic 2D-to-3D Depth Estimation

Master's thesis === National Cheng Kung University (國立成功大學) === Department of Computer Science and Information Engineering (資訊工程學系) === Academic year 104 === In this work, we propose an automatic 2D-to-3D depth estimation method based primarily on learning and geometry. Because the algorithm is learning-based, it places no restriction on the scene content of the input image or video. The method also refines the estimated depth using vanishing points recovered from scene geometry. To accelerate the overall algorithm, features of the 2D images in the database are extracted in advance; at run time, only the query image's features must be computed and compared against the precomputed training features to find the most similar database images, which reduces computing time. Furthermore, the thesis introduces a method for propagating depth values to video frames with similar scenes. Experimental results show that the depth maps produced by the algorithm closely match the ground truth, and that both PSNR and VIF (visual information fidelity) scores outperform other reported algorithms.
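To make the retrieval step concrete, the following Python sketch mirrors the pipeline the abstract describes: database features are computed once offline, and at query time only the query image's feature is computed and compared against them. The thumbnail descriptor and the median depth fusion are illustrative assumptions; this record does not specify the thesis's actual feature or fusion scheme.

```python
import numpy as np

def extract_feature(image, size=(16, 16)):
    """Toy global descriptor: a normalized, flattened thumbnail (assumed)."""
    h, w = image.shape[:2]
    ys = np.linspace(0, h - 1, size[0]).astype(int)
    xs = np.linspace(0, w - 1, size[1]).astype(int)
    thumb = image[np.ix_(ys, xs)].astype(np.float32)
    vec = thumb.ravel()
    return vec / (np.linalg.norm(vec) + 1e-8)

def build_database(images):
    """Offline step: precompute one feature vector per database image."""
    return np.stack([extract_feature(img) for img in images])

def retrieve_depth(query, db_features, db_depths, k=3):
    """Online step: compute only the query feature, rank database images by
    feature distance, and fuse the depth maps of the k nearest matches."""
    q = extract_feature(query)
    dists = np.linalg.norm(db_features - q, axis=1)
    nearest = np.argsort(dists)[:k]
    return np.median(np.stack([db_depths[i] for i in nearest]), axis=0)
```

Because build_database runs once, each query costs only one feature extraction plus a linear scan over precomputed vectors, which is the speed-up the abstract claims.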

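The depth-propagation idea for video can be sketched in the same spirit: reuse a key frame's retrieved depth while successive frames remain similar, and re-run retrieval when the scene changes. The feature-distance threshold test is an assumption for illustration, and the helpers come from the sketch above.

```python
import numpy as np

def propagate_depth(frames, db_features, db_depths, threshold=0.25, k=3):
    """Run retrieval on a key frame, then reuse its depth map while each new
    frame's feature stays within `threshold` of the key frame's feature."""
    depths, key_feat, key_depth = [], None, None
    for frame in frames:
        feat = extract_feature(frame)          # helper from the sketch above
        if key_feat is None or np.linalg.norm(feat - key_feat) > threshold:
            key_feat = feat                    # scene changed: new key frame
            key_depth = retrieve_depth(frame, db_features, db_depths, k=k)
        depths.append(key_depth)
    return depths
```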
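Of the two metrics cited, PSNR is simple to restate; VIF involves a wavelet-domain statistical model and is omitted here. A minimal PSNR computation:

```python
import numpy as np

def psnr(estimate, ground_truth, peak=255.0):
    """Peak signal-to-noise ratio in dB between an estimated depth map and
    ground truth; higher values mean a closer match."""
    err = estimate.astype(np.float64) - ground_truth.astype(np.float64)
    mse = np.mean(err ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```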

Bibliographic Details
Chinese Title: 2D轉3D自動化深度預測之研究
Main Author: Meng-Cheng Bai (白孟哲)
Other Authors: Shu-Mei Guo (郭淑美)
Format: Others (thesis; 60 pages)
Language: English (en_US)
Published: 2016
Online Access: http://ndltd.ncl.edu.tw/handle/61379628822347936247