DEN: Disentangling and Exchanging Network for Depth Completion
Master's === National Chung Cheng University === Graduate Institute of Electrical Engineering === 107 === In this research, we propose a Disentangling and Exchanging Network (DEN) to inpaint the depth channel of an RGB-D image captured by a commodity-grade depth camera. When the environment is large, surfaces are shiny, or lighting is strong, t...
Main Authors: | WU, YOU-FENG 吳侑峰 |
---|---|
Other Authors: | HOANG, CHING-CHUN 黃敬群 |
Format: | Others |
Language: | zh-TW |
Published: | 2019 |
Online Access: | http://ndltd.ncl.edu.tw/handle/9nmryq |
id |
ndltd-TW-107CCU00442032 |
record_format |
oai_dc |
spelling |
ndltd-TW-107CCU004420322019-11-01T05:28:13Z http://ndltd.ncl.edu.tw/handle/9nmryq DEN: Disentangling and Exchanging Network for Depth Completion 應用於深度圖填補之分離式表徵神經網路 WU, YOU-FENG 吳侑峰 Master's National Chung Cheng University Graduate Institute of Electrical Engineering 107 In this research, we propose a Disentangling and Exchanging Network (DEN) to inpaint the depth channel of an RGB-D image captured by a commodity-grade depth camera. When the environment is large, surfaces are shiny, or lighting is strong, the depth channel is often sparse or contains missing data, while the RGB channels remain dense and retain useful information. From this observation, we explore the feasibility of borrowing useful information from the RGB image, such as structural information, to complete the sparse depth channel. We start from the assumption that the RGB image and the depth image can each be decomposed into two main parts: common features and specific features. The common features represent structural information that should be the same in the RGB and depth images. The specific features capture the depth information and the color information in the depth and RGB images, respectively. The structural information extracted from the RGB image is then combined with the depth information extracted from the depth image to guide depth completion. With this idea, we arrive at a novel architecture for depth completion trained with four types of objectives. To evaluate our architecture, we conducted ablation studies and compared the results with state-of-the-art CNN-based depth completion methods. Our architecture achieves the lowest errors on the ScanNet dataset, outperforming the state of the art, and also produces qualitatively better results. HOANG, CHING-CHUN 黃敬群 2019 Thesis ; thesis 64 zh-TW |
collection |
NDLTD |
language |
zh-TW |
format |
Others |
sources |
NDLTD |
description |
Master's === National Chung Cheng University === Graduate Institute of Electrical Engineering === 107 === In this research, we propose a Disentangling and Exchanging Network (DEN) to inpaint the depth channel of an RGB-D image captured by a commodity-grade depth camera. When the environment is large, surfaces are shiny, or lighting is strong, the depth channel is often sparse or contains missing data, while the RGB channels remain dense and retain useful information. From this observation, we explore the feasibility of borrowing useful information from the RGB image, such as structural information, to complete the sparse depth channel. We start from the assumption that the RGB image and the depth image can each be decomposed into two main parts: common features and specific features. The common features represent structural information that should be the same in the RGB and depth images. The specific features capture the depth information and the color information in the depth and RGB images, respectively. The structural information extracted from the RGB image is then combined with the depth information extracted from the depth image to guide depth completion. With this idea, we arrive at a novel architecture for depth completion trained with four types of objectives. To evaluate our architecture, we conducted ablation studies and compared the results with state-of-the-art CNN-based depth completion methods. Our architecture achieves the lowest errors on the ScanNet dataset, outperforming the state of the art, and also produces qualitatively better results.
|
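The disentangle-and-exchange idea described in the abstract can be sketched as a minimal NumPy toy: each modality is projected into a common (structural) part and a specific part, and the decoder combines the RGB image's common features with the depth image's specific features. All names, shapes, and dimensions here are illustrative assumptions, with random linear projections standing in for the thesis's learned encoders and decoder; this is not the actual DEN implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions, not from the thesis).
H, W = 8, 8   # image height and width
F = 16        # per-branch feature dimension

def encode(img, w_common, w_specific):
    """Split an image into common (structural) and modality-specific
    feature vectors via two linear projections."""
    x = img.reshape(-1)                      # flatten to a vector
    return w_common @ x, w_specific @ x

def decode(common, specific, w_dec):
    """Reconstruct a dense depth map from a (common, specific) pair."""
    return (w_dec @ np.concatenate([common, specific])).reshape(H, W)

# Random weights stand in for trained encoders/decoder.
w_c_rgb = rng.standard_normal((F, H * W * 3))
w_s_rgb = rng.standard_normal((F, H * W * 3))
w_c_dep = rng.standard_normal((F, H * W))
w_s_dep = rng.standard_normal((F, H * W))
w_dec   = rng.standard_normal((H * W, 2 * F))

rgb    = rng.standard_normal((H, W, 3))      # dense RGB channels
sparse = rng.standard_normal((H, W))         # sparse/incomplete depth

c_rgb, s_rgb = encode(rgb, w_c_rgb, w_s_rgb)
c_dep, s_dep = encode(sparse, w_c_dep, w_s_dep)

# The "exchange": structure borrowed from RGB guides the depth decoder,
# combined with the depth-specific features.
completed = decode(c_rgb, s_dep, w_dec)
print(completed.shape)  # (8, 8)
```

In training, objectives such as reconstruction losses and a consistency constraint pulling `c_rgb` toward `c_dep` would shape these projections; the abstract states there are four types of objectives but does not enumerate them, so none are implemented here.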
author2 |
HOANG, CHING-CHUN |
author_facet |
HOANG, CHING-CHUN WU, YOU-FENG 吳侑峰 |
author |
WU, YOU-FENG 吳侑峰 |
spellingShingle |
WU, YOU-FENG 吳侑峰 DEN: Disentangling and Exchanging Network for Depth Completion |
author_sort |
WU, YOU-FENG |
title |
DEN: Disentangling and Exchanging Network for Depth Completion |
title_short |
DEN: Disentangling and Exchanging Network for Depth Completion |
title_full |
DEN: Disentangling and Exchanging Network for Depth Completion |
title_fullStr |
DEN: Disentangling and Exchanging Network for Depth Completion |
title_full_unstemmed |
DEN: Disentangling and Exchanging Network for Depth Completion |
title_sort |
den: disentangling and exchanging network for depth completion |
publishDate |
2019 |
url |
http://ndltd.ncl.edu.tw/handle/9nmryq |
work_keys_str_mv |
AT wuyoufeng dendisentanglingandexchangingnetworkfordepthcompletion AT wúyòufēng dendisentanglingandexchangingnetworkfordepthcompletion AT wuyoufeng yīngyòngyúshēndùtútiánbǔzhīfēnlíshìbiǎozhēngshénjīngwǎnglù AT wúyòufēng yīngyòngyúshēndùtútiánbǔzhīfēnlíshìbiǎozhēngshénjīngwǎnglù |
_version_ |
1719285110462742528 |