Adaptive View Sampling for Efficient Synthesis of 3D View Using Calibrated Array Cameras

Recovery of three-dimensional (3D) coordinates from a set of images, combined with texture mapping to generate a 3D mesh, has been of great interest in computer graphics and 3D imaging applications. This work proposes an adaptive view selection (AVS) approach that determines the optimal number of images for generating the synthesis result from the 3D mesh and textures, balancing computational complexity against image quality measured by the peak signal-to-noise ratio (PSNR). The 25 input images were acquired by a set of cameras in a 5 × 5 array structure and had already been rectified. To generate the mesh, a depth map was extracted by computing the disparity between matched feature points. Synthesis was performed by fully exploiting the content of the images, followed by texture mapping. Both the 2D color images and the grey-scale depth images were synthesized based on the geometric relationship between the views, so that 3D synthesis could be performed with fewer than 25 images. This work determines the optimal number of images that suffices to provide a reliable extended 3D view by generating a mesh and image textures. The optimal number of images yields an efficient 3D view generation system that reduces computational complexity while preserving the quality of the result in terms of the PSNR. Experimental results are provided to substantiate the proposed approach.
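The abstract's selection criterion (add views only while they improve PSNR enough to justify the extra computation) can be illustrated with a minimal sketch. This is not the authors' implementation: the `psnr` metric is standard, but `select_view_count`, the `synthesize` callback (standing in for the depth-from-disparity, mesh, and texture-mapping pipeline), the view ordering, and the `min_gain_db` threshold are all illustrative assumptions.

```python
import numpy as np

def psnr(reference, synthesized, peak=255.0):
    """Peak signal-to-noise ratio (dB) between a reference view and a synthesized view."""
    mse = np.mean((reference.astype(np.float64) - synthesized.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)

def select_view_count(views, synthesize, reference, candidate_counts, min_gain_db=0.5):
    """Grow the number of input views until the PSNR gain of the synthesized result
    falls below `min_gain_db`, and return the smallest sufficient view count.

    `synthesize(subset)` is a hypothetical stand-in for the full synthesis pipeline
    (disparity -> depth map -> mesh -> texture mapping) and is not defined here.
    """
    best_count, prev_psnr = candidate_counts[0], -np.inf
    for count in candidate_counts:
        subset = views[:count]  # e.g. views ordered outward from the array centre
        quality = psnr(reference, synthesize(subset))
        if np.isfinite(prev_psnr) and quality - prev_psnr < min_gain_db:
            break  # extra views no longer pay for their added computation
        best_count, prev_psnr = count, quality
    return best_count
```

A caller would pass, for example, `candidate_counts=[1, 5, 9, 13, 25]` for a 5 × 5 array and a held-out reference view; the returned count trades PSNR against the cost of processing additional images.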


Bibliographic Details
Main Authors: Geonwoo Kim, Deokwoo Lee (Department of Computer Engineering, Keimyung University, Daegu 42601, Korea)
Format: Article
Language: English
Published: MDPI AG, 2021-01-01
Series: Electronics (ISSN 2079-9292)
Subjects: array cameras; light field camera; adaptive view selection; view synthesis; mesh; texture
DOI: 10.3390/electronics10010082
Online Access: https://www.mdpi.com/2079-9292/10/1/82