Color Continuity-Aware & Exposure-Adaptive HDR Synthesis and its Real-Time Application on Autotronics Night Vision Enhancement


Bibliographic Details
Main Authors: Chun-Hao Hsu, 許鈞豪
Other Authors: Ching-Wei Yeh
Format: Others
Language: zh-TW
Published: 2012
Online Access: http://ndltd.ncl.edu.tw/handle/38810276923215636300
Description
Summary: Master's thesis === National Chung Cheng University === Graduate Institute of Electrical Engineering === 100 (academic year) === High dynamic range (HDR) synthesis is popular in consumer products such as digital cameras and smartphones for generating high-quality still pictures. However, these techniques are difficult to apply to video, owing to the strict timing constraint (i.e., 1/30 s for a typical 30 fps frame rate) imposed on capturing the image sources of various exposure values required for HDR synthesis.

This thesis synthesizes HDR frames from only two sources, a "high exposure (HE)" frame and a "low exposure (LE)" frame. The "color discontinuity" and "loss of details" problems caused by the limited sources are solved by the proposed "color continuity-aware" and "exposure-adaptive" pixel merge. Moreover, an improved tone-mapping algorithm, which combines the advantages of conventional photographic and gradient-based approaches, is proposed to generate the final result. Simulations on various scenes show that the proposed HE/LE algorithm achieves the subjective quality of conventional methods based on multiple source images (e.g., five images at -2 EV, -1 EV, 0 EV, +1 EV, and +2 EV).

The proposed HDR video synthesis is then applied to autotronics night-vision enhancement. An alternative algorithm based on "auto exposure (AE)" and LE is proposed in place of the HE/LE approach to solve the under-exposure problem in dark environments. In other words, the result pixels come directly from AE (i.e., an appropriate exposure for most pixels), except the over-exposed ones, which are augmented by the corresponding pixels in LE to recover detail. LE entropy-aware augmentation and seam smoothing are also proposed to further improve the synthesis quality.

Finally, the proposed algorithm has been implemented and verified on the TI OMAP4430 (dual-core ARM Cortex-A9) embedded platform. The source images for HDR synthesis are captured from two USB cameras with software-based alignment.
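The AE/LE scheme described above (keep AE pixels, substitute LE pixels where AE is over-exposed, and smooth the seam between the two) can be sketched as follows. This is a minimal illustration, not the thesis's implementation: the saturation threshold, blend width, and the soft weight ramp used to approximate seam smoothing are all illustrative assumptions.

```python
import numpy as np

def ae_le_merge(ae, le, sat_thresh=240, blend_width=16):
    """Merge an auto-exposure (AE) frame with a low-exposure (LE) frame.

    Pixels come from AE except over-exposed ones, which are taken from
    the aligned LE frame; a soft weight ramp around the saturation
    threshold stands in for the thesis's seam smoothing. Parameter
    values are assumptions for illustration only.
    """
    ae = ae.astype(np.float32)
    le = le.astype(np.float32)
    # Mean over color channels as a simple over-exposure indicator.
    luma = ae.mean(axis=-1) if ae.ndim == 3 else ae
    # Weight ramps from 0 (keep AE) to 1 (use LE) across blend_width levels.
    w = np.clip((luma - (sat_thresh - blend_width)) / blend_width, 0.0, 1.0)
    if ae.ndim == 3:
        w = w[..., None]
    out = (1.0 - w) * ae + w * le
    return np.clip(out, 0, 255).astype(np.uint8)
```

The ramp avoids a hard switch between AE and LE content, which would otherwise leave a visible seam at the boundary of saturated regions.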
The multithreaded implementation achieves real-time HDR video synthesis at 10 frames/sec.
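The thesis's improved tone mapping combines photographic and gradient-based operators; that combination is not reproduced here, but the conventional photographic baseline it builds on (a global Reinhard-style operator) can be sketched as follows. The key value 0.18 and the epsilon are common defaults, not values taken from the thesis.

```python
import numpy as np

def photographic_tonemap(hdr_lum, key=0.18, eps=1e-6):
    """Global photographic tone mapping (Reinhard-style).

    Scales the HDR luminance so its log-average maps to the chosen
    'key', then compresses the result into [0, 1) for display.
    """
    hdr_lum = np.asarray(hdr_lum, dtype=np.float64)
    # Log-average (geometric mean) luminance of the HDR frame.
    l_avg = np.exp(np.mean(np.log(hdr_lum + eps)))
    # Scale to the key, then apply the compressive x / (1 + x) curve.
    l_scaled = (key / l_avg) * hdr_lum
    return l_scaled / (1.0 + l_scaled)
```

The compressive curve maps arbitrarily large luminance values into the displayable range while preserving the ordering of pixel intensities; gradient-based operators, by contrast, work on luminance differences, which is why the thesis combines the two.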