Pixel-based Multi-focused Image Fusion using Color Appearance Model

Master's thesis === Tatung University === Department of Computer Science and Engineering === Academic Year 103 === Multi-focus image fusion combines multiple images of the same scene, each captured at a different focal distance, into a single picture in which all regions appear in focus. Most existing multi-focus fusion methods handle only grayscale images....


Bibliographic Details
Main Authors: Wen-hao Wu, 吳文豪
Other Authors: Chen-Chiung Hsieh
Format: Others
Language: zh-TW
Published: 2015
Online Access: http://ndltd.ncl.edu.tw/handle/54411509888283802338
id ndltd-TW-103TTU05392011
record_format oai_dc
spelling ndltd-TW-103TTU053920112016-08-14T04:11:10Z http://ndltd.ncl.edu.tw/handle/54411509888283802338 Pixel-based Multi-focused Image Fusion using Color Appearance Model 使用彩色表現模型進行畫素點為基礎之多焦點影像融合 Wen-hao Wu 吳文豪 Master's thesis, Tatung University, Department of Computer Science and Engineering, Academic Year 103. Advisor: Chen-Chiung Hsieh 謝禎冏. 2015. Thesis, 72 pages, zh-TW.
collection NDLTD
language zh-TW
format Others
sources NDLTD
description Master's thesis === Tatung University === Department of Computer Science and Engineering === Academic Year 103 === Multi-focus image fusion combines multiple images of the same scene, each captured at a different focal distance, into a single picture in which all regions appear in focus. Most existing multi-focus fusion methods handle only grayscale images. This thesis analyzes and processes color images directly and measures focus with a color saturation value, computed as the chrominance value divided by the luminance value, rather than with a traditional rectangular window mask. Analysis of in-focus pixels shows that their focus patterns resemble star-light shapes, so the proposed measure is named star-light focusing detection and is used to estimate the focus status of each pixel. The minimum measurement unit is the pixel: pixel-based fusion preserves more detail than block-based or region-based fusion, and each pixel value in the fused image is kept as close as possible to the corresponding value in a source image, so the result stays faithful to the originals. Because practical factors in the photographing process, such as hand shake, distortion introduced by the optical assembly, and wind moving objects in the scene, cause the same object to be displaced or deformed between the source images, image registration must be taken into account before fusion. To verify the fused results, a fusion mask is constructed manually and used as a reference to produce the best achievable fused image, against which PSNR is calculated. The mask produced by the proposed method is also compared with this manually constructed mask to compute the correct rate and error rate. Compared with state-of-the-art methods on two groups of color images, the proposed method achieves a mean correct rate of 80% and a PSNR of 42 dB; on two further groups of grayscale images it achieves 83% and 40 dB, outperforming most of the referenced methods. These results show that color saturation and star-light focusing detection can be applied reliably to image fusion.
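The description above names the main computational pieces: a per-pixel saturation measure (chrominance divided by luminance), a star-light-shaped focus detector, pixel-wise selection between the source images, and evaluation by PSNR and correct rate against a manually built reference mask. The following Python/NumPy sketch is only a minimal illustration of such a pipeline, not the thesis's actual algorithm: the BT.601 colour conversion, the eight-direction neighbourhood standing in for the star-light detector, the radius parameter, and the rule that a larger local saturation difference means better focus are all assumptions of this sketch, and image registration is omitted.

import numpy as np

def saturation(rgb):
    # Per-pixel saturation: chrominance magnitude divided by luminance,
    # using BT.601 YCbCr coefficients (an assumption; the exact colour
    # appearance model is not given in the abstract).
    rgb = rgb.astype(np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.sqrt(cb ** 2 + cr ** 2) / (y + 1e-6)

def star_focus_score(sat, radius=2):
    # Hypothetical stand-in for star-light focusing detection: the largest
    # absolute saturation difference between a pixel and eight neighbours
    # sampled along rays at the given radius.
    h, w = sat.shape
    padded = np.pad(sat, radius, mode="edge")
    score = np.zeros_like(sat)
    for dy, dx in [(-1, -1), (-1, 0), (-1, 1), (0, -1),
                   (0, 1), (1, -1), (1, 0), (1, 1)]:
        shifted = padded[radius + dy * radius: radius + dy * radius + h,
                         radius + dx * radius: radius + dx * radius + w]
        score = np.maximum(score, np.abs(sat - shifted))
    return score

def fuse_pixelwise(img_a, img_b):
    # Pixel-based fusion: at every pixel, keep the value from the source
    # image whose focus score is larger, so each fused pixel comes straight
    # from one of the originals.
    score_a = star_focus_score(saturation(img_a))
    score_b = star_focus_score(saturation(img_b))
    mask = score_a >= score_b
    fused = np.where(mask[..., None], img_a, img_b)
    return fused, mask

def psnr(fused, reference, peak=255.0):
    # Peak signal-to-noise ratio of the fused image against a reference.
    diff = fused.astype(np.float64) - reference.astype(np.float64)
    mse = np.mean(diff ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def correct_rate(mask, reference_mask):
    # Fraction of pixels whose source choice matches a manually built mask.
    return float(np.mean(mask == reference_mask))

Selecting whole pixel values from one source image, rather than averaging across sources, reflects the abstract's point that each fused pixel value should match a pixel value in one of the original images as closely as possible.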
author2 Chen-Chiung Hsieh
author_facet Chen-Chiung Hsieh
Wen-hao Wu
吳文豪
author Wen-hao Wu
吳文豪
spellingShingle Wen-hao Wu
吳文豪
Pixel-based Multi-focused Image Fusion using Color Appearance Model
author_sort Wen-hao Wu
title Pixel-based Multi-focused Image Fusion using Color Appearance Model
title_short Pixel-based Multi-focused Image Fusion using Color Appearance Model
title_full Pixel-based Multi-focused Image Fusion using Color Appearance Model
title_fullStr Pixel-based Multi-focused Image Fusion using Color Appearance Model
title_full_unstemmed Pixel-based Multi-focused Image Fusion using Color Appearance Model
title_sort pixel-based multi-focused image fusion using color appearance model
publishDate 2015
url http://ndltd.ncl.edu.tw/handle/54411509888283802338
work_keys_str_mv AT wenhaowu pixelbasedmultifocusedimagefusionusingcolorappearancemodel
AT wúwénháo pixelbasedmultifocusedimagefusionusingcolorappearancemodel
AT wenhaowu shǐyòngcǎisèbiǎoxiànmóxíngjìnxínghuàsùdiǎnwèijīchǔzhīduōjiāodiǎnyǐngxiàngrónghé
AT wúwénháo shǐyòngcǎisèbiǎoxiànmóxíngjìnxínghuàsùdiǎnwèijīchǔzhīduōjiāodiǎnyǐngxiàngrónghé
_version_ 1718375603067944960