Automatic visual positioning and template matching using Expectation-Maximization algorithms
Master's thesis === Yuan Ze University (元智大學) === Department of Industrial Engineering and Management (工業工程與管理學系) === Academic year 104 === (Abstract given in full under the description field below.)
Main Authors: Yi-Chun Hsieh (謝怡君)
Other Authors: Du-Ming Tsai (蔡篤銘)
Format: Others
Language: zh-TW (Traditional Chinese)
Published: 2016
Online Access: http://ndltd.ncl.edu.tw/handle/77130832328056554080
id: ndltd-TW-104YZU05031039
record_format: oai_dc
spelling: ndltd-TW-104YZU05031039; 2017-08-12T04:35:29Z; http://ndltd.ncl.edu.tw/handle/77130832328056554080; Automatic visual positioning and template matching using Expectation-Maximization algorithms / 最大期望演算法於圖形比對與影像定位; Yi-Chun Hsieh (謝怡君); Master's thesis, Yuan Ze University, Department of Industrial Engineering and Management, academic year 104; Du-Ming Tsai (蔡篤銘); 2016; 學位論文 (thesis); 132; zh-TW. (The abstract duplicated verbatim in this raw field appears once under the description field below.)
collection: NDLTD
language: zh-TW
format: Others
sources: NDLTD
description: Master's thesis === Yuan Ze University === Department of Industrial Engineering and Management === Academic year 104 === Template matching is central to image processing for object recognition and image matching; its purpose is to find the best correspondence between two images. The well-known normalized cross correlation (NCC), however, is sensitive to translation, rotation, and scale changes. This research develops Expectation-Maximization (E-M) based algorithms for image positioning in cluttered backgrounds and for measuring object similarity in clear backgrounds; the algorithms tolerate deformation and incomplete object shapes well. The proposed method starts with edge detection to extract edge points in the image. For each edge point in one image, the E-step of the E-M algorithm uses a fast spiral search to find its corresponding edge point, the one at the shortest distance, in the compared image; the weight of the edge point is then inversely proportional to that distance. In the M-step, the center, orientation angle, and size of the object are computed from the weighted edge points. The E-step and M-step alternate until convergence.
This method can be applied to the positioning and inspection of rigid industrial products such as printed-circuit boards (PCBs) and liquid-crystal displays (LCDs), and to the alignment of non-rigid bodies in motion images. When applied to complex-background images, such as PCB images involving only minor translation and rotation changes, the proposed E-M method achieves very high positioning accuracy. In assessing body-motion recognition, the proposed method correctly labels local differences in motion between coach and pupil in video images. The weights of individual edge points are also used to compute a similarity score between two compared objects in clear backgrounds; this has been applied successfully to identifying signature authenticity and counterfeit trademarks.
author2: Du-Ming Tsai
author: Yi-Chun Hsieh (謝怡君)
title: Automatic visual positioning and template matching using Expectation-Maximization algorithms
publishDate: 2016
url: http://ndltd.ncl.edu.tw/handle/77130832328056554080
_version_: 1718515662949711872