A Study on Camera Calibration and Image Transformation Techniques and Their Applications
| Main Authors: | Sheng-Wen Jeng, 鄭勝文 |
|---|---|
| Other Authors: | Wen-Hsiang Tsai |
| Format: | Others |
| Language: | en_US |
| Published: | 2007 |
| Online Access: | http://ndltd.ncl.edu.tw/handle/12815361625874342387 |
id | ndltd-TW-095NCTU5394046
record_format | oai_dc
collection | NDLTD
language | en_US
format | Others
sources | NDLTD
description |
Ph.D. === National Chiao Tung University === Institute of Computer Science and Engineering === 95 === In the field of computer vision, the information contained in an image captured by a camera is extracted and analyzed by a computer program implementing a certain algorithm. One kind of such information is the geometry (shape or pose) of an object in space. An algorithm that extracts this kind of information usually includes an important procedure called “camera calibration.” The purpose of camera calibration is to establish the relationship between the image plane of the camera and the coordinate system of the object space. This relationship is usually represented by a “mathematical function.” After calibration, a set of parameters representing the “characteristics” of the camera in the coordinate system of the object space is obtained. These characteristics are unique to each distinct optical structure of the camera (the focal length of the optics, the component layout, etc.), so cameras with different optical designs need different “mathematical function” models to describe their features. After the “camera calibration” procedure is completed, an algorithm is used to transform the information contained in a captured image into the object space, so that it conforms to the way a human observer perceives the scene.
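As a concrete illustration of such a “mathematical function” (a standard textbook formulation, not the specific model developed in this dissertation), the widely used pinhole camera model relates a scene point in the object-space coordinate system to its projection on the image plane through the intrinsic matrix K and the extrinsic parameters (R, t):

```latex
% Standard pinhole projection model (illustrative only).
% (X, Y, Z): object-space coordinates; (u, v): image coordinates; s: a scale factor.
s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
  = \underbrace{\begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}}_{K:\ \text{intrinsic parameters}}
    \underbrace{\begin{bmatrix} R & \mathbf{t} \end{bmatrix}}_{\text{extrinsic parameters}}
    \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix}
```

Calibration estimates K, R, and t (plus lens-distortion coefficients for real lenses); an omni-camera with a curved mirror requires a different, non-linear function of the same general kind.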
This dissertation investigates camera calibration and image transformation techniques as well as their applications. Three new methods are proposed for image transformation with the omni-directional camera (or simply omni-camera), and two novel methods are proposed for camera calibration.
Because the field of view (FOV) of an omni-camera approaches or even exceeds a full hemisphere, omni-cameras are widely used in visual surveillance and in vision-based robot or autonomous vehicle navigation. A captured omni-directional image (or simply omni-image) must be rectified into a normal perspective-view or panoramic image for convenient human viewing or for preservation as image evidence. Current studies focus on image rectification or unwarping for single-view-point (SVP) omni-cameras; studies on non-SVP omni-cameras are limited because their structures are difficult to analyze. A non-SVP omni-camera, however, is superior to an SVP one in offering more uniform radial resolution and a larger FOV, which makes it more suitable for the above applications. In this study, we develop solutions to the image unwarping problem for non-SVP omni-cameras that compensate for their inherent deficiencies and meet the requirements of practical applications. The proposed methods are summarized as follows.
(a) An analytic image unwarping method is proposed for the non-SVP hypercatadioptric camera. The method extends the image unwarping capability of existing methods for SVP omni-cameras to tolerate lens/mirror assembly imprecision, which is difficult to avoid in most real applications.
(b) A new method called “edge-preserving 8-directional two-layered weighting interpolation” is proposed for interpolating the unfilled pixels in a perspective-view or panoramic image that results from unwarping an omni-image taken by a non-SVP omni-camera. The method preserves edges while interpolating an input image that contains many irregularly distributed unfilled pixels.
(c) A unified approach to unwarping omni-images into panoramic or perspective-view images is proposed. The approach is based on a new concept, the pano-mapping table, which is created once and for all by a simple learning process for an omni-camera of any kind and serves as a summary of the information conveyed by all the camera parameters (a minimal sketch of this idea follows below).
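To make the pano-mapping idea concrete, the following is a minimal sketch, not code from the dissertation: it assumes a hypothetical projection function `project_to_omni(pan, tilt)` obtained from a one-time learning step, builds a (pan, tilt) → (u, v) lookup table from it, and then unwarps omni-images by plain table lookup.

```python
import numpy as np

def build_pano_mapping_table(pan_steps, tilt_steps, project_to_omni):
    """Build a (tilt, pan) -> (u, v) lookup table once per camera.

    `project_to_omni(pan, tilt)` is a hypothetical callable, obtained from a
    one-time learning/calibration step, that returns the omni-image pixel
    (u, v) seeing the ray at the given pan and tilt angles.
    """
    table = np.zeros((tilt_steps, pan_steps, 2), dtype=np.int32)
    for i, tilt in enumerate(np.linspace(-np.pi / 2, np.pi / 2, tilt_steps)):
        for j, pan in enumerate(np.linspace(-np.pi, np.pi, pan_steps)):
            table[i, j] = project_to_omni(pan, tilt)
    return table

def unwarp_to_panorama(omni_image, table):
    """Unwarp an H x W x C omni-image into a panoramic image by table lookup."""
    rows, cols, _ = table.shape
    pano = np.zeros((rows, cols, omni_image.shape[2]), dtype=omni_image.dtype)
    for i in range(rows):
        for j in range(cols):
            u, v = table[i, j]
            if 0 <= v < omni_image.shape[0] and 0 <= u < omni_image.shape[1]:
                pano[i, j] = omni_image[v, u]  # pixels with no source stay unfilled
    return pano
```

Because the table depends only on the camera, it is built once and reused for every frame; the unfilled pixels that remain after lookup are exactly the kind that the interpolation method of item (b) is designed to fill.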
Moreover, to resolve the deficiencies of traditional camera applications in pointing systems, we propose two methods.
(d) A robust and accurate calibration method for coordinate transformation between display screens and their images is proposed. The method improves the accuracy of the coordinate transformation and eliminates the shift errors near the image border (a simplified sketch of such a screen/image mapping is given after item (e)).
(e) A camera mouse, a vision-based method for controlling the computer cursor with a video camera held in the hand and moved in the air, is proposed. The main merit of this method is that it requires no complicated camera calibration.
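As a simplified illustration of the kind of screen-to-image coordinate transformation involved in item (d), the sketch below estimates a planar homography between a display screen and its captured image from a few point correspondences and then maps image pixels back to screen coordinates with OpenCV. The correspondence points are hypothetical, and the dissertation's technique for eliminating shift errors near the image border is not reproduced here.

```python
import numpy as np
import cv2

# Hypothetical correspondences: the four screen corners (in screen coordinates,
# for a 1920 x 1080 display) and the pixels where they appear in the captured image.
screen_pts = np.array([[0, 0], [1919, 0], [1919, 1079], [0, 1079]], dtype=np.float32)
image_pts = np.array([[102, 85], [598, 92], [590, 371], [95, 360]], dtype=np.float32)

# Estimate the image -> screen homography; with more (possibly noisy)
# correspondences, cv2.RANSAC could be passed as the method argument.
H, _ = cv2.findHomography(image_pts, screen_pts)

def image_to_screen(u, v):
    """Map a pixel (u, v) in the captured image to screen coordinates."""
    p = np.array([[[u, v]]], dtype=np.float32)  # shape (1, 1, 2) as required
    x, y = cv2.perspectiveTransform(p, H)[0, 0]
    return float(x), float(y)

print(image_to_screen(350, 225))  # a point near the middle of the imaged screen
```

In practice, many correspondences spread over the whole screen (for example, from a displayed calibration pattern) would be used rather than only the four corners, and it is near the image border that simple models tend to show the shift errors the proposed calibration method is designed to eliminate.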
All the proposed methods described above are innovative, and experimental results show their feasibility, effectiveness, and superiority over other methods.
author2 | Wen-Hsiang Tsai
author_facet | Wen-Hsiang Tsai; Sheng-Wen Jeng 鄭勝文
author | Sheng-Wen Jeng 鄭勝文
author_sort | Sheng-Wen Jeng
title | A Study on Camera Calibration and Image Transformation Techniques and Their Applications
title_sort | study on camera calibration and image transformation techniques and their applications
publishDate | 2007
url | http://ndltd.ncl.edu.tw/handle/12815361625874342387
_version_ | 1717746166772269056
spelling | 攝影機校準及影像轉換技術與其應用之研究 (Chinese title) · Wen-Hsiang Tsai 蔡文祥 · 2007 · 學位論文 ; thesis · 131 · en_US