Error Analysis of Non-contact Three-dimensional Object Scanning using Smartphone and Line Laser

Bibliographic Details
Main Authors: Hsin-Cyuan Chen, 陳信銓
Other Authors: Hung-Yuan Li
Format: Others
Language: zh-TW
Published: 2013
Online Access: http://ndltd.ncl.edu.tw/handle/37775263324747274702
Description
Summary: Master's thesis === 國立高雄應用科技大學 (National Kaohsiung University of Applied Sciences) === Master's Program, Department of Mold and Die Engineering === Academic year 101 === The purpose of this study is to design and apply computer vision methods to acquire the three-dimensional coordinates of objects in space. The first step is to find the relative relationships among the CCD camera, the line laser and the calibration pattern. Using the designed calibration pattern, the extrinsic and intrinsic parameters of the camera can be obtained, and the measurement results can then be calculated from the corresponding parametric equations. The calculated results are compared with physical measurements to verify the feasibility of the method and to explore directions for further development.

The study is divided into three parts. The first part finds the extrinsic and intrinsic parameters of the camera, which cover the rotation, translation, scale factor, focal length and image center of the camera-laser configuration. The second part extracts the lens distortion; distortion is an important part of calibration because the CCD usually carries manufacturing inaccuracies and assembly errors, and it must be corrected to reduce the measurement error. The final part obtains the intersection point of the projection ray with the laser plane, which gives the spatial coordinates of the object.

Experiments were carried out under two environmental conditions: reducing the ambient light, and increasing the light reflected from the object. Based on the experimental results, increasing the reflected light significantly outperformed reducing the ambient light with respect to image processing. Gauge blocks (21 mm, 23 mm, 25 mm, 50 mm and 100 mm) were selected as benchmark objects; the results show an average length error of 0.38 mm in the X-axis direction, 0.37 mm in the Y-axis direction and 0.35 mm in the Z-axis direction. Shoe lasts were also measured as test objects, with a maximum error of 0.33% for the EUR 36 last and 0.218% for the EUR 37 last.
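
The calibration step summarized above (intrinsic parameters, extrinsic parameters and lens distortion estimated from images of a planar pattern) can be illustrated with a short sketch. The thesis uses its own designed calibration pattern and parametric-equation formulation; the sketch below instead assumes a standard chessboard pattern and OpenCV's built-in calibration routine, so the pattern size, square size and image folder are purely illustrative.

```python
# Minimal calibration sketch, assuming a 9x6 chessboard with 10 mm squares
# (illustrative values, not the pattern used in the thesis).
import glob
import numpy as np
import cv2

PATTERN_SIZE = (9, 6)      # inner corners per row / column (assumed)
SQUARE_SIZE_MM = 10.0      # side length of one square in mm (assumed)

# 3-D coordinates of the pattern corners in the pattern's own frame (Z = 0 plane)
objp = np.zeros((PATTERN_SIZE[0] * PATTERN_SIZE[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN_SIZE[0], 0:PATTERN_SIZE[1]].T.reshape(-1, 2)
objp *= SQUARE_SIZE_MM

obj_points, img_points = [], []
image_size = None
for path in glob.glob("calib_images/*.jpg"):   # hypothetical image folder
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, PATTERN_SIZE)
    if not found:
        continue
    corners = cv2.cornerSubPix(
        gray, corners, (11, 11), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
    obj_points.append(objp)
    img_points.append(corners)
    image_size = gray.shape[::-1]

# Intrinsics (focal length, image center, scale) and lens distortion coefficients,
# plus one rotation/translation (extrinsic) pair per calibration view.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, image_size, None, None)
print("reprojection RMS (px):", rms)
print("camera matrix:\n", K)
print("distortion coefficients:", dist.ravel())
```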
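The final step, intersecting the back-projected pixel ray with the laser plane, can likewise be sketched under the assumption that the laser-plane coefficients are already known from the laser calibration. The plane parameters, camera matrix and pixel coordinates below are hypothetical values chosen only to show the computation.

```python
# Sketch: undistort a laser-stripe pixel, back-project it as a ray through the
# camera centre, and intersect that ray with the laser plane n.X + d = 0 to get
# the 3-D point in the camera frame. K and dist would come from calibration.
import numpy as np
import cv2

def pixel_to_3d(u, v, K, dist, plane_n, plane_d):
    """Intersect the viewing ray of pixel (u, v) with the plane n.X + d = 0."""
    # Remove lens distortion and convert to normalized image coordinates.
    pt = np.array([[[float(u), float(v)]]], dtype=np.float64)
    x, y = cv2.undistortPoints(pt, K, dist)[0, 0]
    ray = np.array([x, y, 1.0])                # direction of the viewing ray
    # Ray: X = t * ray (camera centre at origin); substitute into the plane equation.
    t = -plane_d / float(plane_n @ ray)
    return t * ray                             # 3-D point in camera coordinates

# Hypothetical numbers, only to show the call:
K = np.array([[1200.0, 0.0, 640.0],
              [0.0, 1200.0, 360.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)
plane_n = np.array([0.0, -0.5, 1.0])           # laser-plane normal (assumed)
plane_d = -300.0                               # plane offset in mm (assumed)
print(pixel_to_3d(700, 400, K, dist, plane_n, plane_d))
```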