Summary: | Master's thesis === National Chiao Tung University === Department of Computer Science and Information Engineering === 87 === Augmented reality has received considerable attention in recent years and is a thriving research field with a wide range of applications. In most existing work, however, the parameters of the real camera must be known in advance; in other words, a camera-calibration process is required. Such a process undoubtedly limits the scope of applications. In this thesis, we therefore propose a semi-automatic method for combining a virtual object with an image sequence taken by an uncalibrated camera.
We derive the relation between two images through the fundamental matrix, which is computed from point correspondences between them. From it we obtain the projection matrix of each image in a common projective space. Using these projection matrices, we can reconstruct the real scene, place the virtual object in it, and resolve the occlusion between the virtual object and real objects.
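As background, this step can be sketched with the standard constructions the abstract refers to: the normalized eight-point algorithm estimates the fundamental matrix F from correspondences, and a camera pair consistent with F in a projective frame is P = [I | 0], P' = [[e']&times;F | e']. The following NumPy sketch is an illustration of these textbook constructions, not the thesis's own implementation:

```python
import numpy as np

def eight_point_fundamental(x1, x2):
    """Estimate the fundamental matrix from >= 8 correspondences
    (Nx2 pixel arrays) with the normalized eight-point algorithm."""
    def normalize(pts):
        c = pts.mean(axis=0)
        s = np.sqrt(2) / np.mean(np.linalg.norm(pts - c, axis=1))
        T = np.array([[s, 0, -s * c[0]],
                      [0, s, -s * c[1]],
                      [0, 0, 1.0]])
        ph = np.column_stack([pts, np.ones(len(pts))]) @ T.T
        return ph, T

    p1, T1 = normalize(x1)
    p2, T2 = normalize(x2)
    # Each correspondence x2^T F x1 = 0 gives one row of A f = 0.
    A = np.column_stack([p2[:, 0:1] * p1, p2[:, 1:2] * p1, p1])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    # Enforce rank 2 by zeroing the smallest singular value.
    U, S, Vt = np.linalg.svd(F)
    S[2] = 0
    F = U @ np.diag(S) @ Vt
    F = T2.T @ F @ T1              # undo the normalization
    return F / np.linalg.norm(F)

def canonical_cameras(F):
    """A camera pair in a projective frame consistent with F:
    P1 = [I | 0],  P2 = [[e']_x F | e'],  where F^T e' = 0."""
    _, _, Vt = np.linalg.svd(F.T)
    e2 = Vt[-1]                    # left epipole of F (unit norm)
    ex = np.array([[0, -e2[2], e2[1]],
                   [e2[2], 0, -e2[0]],
                   [-e2[1], e2[0], 0]])
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = np.hstack([ex @ F, e2.reshape(3, 1)])
    return P1, P2
```

The resulting pair (P1, P2) is determined only up to a projective transformation of space, which is why the abstract speaks of a common projective space rather than a metric one.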
To place the virtual object into a video sequence, we need to know the relation between the real scene and the virtual object. Typically, the user describes the pose of the virtual object in two basic images, from which the 3D position of the virtual object in the real scene is determined. We use a virtual camera to render the virtual object in the first basic image; the user then applies the proposed object-placement constraints to decide the relation between the real scene and the virtual object, and the virtual object is rendered automatically in the second image. The user evaluates the quality of the placement and, if necessary, invokes an iterative modification of the virtual camera's projection matrix in the first image. We also provide an algorithm that automatically resolves the occlusion between virtual and real objects.
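The step of fixing a 3D position from its projections in the two basic images can be illustrated with standard linear (DLT) triangulation. The sketch below is a generic illustration under assumed 3x4 projection matrices, not the thesis's exact placement procedure:

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Recover the homogeneous 3D point X whose projections through
    cameras P1, P2 (3x4) are the pixel coordinates x1, x2 (2-vectors).
    Each image contributes two rows of the homogeneous system A X = 0."""
    A = np.vstack([x1[0] * P1[2] - P1[0],
                   x1[1] * P1[2] - P1[1],
                   x2[0] * P2[2] - P2[0],
                   x2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1]                  # least-squares null vector of A

def project(P, X):
    """Project homogeneous X through camera P to pixel coordinates."""
    x = P @ X
    return x[:2] / x[2]
```

Once a point is triangulated in the projective frame shared by the two basic images, its projection into any other frame of the sequence follows directly from that frame's projection matrix, which is what makes automatic rendering in the remaining images possible.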
Our augmented reality system is simple to operate; we keep the manual work required of the user to a minimum.
|