Summary: | Master's === 臺灣大學 === 資訊網路與多媒體研究所 === 99 === Current visual surveillance systems usually include multiple cameras to monitor the activities of targets over a large area. An important task for a guard or other user of such a system is to understand a series of events occurring in the environment, for example tracking a target walking across the views of multiple cameras. In contrast to traditional systems that switch the camera view from one to another directly, we propose a novel system that eases the mental effort required to understand the geometry between the real cameras and the guidance path by providing an egocentric view transition. While switching cameras, our system synthesizes virtual views by blending the synthesized foreground texture into a pre-constructed background model and then re-projecting the result to the view of the virtual camera. An important property of our system is that it can be applied to situations where the fields of view of the transition cameras are not close enough or are even mutually exclusive; such situations have not been considered by state-of-the-art view transition techniques. In addition, current view transition systems usually interpolate linearly between the positions of the two real cameras to determine the virtual camera position during the transition. Here, we instead design a rule to determine the virtual camera position, which yields a better visual effect than linear interpolation.
|
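As a minimal sketch of the baseline that the abstract says the proposed rule replaces, the conventional linear interpolation of the virtual camera center between the two real camera centers can be written as follows; the symbols $\mathbf{C}_1$, $\mathbf{C}_2$, $\mathbf{C}_v$, and $t$ are illustrative and not taken from the thesis:

$$
\mathbf{C}_v(t) = (1 - t)\,\mathbf{C}_1 + t\,\mathbf{C}_2, \qquad t \in [0, 1],
$$

where $t$ is the normalized transition time, $t = 0$ places the virtual camera at the first real camera, and $t = 1$ places it at the second. The thesis proposes a rule-based placement of $\mathbf{C}_v(t)$ instead of this straight-line path; the abstract does not specify the rule itself.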