Summary: Master's thesis === National Tsing Hua University === Department of Computer Science === 95 === Motion capture data used in animation, commercials, or video games requires considerable manual effort to segment into distinct behaviors. If the database is large, the cost of manual segmentation becomes prohibitively high. Automatic segmentation is therefore an important issue in processing human motion data.
In this thesis, we propose a method for segmenting mocap data automatically. Our method not only groups similar motions into clips, but also gives each segment a textual description, providing a high-level interactive environment for animators.
First, we propose a new motion representation: we define two kinds of features for each motion, namely global features and local features. The global features describe the movement of the torso, while the local features describe the movements of the limbs. Based on these features and their signs, we can provide a textual abstraction of each motion clip; that is, we can understand the motion data by observing the variations of the features. Finally, we give several empirical examples to demonstrate the effectiveness of the proposed method.
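The sign-based segmentation idea can be sketched as follows. This is a minimal, hypothetical illustration (the function name, the `min_len` parameter, and the feature layout are assumptions, not the thesis's exact algorithm): each frame has a vector of features (e.g. torso velocity as a global feature, limb joint velocities as local features), and a new clip begins wherever the sign pattern of those features changes.

```python
import numpy as np

def segment_by_feature_signs(features, min_len=10):
    """Split a motion sequence wherever the per-feature sign pattern changes.

    `features` is a (frames, n_features) array of motion features,
    e.g. torso velocity (global) and limb velocities (local).
    Hypothetical sketch of a sign-based segmentation, not the
    thesis's exact method.
    """
    signs = np.sign(features)  # -1, 0, or +1 for every frame/feature
    boundaries = [0]
    for t in range(1, len(signs)):
        # Start a new clip when the sign pattern changes, but only if
        # the current clip is long enough to be a meaningful behavior.
        if (not np.array_equal(signs[t], signs[t - 1])
                and t - boundaries[-1] >= min_len):
            boundaries.append(t)
    boundaries.append(len(features))
    return [(boundaries[i], boundaries[i + 1])
            for i in range(len(boundaries) - 1)]

# Toy example: one feature that is positive for 20 frames, then negative.
feat = np.concatenate([np.ones(20), -np.ones(20)])[:, None]
print(segment_by_feature_signs(feat))  # → [(0, 20), (20, 40)]
```

A minimum clip length is used here to suppress spurious boundaries from noisy frames; a real system would likely smooth the features before taking signs.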