Summary: Master's thesis === National Chiao Tung University === Master's Program of Sound and Music Innovative Technologies, College of Engineering === 104 ===

This paper proposes a sequential framework that progressively extracts musical features and characterizes music-induced emotions on a predetermined emotion plane. To build the emotion plane, 200 Western pop music clips, comprising four emotion-labeled categories of 50 clips each, are used to train the system. Five feature sets (onset intensity, timbre, sound volume, mode, and dissonance) are extracted from the WAV files to represent the characteristics of each music sample. A support vector machine (SVM) is used to demarcate the boundaries of "Exuberance", "Contentment", "Anxious", and "Depression" on Thayer's emotion plane for the training data. A graphical interface showing the emotion-arousal locus on the two-dimensional mood model is implemented on the Android system to track the dynamic emotional transitions induced by music. The system lets users choose clips to play on a mobile device according to their identified emotions, and the same interface can reveal the emotional distribution of the user's music library.
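As a rough illustration of this pipeline, the sketch below extracts proxy versions of the five feature sets and trains an SVM over the four Thayer quadrants. It assumes librosa and scikit-learn as stand-ins for the thesis's own implementation; the onset-strength, MFCC, RMS, chroma, and spectral-flatness features are hypothetical approximations of onset intensity, timbre, volume, mode, and dissonance, not the exact features used in the thesis.

```python
# Minimal sketch of the feature-extraction + SVM pipeline described above.
# All feature definitions here are rough proxies, not the thesis's exact set.
import numpy as np
import librosa
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

QUADRANTS = ["Exuberance", "Contentment", "Anxious", "Depression"]

def extract_features(wav_path):
    """Return one feature vector per clip (proxy features)."""
    y, sr = librosa.load(wav_path, sr=22050, mono=True)
    onset = librosa.onset.onset_strength(y=y, sr=sr)               # onset intensity
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)             # timbre
    rms = librosa.feature.rms(y=y)                                 # sound volume
    chroma = librosa.feature.chroma_stft(y=y, sr=sr).mean(axis=1)  # mode proxy
    flatness = librosa.feature.spectral_flatness(y=y)              # dissonance proxy
    return np.hstack([
        onset.mean(), onset.std(),
        mfcc.mean(axis=1), mfcc.std(axis=1),
        rms.mean(), rms.std(),
        chroma,
        flatness.mean(),
    ])

def train_classifier(paths, labels):
    """Fit an SVM that demarcates the four Thayer quadrants."""
    X = np.vstack([extract_features(p) for p in paths])
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    clf.fit(X, labels)
    return clf

# Usage (hypothetical file names): train on the 200 labeled clips,
# then predict the quadrant of a new clip.
# clf = train_classifier(train_paths, train_labels)  # labels drawn from QUADRANTS
# print(clf.predict([extract_features("new_clip.wav")]))
```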
The experimental results demonstrate a working human-machine interaction system built from mathematical analysis, feature extraction, and classifier training in music data processing. This interactive music-selection system offers an innovative way to sequence the tracks it plays. With music genres from different regions of the world, such as Middle Eastern music or religious music, a customized music-emotion coordinate plane can be trained from a specified set of music samples to reflect the diversity of music cultures.
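To picture the emotion-locus interface described above, the following hypothetical sketch traces a clip's windowed (valence, arousal) trajectory across the four quadrants; matplotlib stands in for the thesis's Android visualization, and the trajectory values are invented for illustration.

```python
# Sketch of an emotion-arousal locus on a Thayer-style 2D mood plane.
# The quadrant placement and the sample trajectory are assumptions.
import numpy as np
import matplotlib.pyplot as plt

def plot_emotion_locus(valence, arousal):
    """Trace a clip's windowed (valence, arousal) points on the mood plane."""
    fig, ax = plt.subplots(figsize=(5, 5))
    ax.axhline(0, color="gray", lw=0.5)
    ax.axvline(0, color="gray", lw=0.5)
    # Quadrant labels follow the four categories named in the abstract.
    ax.text(0.5, 0.5, "Exuberance", ha="center")
    ax.text(0.5, -0.5, "Contentment", ha="center")
    ax.text(-0.5, 0.5, "Anxious", ha="center")
    ax.text(-0.5, -0.5, "Depression", ha="center")
    ax.plot(valence, arousal, "o-", ms=3)  # locus of emotional transition
    ax.set_xlim(-1, 1)
    ax.set_ylim(-1, 1)
    ax.set_xlabel("valence")
    ax.set_ylabel("arousal")
    plt.show()

# Hypothetical trajectory for a clip drifting from Depression to Exuberance.
t = np.linspace(0, 1, 20)
plot_emotion_locus(valence=-0.8 + 1.5 * t, arousal=-0.6 + 1.2 * t)
```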