Summary: | Master's === National Taiwan University of Science and Technology === Department of Electronic Engineering === 105 === Music emotion recognition (MER) detects and analyzes the relation between human emotion and music clips. MER is helpful for music understanding, music retrieval, and other music-related applications. As the volume of online music content has expanded rapidly in recent years, demand for retrieval by emotion has also emerged. MER must take the characteristics of music psychology into consideration. Although MER has been studied for years, there is currently no well-developed emotion model for representing music emotion.
In this thesis, we propose a music emotion recognition system based on two music formats with corresponding machine learning models. More specifically, the system includes a WAVE-based MER, a MIDI-based MER, and a decision model. The WAVE-based MER extracts 37 features from wave files and computes the weight of each feature with RReliefF; features are then selected according to the sorted weights and sent to a support vector machine (SVM) for training. The MIDI-based MER classifier, a deep belief network (DBN), is trained on time-dependent and instrument features. Moreover, we introduce the normalized algebraic product (NAP) as the decision maker for integrating the recognition results from both classifiers.
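The weight-sorted feature selection step described above can be sketched as follows. This is a minimal illustration, not the thesis implementation: the weights, feature matrix, and the cutoff `k` are all hypothetical stand-ins (the actual RReliefF scores over the 37 wave features come from the thesis pipeline).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: 100 clips, 37 wave features, and
# per-feature relevance weights (as RReliefF would produce).
X = rng.random((100, 37))
weights = rng.random(37)

# Keep the k highest-weighted features (k is an assumed cutoff).
k = 10
top_idx = np.argsort(weights)[::-1][:k]
X_selected = X[:, top_idx]  # reduced matrix passed on to the SVM
```

In this scheme only the columns with the largest relevance weights reach the SVM, which is what "selected according to sorted weights" amounts to in practice.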
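For the decision model, a normalized algebraic product over class probabilities can be sketched as below. This is an assumed form of NAP fusion (element-wise product of the two classifiers' class-probability vectors, renormalized to sum to one); the thesis may define additional weighting.

```python
import numpy as np

def nap_fusion(p_wave, p_midi):
    """Fuse two per-class probability vectors by their
    element-wise product, renormalized over classes."""
    prod = np.asarray(p_wave, dtype=float) * np.asarray(p_midi, dtype=float)
    return prod / prod.sum()
```

For example, fusing `[0.6, 0.4]` from the WAVE classifier with a non-committal `[0.5, 0.5]` from the MIDI classifier leaves the WAVE decision unchanged, while two confident, agreeing classifiers reinforce each other.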