Summary: | Emotion recognition based on multi-channel electroencephalogram (EEG) signals is attracting increasing attention. However, conventional methods ignore the spatial characteristics of EEG signals, which also carry salient information about emotional states. This paper proposes a deep learning framework based on a multiband feature matrix (MFM) and a capsule network (CapsNet). In this framework, the frequency-domain, spatial, and frequency-band characteristics of multi-channel EEG signals are combined to construct the MFM, which is then fed to the CapsNet model to recognize emotional states. Experiments on the Dataset for Emotion Analysis using EEG, Physiological, and Video Signals (DEAP) show that the proposed method outperforms most common models. The experimental results demonstrate that the three characteristics encoded in the MFM are complementary and that the capsule network is well suited to mining and exploiting their correlations.
|