Improved Deep Feature Learning by Synchronization Measurements for Multi-Channel EEG Emotion Recognition
Emotion recognition based on multichannel electroencephalogram (EEG) signals is a key research area in the field of affective computing. Traditional methods extract EEG features from each channel based on extensive domain knowledge and ignore the spatial characteristics and global synchronization information across all channels...
Main Authors: | Hao Chao, Liang Dong, Yongli Liu, Baoyun Lu |
---|---|
Format: | Article |
Language: | English |
Published: | Hindawi-Wiley, 2020-01-01 |
Series: | Complexity |
Online Access: | http://dx.doi.org/10.1155/2020/6816502 |
id |
doaj-e447d1c3d0104fef828adb20665db5e7 |
record_format |
Article |
spelling |
Hao Chao, Liang Dong, Yongli Liu, Baoyun Lu (School of Computer Science and Technology, Henan Polytechnic University, Jiaozuo 454000, Henan, China). Improved Deep Feature Learning by Synchronization Measurements for Multi-Channel EEG Emotion Recognition. Complexity, Hindawi-Wiley, 2020-01-01. ISSN 1076-2787, 1099-0526. http://dx.doi.org/10.1155/2020/6816502 |
collection |
DOAJ |
language |
English |
format |
Article |
sources |
DOAJ |
author |
Hao Chao, Liang Dong, Yongli Liu, Baoyun Lu |
author_sort |
Hao Chao |
title |
Improved Deep Feature Learning by Synchronization Measurements for Multi-Channel EEG Emotion Recognition |
publisher |
Hindawi-Wiley |
series |
Complexity |
issn |
1076-2787, 1099-0526 |
publishDate |
2020-01-01 |
description |
Emotion recognition based on multichannel electroencephalogram (EEG) signals is a key research area in the field of affective computing. Traditional methods extract EEG features from each channel based on extensive domain knowledge and ignore the spatial characteristics and global synchronization information across all channels. This paper proposes a global feature extraction method that encapsulates the multichannel EEG signals into gray images. The maximal information coefficient (MIC) for all channels was first measured. Subsequently, an MIC matrix was constructed according to the electrode arrangement rules and represented by an MIC gray image. Finally, a deep learning model designed with two principal component analysis convolutional layers and a nonlinear transformation operation extracted the spatial characteristics and global interchannel synchronization features from the constructed feature images, which were then input to support vector machines to perform the emotion recognition tasks. Experiments were conducted on the benchmark dataset for emotion analysis using EEG, physiological, and video signals. The experimental results demonstrated that the global synchronization features and spatial characteristics are beneficial for recognizing emotions and the proposed deep learning model effectively mines and utilizes the two salient features. |
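The pipeline described above — measure a pairwise synchronization value for every channel pair, arrange the values into a matrix, and rescale it to a gray image — can be sketched as follows. The paper uses the maximal information coefficient (MIC) as the pairwise measure; since computing true MIC needs an extra dependency (e.g., the `minepy` package), absolute Pearson correlation stands in here as a placeholder, and the 32-channel/512-sample shapes are illustrative assumptions.

```python
import numpy as np

def sync_matrix(eeg, measure=None):
    """Build a channel-by-channel synchronization matrix.

    eeg: array of shape (n_channels, n_samples).
    measure: pairwise synchronization function. The paper uses MIC;
    absolute Pearson correlation is used here as a stand-in.
    """
    if measure is None:
        measure = lambda x, y: abs(np.corrcoef(x, y)[0, 1])
    n = eeg.shape[0]
    m = np.zeros((n, n))
    for i in range(n):
        for j in range(i, n):            # symmetric, so fill both halves
            m[i, j] = m[j, i] = measure(eeg[i], eeg[j])
    return m

def to_gray_image(m):
    """Rescale matrix values linearly to 0-255 gray levels."""
    lo, hi = m.min(), m.max()
    scaled = (m - lo) / (hi - lo) if hi > lo else np.zeros_like(m)
    return (scaled * 255).astype(np.uint8)

rng = np.random.default_rng(0)
eeg = rng.standard_normal((32, 512))     # 32 channels, as in the DEAP benchmark
img = to_gray_image(sync_matrix(eeg))    # one gray "feature image" per segment
```

Note that the paper additionally orders the matrix rows and columns according to the electrode arrangement on the scalp, so that spatially adjacent channels are adjacent in the image; the sketch above keeps the channels in their given order.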
url |
http://dx.doi.org/10.1155/2020/6816502 |
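The "principal component analysis convolutional layers" mentioned in the abstract follow the PCANet idea: the convolution filters of a layer are not trained by backpropagation but taken as the leading principal components of all mean-removed local patches of the input images. A minimal sketch of that filter-learning step is below; the patch size (7x7) and filter count (4) are hypothetical choices, and the binary hashing and block-histogram stages that complete a PCANet are omitted.

```python
import numpy as np

def pca_filters(images, k=7, n_filters=4):
    """Learn PCANet-style convolution filters: the leading principal
    components of all k-by-k mean-removed patches of the input images."""
    patches = []
    for img in images:
        h, w = img.shape
        for i in range(h - k + 1):
            for j in range(w - k + 1):
                p = img[i:i + k, j:j + k].ravel().astype(float)
                patches.append(p - p.mean())    # remove per-patch mean
    x = np.stack(patches)                       # (n_patches, k*k)
    cov = x.T @ x / len(x)                      # patch covariance
    vals, vecs = np.linalg.eigh(cov)            # ascending eigenvalues
    order = np.argsort(vals)[::-1][:n_filters]  # keep the largest ones
    return vecs[:, order].T.reshape(n_filters, k, k)

rng = np.random.default_rng(1)
imgs = rng.standard_normal((5, 32, 32))         # stand-ins for MIC gray images
filters = pca_filters(imgs, k=7, n_filters=4)   # one bank of conv filters
```

In the paper's model, the images are convolved with one such filter bank, the responses pass through a second PCA-convolution layer and a nonlinear transformation, and the resulting features are fed to support vector machines.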
work_keys_str_mv |
AT haochao improveddeepfeaturelearningbysynchronizationmeasurementsformultichanneleegemotionrecognition AT liangdong improveddeepfeaturelearningbysynchronizationmeasurementsformultichanneleegemotionrecognition AT yongliliu improveddeepfeaturelearningbysynchronizationmeasurementsformultichanneleegemotionrecognition AT baoyunlu improveddeepfeaturelearningbysynchronizationmeasurementsformultichanneleegemotionrecognition |
_version_ |
1715193569755529216 |