Summary: | Emotion recognition is a fundamental task that any affective computing system must perform to adapt to the user’s current mood. The analysis of electroencephalography (EEG) signals has gained prominence in the study of human emotions because of its non-invasive nature. This paper presents a two-stage deep learning model that recognizes emotional states by correlating facial expressions with brain signals. Most prior work on emotional-state analysis processes long segments of signal, typically spanning the entire evoked potential, which allows many unrelated phenomena to enter the recognition process. Unlike phenomena such as epilepsy, emotional responses have no clearly defined marker of when an event begins or ends. The novelty of the proposed model resides in the use of facial expressions as markers to improve the recognition process. This work applies a facial emotion recognition (FER) technique to create markers each time an emotional response is detected and uses them to extract segments of the EEG record that are a priori considered relevant for the analysis. The proposed model was tested on the DEAP dataset.
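The marker-based segment extraction described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name `extract_marked_segments`, the one-second window length, and the use of raw NumPy slicing are assumptions; 128 Hz is DEAP's published EEG sampling rate.

```python
import numpy as np

def extract_marked_segments(eeg, marker_times, fs=128, window_s=1.0):
    """Slice a fixed-length EEG window starting at each FER marker time.

    eeg          : (n_channels, n_samples) array of EEG samples.
    marker_times : timestamps (in seconds) where FER detected an emotional response.
    fs           : sampling rate in Hz (128 Hz for DEAP's preprocessed EEG).
    window_s     : window length in seconds (assumed value for illustration).
    """
    win = int(window_s * fs)
    segments = []
    for t in marker_times:
        start = int(t * fs)
        # keep only windows that fit entirely inside the recording
        if 0 <= start and start + win <= eeg.shape[1]:
            segments.append(eeg[:, start:start + win])
    if not segments:
        return np.empty((0, eeg.shape[0], win))
    return np.stack(segments)  # (n_markers, n_channels, win)

# toy example: 32 channels, 10 s of signal at 128 Hz, two FER markers
eeg = np.random.randn(32, 10 * 128)
segs = extract_marked_segments(eeg, marker_times=[2.5, 7.0])
# segs.shape == (2, 32, 128)
```

Only the windows anchored at FER markers would then be fed to the second (EEG) stage, instead of the full stimulus-length recording.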