Summary: | Multimodal biometric schemes are an appealing answer to the multidimensional reinforcement problem in biometric security: besides raw recognition performance, such systems must also satisfy required levels of permanence, collectability, and resistance to circumvention, among other criteria. In response to the demand for a multimodal and synchronous dataset, we introduce in this paper an open-access database of synchronously recorded electroencephalogram (EEG) signals, voice signals, and video feed from 51 volunteers (25 female, 26 male), captured for, but not limited to, biometric purposes. A total of 140 samples were collected from each user while pronouncing single digits in Spanish, for a total of 7140 instances. EEG signals were captured using a 14-channel Emotiv™ Epoc headset. The resulting set is a valuable resource for work on unimodal biometric systems, and even more so for the evaluation of multimodal variants. The usefulness of the collected signals also extends to projects in brain-computer interfaces and face recognition, to name just a few. As an initial report on the separability of the collected samples, five user recognition experiments are presented: a face recognition identifier with an accuracy of 99%, a speaker identification system with an accuracy of 94.2%, a bimodal face-speech verification case with an Equal Error Rate of around 2.64%, an EEG identification example, and a bimodal user identification exercise based on the EEG and voice modalities with a registered accuracy of 97.6%.
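The verification experiment above is reported in terms of Equal Error Rate (EER), the operating point where the false acceptance rate equals the false rejection rate. As a minimal sketch of how such a figure can be estimated from a verifier's match scores (this is an illustrative helper, not the authors' evaluation code; the function name and score conventions are assumptions):

```python
import numpy as np

def equal_error_rate(genuine, impostor):
    """Estimate the EER of a score-based verifier (hypothetical helper).

    genuine  -- match scores from true-user comparisons (higher = more similar)
    impostor -- match scores from impostor comparisons
    """
    # Sweep every observed score as a candidate acceptance threshold.
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    best_gap, eer = 1.0, 1.0
    for t in thresholds:
        far = np.mean(impostor >= t)  # impostors wrongly accepted
        frr = np.mean(genuine < t)    # genuine users wrongly rejected
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2
    return eer
```

With well-separated score distributions the estimate approaches 0; the 2.64% reported for the bimodal face-speech case would correspond to an `eer` of about 0.0264 on this scale.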