Summary: | Human emotions are perceived from multi-modal information, including facial expression and voice tone. We aimed to investigate the development of the neural mechanisms underlying cross-modal perception of emotions. We presented congruent and incongruent combinations of a facial expression (happy) and a voice tone (happy or angry), and measured EEG to analyze event-related brain potentials in 8–10-month-old infants and adults. Ten repetitions of 10 trials were presented in random order to each participant. Half of the participants received 20% congruent (happy face with happy voice) and 80% incongruent (happy face with angry voice) trials; the other half received 80% congruent and 20% incongruent trials. We employed the oddball paradigm but did not instruct participants to count target stimuli. In infants, the oddball (infrequent) stimulus increased the amplitude of P2 and delayed its latency compared with the frequent stimulus. When the oddball stimulus was also emotionally incongruent, P2 amplitude increased further and its latency was delayed further relative to the oddball but emotionally congruent stimulus. In adults, however, we found no differences in P2 amplitude or latency between conditions. These results suggest that 8–10-month-old infants already possess a neural basis for detecting emotional incongruence between facial expression and voice tone.
|
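The trial structure described above (100 trials per participant, with the infrequent pairing at 20% and the frequent pairing at 80%, presented in random order) can be sketched as follows. This is a minimal illustrative sketch, not the authors' actual stimulus code; the condition labels, function name, and parameters are assumptions for illustration.

```python
import random

def make_trial_sequence(n_trials=100, p_oddball=0.2, seed=0):
    """Sketch of an oddball trial list matching the described design:
    every trial shows a happy face; the voice is happy or angry.
    Here the incongruent pairing (angry voice) is the 20% oddball;
    the other half of participants saw the reversed proportions
    (pass p_oddball=0.8 to model that group)."""
    rng = random.Random(seed)  # fixed seed for a reproducible example
    n_odd = int(n_trials * p_oddball)
    trials = (["happy_face+angry_voice"] * n_odd +
              ["happy_face+happy_voice"] * (n_trials - n_odd))
    rng.shuffle(trials)  # trials presented in random order
    return trials

seq = make_trial_sequence()
print(seq.count("happy_face+angry_voice"))  # → 20 oddball trials of 100
```

Swapping `p_oddball` between 0.2 and 0.8 models the two participant groups, so the same congruent pairing serves as the frequent stimulus for one group and the oddball for the other.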