Informational Aspects of Audiovisual Identity Matching
Format: Others
Language: English
Published: Florida Atlantic University
Online Access: http://purl.flvc.org/fau/fd/FA00004688
Summary: In this study, we investigated which informational aspects of faces could account
for the ability to match an individual's face to their voice, using only static images. In
each of the first six experiments, we simultaneously presented one voice recording along
with two manipulated images of faces (e.g., the top half of the face, the bottom half of the face,
etc.): a target face and a distractor face. The participant's task was to choose which of the
images they thought belonged to the same individual as the voice recording. The voices
remained unmanipulated. In Experiment 7, we used eye tracking to determine
which informational aspects of the models' faces people fixate on while performing
the matching task, compared with where they fixate when there are no immediate task
demands. We presented a voice recording followed by two static images, a target and
distractor face. The participant’s task was to choose which of the images they thought
belonged to the same individual as the voice recording, while we tracked their total
fixation duration. In the no-task, passive-viewing condition, we presented a male voice
recording followed sequentially by two static images of female models, or vice versa, counterbalanced across participants. Participants performed significantly better
than chance in the matching task when the images presented were the
bottom half of the face, the top half of the face, inverted (upside-down) faces,
low-pass-filtered images of the face, and images in which the inner face was
completely blurred out. In Experiment 7, we found that when participants completed the matching
task, the time spent looking at the outer area of the face increased, as compared to when
the images and voice recordings were passively viewed. When the images were passively
viewed, the time spent looking at the inner area of the face increased. We concluded that
the inner facial features (i.e., eyes, nose, and mouth) are not necessary informational
aspects of the face for the matching ability, which likely relies on global
features such as face shape and size.

Includes bibliography.
Dissertation (Ph.D.)--Florida Atlantic University, 2016.
FAU Electronic Theses and Dissertations Collection