Investigating representation of visual space for freely moving participants in a virtual expanding room

Bibliographic Details
Main Author: Svarverud, Ellen
Published: University of Reading, 2010
Online Access: http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.553047
Description
Summary: One of the important unsolved questions in vision research is how space is represented. Despite extensive research from a variety of disciplines, this topic is still poorly understood and, at present, no theory provides a complete explanation. The traditional view of space representation is a geometric one that assumes a one-to-one mapping between physical and perceived space, but much of the current evidence points towards other solutions. For example, it has been proposed that there may be no single visual representation of a 3D scene that can account for performance in all tasks, suggesting that space representation may take a looser form without a globally consistent map of space. The studies described in this thesis provide evidence that challenges the idea of a single internal representation of space by exploring size and distance judgements under a range of conditions in a virtual expanding room. In this environment, participants viewed the scene binocularly through a wide-field-of-view head-mounted display. They were free to move and so also received veridical information about the scene from their own movements. Importantly, the scene expanded or contracted four-fold during experiments, giving a unique opportunity to explore how different distance cues contribute to size and distance judgements when biases in both judgements were very large. The available cues were set in conflict: one type of cue, based on stereopsis and motion parallax, gave a veridical signal of the change in the scene, whereas another type of cue was unaffected by the expansion and hence signalled that the room remained constant. The most striking result is that the perceived location of objects does not always follow a transitive ordering with respect to physical space, which is incompatible with a one-to-one mapping between perceived and physical space. Instead, the results are better explained by a cue combination model. This approach had not previously been used for object location, but was successfully applied here across a range of conditions, even when there was a large conflict between the distances signalled by the contributing cues. Further, it appears that perceived size and perceived distance are not based on a single estimate of distance, demonstrating the lack of internal consistency from a different angle. Overall, these experiments provide results that are difficult to reconcile with a model based on geometric reconstruction and suggest that the visual system does not have a single internal representation of space.
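
A minimal sketch of the kind of cue combination model referred to above, assuming standard maximum-likelihood (inverse-variance) weighting of independent distance cues; the function, cue values and variances below are illustrative assumptions, not details taken from the thesis:

# Reliability-weighted (maximum-likelihood) combination of distance cues.
def combine_cues(estimates, variances):
    # Each cue's weight is proportional to its reliability (1 / variance).
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    combined = sum(w * d for w, d in zip(weights, estimates)) / total
    # Variance of the combined estimate, assuming independent cues.
    return combined, 1.0 / total

# Illustrative conflict: stereo/motion-parallax cues signal that a target is
# at 4 m after the room has expanded four-fold, while expansion-invariant
# cues still signal the original 1 m distance.
distance, variance = combine_cues(estimates=[4.0, 1.0], variances=[0.2, 0.8])
# distance is approximately 3.4: the combined estimate lies between the
# conflicting cue values, pulled towards the more reliable (lower-variance) cue.

Such a weighted average predicts intermediate, cue-dependent location judgements rather than a single geometrically consistent map of space, which is consistent with the intransitive orderings reported in the abstract.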