Visual depth perception from texture accretion and deletion: a neural model of figure-ground segregation and occlusion

Bibliographic Details
Main Author: Barnes, Timothy
Language: en_US
Published: Boston University, 2018
Online Access: https://hdl.handle.net/2144/31504

Full description
Thesis (Ph.D.)--Boston University

PLEASE NOTE: Boston University Libraries did not receive an Authorization To Manage form for this thesis or dissertation. It is therefore not openly accessible, though it may be available by request. If you are the author or principal advisor of this work and would like to request open access for it, please contact us at open-help@bu.edu. Thank you.

Freezing is an effective defense strategy for some prey, because their predators rely on visual motion to distinguish objects from their surroundings. An object moving over a background progressively covers (deletes) and uncovers (accretes) background texture while simultaneously producing discontinuities in the optic flow field. These events unambiguously specify kinetic occlusion and can produce a crisp edge, depth perception, and figure-ground segregation between identically textured surfaces -- percepts which all disappear without motion.

Given two abutting regions of uniform random texture with different motion velocities, one region will appear to be situated farther away and behind the other (i.e., the ground) if its texture is accreted or deleted at the boundary between the regions, irrespective of region and boundary velocities. Consequently, a region with moving texture appears farther away than a stationary region if the boundary is stationary, but it appears closer (i.e., the figure) if the boundary is moving coherently with the moving texture.

The perception of kinetic occlusion requires the detection of an unexpected onset or offset of otherwise predictably moving or stationary contrast patches. A computational model of directional selectivity in visual cells is here extended to also detect motion onsets and offsets. The connectivity of these model cells not only affords the detection of local texture accretion and deletion events but also explains results showing that human reaction times differ for motion onsets versus offsets.

These theorized cells are placed into a larger computational model of visual areas V1 and V2 to show how interactions between orientation- and direction-selective cells first create a motion-defined boundary and then signal texture accretion or deletion at that boundary. A weak speed-depth bias brings faster-moving texture regions forward in depth. This is consistent with percepts: the faster of two surfaces appears closer when moving parallel to the resulting emergent boundary between them (shearing motion). Activation of model occlusion detectors tuned to a particular velocity results in the model assigning the adjacent surface with a matching velocity to the far depth. These processes together reproduce human psychophysical reports of depth ordering for a representative set of all kinetic occlusion displays.

2031-01-01
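
The depth-ordering rules summarized in the abstract can be made concrete with a small sketch. The Python toy below is not taken from the thesis, and all names in it are hypothetical; it only encodes the two qualitative rules stated above: a region whose texture is accreted or deleted at the boundary is assigned the far depth, and when neither region is accreted or deleted (pure shearing motion) a weak speed-depth bias pushes the slower surface back.

# Illustrative toy only -- a reconstruction of the qualitative rules stated in
# the abstract, not the thesis's neural model. Names are hypothetical.
# Velocities are (normal, parallel) components relative to a straight
# boundary between regions A and B.

def kinetic_depth_order(tex_a, tex_b, boundary_normal_vel):
    """Return which region ('A' or 'B') is assigned the far depth (ground),
    or None if the display leaves depth order unspecified.

    Rule 1 (accretion/deletion): a region whose texture moves relative to the
    boundary along the normal is being covered or uncovered, so it is placed
    behind the other region, irrespective of the actual speeds.
    Rule 2 (weak speed-depth bias): if neither region is accreted or deleted
    (pure shearing motion), the slower region tends to appear farther away.
    """
    accreted_a = tex_a[0] != boundary_normal_vel
    accreted_b = tex_b[0] != boundary_normal_vel

    if accreted_a and not accreted_b:
        return "A"                      # A is ground, B is figure
    if accreted_b and not accreted_a:
        return "B"                      # B is ground, A is figure

    # Shearing or fully ambiguous case: fall back on the speed-depth bias.
    speed_a = (tex_a[0] ** 2 + tex_a[1] ** 2) ** 0.5
    speed_b = (tex_b[0] ** 2 + tex_b[1] ** 2) ** 0.5
    if speed_a < speed_b:
        return "A"                      # slower surface recedes
    if speed_b < speed_a:
        return "B"
    return None


# Moving texture, stationary boundary: the moving region is the ground.
assert kinetic_depth_order((1.0, 0.0), (0.0, 0.0), boundary_normal_vel=0.0) == "A"
# Boundary moves with the moving texture: the stationary region is the ground.
assert kinetic_depth_order((1.0, 0.0), (0.0, 0.0), boundary_normal_vel=1.0) == "B"
# Pure shear (both textures move parallel to the boundary): slower surface recedes.
assert kinetic_depth_order((0.0, 0.5), (0.0, 2.0), boundary_normal_vel=0.0) == "A"

In the thesis itself these assignments emerge from interacting orientation- and direction-selective cell populations in model V1 and V2 rather than from explicit rules; the sketch is only meant to make the stated depth-ordering outcomes checkable.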