Interpreting Faces with Neurally Inspired Generative Models
Becoming a face expert takes years of learning and development. Many research programs are devoted to studying face perception, particularly given its prerequisite role in social interaction, yet its fundamental neural operations are poorly understood. One reason is that there are many possible explanations for a change in facial appearance, such as lighting, expression, or identity. (Full abstract in the description field below.)
Main Author: | Susskind, Joshua Matthew |
---|---|
Other Authors: | Anderson, Adam K. |
Language: | en_ca |
Published: | 2011 |
Subjects: | facial expressions; neural network; deep belief net; restricted boltzmann machine |
Online Access: | http://hdl.handle.net/1807/29884 |
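The subject keywords above ("deep belief net", "restricted boltzmann machine") name the building blocks the thesis works with. Purely as an illustrative sketch, and not code from the thesis itself, the snippet below trains a small binary restricted Boltzmann machine with one step of contrastive divergence (CD-1); the `RBM` class, the 64/32 layer sizes, the learning rate, and the random toy data are placeholders chosen for this example.

```python
# Illustrative sketch of a binary restricted Boltzmann machine (RBM)
# trained with one step of contrastive divergence (CD-1).
# All sizes and hyperparameters are arbitrary placeholders.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    def __init__(self, n_visible, n_hidden, lr=0.05):
        self.W = 0.01 * rng.standard_normal((n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)   # visible biases
        self.b_h = np.zeros(n_hidden)    # hidden biases
        self.lr = lr

    def hidden_probs(self, v):
        return sigmoid(v @ self.W + self.b_h)

    def visible_probs(self, h):
        return sigmoid(h @ self.W.T + self.b_v)

    def cd1_step(self, v0):
        # Positive phase: hidden activations driven by the data.
        ph0 = self.hidden_probs(v0)
        h0 = (rng.random(ph0.shape) < ph0).astype(float)
        # Negative phase: one reconstruction step (CD-1).
        pv1 = self.visible_probs(h0)
        ph1 = self.hidden_probs(pv1)
        # Approximate log-likelihood gradient, averaged over the batch.
        batch = v0.shape[0]
        self.W += self.lr * (v0.T @ ph0 - pv1.T @ ph1) / batch
        self.b_v += self.lr * (v0 - pv1).mean(axis=0)
        self.b_h += self.lr * (ph0 - ph1).mean(axis=0)

# Toy usage on random binary "images" (stand-ins for face pixels).
data = (rng.random((100, 64)) > 0.5).astype(float)
rbm = RBM(n_visible=64, n_hidden=32)
for _ in range(10):
    rbm.cd1_step(data)
```

Stacking several such RBMs, each trained on the hidden activities of the layer below, is the standard greedy recipe for building the deep hierarchical networks the abstract refers to.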
id | ndltd-LACETR-oai-collectionscanada.gc.ca-OTU.1807-29884 |
---|---|
record_format | oai_dc |
spelling | (index copy of the full record) type: Thesis; dates: 2011-06, 2011-08-31; access: NO_RESTRICTION; remaining content duplicates the title, author, subject, description, and identifier fields below |
collection | NDLTD |
language | en_ca |
sources | NDLTD |
topic | facial expressions; neural network; deep belief net; restricted boltzmann machine; 0800; 0633 |
description |
Becoming a face expert takes years of learning and development. Many research programs are devoted to studying face perception, particularly given its prerequisite role in social interaction, yet its fundamental neural operations are poorly understood. One reason is that there are many possible explanations for a change in facial appearance, such as lighting, expression, or identity. Despite general agreement that the brain extracts multiple layers of feature detectors arranged into hierarchies to interpret causes of sensory information, very little work has been done to develop computational models of these processes, especially for complex stimuli like faces. The studies presented in this thesis used nonlinear generative models developed within machine learning to solve several face perception problems. Applying a deep hierarchical neural network, we showed that it is possible to learn representations capable of perceiving facial actions, expressions, and identities, better than similar non-hierarchical architectures. We then demonstrated that a generative architecture can be used to interpret high-level neural activity by synthesizing images in a top-down pass. Using this approach we showed that deep layers of a network can be activated to generate faces corresponding to particular categories. To facilitate training models to learn rich and varied facial features, we introduced a new expression database with the largest number of labeled faces collected to date. We found that a model trained on these images learned to recognize expressions comparably to human observers. Next we considered models trained on pairs of images, making it possible to learn how faces change appearance to take on different expressions. Modeling higher-order associations between images allowed us to efficiently match images of the same type according to a learned pairwise similarity measure. These models performed well on several tasks, including matching expressions and identities, and demonstrated performance superior to competing models. In sum, these studies showed that neural networks that extract highly nonlinear features from images using architectures inspired by the brain can solve difficult face perception tasks with minimal guidance by human experts. |
author2 | Anderson, Adam K. |
author | Susskind, Joshua Matthew |
title | Interpreting Faces with Neurally Inspired Generative Models |
publishDate | 2011 |
url | http://hdl.handle.net/1807/29884 |
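The description field notes that deep layers of the network can be activated to generate faces for particular categories via a top-down synthesis pass. The sketch below illustrates that general idea on a toy, untrained two-layer deep belief net: a one-hot category unit is clamped at the top, Gibbs sampling runs in the top-level associative layer, and the result is propagated down to pixel probabilities. The layer sizes, the seven-way label vector, the `generate` helper, and the random weights are assumptions made for illustration; they do not reproduce the thesis's trained models.

```python
# Minimal sketch of top-down generation from a deep belief net: clamp a
# one-hot category unit at the top, run Gibbs sampling in the top-level
# associative layer, then propagate activities down to the pixel layer.
# Layer sizes, depth, and the random (untrained) weights are placeholders.
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n_pixels, n_h1, n_top, n_labels = 64, 32, 16, 7   # 7 is an arbitrary category count

# Top-down weights to the pixel layer, and a top-level RBM that jointly
# models the deepest features together with the label units.
W_h1_pixels = 0.1 * rng.standard_normal((n_h1, n_pixels))
W_top = 0.1 * rng.standard_normal((n_h1 + n_labels, n_top))

def generate(label_index, gibbs_steps=50):
    # Clamp the chosen category as a one-hot vector.
    label = np.zeros(n_labels)
    label[label_index] = 1.0
    h1 = (rng.random(n_h1) > 0.5).astype(float)
    for _ in range(gibbs_steps):
        v = np.concatenate([h1, label])             # visible layer of the top RBM
        top = (rng.random(n_top) < sigmoid(v @ W_top)).astype(float)
        down = sigmoid(top @ W_top.T)
        h1 = (rng.random(n_h1) < down[:n_h1]).astype(float)
        # The label portion stays clamped on every step.
    # Deterministic top-down pass to pixel probabilities (the "image").
    return sigmoid(h1 @ W_h1_pixels)

face_like = generate(label_index=3)
print(face_like.shape)   # (64,) pixel probabilities
```

With trained weights, this clamp-and-sample procedure is how high-level category activity can be rendered as a face-like image.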