LEADER 02925 am a22002533u 4500
001     113231
042     |a dc
100 1 0 |a Isik, Leyla |e author
700 1 0 |a Tacchetti, Andrea |e author
700 1 0 |a Poggio, Tomaso A |e author
710 2 0 |a Massachusetts Institute of Technology. Department of Brain and Cognitive Sciences |e contributor
710 2 0 |a McGovern Institute for Brain Research at MIT |e contributor
245 0 0 |a Invariant recognition drives neural representations of action sequences
260     |b Public Library of Science, |c 2018-01-19T15:06:33Z.
856     |z Get fulltext |u http://hdl.handle.net/1721.1/113231
520     |a Recognizing the actions of others from visual stimuli is a crucial aspect of human perception that allows individuals to respond to social cues. Humans are able to discriminate between similar actions despite transformations, such as changes in viewpoint or actor, that substantially alter the visual appearance of a scene. This ability to generalize across complex transformations is a hallmark of human visual intelligence. Advances in understanding action recognition at the neural level have not always translated into precise accounts of the computational principles that determine which representations of action sequences human visual cortex constructs. Here we test the hypothesis that invariant action discrimination might fill this gap. Recently, the study of artificial systems for static object perception has produced models, Convolutional Neural Networks (CNNs), that achieve human-level performance in complex discriminative tasks. Within this class, architectures that better support invariant object recognition also produce image representations that better match those implied by human and primate neural data. However, whether these models produce representations of action sequences that support recognition across complex transformations and closely follow neural representations of actions remains unknown. Here we show that spatiotemporal CNNs accurately categorize video stimuli into action classes, and that deliberate model modifications that improve performance on an invariant action recognition task lead to data representations that better match human neural recordings. Our results support the hypothesis that performance on invariant discrimination dictates the neural representations of actions computed in the brain. These results broaden the scope of the invariant recognition framework for understanding visual intelligence from perception of inanimate objects and faces in static images to the study of human perception of action sequences.
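The abstract turns on training spatiotemporal CNNs to categorize video clips into action classes. The following is a minimal Python (PyTorch) sketch of such a model, assuming an illustrative 3D-convolutional architecture; the clip shape, layer widths, and class count are hypothetical and not the paper's actual model.

# Minimal sketch (not the paper's model): a spatiotemporal 3D CNN that
# maps a short video clip to action-class scores. Clip shape, layer
# widths, and the number of classes are illustrative assumptions.
import torch
import torch.nn as nn

class SpatiotemporalCNN(nn.Module):
    def __init__(self, num_classes=5):
        super().__init__()
        self.features = nn.Sequential(
            # 3D kernels convolve over time as well as space, so the
            # learned features respond to motion patterns, not single frames.
            nn.Conv3d(3, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool3d(2),                  # halve frames, height, width
            nn.Conv3d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),          # global spatiotemporal pooling
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, clips):
        # clips: (batch, channels, frames, height, width)
        return self.classifier(self.features(clips).flatten(1))

model = SpatiotemporalCNN(num_classes=5)
dummy = torch.randn(2, 3, 16, 112, 112)       # two 16-frame RGB clips
print(model(dummy).shape)                     # torch.Size([2, 5])

The global spatiotemporal pooling lets clip length and resolution vary without changing the classifier, which is one simple way such a model can be made tolerant to transformations of its input.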
536     |a Eugene McDermott Foundation
536     |a NVIDIA Corporation
536     |a McGovern Institute for Brain Research at MIT
655 7   |a Article
773     |t PLOS Computational Biology