Cross-linguistic regularities and learner biases reflect "core" mechanics.
Recent research in infant cognition and adult vision suggests that mechanical object relationships may be more salient and naturally attention-grabbing than similar but non-mechanical relationships. Here we examine two novel sources of evidence from language that bear on this hypothesis. In Experiments 1 and 2, we show that adults preferentially infer that the meaning of a novel preposition refers to a mechanical rather than a non-mechanical relationship. Experiments 3 and 4 examine cross-linguistic adpositions obtained on a large scale from machines or from experts, respectively. While these methods differ in ease of data collection relative to data reliability, their results converge: across a range of diverse and historically unrelated languages, adpositions (such as prepositions) referring to the mechanical relationships of containment (e.g., "in") and support (e.g., "on") are systematically shorter than closely matched but non-mechanical words such as "behind," "beside," "above," "over," "out," and "off." These results suggest, first, that languages regularly contain traces of core knowledge representations and, second, that cross-linguistic regularities can therefore be a useful and easily accessible form of evidence bearing on the foundations of non-linguistic thought.
Main Authors: | Brent Strickland, Emmanuel Chemla |
---|---|
Format: | Article |
Language: | English |
Published: | Public Library of Science (PLoS), 2018-01-01 |
Series: | PLoS ONE |
Online Access: | http://europepmc.org/articles/PMC5764231?pdf=render |
id |
doaj-6ade8d9bd2014b6bbf5ee55fadd2360a |
---|---|
record_format |
Article |
spelling |
PLoS ONE 13(1): e0184132, 2018-01-01, doi:10.1371/journal.pone.0184132 |
collection |
DOAJ |
language |
English |
format |
Article |
sources |
DOAJ |
author |
Brent Strickland, Emmanuel Chemla |
title |
Cross-linguistic regularities and learner biases reflect "core" mechanics. |
publisher |
Public Library of Science (PLoS) |
series |
PLoS ONE |
issn |
1932-6203 |
publishDate |
2018-01-01 |
url |
http://europepmc.org/articles/PMC5764231?pdf=render |