Lexical access in sign language: A computational model

Psycholinguistic theories have predominantly been built upon data from spoken language, which leaves open the question: How many of the conclusions truly reflect language-general principles as opposed to modality-specific ones? We take a step toward answering this question in the domain of lexical access in recognition by asking whether a single cognitive architecture might explain diverse behavioral patterns in signed and spoken language. Chen and Mirman (2012) presented a computational model of word processing that unified opposite effects of neighborhood density in speech production, perception, and written word recognition. Neighborhood density effects in sign language also vary depending on whether the neighbors share the same handshape or location. We present a spreading activation architecture that borrows the principles proposed by Chen and Mirman (2012), and show that if this architecture is elaborated to incorporate relatively minor facts about either (1) the time course of sign perception or (2) the frequency of sub-lexical units in sign languages, it produces data that match the experimental findings from sign languages. This work serves as a proof of concept that a single cognitive architecture could underlie both sign and word recognition.
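The core principle the abstract borrows from Chen and Mirman (2012) is that a lexical neighbor's net effect on a target flips with its activation level: excitation spreads through shared sub-lexical units, while lateral inhibition between lexical competitors grows faster as the neighbor becomes strongly active. A minimal, hypothetical sketch of that trade-off (invented parameters; not the authors' implementation):

```python
# Toy illustration of the facilitation/inhibition trade-off for one neighbor.
# All parameter values are hypothetical; this is not the model from the article.

def net_neighbor_effect(activation, feedback=0.3, inhibition=0.5):
    """Net input a neighbor at the given activation level sends to the target:
    linear excitation through shared sub-lexical units, minus lateral
    inhibition that grows quadratically with the neighbor's activation."""
    return feedback * activation - inhibition * activation ** 2

# A weakly active neighbor is a net facilitator ...
print(round(net_neighbor_effect(0.2), 3))   # 0.04 (positive: facilitation)
# ... while a strongly active one is a net competitor.
print(round(net_neighbor_effect(0.8), 3))   # -0.08 (negative: inhibition)
```

Under this sketch, any factor that modulates how strongly neighbors become active over time, such as which sub-lexical parameters (handshape vs. location) are perceived first, would change whether a dense neighborhood helps or hurts recognition, which is the kind of elaboration the abstract describes.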


Bibliographic Details
Main Authors: Naomi Kenney Caselli, Ariel M Cohen-Goldberg
Format: Article
Language: English
Published: Frontiers Media S.A., 2014-05-01
Series: Frontiers in Psychology
Subjects: Speech Perception; lexical access; sign language; neighborhood density; spreading activation; sub-lexical processing
Online Access: http://journal.frontiersin.org/Journal/10.3389/fpsyg.2014.00428/full
ISSN: 1664-1078