Temporal processing of lexical tone in lexical access of Chinese spoken characters: an eyetracking study

Bibliographic Details
Main Authors: Syu, Yuan Jhen, 許媛媜
Other Authors: Tsai, Jie Li
Format: Others
Language: en_US
Online Access: http://ndltd.ncl.edu.tw/handle/gr4b3q
Description
Summary: Master's === National Chengchi University === Graduate Institute of Linguistics === 100 === The present study aims to examine the role of tonal information in Mandarin Chinese spoken character recognition. Two eye-tracking experiments were conducted with the visual world paradigm, in which participants heard a Chinese monosyllabic character and used a mouse to click on the corresponding character in a visual array of four characters on the screen. Experiment 1 manipulated the relationship between the spoken target characters and the written characters on the screen, which included a target (e.g., /mɔ1/ 'touch'), a tonal competitor (same tone as the target but different segments, e.g., /wa1/ 'dig') or a segmental competitor (same segmental structure as the target but a different tone, e.g., /mɔ3/ 'wipe'), and two unrelated distractors (segments and tone both different from the target, e.g., /nu4/ 'anger' and /tɕy2/ 'chrysanthemum'). The fixation proportions on the target, the competitors, and the unrelated distractors were computed as the auditory target stimuli unfolded. The results showed that the tonal difference was detected before the end of the auditory stream. However, no early involvement of tonal information was found, which may be because the tonal competitor and the target shared no segments from the first phoneme onward. To examine earlier tonal processing, Experiment 2 manipulated two types of cohort competitors sharing the initial two segments with the target (e.g., /tʰɑŋ1/ 'soup'): a cohort-tone competitor (same tone and same initial two segments as the target, e.g., /tʰaj1/ 'fetus') and a cohort-only competitor (same initial two segments as the target but a different tone, e.g., /tʰaj4/ 'peaceful'). The results showed that tone affected spoken character recognition while the two initial segments were being processed. In addition, tone could not affect spoken character processing independently, which might be inconsistent with the assumption that tone is represented at a separate level, as "toneme" nodes, in the modified TRACE model (Malins & Joanisse, 2010; Ye & Connine, 1999; Zhao et al., 2011).
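
The thesis itself does not publish its analysis scripts; purely as an illustration of the kind of computation the summary describes (fixation proportions per condition as the spoken word unfolds), here is a minimal sketch assuming a hypothetical long-format table of eye-tracking samples with columns trial, time_ms, and roi. All column names and the bin size are assumptions, not the author's actual pipeline.

```python
# Illustrative sketch only: fixation proportions per time bin in a
# visual-world-style dataset. Assumes a long-format table with one row per
# eye-tracking sample: trial, time_ms (relative to auditory onset), and
# roi (the fixated object, e.g. "target", "tonal_competitor",
# "segmental_competitor", "distractor"). These names are hypothetical.
import pandas as pd

def fixation_proportions(samples: pd.DataFrame, bin_ms: int = 50) -> pd.DataFrame:
    """Return the proportion of samples on each region of interest per time bin."""
    samples = samples.copy()
    # Assign each sample to a time bin (e.g. 0-49 ms -> bin 0, 50-99 ms -> bin 50).
    samples["bin"] = (samples["time_ms"] // bin_ms) * bin_ms
    # Count samples per (bin, roi) and normalise within each bin.
    counts = samples.groupby(["bin", "roi"]).size().unstack(fill_value=0)
    return counts.div(counts.sum(axis=1), axis=0)

# Example usage with a hypothetical samples.csv:
# df = pd.read_csv("samples.csv")
# print(fixation_proportions(df).head())
```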