Summary: | Master's thesis === National Chengchi University === Graduate Institute of Linguistics === 101 === This thesis explores linguistic and gestural representations of viewpoints in descriptions of third-person past events in Chinese conversational discourse. Following McNeill's idea that language and gesture are co-expressive of viewpoint, the present study investigates whether speakers' speech-accompanying gestures work in collaboration with language to express the same or different viewpoints.
This study draws on Koven's (2002) framework of speaker role inhabitance and McNeill's (1992) notion of character and observer viewpoints, and defines three viewpoints: speaker, character, and observer viewpoint. In analyzing gestural viewpoints, the study adopts five gestural features as criteria for identifying the different viewpoints: gestural space, handedness, stroke duration, frequency, and the involvement of other parts of the body.
A quantitative analysis of linguistic and gestural viewpoints shows that, in the distribution of the three viewpoints, speech-accompanying gestures in descriptions of third-person past events in conversation display patterns different from those found in language. Character viewpoint, which is rarely adopted in language, is the viewpoint most often conveyed in gesture. Conversely, although speaker viewpoint is commonly expressed in language, it rarely occurs in gesture. Observer viewpoint, meanwhile, appears frequently in both the linguistic and gestural channels. As for the collaborative expression of viewpoints in language and gesture when describing the same event, the analysis shows that 64.7% of all gestures in the data represent a viewpoint different from the one conveyed in the accompanying speech. This study therefore suggests that although language and gesture are co-expressive of viewpoint, gesture more often collaborates with the accompanying speech by representing a different viewpoint.
The collaborative expression of viewpoints in language and gesture reveals how speech and gesture coordinate with each other in organizing information and expressing different viewpoints, and it also points to the cognitive processes that underlie both modalities in everyday human communication. Two hypotheses, the Lexical Semantics Hypothesis and the Interface Hypothesis, are drawn on to provide theoretical accounts for the findings of this study. Each hypothesis is supported by different pieces of evidence and by different proportions of the gestures in the data. The Interface Hypothesis can further explain the division of labor between language and gesture in expressing viewpoints, an explanation the Lexical Semantics Hypothesis cannot supply.