Introduction: Many patients with aphasia, particularly those with nonfluent aphasia, have been observed to sing the lyrics of songs more easily than they can speak the same words (Wan et al., 2010). The observation that not only singing, but even intoning words, can facilitate speech output in patients with nonfluent aphasia provided the foundation for Melodic Intonation Therapy (MIT; Sparks, 1974), an intensive therapy lasting ten or more weeks (Schlaug et al., 2008). The current study examines the ability of patients with aphasia to complete song lyrics by singing, speaking, or humming them.
Methods: Thus far, 11 patients with aphasia and 6 age-matched healthy controls have participated in an experimental stem-completion task examining singing abilities. The task comprises three conditions, each consisting of 20 well-known songs, and all participants completed all three conditions. Participants heard the first half of a phrase that was either sung in its original format (e.g., “Mary Had a Little Lamb”), spoken, or intoned on the syllable “bum,” and were asked to complete the phrase in the format in which the stimulus was presented (i.e., by singing, speaking the words, or humming/singing the melody, respectively). The task was untimed, though most participants finished within an hour. Each participant also completed a survey about their musical experience.
Results: Patients were scored on their ability to complete the melody and words together in the sung condition, only the words in the spoken condition, and only the tune of the song in the melody condition. A parametric t-test indicated no significant difference between groups in the sung condition (mean patients = 45.3%, mean controls = 68.2%, t = -1.96, p = 0.0684), though this comparison approached significance. There was also no significant difference between groups in the melody condition (mean patients = 18.2%, mean controls = 20.0%, t = -0.335, p = 0.742). There was, however, a significant difference between groups in the spoken condition (mean patients = 30.4%, mean controls = 65.2%, t = -3.49, p = 0.003). Friedman non-parametric ANOVAs showed significant differences in accuracy across the three conditions (sung, spoken, and melody) for both patients (χ² = 9.45, df = 2, p = 0.00885) and controls (χ² = 9.00, df = 2, p = 0.011). Figure 1 indicates that both groups showed higher accuracy in the sung condition than in the spoken condition, and higher accuracy in the spoken condition than in the melody condition.
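The abstract does not report the analysis software; a minimal Python sketch of how the between-group t-tests and within-group Friedman tests above could be computed with scipy is shown below. The accuracy arrays are hypothetical placeholders illustrating the data structure (one score per participant per condition), not the study data.

```python
# Minimal sketch of the reported analyses, assuming scipy; the
# accuracy values below are hypothetical, not the study's data.
import numpy as np
from scipy import stats

# Hypothetical per-participant accuracy (%) in each condition.
patients = {
    "sung":   np.array([40, 55, 30, 60, 45, 50, 35, 48, 52, 38, 45]),
    "spoken": np.array([25, 40, 20, 45, 30, 35, 22, 33, 38, 24, 22]),
    "melody": np.array([15, 25, 10, 30, 18, 20, 12, 22, 15, 17, 16]),
}
controls = {
    "sung":   np.array([70, 65, 75, 60, 68, 71]),
    "spoken": np.array([68, 60, 72, 58, 66, 67]),
    "melody": np.array([22, 18, 25, 15, 20, 20]),
}

# Between-group parametric t-test, one per condition.
for cond in ("sung", "spoken", "melody"):
    t, p = stats.ttest_ind(patients[cond], controls[cond])
    print(f"{cond}: t = {t:.2f}, p = {p:.4f}")

# Within-group Friedman non-parametric ANOVA across the three
# related conditions (each participant completed all three),
# with df = k - 1 = 2 for k = 3 conditions.
for name, group in (("patients", patients), ("controls", controls)):
    chi2, p = stats.friedmanchisquare(
        group["sung"], group["spoken"], group["melody"])
    print(f"{name}: chi2 = {chi2:.2f}, df = 2, p = {p:.4f}")
```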
Discussion: Results indicate that patients performed more poorly than controls in the spoken condition, and that patients and controls performed equally poorly in the melody condition. Though the parametric t-test for the sung condition did not reveal a significant difference, the average score of controls was higher than that of patients, and this difference may reach significance as the sample size continues to grow. Having the words present appears to provide a valuable cue for retrieving the lyrics of a song for both groups, and especially for patients. Findings may have implications for using music as a more widely implemented tool in speech therapy for patients with aphasia, as patients may be better able to access speech/language by using melody as a vehicle.