Summary: | Master's thesis === 國立臺南大學 (National University of Tainan) === 測驗統計研究所 (Graduate Institute of Measurement and Statistics) === Academic year 96 (2007) ===
Summarization demonstrates great potential for improving students' reading, learning, and writing. Unfortunately, it is largely neglected throughout children's academic training. Automatic scoring methods can provide students with extensive summarization practice without increasing teachers' workload. The purpose of this study is to compare automatic scoring methods with human ratings. With classroom application in mind, three automatic methods are compared: concept similarity (CS), latent semantic analysis (LSA), and keyword comparison (KWC). Four expository science reading passages were used in this study. A sample of 255 sixth graders participated, and each student worked on two of the passages. The participants revised their summaries after receiving feedback. Science reading comprehension scores and school grades in Mandarin, science, and mathematics were also collected for discussion of convergent and discriminant validity. Human raters scored two facets of each summary: the major points and the structure of the passage. The inter-rater correlation coefficients are around 0.90 and 0.80, respectively. On the major-points facet, the correlations between automatic and human ratings are around 0.88 for CS, 0.72 for LSA, and 0.63 for KWC. On the structure facet, they are 0.84 for CS, 0.69 for LSA, and 0.52 for KWC. The correlations of the automatic ratings with science reading comprehension are 0.38 for CS, 0.45 for LSA, and 0.21 for KWC. Generally speaking, CS and LSA demonstrate promising potential for further technical and applied research.
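To make the LSA approach concrete, the following is a minimal sketch of LSA-style summary scoring: a latent space is built from the sentences of the source passage, the passage and a student summary are projected into that space, and their cosine similarity serves as the score. The abstract does not describe the thesis's actual preprocessing, dimensionality, or weighting, so the whitespace tokenizer, the choice of k, and the folding-in step below are illustrative assumptions, not the study's implementation.

```python
# Generic LSA-style scoring sketch (assumed details; not the thesis's method).
import numpy as np

def term_sentence_matrix(sentences):
    """Raw term counts: rows are vocabulary terms, columns are sentences."""
    vocab = sorted({w for s in sentences for w in s.split()})
    index = {w: i for i, w in enumerate(vocab)}
    X = np.zeros((len(vocab), len(sentences)))
    for j, s in enumerate(sentences):
        for w in s.split():
            X[index[w], j] += 1.0
    return X, index

def fold_in(text, index, U_k, s_k):
    """Project a new document's term-count vector into the k-dim latent space."""
    q = np.zeros(U_k.shape[0])
    for w in text.split():
        if w in index:
            q[index[w]] += 1.0
    return (U_k.T @ q) / s_k

def lsa_score(passage_sentences, summary, k=10):
    """Cosine similarity between the passage and a summary in LSA space."""
    X, index = term_sentence_matrix(passage_sentences)
    U, s, _ = np.linalg.svd(X, full_matrices=False)
    k = min(k, int(np.sum(s > 1e-10)))  # cap k at the numerical rank
    U_k, s_k = U[:, :k], s[:k]
    p = fold_in(" ".join(passage_sentences), index, U_k, s_k)
    q = fold_in(summary, index, U_k, s_k)
    denom = np.linalg.norm(p) * np.linalg.norm(q)
    return float(p @ q / denom) if denom > 0 else 0.0
```

A score produced this way could then be correlated with human ratings of the major-points and structure facets, which is the kind of comparison the study reports.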