Exploring the Feedback Quality of an Automated Writing Evaluation System Pigai


Bibliographic Details
Main Author: Jianmin Gao
Format: Article
Language: English
Published: Kassel University Press 2021-06-01
Series: International Journal of Emerging Technologies in Learning (iJET)
Online Access: https://online-journals.org/index.php/i-jet/article/view/19657
Description
Summary: This study explored the feedback quality of Pigai, an Automated Writing Evaluation (AWE) system widely used in English teaching and learning in China. The study focused not only on the diagnostic precision of the feedback but also on students' perceptions of its use in their daily writing practice. Taking 104 university students' final exam essays as the research materials, a paired-sample t-test was conducted to compare the mean number of errors identified by Pigai with the number identified by professional teachers. Pigai's feedback did not diagnose the essays as well as the feedback given by experienced teachers; it was, however, quite competent at identifying lexical errors. The analysis of students' perceptions indicated that most students considered Pigai's feedback multi-functional but inadequate at identifying collocation errors and offering suggestions on syntactic use. The implications and limitations of the study are discussed at the end of the paper.
ISSN: 1863-0383
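
The comparison described in the summary rests on a paired-sample t-test over per-essay error counts: each essay is rated by both the AWE system and a teacher, so the test compares within-essay differences rather than two independent groups. A minimal sketch of that procedure in Python follows, using hypothetical counts rather than the study's actual data:

    # Sketch of the paired-sample t-test procedure described in the summary.
    # The error counts below are hypothetical placeholders, NOT data from
    # the study, which used 104 final exam essays.
    import numpy as np
    from scipy import stats

    # One pair of counts per essay: errors flagged by the AWE system and
    # by a human rater on the same essay.
    awe_errors = np.array([5, 3, 8, 6, 4, 7, 2, 9])
    teacher_errors = np.array([7, 4, 9, 8, 6, 8, 3, 11])

    # Paired test: the same essays are rated twice, so observations are
    # matched and the test is run on the per-essay differences.
    t_stat, p_value = stats.ttest_rel(awe_errors, teacher_errors)

    print(f"mean errors (AWE):     {awe_errors.mean():.2f}")
    print(f"mean errors (teacher): {teacher_errors.mean():.2f}")
    print(f"t = {t_stat:.3f}, p = {p_value:.4f}")

A significant result here would indicate that the mean number of errors identified differs systematically between the two raters, which is the sense in which the study found Pigai's diagnosis weaker overall than the teachers'.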