Task-Level Feedback in Interactive Learning Environments Using a Rules-Based Grading Engine
Main Author:
Format: Others
Published: BYU ScholarsArchive, 2016
Subjects:
Online Access: https://scholarsarchive.byu.edu/etd/6605
https://scholarsarchive.byu.edu/cgi/viewcontent.cgi?article=7605&context=etd
Summary: To improve the feedback an intelligent tutoring system provides, the grading engine needs to do more than simply indicate whether a student's answer is correct. Good feedback must provide actionable information with diagnostic value, which means the grading system must be able to determine what knowledge gap or misconception may have caused the student to answer a question incorrectly. This research evaluated the quality of a rules-based grading engine in an automated online homework system by comparing grading engine scores with manually graded scores. The research sought to improve the grading engine by assessing student understanding using knowledge component research. Comparing both the current and the revised student scores with the manually graded scores suggested that the grading engine rules were improved. Better aligning grading engine rules with the requisite knowledge components and revising task instructions would likely enhance the quality of the feedback provided.
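The summary describes aligning grading rules with knowledge components so that a failed check diagnoses a specific gap rather than just marking an answer wrong. A minimal sketch of that idea, with hypothetical rule names, knowledge-component labels, and task details (none taken from the thesis itself), might look like this:

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical sketch: each grading rule checks one aspect of a student's
# answer and is tagged with the knowledge component (KC) it exercises, so a
# failed rule points to a likely knowledge gap instead of a bare "incorrect".

@dataclass
class Rule:
    name: str
    knowledge_component: str          # KC label this rule is aligned with
    check: Callable[[dict], bool]     # predicate over the student's answer
    feedback: str                     # diagnostic hint shown when rule fails

def grade(answer: dict, rules: list[Rule]) -> dict:
    """Return a fractional score plus per-KC diagnostic feedback."""
    failed = [r for r in rules if not r.check(answer)]
    score = (len(rules) - len(failed)) / len(rules)
    return {
        "score": score,
        "gaps": [{"kc": r.knowledge_component, "feedback": r.feedback}
                 for r in failed],
    }

# Illustrative task: grading a spreadsheet formula with two invented rules.
rules = [
    Rule("uses_sum_formula", "KC: aggregate functions",
         lambda a: a.get("formula", "").startswith("=SUM"),
         "Use a SUM formula rather than typing the total by hand."),
    Rule("correct_range", "KC: cell ranges",
         lambda a: "B2:B10" in a.get("formula", ""),
         "Check that your formula covers the full data range B2:B10."),
]

result = grade({"formula": "=SUM(B2:B9)"}, rules)
# The answer uses SUM but an incomplete range, so the engine can report
# a partial score and point feedback at the "cell ranges" component.
```

Keying each rule to a single knowledge component is what lets the comparison against manual grades in the study above be diagnostic: disagreements can be traced to specific rules and their associated KCs rather than to the overall score.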