Peer evaluation system for open-ended questions based on pairwise comparisons

Bibliographic Details
Main Authors: Po Hsun Huang, 黃柏勳
Other Authors: Ho-Lin Chen
Format: Others
Language: zh-TW
Published: 2017
Online Access: http://ndltd.ncl.edu.tw/handle/94k87p
Description
Summary: Master's thesis === National Taiwan University === Graduate Institute of Electrical Engineering === 105 === In recent years, MOOCs and online education have changed education significantly. Many people regard MOOCs as a revolution, since anyone with an Internet connection can learn. However, online courses face a new problem: how can open-ended questions be graded automatically and accurately? Many online education platforms try to address this problem. Some use peer assessment, which often leads to inaccurate grades and low-quality feedback [1]. Some develop complex systems involving peer review, self-evaluation, multiple rounds of evaluation, anonymous feedback, and so on. Some can only be applied to objective questions. It seems that no education platform currently addresses the problem in a simple yet effective way. In this thesis, several theorems are proposed and an online education platform named PK-Grader is developed; the name refers to auto-grading from the results of PK. It grades open-ended questions using ordinal peer evaluations and produces not only a score for each answer but also an estimate of each student's evaluation ability. This allows teachers to better understand their students and to see whether they have truly grasped the concepts and reached a higher level of Bloom's cognitive taxonomy: evaluation. Several theorems from different fields, together with a combination of pairwise-comparison algorithms, active-learning methods, and probability models, are introduced to form our algorithm. We validated the auto-grading algorithm by testing it with high school students and found a high correlation between the scores from our system and the scores from the teacher (a correlation coefficient of about 0.9). PK-Grader also enabled the teacher to discover that some of the original scores were wrong, helping the teacher evaluate the assignment more accurately.
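The abstract does not specify which pairwise-comparison model PK-Grader uses, but a standard way to turn ordinal peer judgments ("answer A is better than answer B") into per-answer scores is the Bradley-Terry model. The sketch below is purely illustrative, assuming Bradley-Terry fitted with the minorization-maximization update; the thesis's actual algorithm additionally involves active learning and evaluator-ability estimation, which are omitted here.

```python
# Illustrative sketch: scoring answers from pairwise peer comparisons
# with a Bradley-Terry model (MM update). This is an assumption about
# the general technique, not PK-Grader's actual implementation.

def bradley_terry(n_items, comparisons, iters=100):
    """Estimate a latent quality score per answer.

    comparisons: list of (winner, loser) index pairs from peer judgments.
    Returns scores normalized to sum to 1; a higher score means the
    answer won its comparisons against stronger opponents more often.
    """
    wins = [0] * n_items
    for w, _ in comparisons:
        wins[w] += 1

    scores = [1.0 / n_items] * n_items
    for _ in range(iters):
        # Each comparison (w, l) contributes 1/(s_w + s_l) to both
        # participants' denominators in the MM update.
        denom = [0.0] * n_items
        for w, l in comparisons:
            d = 1.0 / (scores[w] + scores[l])
            denom[w] += d
            denom[l] += d
        new = [wins[i] / denom[i] if denom[i] > 0 else scores[i]
               for i in range(n_items)]
        total = sum(new)
        scores = [s / total for s in new]
    return scores

# Hypothetical usage: three answers; answer 0 wins most comparisons.
judgments = [(0, 1), (0, 1), (0, 2), (1, 2)]
scores = bradley_terry(3, judgments)
```

Aggregating ordinal comparisons this way, rather than asking peers for absolute numeric grades, sidesteps the score-calibration differences between graders that the abstract identifies as a weakness of plain peer assessment.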