Towards the Automatic Classification of Student Answers to Open-ended Questions
Main Author: Alvarado Mantecon, Jesus Gerardo
Other Authors: Zouaq, Amal
Format: Others
Language: en
Published: Université d'Ottawa / University of Ottawa, 2019
Subjects: Natural Language Processing; Machine Learning; Educational Data Mining
Online Access: http://hdl.handle.net/10393/39093 http://dx.doi.org/10.20381/ruor-23341
id: ndltd-uottawa.ca-oai-ruor.uottawa.ca-10393-39093
record_format: oai_dc
collection: NDLTD
language: en
format: Others
sources: NDLTD
topic: Natural Language Processing; Machine Learning; Educational Data Mining
description:
One of the main research challenges in the context of Massive Open Online Courses (MOOCs) is the effective automation of the evaluation of text-based assessments. Text-based assessments, such as essay writing, have been shown to be better indicators of a higher level of understanding than machine-scored assessments (e.g., multiple-choice questions). Nonetheless, due to the rapid growth of MOOCs, text-based evaluation has become a difficult task for human markers, creating the need for automated grading systems.
In this thesis, we focus on the automated short answer grading (ASAG) task, which automatically classifies natural language answers to open-ended questions as correct or incorrect. We propose an ensemble supervised machine learning approach that relies on two types of classifiers: a response-based classifier, which centers on feature extraction from the available responses, and a reference-based classifier, which considers the relationships between responses, model answers, and questions.
For each classifier, we explored a set of features based on words and entities. For the response-based classifier, we tested and compared five features: traditional n-gram models; entity URIs (Uniform Resource Identifiers) and entity mentions, both extracted using a semantic annotation API; entity mention embeddings based on GloVe; and entity URI embeddings extracted from Wikipedia. For the reference-based classifier, we explored fourteen features: the cosine similarity between sentence embeddings of student answers and model answers; the number of overlapping elements (words, entity URIs, or entity mentions) between student answers and model answers or the question text; the Jaccard similarity coefficient between student answers and model answers or the question text (based on words, entity URIs, or entity mentions); and a sentence embedding representation. Two of these similarity features are sketched below.
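As a rough illustration of the similarity features just listed, here is a minimal sketch, assuming NumPy and pre-computed sentence-embedding vectors; all names are illustrative, and this is not the thesis code:

```python
import numpy as np

def cosine_similarity(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine similarity between two sentence-embedding vectors."""
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    return float(np.dot(u, v) / denom) if denom else 0.0

def jaccard(a: set, b: set) -> float:
    """Jaccard coefficient over word (or entity URI / mention) sets."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

# Toy example: word-level overlap between a student and a model answer.
student = set("gravity pulls the ball towards the earth".split())
model = set("the ball falls because gravity attracts it".split())
print(jaccard(student, model))   # Jaccard similarity feature
print(len(student & model))      # overlapping-elements count feature

# Stand-ins for real sentence embeddings (e.g., 300-dimensional vectors).
emb_student, emb_model = np.random.rand(300), np.random.rand(300)
print(cosine_similarity(emb_student, emb_model))
```

The same two functions apply unchanged to entity URI or entity mention sets, which is how one feature template yields several of the fourteen features.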
We evaluated our classifiers on three datasets, two of which belong to the SemEval ASAG competition (Dzikovska et al., 2013). Our results show that, in general, reference-based features perform much better than response-based features in terms of accuracy and macro-averaged F1-score. Within the reference-based approach, we observe that the use of the S6 embedding representation, which considers the question text, the student answer, and the model answer, generated the best-performing models. Nonetheless, combining it with the other similarity features helped build more accurate classifiers. As for response-based classifiers, models based on traditional n-gram features remained the best.
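For reference, the two reported metrics can be computed as follows; this is a generic scikit-learn sketch with toy labels, not results from the thesis:

```python
from sklearn.metrics import accuracy_score, f1_score

y_true = [1, 0, 1, 1, 0]  # 1 = correct, 0 = incorrect (toy labels)
y_pred = [1, 0, 0, 1, 0]

print(accuracy_score(y_true, y_pred))             # overall accuracy
print(f1_score(y_true, y_pred, average="macro"))  # macro-averaged F1
```

Macro averaging weights the correct and incorrect classes equally, which matters when a dataset is dominated by one class.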
Finally, we combined our best reference-based and response-based classifiers using an ensemble learning model. Our ensemble classifiers combining both approaches achieved the best results on one of the evaluation datasets but underperformed on the remaining two. We also compared our best two classifiers with some of the main state-of-the-art results from the SemEval competition. Our final embedding-based meta-classifier outperformed the top-ranking result on the SemEval Beetle dataset, and our top classifier on SemEval SciEntsBank, trained on reference-based features, ranked second.
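A minimal sketch of such a combination, assuming scikit-learn's StackingClassifier and toy features; the base models and meta-classifier shown here are placeholders, not the configuration used in the thesis:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.random((40, 6))           # toy matrix of response- and reference-based features
y = rng.integers(0, 2, size=40)   # toy correct/incorrect labels

# One base model stands in for the response-based classifier, the other for
# the reference-based one; a meta-classifier combines their predictions.
# For brevity both base models see all columns here; in the design described
# above, each classifier is trained on its own feature set.
ensemble = StackingClassifier(
    estimators=[
        ("response_based", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("reference_based", LogisticRegression(max_iter=1000)),
    ],
    final_estimator=LogisticRegression(),  # meta-classifier over base predictions
)
ensemble.fit(X, y)
print(ensemble.predict(X[:3]))
```

Stacking fits the meta-classifier on cross-validated predictions of the base models, so it can learn when to trust each approach rather than simply averaging them.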
In conclusion, the reference-based approach, powered mainly by sentence-level embeddings and other similarity features, proved to generate the most effective models on two of the three datasets, and the ensemble model performed best on the SemEval Beetle dataset.
author2: Zouaq, Amal
author: Alvarado Mantecon, Jesus Gerardo
title: Towards the Automatic Classification of Student Answers to Open-ended Questions
publisher: Université d'Ottawa / University of Ottawa
publishDate: 2019
url: http://hdl.handle.net/10393/39093 http://dx.doi.org/10.20381/ruor-23341