Matching constructed answers for e-assessment


Bibliographic Details
Main Author: Tselonis, Christos
Published: University of Manchester 2008
Subjects:
Online Access:http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.492126
Description
Summary: The usage of e-learning in education has been increasing, and significant research has been carried out in e-assessment in particular. However, most current solutions focus on question types that require selection between alternative answers. Answers requiring active construction are used much less frequently, and computer support for them is usually limited to restricted knowledge domains. Here, the automated evaluation of constructed answers is investigated with the aim of producing useful information. Such answer types include mathematical expressions, computer programs and diagrams, the last being the main focus of this work. A generic approach is followed: the models of specimen and candidate answers, and the process of matching them against each other, depend neither on the knowledge domain of the question nor on the type of the answer. Consequently, any type of diagram, and potentially any type of constructed answer, may be modelled and matched using the proposed methods. The trade-off between universal applicability and accuracy is investigated; the marks produced by a generic automated system do not agree with human-awarded marks as closely as those generated by systems with inherently limited scope. However, the information produced may serve as a check on marks awarded by a human, or may be used to cluster a large number of answers by similarity, considerably reducing repeated work on the human's part. Another practical application, to formative assessment, is investigated in detail: the modelling and matching methods are integrated into a diagram editor to provide real-time, interactive, textual and visual feedback. The tool was put to the test by real users on four occasions, with positive results. This work demonstrates that a single implementation handling constructed answers from multiple domains is not only possible but also useful. Its applicability is constrained only by the extent of automation, not by the type of answer or the style of question, a conclusion which may encourage the wider uptake of e-assessment.
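The domain-independent matching described in the summary can be illustrated with a minimal sketch. This is a hypothetical illustration, not the algorithm from the thesis: here each constructed answer is modelled as a set of labelled elements plus a set of relations between them, and a candidate is scored against a specimen by set overlap (Jaccard similarity), with the element names, relation triples and equal weighting all being assumptions for the example.

```python
# Hypothetical sketch of domain-independent answer matching (illustration only,
# not the thesis's actual method): an answer is a dict with a set of labelled
# 'elements' and a set of 'relations' (label, kind, label) triples.

def jaccard(a, b):
    """Overlap of two sets: |a & b| / |a | b| (1.0 when both are empty)."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def match(specimen, candidate):
    """Score a candidate answer against a specimen answer in [0, 1]."""
    e = jaccard(specimen["elements"], candidate["elements"])
    r = jaccard(specimen["relations"], candidate["relations"])
    return 0.5 * e + 0.5 * r  # equal weighting of elements and relations (arbitrary)

# Example: a simple architecture diagram, reduced to elements and relations.
specimen = {
    "elements": {"Client", "Server", "Database"},
    "relations": {("Client", "calls", "Server"),
                  ("Server", "queries", "Database")},
}
candidate = {
    "elements": {"Client", "Server"},
    "relations": {("Client", "calls", "Server")},
}

score = match(specimen, candidate)  # elements 2/3, relations 1/2
print(round(score, 3))  # → 0.583
```

Because the model is just sets of labels and triples, nothing in the scoring depends on the knowledge domain; the same function could compare two diagrams, two expression trees, or two program outlines once they are reduced to this form. Pairwise scores like this could also drive the clustering of many answers by similarity that the summary mentions.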