Ontology validation & utilisation for personalised feedback in education

Bibliographic Details
Main Author: Demaidi, Mona Nabil
Published: Birmingham City University 2018
Online Access: https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.753290
Description
Summary: Virtual Learning Environments provide teachers with a web-based platform to create different types of feedback, which vary in the level of detail given in the feedback content. Types of feedback range from a simple indication that an answer is correct or incorrect to a detailed explanation of why the correct answer is correct and why the incorrect answer is incorrect. However, these environments usually follow a ‘one size fits all’ approach and provide all students with the same type of feedback, regardless of the students’ individual characteristics and the characteristics of the assessment questions. This approach is likely to affect students’ performance and learning gain negatively.

Several personalised feedback frameworks have been proposed which adapt the different types of feedback based on the student characteristics and/or the assessment question characteristics. These frameworks have three drawbacks: firstly, creating the different types of feedback is a time-consuming process, as the types of feedback are either hard-coded or auto-generated from a restricted set of solutions created by the teacher or a domain expert; secondly, they are domain dependent and cannot be used to auto-generate feedback across different educational domains; thirdly, they have not attempted any integration which takes into consideration both the characteristics of the assessment questions and the characteristics of the student.

This thesis contributes to research on personalised feedback frameworks by proposing a novel generic system called the Ontology-based Personalised Feedback Generator (OntoPeFeGe). OntoPeFeGe has three aims: firstly, it uses any pre-existing domain ontology (a knowledge representation of the educational domain) to auto-generate assessment questions with different characteristics, in particular questions aimed at assessing students at different levels of Bloom’s taxonomy¹; secondly, it associates each auto-generated question with specialised, domain-independent types of feedback; thirdly, it provides students with personalised feedback which adapts the types of feedback based on the student and the assessment question characteristics. OntoPeFeGe integrates the student’s characteristics, the assessment question’s characteristics, and the personalised feedback for the first time.

The experimental results of applying OntoPeFeGe in a real educational environment revealed that the personalised feedback particularly improved the performance of students with low initial background knowledge. Moreover, the personalised feedback significantly improved students’ learning gain on questions designed to assess students at high levels of Bloom’s taxonomy. In addition, OntoPeFeGe is the first prototype to quantitatively analyse the quality of auto-generated questions and tests, and to provide question design guidance for developers and researchers working in the field of question generators.
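The record describes OntoPeFeGe only at this abstract level; no implementation detail is given. The following minimal Python sketch is therefore an illustration of the general idea rather than the thesis’s actual algorithm: it generates a knowledge-level question from a toy set of ontology triples, attaches increasingly detailed, domain-independent feedback templates, and selects a feedback type from the student’s prior knowledge and the question’s Bloom level. The triple set, feedback labels, and selection rule are assumptions made for the example.

    from dataclasses import dataclass
    from typing import Dict, List

    # Toy domain ontology expressed as (subject, predicate, object) triples.
    # OntoPeFeGe reads a pre-existing domain ontology; this hard-coded list
    # stands in for one purely for illustration.
    TRIPLES = [
        ("Stack", "is-a", "DataStructure"),
        ("Queue", "is-a", "DataStructure"),
        ("BubbleSort", "is-a", "Algorithm"),
        ("Stack", "hasProperty", "LIFO ordering"),
    ]

    @dataclass
    class Question:
        stem: str
        correct: str
        distractors: List[str]
        bloom_level: str            # "knowledge", "comprehension", ...
        feedback: Dict[str, str]    # feedback type -> feedback text

    def generate_knowledge_question(concept: str) -> Question:
        """Build a simple knowledge-level question from the is-a triples."""
        parent = next(o for s, p, o in TRIPLES if s == concept and p == "is-a")
        # Distractors: concepts whose parent class differs from the answer's.
        distractors = [s for s, p, o in TRIPLES if p == "is-a" and o != parent]
        props = [o for s, p, o in TRIPLES if s == concept and p == "hasProperty"]
        detail = f" For example, a {concept} has {props[0]}." if props else ""
        return Question(
            stem=f"Which of the following is a {parent}?",
            correct=concept,
            distractors=distractors,
            bloom_level="knowledge",
            feedback={
                # Increasingly detailed, domain-independent feedback templates.
                "verification": "Correct / incorrect.",
                "correct_response": f"The correct answer is {concept}.",
                "elaborated": f"{concept} is a {parent}.{detail}",
            },
        )

    def select_feedback(question: Question, prior_knowledge: float) -> str:
        """Pick a feedback type from student and question characteristics.
        Rule of thumb used here (an assumption): weaker students and higher
        Bloom levels receive more detailed feedback."""
        high_level = question.bloom_level in {"analysis", "synthesis", "evaluation"}
        if prior_knowledge < 0.5 or high_level:
            return question.feedback["elaborated"]
        return question.feedback["correct_response"]

    if __name__ == "__main__":
        q = generate_knowledge_question("Stack")
        print(q.stem, q.distractors)
        print(select_feedback(q, prior_knowledge=0.3))

In this toy run, a student with low prior knowledge (0.3) receives the elaborated feedback rather than the bare correct-response template, which mirrors the adaptation idea described in the abstract without claiming to reproduce the thesis’s rules.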
OntoPeFeGe could be applied to any educational field captured in an ontology. However, assessing how suitable the ontology is for generating questions and feedback, and how well it represents the subject domain of interest, is a prerequisite to using the ontology in OntoPeFeGe. Therefore, this thesis also presents a novel method termed the Terminological ONtology Evaluator (TONE), which uses an educational corpus (e.g., textbooks and lecture slides) to evaluate domain ontologies. TONE has been evaluated experimentally, showing its potential as an evaluation method for educational ontologies.

¹ Bloom’s taxonomy categorises assessment questions into six major levels, arranged in hierarchical order according to the complexity of the cognitive process involved: knowledge, comprehension, application, analysis, synthesis, and evaluation.
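The abstract does not state TONE’s metrics. The sketch below only conveys the general shape of corpus-based terminological evaluation: it measures how many of the ontology’s class labels occur in the educational corpus, and how many of the corpus’s most frequent words the ontology covers. The function name, single-word matching, and both formulas are assumptions for illustration, not the evaluation method defined in the thesis.

    import re
    from collections import Counter
    from typing import Iterable, List, Tuple

    def tone_style_coverage(ontology_terms: Iterable[str],
                            corpus_texts: List[str],
                            top_n: int = 50) -> Tuple[float, float]:
        """Compare an ontology's terminology with an educational corpus.

        Returns two rough scores:
          * coverage of ontology term labels by the corpus (how many labels
            actually occur in the teaching material), and
          * coverage of the corpus's top_n most frequent words by the ontology.
        Both formulas are simplifying assumptions, not the TONE metrics.
        """
        corpus_tokens = Counter(
            tok for text in corpus_texts
            for tok in re.findall(r"[a-z]+", text.lower())
        )
        terms = {t.lower() for t in ontology_terms}

        in_corpus = sum(1 for t in terms if corpus_tokens[t] > 0)
        term_coverage = in_corpus / len(terms) if terms else 0.0

        frequent = {w for w, _ in corpus_tokens.most_common(top_n)}
        corpus_coverage = len(frequent & terms) / len(frequent) if frequent else 0.0
        return term_coverage, corpus_coverage

    # Toy example: four class labels vs. one fragment of lecture notes.
    ontology = ["Stack", "Queue", "Heap", "Recursion"]
    corpus = ["A stack is a LIFO structure; a queue is FIFO. "
              "Recursion means a function calls itself."]
    print(tone_style_coverage(ontology, corpus, top_n=10))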