Enhancing data pipelines for forecasting student performance: integrating feature selection with cross-validation
Abstract: Educators seek to harness knowledge from educational corpora to improve student performance outcomes. Although prior studies have compared the efficacy of data mining methods (DMMs) in pipelines for forecasting student success, less work has focused on identifying a set of relevant features prior to model development and quantifying the stability of feature selection techniques. Pinpointing a subset of pertinent features can (1) reduce the number of variables that need to be managed by stakeholders, (2) make “black-box” algorithms more interpretable, and (3) provide greater guidance for faculty to implement targeted interventions. To that end, we introduce a methodology integrating feature selection with cross-validation and rank each feature on subsets of the training corpus. This modified pipeline was applied to forecast the performance of 3225 students in a baccalaureate science course using a set of 57 features, four DMMs, and four filter feature selection techniques. Correlation Attribute Evaluation (CAE) and Fisher’s Scoring Algorithm (FSA) achieved significantly higher Area Under the Curve (AUC) values for logistic regression (LR) and elastic net regression (GLMNET), compared to when this pipeline step was omitted. Relief Attribute Evaluation (RAE) was highly unstable and produced models with the poorest prediction performance. Borda’s method identified grade point average, number of credits taken, and performance on concept inventory assessments as the primary factors impacting predictions of student performance. We discuss the benefits of this approach when developing data pipelines for predictive modeling in undergraduate settings that are more interpretable and actionable for faculty and stakeholders.
Main Authors: Roberto Bertolini (Department of Applied Mathematics and Statistics, Stony Brook University), Stephen J. Finch (Department of Applied Mathematics and Statistics, Stony Brook University), Ross H. Nehm (Department of Ecology and Evolution, Program in Science Education, Stony Brook University)
Format: Article
Language: English
Published: SpringerOpen, 2021-08-01
Series: International Journal of Educational Technology in Higher Education
ISSN: 2365-9440
Subjects: Data pipeline; Feature selection; Cross-validation; Data mining; Introductory biology
Online Access: https://doi.org/10.1186/s41239-021-00279-6
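The methodology described in the abstract, re-running a filter feature-selection step inside each cross-validation fold and aggregating the per-fold feature rankings with Borda's method, can be illustrated with a short sketch. The code below is not the authors' implementation; it is a minimal Python/scikit-learn approximation in which a correlation-based filter (loosely analogous to CAE) is re-fit on each training fold, a logistic regression model is scored by AUC on the held-out fold, and Borda points accumulate an overall feature ranking. The synthetic data, the 10-fold split, and the TOP_K cut-off are illustrative assumptions, not values from the paper.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold

# Stand-in for the student corpus (the paper uses 3225 students and 57 features).
X, y = make_classification(n_samples=1000, n_features=57, n_informative=10,
                           random_state=0)
n_features = X.shape[1]
TOP_K = 15                           # hypothetical number of features to retain

borda_points = np.zeros(n_features)  # aggregated Borda scores across folds
fold_aucs = []

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
for train_idx, test_idx in cv.split(X, y):
    X_tr, y_tr = X[train_idx], y[train_idx]
    X_te, y_te = X[test_idx], y[test_idx]

    # Filter step on the training fold only: score each feature by its absolute
    # correlation with the outcome (loosely analogous to CAE), then rank.
    scores = np.abs([np.corrcoef(X_tr[:, j], y_tr)[0, 1] for j in range(n_features)])
    ranking = np.argsort(-scores)    # feature indices, best first

    # Borda count: the feature ranked r-th (0-based) earns (n_features - r) points.
    for r, j in enumerate(ranking):
        borda_points[j] += n_features - r

    # Train on the top-ranked features and evaluate AUC on the held-out fold.
    selected = ranking[:TOP_K]
    model = LogisticRegression(max_iter=1000).fit(X_tr[:, selected], y_tr)
    probs = model.predict_proba(X_te[:, selected])[:, 1]
    fold_aucs.append(roc_auc_score(y_te, probs))

print("Mean AUC with per-fold feature selection:", round(float(np.mean(fold_aucs)), 3))
print("Top features by aggregated Borda score:", np.argsort(-borda_points)[:TOP_K])
```

Running the filter inside every training fold, rather than once on the full corpus, is what allows the stability of the selected features to be examined across folds and keeps information from the held-out students from leaking into the selection step.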