Summary: The introduction of computerized formative assessments in schools has enabled the monitoring of students’ progress with more flexible test schedules. Currently, the timing and frequency of computerized formative assessments are determined by districts’ and school authorities’ agreements with testing organizations, teachers’ judgments of students’ progress, and grade-level testing guidelines recommended by researchers. However, these practices often result in rigid test schedules that disregard the pace at which students acquire knowledge. Furthermore, students are likely to lose instructional time to frequent testing. To administer computerized formative assessments efficiently, teachers should be provided with systematic guidance on finding an optimal testing schedule based on each student’s progress. In this study, we aim to demonstrate the utility of intelligent recommender systems (IRSs) for generating individualized test schedules for students. Using real data from a large sample of students in grade 2 (n = 355,078) and grade 4 (n = 390,336) who completed the Star Math assessment during the 2017–2018 school year, we developed an IRS and evaluated its performance in balancing data quality against testing frequency. Results indicated that, compared with standard practice, the IRS recommended fewer test administrations for both grade levels. Further, the IRS maximized the score difference from one test administration to the next by eliminating test administrations in which students’ scores did not change significantly. Implications for generating personalized schedules to monitor student progress and recommendations for future research are discussed.