Policy Evidence by Design: How International Large-Scale Assessments Influence Repetition Rates

Bibliographic Details
Main Author: Cardoso, Manuel Enrique
Language: English
Published: 2022
Subjects: Educational tests and measurements; Education and state; School management and organization; International education; Grade repetition; Programme for International Student Assessment; Unesco
Online Access: https://doi.org/10.7916/d8-xs6b-bp55
Abstract

Policy Evidence by Design: International Large-Scale Assessments and Grade Repetition. Links between international large-scale assessment (ILSA) methodologies, international organization (IO) ideologies, and education policies are not well understood. Framed by statistical constructivism, this article describes two interrelated phenomena. First, OECD/PISA and UNESCO/TERCE documents show how IOs' doctrines about the value of education, grounded in either Human Capital Theory or Human Rights, shape the design of the ILSAs they support. Second, quantitative analyses for four Latin American countries show that differently designed ILSAs disagree on the effectiveness of a specific policy, namely grade retention: PISA's achievement gap between repeaters and non-repeaters is double TERCE's. This matters and warrants further research: divergent empirical results could incentivize different education policies, reinforce IOs' initial policy biases, and give countries perverse incentives to modulate retention rates or to join an ILSA for spurious motivations. In summary, ILSA designs, shaped by IOs' educational doctrines, yield different data, potentially inspiring divergent global policy directives and national decisions.

When ILSAs Met Policy: Evolving Discourses on Grade Repetition. This study explores the ordinalization and scientization of policy discourse, focusing on the case of grade retention in publications by OECD's PISA and UNESCO's ERCE (2007-2017), from a sociology-of-quantification perspective. While prior research shows these ILSAs yield divergent data regarding grade retention's effectiveness, this study finds similarities in their critical discourse on grade repetition. Genre analysis shows that both ILSAs structure their discourse on grade repetition in similar ways and use references solely to critique it, presenting a partial view of the scholarly landscape. However, horizontal comparisons also reveal differences across ILSAs in the use of ordinalization (e.g., rankings) in charts, as well as in the extent to which their policy discourse embraces scientization. The ILSAs converge in singling out grade repetition as the policy most strongly associated with low performance; this convergence should be interpreted in the context of one key similarity in their design.

Policymaking to the Test? How ILSAs Influence Repetition Rates. Do international large-scale assessments influence education policy, and if so, how: through scripts, lessons, or incentives? For some, all three mechanisms produce similar outcomes. For others, different assessment data, shaped by different designs and mediated by international organizations' policy directives, prompt different policy decisions. Under the policy scripts hypothesis, inspired by world society theory (WST), participation in these assessments may be linked to lower repetition rates. Under the policy lessons and policy incentives hypotheses, inspired respectively by educational effectiveness research (EER) and the sociology of quantification (particularly the notion of retroaction), assessments' comparison strategies (age-based vs. grade-based) influence repetition in participating countries. Fixed-effects panel regression models of eighteen Latin American countries (1992-2017) show that, controlling for other factors, participation in assessments is associated with changing repetition rates in primary and secondary education. The findings show statistically significant differences between some assessment types. The conclusions spur new questions, delineating a future agenda.
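The third study's method is a fixed-effects (within) panel regression. The sketch below illustrates that estimator on synthetic data: a country-by-year panel is demeaned by country, and OLS on the demeaned data recovers the within-country coefficient. The variable names (`ilsa_participation`, `repetition_rate`), the simulated effect size, and the panel layout are illustrative assumptions, not the dissertation's actual data, variables, or model.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Synthetic panel: 18 countries x 26 years (1992-2017), illustrative only.
countries = [f"C{i:02d}" for i in range(18)]
years = list(range(1992, 2018))
df = pd.DataFrame(
    [(c, y) for c in countries for y in years], columns=["country", "year"]
)

# Hypothetical regressor and outcome with country fixed effects.
df["ilsa_participation"] = (rng.random(len(df)) < 0.4).astype(float)
country_effect = dict(zip(countries, rng.normal(10.0, 2.0, len(countries))))
df["repetition_rate"] = (
    df["country"].map(country_effect)
    - 1.5 * df["ilsa_participation"]  # assumed "true" within effect
    + rng.normal(0.0, 1.0, len(df))
)

# Within (fixed-effects) transformation: demean each variable by country.
cols = ["repetition_rate", "ilsa_participation"]
demeaned = df[cols] - df.groupby("country")[cols].transform("mean")

# OLS slope on the demeaned data = the fixed-effects estimate.
x = demeaned["ilsa_participation"].to_numpy()
y = demeaned["repetition_rate"].to_numpy()
beta = (x @ y) / (x @ x)
print(f"Estimated within-country effect of participation: {beta:.2f}")
```

Demeaning by country absorbs all time-invariant country characteristics, which is why the estimate reflects only within-country variation over time; in practice one would add year effects, controls, and clustered standard errors (e.g., with `linearmodels.PanelOLS`).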