Summary: Doctor Educationis

The study investigates the fundamental validity issues that can affect the DR Congo English state examination, a national exit test administered to final-year high school students for certification. It aims to generate an understanding of the potential issues that affect the construct validity of a test within an epistemological stance that posits a strong relationship between test construct and test context.
The study draws its theoretical underpinning from three theories: validity theory, which provides the theoretical ground for judging the quality of tests used to assess students' reading abilities; construction-integration theory, which explains how the texts used in reading assessments are processed and understood by examinees; and strategic competence theory, which explains how examinees deploy strategies to complete test tasks, and the extent to which these strategies tap into the reading construct.
Furthermore, the study proposes a reading model that foregrounds the social context of testing, thereby conceptualizing reading as both a cognitive and a social process. The study adopts an exploratory research design using both qualitative and quantitative data, and it draws on protocol analysis and content analysis methodologies. The former provides an understanding of the cognitive processes that mediate the reading construct and test performance, and is used to explore the different strategies examinees deploy to answer the English state examination (henceforth ESE) test questions; the latter examines the content of the different ESE papers to identify the textual and item features that potentially affect examinees' performance on the ESE tasks. As instruments, the study uses a concurrent strategies questionnaire administered to 496 student-participants, a contextual
questionnaire administered to 26 student-participants, a contextual questionnaire administered to 27 teacher-participants, and eight tests administered to 496 student-participants. The findings indicate that the ESE appears ill-suited to its context: the majority of ESE test items target careful rather than expeditious reading, on the one hand, and reading at the global rather than the local level, on the other. The findings also indicate that the ESE tasks hardly take account of text structure and the underlying cognitive demands appropriate to the text types. Moreover, the ESE fails to include other critical aspects of the reading construct. Finally, the findings indicate that the ESE constructors may not be capable of constructing an ESE with five functioning distractors, as expected; in addition, the inclusion of the implicit option 6 overlaps with the conceptual meaning of this option. The process of the present study has generated insights that can advance our understanding of the construct validity of reading tests: (a) validity is an evolving and context-dependent concept, (b) the reading construct cannot be examined outside the actual context of reading activity, (c) the elimination of distractors can sometimes be a construct-relevant strategy, (d) construct underrepresentation is a context-dependent concept, and (e) a reading test cannot be valid in all contexts. The suggested proposal for the improvement of the ESE requires the Congolese government, through its Department of Education, to (a) always conduct validation studies to justify the use of the ESE, (b) always consider the actual context of reading activity while developing the ESE, (c) revisit the meanings and interpretations of ESE scores, (d) ensure the appropriateness of tasks
to be included in the ESE, (e) ensure the construct representativeness of the ESE tasks, (f) revisit the number of questions to be included in the ESE, (g) avoid bias in the ESE texts in order to ensure fairness, (h) diversify the genres of ESE texts, (i) ensure the coherence of ESE texts through the use of transitions and cohesive devices, (j) ensure that the order of test questions is in alignment with the order of text information, (k) revisit the structure and length of the texts to be included in the ESE, (l) revisit the number of alternatives to be included in the ESE, and (m) reconsider the use of the implicit alternative 6.