Differences Across Levels in the Language of Agency and Ability in Rating Scales for Large-Scale Second Language Writing Assessments

While large-scale language and writing assessments benefit from a wealth of literature on the reliability and validity of specific tests and rating procedures, there is comparatively less literature that explores the specific language of second language writing rubrics. This paper provides an analysis of the language of performance descriptors for the public versions of the TOEFL and IELTS writing assessment rubrics, with a focus on linguistic agency encoded by agentive verbs and language of ability encoded by the modal verbs "can" and "cannot". While the IELTS rubrics feature more agentive verbs than the TOEFL rubrics, both pairs of rubrics feature uneven syntax across the band or score descriptors, with either more agentive verbs for the highest scores, more nominalization for the lowest scores, or language of ability exclusively in the lowest scores. These patterns mirror similar patterns in the language of college-level classroom-based writing rubrics, but they differ from patterns seen in performance descriptors for some large-scale admissions tests. It is argued that the lack of syntactic congruity across performance descriptors in the IELTS and TOEFL rubrics may reflect a bias in how actual student performances at different levels are characterized.

Bibliographic Details
Main Author: Anderson, Salena Sampson (Valparaiso University)
Format: Article
Language: English
Published: Sciendo, 2017-12-01
Series: Studia Anglica Posnaniensia
ISSN: 0081-6272, 2082-5102
Subjects: rating scales; second language writing; writing assessment; performance descriptors; linguistic agency
Online Access: https://doi.org/10.1515/stap-2017-0006