Error deduction and descriptors – A comparison of two methods of translation test assessment

Bibliographic Details
Main Authors: Barry Turner, Miranda Lai, Neng Huang
Format: Article
Language: English
Published: Western Sydney University, 2010-07-01
Series: Translation and Interpreting: The International Journal of Translation and Interpreting Research
Online Access: http://www.trans-int.org/index.php/transint/article/view/42/66
Description
Summary: This paper examines two assessment methodologies used for large-scale translating and interpreting accreditation testing: error analysis/deduction and descriptors. A report by the Royal Melbourne Institute of Technology (RMIT University) (Turner and Ozolins, 2007) showed that the UK Institute of Linguists and the American Translators Association are among the international testing bodies that have moved, or are moving, towards using descriptors or combining negative marking with descriptors. This paper explores whether the Australian National Accreditation Authority for Translators and Interpreters (NAATI) could adopt a descriptor approach to assessment without compromising the reliability or accountability of its public examination system. The NAATI assessment system is used as a benchmark against which to compare assessment outcomes from the descriptor-based translation component of the UK Institute of Linguists Diploma in Public Service Interpreting (DPSI). The most significant finding of the research is a high correlation between assessment outcomes in the two systems, indicating that a descriptor system might be as reliable and accountable as the current NAATI system.
ISSN:1836-9324