Evidence of Construct-Related Validity for Assessment Centers: More Pieces of the Inferential Pie


Bibliographic Details
Main Author: Archuleta, Kathryn
Other Authors: Arthur, Jr., Winfred
Format: Others
Language: English
Published: 2010
Subjects:
Online Access:http://hdl.handle.net/1969.1/ETD-TAMU-2009-05-579
id ndltd-tamu.edu-oai-repository.tamu.edu-1969.1-ETD-TAMU-2009-05-579
record_format oai_dc
spelling ndltd-tamu.edu-oai-repository.tamu.edu-1969.1-ETD-TAMU-2009-05-579 2013-01-08T10:41:07Z
title Evidence of Construct-Related Validity for Assessment Centers: More Pieces of the Inferential Pie
author Archuleta, Kathryn
topic assessment center; construct validity; evaluation method; opportunity to behave; personality
author2 Arthur, Jr., Winfred
dates 2010-07-15T00:12:47Z; 2010-07-23T21:44:11Z; 2009-05 (May 2009); 2010-07-14
format Book; Thesis; Electronic Dissertation; text; application/pdf
url http://hdl.handle.net/1969.1/ETD-TAMU-2009-05-579
language eng
collection NDLTD
language English
format Others
sources NDLTD
topic assessment center
construct validity
evaluation method
opportunity to behave
personality
spellingShingle assessment center
construct validity
evaluation method
opportunity to behave
personality
Archuleta, Kathryn
Evidence of Construct-Related Validity for Assessment Centers: More Pieces of the Inferential Pie
description Much research has been conducted on the construct-related validity of assessment centers; however, a definitive conclusion has yet to be drawn. The central question of this debate is whether assessment centers measure the dimensions they are designed to measure. The present study attempted to provide further evidence toward improving construct-related validity. The first hypothesis examined whether opportunity to observe and opportunity to behave influenced discriminant and convergent validity. The second hypothesis addressed the debate over evaluation method, examining which method, within-exercise or within-dimension, yielded more favorable internal construct-related validity evidence. The third hypothesis explored the call for exercise scoring in assessment centers, comparing the criterion-related validity of exercise versus dimension scores within the same assessment center. Finally, the fourth objective examined the relationship between the stability of the dimensions and internal construct-related validity, specifically convergent validity evidence. A developmental assessment center used in two applied settings supplied the data. Two administrations of the assessment center were conducted for low- to mid-level managers in a state agency (N = 31). Five administrations were conducted in a professional graduate school of public administration that prepares students for leadership and managerial positions in government and public service (N = 108). The seven administrations yielded a total sample of 139 participants. Analysis of multitrait-multimethod (MTMM) matrices revealed that, as hypothesized, a lack of opportunity to behave within exercises, operationalized using behavior counts, yielded poor discriminant validity. Assessor ratings of opportunity to observe and behave did not produce the hypothesized results.
Consistent with the second hypothesis, secondary assessors, who represented the within-dimension evaluation method, provided ratings that demonstrated better construct-related validity evidence than the ratings provided by primary assessors, who represented the within-exercise method. Correlation and regression analyses of the dimension/performance and exercise/performance relationships revealed neither dimensions nor exercises to be the better predictor of supervisor ratings of performance. Using MTMM, partial support was found for the fourth objective: dimensions that were more stable across exercises yielded better convergent validity evidence than dimensions that were more situationally specific; however, the differences were neither statistically significant nor large. Overall, the results of this study suggest that some areas of design and implementation can affect the construct-related validity of assessment centers, and that researchers should continue to search for ways to improve assessment center construct-related validity, but should also look for ways other than MTMM to assess validity.
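The MTMM logic referred to in the abstract can be sketched as follows. This is a minimal illustration, not the study's actual analysis: the ratings are simulated, and the counts of exercises and dimensions are arbitrary. Convergent evidence comes from same-dimension correlations across exercises (monotrait-heteromethod values), while poor discriminant evidence shows up as high different-dimension correlations within an exercise (heterotrait-monomethod values).

```python
import numpy as np

# Simulated assessor ratings: participants x exercises x dimensions.
rng = np.random.default_rng(0)
n, n_exercises, n_dims = 139, 3, 4
ratings = rng.normal(size=(n, n_exercises, n_dims))
# Add a participant-level trait signal per dimension so that the same
# dimension correlates across exercises (convergent validity).
ratings += rng.normal(size=(n, 1, n_dims))

# Flatten to columns indexed by (exercise, dimension) pairs and correlate.
flat = ratings.reshape(n, n_exercises * n_dims)
corr = np.corrcoef(flat, rowvar=False)

convergent, discriminant = [], []
for i in range(n_exercises * n_dims):
    for j in range(i + 1, n_exercises * n_dims):
        ex_i, dim_i = divmod(i, n_dims)
        ex_j, dim_j = divmod(j, n_dims)
        if dim_i == dim_j and ex_i != ex_j:
            convergent.append(corr[i, j])    # same dimension, different exercise
        elif dim_i != dim_j and ex_i == ex_j:
            discriminant.append(corr[i, j])  # different dimension, same exercise

# Favorable construct-related validity evidence requires the convergent
# (monotrait-heteromethod) mean to exceed the discriminant mean.
print(np.mean(convergent), np.mean(discriminant))
```

In typical assessment center data the pattern often reverses, with within-exercise (discriminant) correlations exceeding within-dimension (convergent) ones, which is the exercise-effect finding that motivates the debate summarized above.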
author2 Arthur, Jr., Winfred
author_facet Arthur, Jr., Winfred
Archuleta, Kathryn
author Archuleta, Kathryn
author_sort Archuleta, Kathryn
title Evidence of Construct-Related Validity for Assessment Centers: More Pieces of the Inferential Pie
title_short Evidence of Construct-Related Validity for Assessment Centers: More Pieces of the Inferential Pie
title_full Evidence of Construct-Related Validity for Assessment Centers: More Pieces of the Inferential Pie
title_fullStr Evidence of Construct-Related Validity for Assessment Centers: More Pieces of the Inferential Pie
title_full_unstemmed Evidence of Construct-Related Validity for Assessment Centers: More Pieces of the Inferential Pie
title_sort evidence of construct-related validity for assessment centers: more pieces of the inferential pie
publishDate 2010
url http://hdl.handle.net/1969.1/ETD-TAMU-2009-05-579
work_keys_str_mv AT archuletakathryn evidenceofconstructrelatedvalidityforassessmentcentersmorepiecesoftheinferentialpie
_version_ 1716504527993044992