Analysis of the Psychometric Properties of Two Different Concept-Map Assessment Tasks
Main Author:
Format: Others
Published: BYU ScholarsArchive, 2008
Subjects:
Online Access:
- https://scholarsarchive.byu.edu/etd/1352
- https://scholarsarchive.byu.edu/cgi/viewcontent.cgi?article=2351&context=etd
Summary: The ability to make sense of a wide array of stimuli presupposes the human tendency to organize information in a meaningful way. Efforts to assess the degree to which students organize information meaningfully have been hampered by several factors, including the idiosyncratic ways in which individuals represent their knowledge, whether in words or visually. Concept maps have been used by researchers and educators alike to help students understand the conceptual interrelationships within a subject domain. One concept-map assessment in particular, the construct-a-map task, has shown great promise in supporting reliable and valid inferences from student concept-map ratings.

For all its promise, however, the construct-a-map task presents several rating difficulties. One challenge in particular is that no published rubric accounts for the degree to which individual propositions are important to an understanding of the overall topic or theme of the map. This study examines the psychometric properties of two construct-a-map tasks designed to overcome this rating difficulty in part.

The reliability of the concept-map ratings was estimated using a fully crossed person-by-rater-by-occasion design. This design made it possible to use generalizability theory to identify and estimate the variance in the ratings contributed by each of the three factors, their interaction effects, and unexplained error. The criterion validity of the concept-map ratings was examined by computing Pearson correlations between concept-map and essay ratings and between concept-map and interview-transcript ratings.

The generalizability coefficients for student mean ratings were moderate to very high: .73 and .94 for the first concept-mapping task, and .74 and .87 for the second. A relatively large percentage of the rating variability was contributed by the object of measurement. Both tasks correlated highly with essay and interview ratings (.62 to .81).
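The generalizability analysis described above can be illustrated with a minimal sketch. This is not the study's own code, and it simplifies to a one-facet persons-by-raters crossed design (the study used a two-facet person-by-rater-by-occasion design); the ratings matrix below is hypothetical. Variance components are solved from ANOVA mean squares, and the relative G coefficient is the ratio of universe-score (person) variance to that variance plus relative error for the mean over raters:

```python
import numpy as np

def g_coefficient(ratings):
    """Relative generalizability coefficient for a fully crossed
    persons-by-raters design (one facet, illustrative only)."""
    ratings = np.asarray(ratings, dtype=float)
    n_p, n_r = ratings.shape
    grand = ratings.mean()

    # Sums of squares for persons, raters, and residual (interaction + error)
    ss_p = n_r * ((ratings.mean(axis=1) - grand) ** 2).sum()
    ss_r = n_p * ((ratings.mean(axis=0) - grand) ** 2).sum()
    ss_res = ((ratings - grand) ** 2).sum() - ss_p - ss_r

    # Mean squares, then expected-mean-square solutions for variance components
    ms_p = ss_p / (n_p - 1)
    ms_res = ss_res / ((n_p - 1) * (n_r - 1))
    var_p = max((ms_p - ms_res) / n_r, 0.0)  # universe-score (person) variance
    var_res = max(ms_res, 0.0)               # rater-by-person interaction + error

    # Relative G coefficient for a mean rating over n_r raters
    return var_p / (var_p + var_res / n_r)

# Hypothetical ratings: 4 students, each scored by 3 raters
scores = np.array([[3, 4, 3],
                   [5, 4, 5],
                   [2, 3, 2],
                   [4, 5, 4]])
print(round(g_coefficient(scores), 2))  # -> 0.9
```

As in the study's finding that the object of measurement (persons) contributed a large share of the rating variance, a high G coefficient here reflects person variance dominating rater disagreement.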