Abstract
Objective – This paper analyzes the design process for a toolkit for appraising emerging and established bibliographic reference generators and managers for a particular student population. Others looking to adapt or draw from the toolkit to meet the needs of users at their own institutions will benefit from this exploration of how one team developed and streamlined the process of assessment.
Methods – The authors conducted an extensive initial evaluation, using a checklist and a comprehensive rubric to review and select reference tools. This work was guided by a matrix of categories drawn from Marino (2012), Bates (2015), and other literature. As the tools were assessed, the toolkit's components were themselves evaluated and revised, based on evaluators' feedback and lessons learned during the testing process.
Results – Fifty-three tools were screened using a checklist covering features such as cost and supported referencing styles. Eighteen tools were then evaluated in depth with the comprehensive rubric, each reviewed by multiple researchers to minimize bias. From this secondary testing, tools were recommended for use within this environment. Ultimately, the process of creating an assessment toolkit allowed the researchers to develop a streamlined process for further testing. The toolkit includes a checklist to narrow the list of potential tools, a rubric for features, a rubric for qualitative criteria, and an instrument for scoring.
Conclusion – User needs and the campus environment are critical considerations in the selection of reference tools. For this project, researchers developed a comprehensive rubric and testing procedure to ensure the consistency and validity of the data. The streamlined process in turn enabled library staff to provide evidence-based recommendations for the manager or generator best suited to the needs of individual programs.