Summary: This poster presents the development and pilot-testing of an electronic survey evaluating behavior analysts' perceptions and use of Evidence-Based Practices (EBPs). The survey assessed demographic information, ratings of quality indicators (Horner et al., 2005) used in evaluating and choosing behavioral interventions, and resources for identifying EBPs. Participants were fourteen graduate trainees in a behavior analysis certification program. Test-retest intra-rater agreement, assessed across administrations approximately 2 weeks apart, varied considerably across survey items: overall exact agreement was 71%, while agreement within 1 rating point was 91%. The most highly rated quality indicators were clear descriptions of baseline and intervention conditions, measures of inter-observer agreement, and repeated measures of target behaviors. The lowest rated were group experimental designs, statistical analyses, and number of participants. Surprisingly, multiple studies with 20+ participants, integrated intervention packages, and written intervention manuals were rated lower than might be expected. The most frequently reported sources of EBPs were professional society websites, university courses, practitioner journals, and professional peer-reviewed journals; the least frequently reported were webinars by private entities, non-peer-reviewed journals, and government websites (e.g., What Works Clearinghouse). Participants identified time constraints, difficulty finding research relevant to their current situations, and the technical rather than practical nature of research as impediments to keeping current with EBPs.