Additional comparisons of randomization-test procedures for single-case multiple-baseline designs: Alternative effect types

A number of randomization statistical procedures have been developed to analyze the results from single-case multiple-baseline intervention investigations. In a previous simulation study, comparisons of the various procedures revealed distinct differences among them in their ability to detect immediate abrupt intervention effects of moderate size, with some procedures (typically those with randomized intervention start points) exhibiting power that was both respectable and superior to other procedures (typically those with single fixed intervention start points). In Investigation 1 of the present follow-up simulation study, we found that when the same randomization-test procedures were applied to either delayed abrupt or immediate gradual intervention effects: (1) the powers of all of the procedures were severely diminished; and (2) in contrast to the previous study's results, the single fixed intervention start-point procedures generally outperformed those with randomized intervention start points. In Investigation 2 we additionally demonstrated that if researchers are able to successfully anticipate the specific alternative effect types, it is possible for them to formulate adjusted versions of the original randomization-test procedures that can recapture substantial proportions of the lost powers.

Bibliographic Details
Main Authors: Levin, Joel R.; Ferron, John M.; Gafurov, Boris S.
Other Authors: University of Arizona
Language: en
Published: Pergamon-Elsevier Science Ltd, 2017
Subjects: Single-case intervention research; Multiple-baseline design; Randomization statistical tests; Alternative effect types
Online Access:http://hdl.handle.net/10150/625957
http://arizona.openrepository.com/arizona/handle/10150/625957
Format: Article
Published in: Journal of School Psychology, 2017, 63:13
ISSN: 0022-4405
DOI: 10.1016/j.jsp.2017.02.003
Publisher link: http://linkinghub.elsevier.com/retrieve/pii/S0022440517300171
Rights: © 2017 Society for the Study of School Psychology. Published by Elsevier Ltd. All rights reserved.
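
Illustrative note: the abstract's contrast between randomized and fixed intervention start points rests on the general logic of a start-point randomization test. The sketch below is a hypothetical illustration of that general logic for a multiple-baseline AB design, not the authors' specific procedures or simulation conditions: a combined test statistic is recomputed for every combination of eligible start points across cases, and the p-value is the proportion of combinations whose statistic is at least as extreme as the one for the start points actually assigned. All data, start-point sets, and function names are assumptions introduced here for illustration.

```python
"""
Minimal, hypothetical sketch of a start-point randomization test for a
single-case multiple-baseline AB design. Each case has a set of eligible
intervention start points, one of which is assumed to have been assigned
at random; the reference distribution enumerates every combination of
eligible start points across cases. Not the authors' exact procedures.
"""
from itertools import product

import numpy as np


def phase_difference(series, start):
    """B-phase mean (from `start` onward) minus A-phase (baseline) mean."""
    series = np.asarray(series, dtype=float)
    return series[start:].mean() - series[:start].mean()


def mb_randomization_test(series_list, eligible_starts, actual_starts):
    """One-tailed randomization p-value for an increase after intervention.

    series_list     : one outcome series per case
    eligible_starts : eligible intervention start points for each case
    actual_starts   : the start point randomly assigned to each case
    """
    observed = np.mean([phase_difference(y, s)
                        for y, s in zip(series_list, actual_starts)])
    reference = np.array([
        np.mean([phase_difference(y, s) for y, s in zip(series_list, combo)])
        for combo in product(*eligible_starts)
    ])
    # The observed assignment is one of the enumerated combinations, so the
    # p-value can never be smaller than 1 / (number of combinations).
    return observed, float((reference >= observed).mean())


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Three hypothetical cases, 20 observations each, with an immediate
    # abrupt shift of +2 at each case's (randomly assigned) start point.
    actual = [7, 10, 13]
    eligible = [[5, 6, 7, 8, 9], [8, 9, 10, 11, 12], [11, 12, 13, 14, 15]]
    data = []
    for start in actual:
        y = rng.normal(0.0, 1.0, 20)
        y[start:] += 2.0
        data.append(y)
    stat, p = mb_randomization_test(data, eligible, actual)
    print(f"observed statistic = {stat:.2f}, randomization p = {p:.3f}")
```

Because the statistic in this sketch is keyed to an immediate abrupt shift in level, a delayed or gradual effect would dilute the phase contrast, which is broadly consistent with the power losses the abstract reports; the adjusted procedures of Investigation 2 correspond, in this sketch, to swapping in a statistic matched to the anticipated effect type.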