Interaction effects on common measures of sensitivity: choice of measure, type I error, and power


Bibliographic Details
Main Authors: Cowan, N. (Author), Logie, R.H. (Author), Parra, M.A. (Author), Rhodes, S. (Author)
Format: Article
Language:English
Published: Springer New York LLC 2019
Subjects:
Online Access:View Fulltext in Publisher
LEADER 02498nam a2200421Ia 4500
001 10.3758-s13428-018-1081-0
008 220511s2019 CNT 000 0 und d
022 |a 1554-351X (ISSN) 
245 1 0 |a Interaction effects on common measures of sensitivity: choice of measure, type I error, and power 
260 0 |b Springer New York LLC  |c 2019 
856 |z View Fulltext in Publisher  |u https://doi.org/10.3758/s13428-018-1081-0 
520 3 |a Here we use simulation to examine previously unaddressed problems in the assessment of statistical interactions in detection and recognition tasks. The proportion of hits and false alarms made by an observer on such tasks is affected by both their sensitivity and bias, and numerous measures have been developed to separate out these two factors. Each of these measures makes different assumptions regarding the underlying process and different predictions as to how false-alarm and hit rates should covary. Previous simulations have shown that choice of an inappropriate measure can lead to inflated type I error rates, or reduced power, for main effects, provided there are differences in response bias between the conditions being compared. Interaction effects pose a particular problem in this context. We show that spurious interaction effects in analysis of variance can be produced, or true interactions missed, even in the absence of variation in bias. Additional simulations show that variation in bias complicates patterns of type I error and power further. This under-appreciated fact has the potential to greatly distort the assessment of interactions in detection and recognition experiments. We discuss steps researchers can take to reduce their chances of making such errors. © 2018, Psychonomic Society, Inc. 
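To make the abstract's core point concrete, the sketch below (not the authors' code; a minimal Python illustration under assumed parameters) simulates an equal-variance Gaussian observer in a 2x2 between-subjects design with a fixed criterion, so that true sensitivity effects are purely additive in d' and response bias never varies. It then tests the interaction contrast on two measures, d' and the hit rate minus false-alarm rate (H - F); the subject counts, trial counts, and d' values are hypothetical.

# Minimal sketch (not the authors' code): an additive effect on d' can look like
# an interaction when analyzed with a nonlinear measure such as H - F.
# Assumes an equal-variance Gaussian model, a 2x2 between-subjects design,
# a fixed criterion (no bias differences), and made-up parameter values.
import numpy as np
from scipy.stats import norm, t as t_dist

rng = np.random.default_rng(1)
n_subj, n_trials, n_sims, alpha = 20, 50, 2000, 0.05
true_dprime = {(0, 0): 1.0, (0, 1): 2.0, (1, 0): 2.0, (1, 1): 3.0}  # purely additive

def interaction_p(cell_scores):
    """Approximate p-value for the 2x2 interaction contrast (m11 - m12 - m21 + m22)."""
    means = {k: v.mean() for k, v in cell_scores.items()}
    varsum = sum(v.var(ddof=1) / len(v) for v in cell_scores.values())
    contrast = means[(0, 0)] - means[(0, 1)] - means[(1, 0)] + means[(1, 1)]
    df = 4 * (n_subj - 1)
    return 2 * t_dist.sf(abs(contrast) / np.sqrt(varsum), df)

false_positives = {"dprime": 0, "H_minus_F": 0}
for _ in range(n_sims):
    scores = {"dprime": {}, "H_minus_F": {}}
    for cell, d in true_dprime.items():
        h_rate = norm.cdf(d / 2)          # criterion fixed at 0: no bias variation
        f_rate = norm.cdf(-d / 2)
        h = rng.binomial(n_trials, h_rate, n_subj)
        f = rng.binomial(n_trials, f_rate, n_subj)
        H = (h + 0.5) / (n_trials + 1)    # log-linear correction avoids 0/1 rates
        F = (f + 0.5) / (n_trials + 1)
        scores["dprime"][cell] = norm.ppf(H) - norm.ppf(F)
        scores["H_minus_F"][cell] = H - F
    for m in false_positives:
        false_positives[m] += interaction_p(scores[m]) < alpha

for m, k in false_positives.items():
    print(f"{m}: spurious interaction rate = {k / n_sims:.3f}")

With these assumed values, the d' analysis should reject at roughly the nominal 5% rate, whereas H - F shows a clearly inflated rate, because H - F is a nonlinear (concave) function of d' and so turns an additive sensitivity pattern into unequal cell differences.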
650 0 4 |a analysis of variance 
650 0 4 |a article 
650 0 4 |a Bias 
650 0 4 |a Detection 
650 0 4 |a human 
650 0 4 |a human experiment 
650 0 4 |a Humans 
650 0 4 |a Interactions 
650 0 4 |a Power 
650 0 4 |a prediction 
650 0 4 |a probability 
650 0 4 |a Probability 
650 0 4 |a Recognition 
650 0 4 |a Recognition, Psychology 
650 0 4 |a scientist 
650 0 4 |a Sensitivity 
650 0 4 |a simulation 
650 0 4 |a statistical bias 
650 0 4 |a Type I error 
650 0 4 |a Type II error 
700 1 |a Cowan, N.  |e author 
700 1 |a Logie, R.H.  |e author 
700 1 |a Parra, M.A.  |e author 
700 1 |a Rhodes, S.  |e author 
773 |t Behavior Research Methods