Rating scales in Web surveys. A test of new drag-and-drop rating procedures


Bibliographic Details
Main Author: Kunz, Tanja
Format: Others
Language: German; English
Published: 2015
Online Access: https://tuprints.ulb.tu-darmstadt.de/5151/7/Kunz_2015_Rating_scales_in_web_surveys.pdf
Kunz, Tanja (2015): Rating scales in Web surveys. A test of new drag-and-drop rating procedures. Darmstadt, Technische Universität [Ph.D. Thesis]
Description
Summary: In Web surveys, rating scales measuring respondents' attitudes and self-descriptions by means of a series of related statements are commonly presented in grid (or matrix) questions. Despite the benefits of displaying multiple rating scale items neatly arranged and supposedly easy to complete on a single screen, respondents are often tempted to rely on cognitive shortcuts in order to reduce the cognitive and navigational effort required to answer a set of rating scale items. To minimize this risk of cognitive shortcuts resulting in satisficing, i.e., merely satisfactory rather than optimal answers, respondents have to be motivated to spend extra time and effort on the attentive and careful processing of rating scales. A wide range of visual and dynamic features is available in interactive Web surveys, allowing for visual enhancement and greater interactivity in the presentation of survey questions. To date, however, only a few studies have systematically examined new rating scale designs using data input methods other than conventional radio buttons. In the present study, two different rating scales were designed using drag-and-drop as a more interactive data input method: respondents have to drag the response options towards the rating scale items ('drag-response') or, in the reverse direction, the rating scale items towards the response options ('drag-item'). In both drag-and-drop rating scales, the visual highlighting of the items and response options, as well as the dynamic strengthening of the link between these key components, is aimed at encouraging respondents to process a rating scale more attentively and carefully.
The effectiveness of the drag-and-drop rating scales in preventing respondents' susceptibility to cognitive shortcuts is assessed on the basis of five systematic response tendencies typically associated with rating scales, i.e., careless, nondifferentiated, acquiescent, and extreme responding, as well as respondents' systematic tendency to select one of the first response options, so-called primacy effects. Moreover, item missing data, response times, and respondent evaluations are examined. The findings of the present study revealed that although both drag-and-drop scales entail a higher level of respondent burden, as indicated by an increase in item missing data and longer response times compared to conventional radio button scales, they promote respondents' attentiveness and carefulness towards the response task and thereby reduce their susceptibility to cognitive shortcuts in processing rating scales.