Summary: Responding to recent calls in the literature for cross-country comparisons of evaluation practice, this simulation study investigated (a) evaluators' perspectives on what determines a programme's evaluability, (b) the criteria evaluators prioritise when assessing a programme's evaluability, and (c) the degree to which practice context (developing, developed, or both) and self-reported level of evaluation experience predict programme evaluability decisions. Valid responses from evaluators practising in the United States of America (n = 94), the United Kingdom (n = 30), Brazil (n = 91) and South Africa (n = 45) were analysed. Q factor analyses of data collected via a Q-sort task revealed four empirically distinct evaluability perspectives, the two dominant perspectives being labelled theory-driven and utilisation-focused. Correspondence analyses demonstrated that participants used different criteria to assess the evaluability of three fictitious evaluation scenarios. Multinomial regression analyses indicated that neither practice context nor level of experience predicted the type of evaluability criterion prioritised in any of the scenarios. Evaluators practising in developed countries were more likely to characterise a programme with robust structural features but unfavourable stakeholder characteristics and logistical conditions as evaluable with high rather than medium difficulty. Evaluators with limited experience were more likely than not to embark on an evaluation of such a programme. This study represents the first empirical investigation of how evaluators from selected developed and developing countries assess programme evaluability.