Evaluating development projects : exploring a synthesis model of the logical framework approach and outcome mapping

Bibliographic Details
Main Author: Yang, Ting
Published: University of Sussex 2018
Online Access: https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.759589
Description
Summary: Under the current results-driven development agenda, sound evaluation, and a corresponding evaluation toolkit, need to be in place to examine whether and to what extent development interventions have achieved their targeted objectives and results, and to generate lessons for further development learning and improvement. My review of the literature shows that innovative and appropriate evaluation approaches are needed to address key challenges in evaluation, such as the tension between learning and accountability objectives, the need to unpack the mechanisms linking outputs to outcomes or goals, and the need to add an actor perspective. Irrespective of project type, the Logical Framework Approach (LFA) is often a standard requirement of major official donor agencies for the projects they fund, serving to fulfil bureaucratic imperatives. However, it is often considered inadequate for addressing key challenges in development evaluation. Given the dominant status of the LFA and its strong support from donors, it is helpful to seek a 'middle way': combining the LFA with other approaches to address some of its inadequacies while still satisfying donor agencies' requirements. A synthesis of the LFA and Outcome Mapping (OM) is one such option. This thesis empirically explores the practical value and usefulness of such a synthesis model. Applying the model in two case study aid projects, I found that it serves well as a theory-based evaluation tool with a double-stranded (actor strand and results chain) theory of change. The model helps reconcile learning and accountability and adds explanatory power and an explicit actor perspective. Through its different elements, it also helps establish causation and enables attribution claims at various results levels. The model has some limitations, but my results suggest it can be usefully adopted. Whether to apply it depends on the evaluation context and purpose of the specific project.