Resources

ILAC Brief 26: Making causal claims

Author: John Mayne
Publication date: 2012

An ongoing challenge in evaluation is making credible causal claims that link observed results to the actions of interventions. This ILAC Brief argues for a different perspective on causality, in which interventions are seen as contributory causes of certain results.

CDI Conference Report 2014: Improving the use of monitoring & evaluation processes and findings

Authors: Visser, I., Kusters, C.S.L., Guijt, I., Roefs, M. & Buizer, N.
Publication date: July 2014

This report summarises the outline and outputs of the conference ‘Improving the Use of Monitoring & Evaluation Processes and Findings’, which took place on March 20-21, 2014.

Monitoring and evaluation of policy influence and advocacy

Authors: Tsui, J., Hearn, S. & Young, J.
Publication date: 2014

This ODI working paper explores current trends in monitoring and evaluating policy influence and advocacy; discusses different theories of how policy influence happens; and presents a number of options to monitor and evaluate different aspects of advocacy interventions. Case studies describe how some organisations have used these options in practice to understand their impact and improve their advocacy strategies.

Evaluation Rubrics: How to Ensure Clear and Transparent Assessment That Respects Diverse Lines of Evidence

Author: Oakden, J.
Publication date: 2013

Independent external evaluators generally have to work within a range of constraints: the time, money, or data available are often less than ideal. This article presents an example of how a team of external evaluators worked around these constraints in an evaluation in the education sector.

Results management in Norwegian development cooperation: A practical guide

Author: Norad
Publication date: 2008

Findings from management reviews of Norwegian development cooperation reveal a need and a demand for a practical introduction to the main concepts, principles and, especially, tools of results management. This short guide responds to that demand; its purpose is to increase staff’s knowledge of the main principles of results and risk management and of what they mean in practical terms throughout the various stages of programme management.

Impact-evaluation guidelines: Designing impact evaluations for agricultural projects

Authors: Winters, P., Salazar, L. & Maffioli, A. (for IDB)
Publication date: 2010

The purpose of these guidelines is to offer suggestions on designing impact evaluations for agricultural projects, particularly projects that directly target farmers and seek to improve agricultural production, productivity and profitability. Issues specific to evaluating agricultural projects are addressed, including the need to use production-based indicators and to carefully consider the indirect or spillover effects that are common in such projects.

Using evaluation for a change: Insights from humanitarian practitioners

Authors: Hallam, A. & Bonino, F.
Publication date: 2013

The main objective of this paper is to motivate and encourage humanitarian evaluators by highlighting and discussing concrete ways to address the challenge of poor or ineffective use of evaluation.

The World Bank: The mystery of the vanishing benefits. Ms. Speedy Analyst's introduction to evaluation

Author: Ravallion, M. (for the World Bank)
Publication date: 1999

This World Bank working paper provides an entertaining introduction to the concepts and methods of impact evaluation. It is written from the perspective of Ms. Speedy Analyst, a fictional character who trains readers in how to assess the impact of a social program. Along the way, the paper explains impact evaluation methods, how to use data for impact evaluation, how to form and match comparison groups, sources of bias, and more.