A guide for planning and strategy development in the face of complexity

Authors: Hummelbrunner, R. & Jones, H.
Publication date: 2013

This Background Note explains how planning and strategy development can be carried out despite complexity. While complex situations require a greater focus on learning and adaptation, this does not render planning irrelevant.

A guide for managing projects and programmes in the face of complexity

Authors: Hummelbrunner, R. & Jones, H.
Publication date: 2013

Complexity heightens the importance of effective management, but poses challenges for the tools and approaches used most widely to manage international development projects.

This ODI guide, which follows 'A guide for planning and strategy development in the face of complexity', addresses three types of challenge that managers face in steering development interventions towards their intended goals.

DFID working paper 40: Planning evaluability assessments

Author: Dr Davies, R. (for DFID)
Publication date: 2013

This synthesis paper provides a short, practically oriented summary of the literature on Evaluability Assessments and highlights the main issues to consider when planning one. It presents eighteen recommendations on the use of Evaluability Assessments.

The paper was commissioned by the Evaluation Department of the UK Department for International Development (DFID) but intended for use both within and beyond DFID.

The World Bank: The mystery of the vanishing benefits. Ms. Speedy Analyst's introduction to evaluation

Author: Ravallion, M. (for the World Bank)
Publication date: 1999

This World Bank working paper provides an entertaining introduction to the concepts and methods of impact evaluation. It is written from the perspective of Ms. Speedy Analyst, a fictional character who trains readers in how to assess the impact of a social program. In doing so, the paper explains impact evaluation methods, the use of data for impact evaluation, how to form and match comparison groups, and sources of bias, among other topics.

An introduction to the use of randomised control trials to evaluate development interventions

Author: White, H.
Publication date: 2013, in Journal of Development Effectiveness, Vol. 5, No. 1, pp. 30–49

This article presents an introduction to randomised control trials (RCTs). RCTs analyse what difference a programme makes by comparing those in the programme to a control group who do not receive it. Random assignment to the project and control groups overcomes the selection bias that would otherwise arise from programme placement or self-selection.
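The random-assignment logic the article describes can be sketched in a few lines. This is an illustrative toy, not the article's own method: the function names and the simple difference-in-means estimator are assumptions introduced here for demonstration.

```python
import random
import statistics

def assign_randomly(participants, seed=0):
    """Randomly split participants into treatment and control groups.

    Random assignment (rather than programme placement or self-selection)
    is what removes selection bias: on average, the two groups are
    comparable before the programme starts.
    """
    rng = random.Random(seed)  # seeded for reproducibility of the split
    shuffled = participants[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]  # (treatment, control)

def difference_in_means(treatment_outcomes, control_outcomes):
    """Estimate programme impact as mean(treatment) minus mean(control)."""
    return statistics.mean(treatment_outcomes) - statistics.mean(control_outcomes)

# Toy usage: split ten participants, then compare hypothetical outcomes.
treatment, control = assign_randomly(list(range(10)), seed=1)
impact = difference_in_means([3, 4, 5], [1, 2, 3])
```

In a real RCT the estimate would also come with a standard error and significance test; this sketch shows only the core comparison.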

The World Bank: Handbook on Impact Evaluation: Quantitative Methods and Practices

Authors: Khandker, S.R.; Koolwal, G.B. & Samad, H.A. (for The World Bank)
Publication date: 2010

This World Bank handbook provides, for policy and research audiences, a comprehensive overview of the steps in designing and evaluating programs amid uncertain and potentially confounding conditions. It draws on a rapidly expanding and broad-based literature on program evaluation, ranging from monitoring and evaluation approaches to experimental and nonexperimental econometric methods for designing and conducting impact evaluations. A diverse set of case studies is included as well.

ODI: How to do a rigorous, evidence-focused literature review in international development

Authors: Hagen-Zanker, J. & Mallett, R. (for ODI)
Publication date: 2013

Building on previous reflections on the utility of systematic reviews in international development research, this paper describes an approach to carrying out a literature review that adheres to some of the core principles of 'full' systematic reviews, but that also leaves space within the process for innovation and reflexivity.

How Feedback Loops Can Improve Aid (and Maybe Governance)

Author: Whittle, D. (for the Center for Global Development)
Publication Date: 2013

If private markets can produce the iPhone, why can’t aid organizations create and implement development initiatives that are equally innovative and sought after by people around the world? The key difference is feedback loops. Well-functioning private markets excel at providing consumers with a constantly improving stream of high-quality products and services. Why? Because consumers give companies constant feedback on what they like and what they don’t.

Utilization Focused Evaluation: A primer for evaluators

Authors: Ramirez, R. & Brodhead, D.
Publication date: 2013

Utilization Focused Evaluation (UFE) facilitates a learning process in which people in the real world apply evaluation findings and experiences to their work. The focus is on intended users. It is a guiding framework, rather than a methodology. This primer is for practitioner evaluators and project implementers who have heard of UFE and are keen to test-drive the approach.

AEA Webinar: Evaluation Jitters Part Two: Managing an Evaluation - Alice Willard

This webinar is part of a four-part series on monitoring and evaluation. Once an evaluation begins, the evaluation team needs a key organizational contact. This contact is often a junior staff person, rather than a technical specialist or senior manager, assigned to 'manage' the evaluation process. The contact's job can include smoothing logistics (setting up meetings), providing routine updates to the organization's management on the progress of the evaluation, and serving as the 'go to' person for challenges and opportunities that arise during an evaluation.