Resources


How do we know if a program made a difference? A guide to statistical methods for program impact evaluation

Authors: Peter M. Lance, David K. Guilkey, Aiko Hattori & Gustavo Angeles (for MEASURE Evaluation)
Publication date: 2014

There is a long-running, vigorous and evolving methodological debate about the appropriate or optimal way to evaluate the impact of a program. This manual strives not to convince readers of the merits of one particular alternative or another, but instead simply to present the various options in as approachable a fashion as possible and then let them decide for themselves where they stand. In other words, it strives to be impartial.

FANTA: Sampling Guide (with 2012 Addendum)

Authors: Robert Magnani (1999) and Diane Stukel & Megan Deitchler (2012) (for FANTA)
Publication dates: 1999 (Sampling Guide); 2012 (Addendum)

These materials provide sampling guidance for baseline and final performance evaluation surveys in the context of USAID Food for Peace Title II development food assistance programs. The guide provides methods and instructions for designing a population-based household survey and explains how to randomly select samples of communities, households, and/or individuals for such surveys. It emphasizes probability sampling methods, which are essential to ensure that the survey represents the target population. The addendum provides an updated approach to sample size calculation that yields a household sample size more likely to achieve the required number of children for child-level indicators.
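To illustrate the kind of calculation the guide covers, the sketch below uses the standard two-proportion sample-size formula (detecting a change in an indicator between baseline and endline) and then converts the required child sample into a household sample. This is a generic illustration, not the guide's exact procedure; the design effect, children-per-household ratio, and response rate are assumed example values that would come from the survey context.

```python
from math import ceil, sqrt
from statistics import NormalDist

def children_needed(p1, p2, alpha=0.05, power=0.80, deff=2.0):
    """Children needed per survey round to detect a change from p1 to p2
    (two-sided test of two proportions), inflated by a design effect.
    Standard textbook formula; parameter defaults are illustrative."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # critical value, two-sided
    z_b = NormalDist().inv_cdf(power)           # critical value for power
    pbar = (p1 + p2) / 2
    n = ((z_a * sqrt(2 * pbar * (1 - pbar))
          + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2) / (p2 - p1) ** 2
    return ceil(deff * n)

def households_needed(n_children, children_per_hh=0.9, response_rate=0.9):
    """Convert a required child sample into a household sample, allowing
    for the average number of eligible children per household and
    anticipated non-response (illustrative assumed values)."""
    return ceil(n_children / (children_per_hh * response_rate))

# Example: detect a drop in stunting from 40% to 30%
kids = children_needed(0.40, 0.30)
hh = households_needed(kids)
print(kids, hh)
```

Sampling more households than the child target alone suggests is exactly the adjustment the addendum addresses: households without eligible children, or that cannot be interviewed, would otherwise leave the child sample short.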

Creating an Analysis Plan

Author: Centers for Disease Control and Prevention (CDC)

Publication date: 2013

The Creating an Analysis Plan training module is one of three modules that will provide you with the skills needed to analyze and interpret quantitative noncommunicable disease (NCD) data. When you apply these quantitative analysis skills, you will turn data into information that can be used to make informed decisions on public health program and policy recommendations.

Indicators: A working aid

Author: Hunter, J. (for GIZ)
Publication date: 2014

This document is a guide to selecting and formulating indicators.

Dynamics of Rural Innovation - a primer for emerging professionals

Authors: Pyburn, R. & Woodhill, J. (eds.)
Publication date: 2014

Dynamics of Rural Innovation – a primer for emerging professionals is a co-publication of KIT and Wageningen University’s Centre for Development Innovation (CDI) that brings together the experiences of over 40 conceptual thinkers and development practitioners to articulate lessons on agricultural innovation processes and social learning.

The Women’s Empowerment in Agriculture Index

The Women’s Empowerment in Agriculture Index (WEAI), a groundbreaking tool that measures the empowerment, agency, and inclusion of women in the agriculture sector, has celebrated its second anniversary. By measuring these dimensions, the WEAI helps identify the obstacles and constraints women face and ways to overcome them.

UNICEF Impact Evaluation Series: Data Collection & Analysis

...

Impact evaluations need to go beyond assessing the size of the effects (i.e., the average impact) to identify for whom and in what ways a programme or policy has been successful. This video provides an overview of the issues involved in choosing and using data collection and analysis methods for impact evaluations. 

What methods may be used in impact evaluations of humanitarian assistance?

Authors: Puri, J. et al.
Publication date: 2014

This paper explores the methodological options and challenges associated with collecting and generating high-quality evidence needed to answer important questions on the impact of humanitarian assistance.

Twelve tips for selling randomised controlled trials to reluctant policymakers and programme managers

Author: White, H.
Publication date: 2014

As a sequel to his blog post about the potential pitfalls with randomised controlled trials (RCTs), the author shares twelve tips for selling RCTs to reluctant policymakers and programme managers.