Trends in impact evaluation: Did we ever learn?

In 2006, the Evaluation Gap Working Group asked, “When will we ever learn?” This week, 3ie’s Drew Cameron, Anjini Mishra, and Annette Brown (hereafter CMB) published a paper in the Journal of Development Effectiveness that uses data on more than thirty years of published impact evaluations from 3ie’s Impact Evaluation Repository (IER) to answer that question. The full article can be found here.

Data Visualization Checklist

Authors: Evergreen, S. and A. K. Emery
Publication date: 2014

This checklist is meant to be used as a guide for the development of high-impact data visualizations. It allows you to rate each aspect of a data visualization by circling the most appropriate number, where 2 points means the guideline was fully met, 1 means it was partially met, and 0 means it was not met at all. The n/a rating should be used sparingly, reserved for when a guideline truly does not apply: for example, a pie chart has no axis lines or tick marks to rate. Refer to the Data Visualization Anatomy Chart on the last page for guidance on vocabulary.

The Value of Rubrics in Messy Non-Profit Evaluation Contexts

This PowerPoint presentation by Kate McKegg, given at the 2013 American Evaluation Association Conference, explains how the use of rubrics in the non-profit sector can contribute to organizational learning, internal evaluation capacity, and improved client outcomes. It shares real-world examples of different kinds of evaluation rubrics used in non-profit contexts.

Evaluation Rubrics with E. Jane Davidson

Have you joined the rubrics revolution? This document gives you a basic introduction to the "what, why, and how" of evaluation rubrics. In the podcast, you can also listen to a chat with Jane Davidson of Real Evaluation about the use of rubrics in evaluation.

Evaluation Glossary Mobile App

This app, provided by Community Solutions, was designed to address the inconsistencies in monitoring and evaluation terminology across funding agencies, sectors, and other contexts by providing quick and easy access to over 600 terms in evaluation and program planning, along with their sources and related terms. It is available for both iTunes and Android. A multilingual version might follow later.

Public randomization ceremonies

Randomization might, at first, sound like a scary word to health policy makers and professionals. They read medical journals and know from their training that randomized trials are scientifically rigorous designs for evaluating the impact of a program. But their first inclination might be to prefer the randomized trial in somebody else’s backyard: randomization seems politically difficult. How do you explain it to the people who will have to wait for the new intervention? Won’t it create a backlash among the people who are randomly assigned to the control group?

BetterEvaluation Coffee Break Webinar 8/8: Manage an Evaluation – Kerry Bruce

This webinar, part 8 in an 8-part series on the BetterEvaluation rainbow framework, focuses on tasks to manage an evaluation – including understanding and engaging stakeholders, establishing decision-making processes for the evaluation, deciding who will conduct it, securing resources, defining evaluation standards, documenting agreements (including contractual arrangements) and formal evaluation plans, and developing evaluation capacity.

BetterEvaluation Coffee Break Webinar 7/8: Report and Support Use of Findings – Simon Hearn

This webinar, part 7 in an 8-part series on the BetterEvaluation rainbow framework, focuses on tasks to report findings and support their use – including developing reporting media, improving accessibility, developing recommendations, and assisting individuals and organisations to understand and use findings. The webinar outlines the tasks involved, options for carrying out those tasks, and resources from the BetterEvaluation site that can assist any evaluator. All webinars in this particular series are open to the public.

BetterEvaluation Coffee Break Webinar 6/8: Synthesise Data from One or More Evaluations – Patricia Rogers

This webinar, part 6 in an 8-part series on the BetterEvaluation rainbow framework, focuses on combining data from one evaluation or from many evaluations to make overall evaluative judgments – including different approaches to systematic review of existing literature. The webinar outlines the tasks involved, options for carrying out those tasks, and resources from the BetterEvaluation site that can assist any evaluator. All webinars in this particular series are open to the public.

BetterEvaluation Coffee Break Webinar 5/8: Understand Causes of Outcomes and Impacts – Jane Davidson

This webinar, part 5 of an 8-part series on the BetterEvaluation rainbow framework, focuses on tasks to answer causal evaluation questions – using research designs and other strategies to understand the contribution of a program or policy to observed results. The webinar outlines the tasks involved, options for carrying out those tasks, and resources from the BetterEvaluation site that can assist any evaluator. All webinars in this particular series are open to the public.