
Realist Evaluation: An Overview Report from an Expert Seminar with Dr. Gill Westhorp

Authors: Westhorp, G. et al.
Publication date: 2011

This report summarises the discussions and presentations of the Expert Seminar ‘Realist Evaluation’, which took place in Wageningen on March 29, 2011. The Expert Seminar was organised by the Wageningen UR Centre for Development Innovation in collaboration with Learning by Design and Context, international cooperation.

Why do development programmes work in one place, or for one group of participants, and not for another? If we find that a programme works, how do we know what to replicate and what to change when we take it somewhere else? How do we make sense of complex patterns of outcomes? These are just some of the questions that realist evaluation is designed to answer.

An expert seminar was held with Dr. Gill Westhorp (Community Matters, Australia) in Wageningen on 29th March. This one-day seminar aimed to build a better understanding of what realist evaluation entails and what it can offer for evaluation in the development sector. Some 120 participants attended the seminar, about half of whom were from the South.

Realist Evaluation changes the basic question of evaluation: rather than asking what works, or does not work, it asks what works for whom, in what contexts, in what respects and how. A realist approach assumes that programmes are ‘theories incarnate’. That is, whenever a programme is implemented, it is testing a theory about what ‘might cause change’, even though that theory may not be explicit. For this reason, one of the tasks of realist evaluation is to make the theories within a programme explicit by developing clear hypotheses about how, and for whom, programmes might ‘work’. The implementation of the programme and its evaluation will then test those hypotheses. This means collecting data not just about programme impacts or the process of programme implementation, but also about the specific aspects of the programme context that might affect the programme and about the specific mechanism(s) that might be creating change.

Realist Evaluation investigates how an observed change has come about. For example, has a change in health occurred because apples contain vitamin C? Or because apples are red? Or because they are healthier than junk food? If the change occurred because a vitamin C deficiency was addressed, you could achieve the same effect with oranges, which also contain vitamin C. If the red colour was the cause, you could use red onions instead; and if the change came from eating less junk food, you could offer carrot sticks to people who are overweight.

In Realist Evaluation, context and mechanism together determine the outcome. Context refers to features of the participants, organisation, staffing, history, culture, beliefs and so on that are required for the mechanism to ‘fire’ (or that prevent it from firing). Mechanisms are the ways in which new resources interact with participants’ ‘reasoning’ to produce changed decisions. The outcome is changed behaviour, which in turn leads to different short- and medium-term outcomes. Realist evaluation can be used when the purpose of evaluation includes learning, when a programme is being extended to a new population, when previous evaluations show confusing patterns of findings, or when a programme is to be scaled up.

Understanding Realist Evaluation requires more than a one-day seminar. Therefore, the report of the seminar, with references, is presented here.