
Collaborative Outcomes Reporting

Aim of the tool
Collaborative Outcomes Reporting (COR) is a participatory approach to impact evaluation built around a performance story that presents evidence of how a programme has contributed to outcomes and impacts. The story is then reviewed by both technical experts and programme stakeholders, which may include community members.

When to use it?
COR is especially useful when a programme has emergent or complex outcomes that are not fully defined at the outset. For this reason, the program logic is refreshed at the start of the evaluation process. In addition, qualitative inquiry is used to capture unexpected outcomes, and deliberative processes are used to make sense of the findings. COR is useful for both internal and external evaluations and learning.

How difficult is it to use it?
Easy to moderate for experienced users/facilitators

Tool for thought or tool for action?
Tool for thought and for action

Benefits
COR mainly focuses on answering questions about the extent to which an investment contributes to outcomes. It can also help build evaluation capacity, because it has a specific mandate for involving project teams and staff in the social inquiry or data-gathering phase.

Issues to be aware of
Including stakeholders requires delicate management and careful facilitation.
Assembling the experts in the outcomes panel also requires careful consideration, to avoid conflicts of interest. While experts may, given the nature of the evaluand, inevitably be connected to the programme in some way, it is important that they neither designed the programme nor are part of it, so that neutrality is maintained. Furthermore, in highly political evaluands, agreement on outcomes from a panel of experts may be fraught. In such cases, an expert evaluation may be a more appropriate approach than COR.

If the programme already has a strong monitoring, evaluation, reporting and improvement framework and process, you may not need all the elements of COR. If there is a clear program logic in place and a comprehensive monitoring system, you may only need to add some components of the COR process, such as the outcomes panel and summit workshop.

Description of the tool
Developed by Jess Dart, COR combines contribution analysis with Multiple Lines and Levels of Evidence (MLLE), mapping existing and additional data against the program logic to produce a performance story. A performance story report is essentially a short report about how a programme contributed to outcomes. Although these reports may vary in content and format, most are short, mention programme context and aims, relate to a plausible results chain, and are backed by empirical evidence. The aim is to tell the ‘story’ of a programme’s performance using multiple lines of evidence.

A performance story report is usually between 10 and 30 pages long, with five parts: 1) Programme context, 2) Evaluation methodology, 3) Results, 4) Findings and implications, 5) Index. The figure below illustrates the parts of the performance story report and how they correspond to the seven steps in preparing it. Suggested participants are shown for each step.

The seven steps are:

  1. Scoping—inception/planning meetings are held to determine what will be evaluated, develop the program logic (if one does not already exist), set evaluation questions and identify existing evidence and people to be interviewed.
  2. Evidence gathering—an evidence trawl is conducted to identify existing data that will provide the best evidence for expected outcomes. This is followed by the social inquiry process, in which interviews are conducted with people who can provide additional information about programme outcomes. Specific questions are asked and the answers recorded to provide stories of significant changes that have occurred as a result of the programme.
  3. Integrated data analysis—quantitative and qualitative data are analysed to identify evidence corresponding to the outcomes in the program logic and integrated within the results chart.
  4. Expert panel—people with expertise in relevant fields and scientific disciplines assess the evidence of outcomes that has been gathered. They judge, and make statements about, the extent to which the evidence is adequate to assess the programme's progress towards its stated outcomes. The panel may also identify further evidence needed to make a conclusive statement about the achievement of programme outcomes. Following the panel meeting, the evaluator integrates all of the analysed evidence and assesses the amount and quality of evidence available for each outcome in the program logic to inform a draft set of recommendations.
  5. Summit meeting—evaluation participants come together to consider and discuss the findings, nominate the stories that best illustrate the impact of the programme and make recommendations for the future of the programme.
  6. Integration, report and communications—the evaluator prepares the performance story report, which is a synthesis of all the above steps including recommendations from summit meeting participants. A plan is established to communicate the findings of the evaluation.
  7. Revising the program logic—programme managers, staff and other stakeholders meet to consider the report and revise the program logic as needed to plan for the next phase of the programme. The next phase can incorporate the lessons and recommendations from the previous phase.

COR adds review by an expert panel and by stakeholders, sometimes including community members, to check the credibility of the evidence about what impacts have occurred and the extent to which these can credibly be attributed to the intervention. It is these components, expert panel review (the outcomes panel) and a collaborative approach to developing outcomes (through summit workshops), that differentiate COR from other approaches to outcome and impact evaluation.

Example
In Australia there is a large disparity in educational outcomes between Indigenous and non-Indigenous children, and over the last 8 years educational outcomes have been either stable or declining. Participatory evaluation was used for the first phase of the Australian “Stronger Smarter Realities Program” (SSR), which ran from 2006 to the end of 2008. The project was about creating systematic and transferable change by arming Australian educators with the belief, skills and capacity to make profound changes to the learning outcomes of Indigenous children. Over 3 years, the project aimed to engage principals, teachers and Indigenous community leaders from 240 schools with high Indigenous student populations, and to support them to transform their schools in ways that deliver dramatically improved educational outcomes for Indigenous students. The programme is based on the premise that this can be achieved by providing a supportive educational environment, excellent teachers and high expectations.

The evaluation was completed in 2009, at the end of the first phase of the project, by external consultants using a participatory approach. It was guided by two key questions: i) to what extent has the SSR project contributed to excellence in Indigenous education in participating schools? and ii) to what extent did the SSR project influence the overall Indigenous education agenda?

The evaluation used the “Collaborative Outcomes Reporting Technique” developed by Jess Dart.

First, a design workshop was held at which the theory of change was clarified and evaluation questions were developed. This was conducted with programme team members and key stakeholders in a participatory manner. Social inquiry included over 50 semi-structured interviews incorporating the Most Significant Change technique, and 3 case studies from Indigenous communities. The data trawl involved collection and analysis of secondary documents and quantitative data on student outcomes from 10 schools.

The quantitative data, case studies and qualitative summaries were used as evidence to feed into an ‘outcomes panel’ of Indigenous educators, who examined the data and created statements about the extent to which the outcomes had been achieved, the plausible contribution of the programme to these outcomes, and the quality of the data. Panel members were selected because they were highly respected, had no vested interest in the programme and had excellent knowledge of Indigenous education policy and practice.
The process culminated in an evaluation summit workshop in which key stakeholders and staff deliberated over the qualitative and quantitative data and created recommendations. The consultants’ role was to collect and synthesise the data and to facilitate the sensemaking process, with recommendations created by workshop participants.
While the quantitative data was limited in scope, the evaluation was noteworthy in that it managed to capture some of the less tangible outcomes concerning ‘breakthrough learning’ and raised expectations for Indigenous children. The programme itself has been very successful and is being scaled up and delivered nationally. The evaluation has been highly influential, as evidenced by all of the recommendations being successfully implemented, and by one philanthropic funder stating that the evaluation was well balanced and gave them the confidence to provide further funding for the programme.

Steps involved in using the tool
COR uses a mixed-methods approach that involves the participation of key stakeholders, generally in six process steps. Participation can occur at all stages of this process:

  • Scoping: an inception/planning workshop is held, in which the program logic is clarified, existing data are identified and evaluation questions are developed.
  • Data trawl: a trawl of existing evidence is undertaken, which can include both primary and secondary data sources. Programme staff may be enlisted to help with the collation of data.
  • Social inquiry: social inquiry can include any form of data gathering, qualitative or quantitative. If qualitative, interviews can be conducted by volunteers who are given a short training session in interviewing and an interview guide. This is a very effective way to involve staff in the data gathering where there is sufficient enthusiasm for the process. Otherwise, consultants or the evaluation managers conduct all or a proportion of the interviews. In many COR examples, the Most Significant Change (MSC) technique is used at some point in the social inquiry process as a way of capturing stories of change, both expected and unexpected.
  • Data analysis and integration: quantitative and qualitative data can be analysed together according to the outcomes in the program logic. A “results chart” is often used to integrate different sets and types of data (a minimal sketch of one possible structure follows this list).
  • Outcomes panel: people with relevant scientific, technical or sectoral knowledge are brought together and presented with the range of evidence compiled in step 4. They are then asked to assess the contribution of the intervention towards its goals given the available knowledge, and to explore rival hypotheses that could explain the data. A citizens’ jury can be substituted for the panel.
  • Summit workshop: at a large workshop, key findings and recommendations are synthesised, and examples of changes are identified and added (using material from MSC if available, and MSC processes to select the most significant stories). The summit should involve broad participation of key stakeholders such as programme staff and community members.
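
To make the results chart concrete, the sketch below shows one way such a chart might be represented, with each outcome from the program logic mapped to its lines of evidence and a panel rating. It is written in Python purely for illustration; the field names, the rating wording and the sample row are assumptions, not part of the COR method.

    from dataclasses import dataclass, field

    @dataclass
    class Evidence:
        source: str    # e.g. an interview set, a monitoring report, an MSC story
        kind: str      # "qualitative" or "quantitative"
        summary: str   # one-line statement of what this evidence shows

    @dataclass
    class OutcomeRow:
        outcome: str                    # an outcome from the program logic
        evidence: list[Evidence] = field(default_factory=list)
        rating: str = "not yet rated"   # the outcomes panel's judgement of adequacy

    # A hypothetical row: the outcome, sources and rating are invented for illustration.
    chart = [
        OutcomeRow(
            outcome="Participants adopt the practices promoted by the programme",
            evidence=[
                Evidence("Semi-structured interviews", "qualitative",
                         "Most interviewees describe changed practice"),
                Evidence("Annual monitoring survey", "quantitative",
                         "Self-reported adoption rose in each year"),
            ],
            rating="adequate",
        ),
    ]

    # Print the chart, one outcome per block, as a quick textual summary.
    for row in chart:
        print(f"{row.outcome} [panel rating: {row.rating}]")
        for ev in row.evidence:
            print(f"  - ({ev.kind}) {ev.source}: {ev.summary}")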

COR structure: the report aims to explore and communicate the extent to which a programme has contributed to outcomes. Under COR, reports are short and generally structured in terms of the following sections:

  • A narrative section explaining the program context and rationale.
  • A “results chart” summarising the achievements of a programme against a program logic model.
  • A narrative section describing the implications of the results, e.g. the achievements (expected and unexpected), the issues and the recommendations.
  • A section providing a number of ‘vignettes’, usually first-person narratives, that give instances of significant change.
  • An index providing more detail on the sources of evidence.

Sources and further readings