Evaluation

Foreign aid agencies worldwide are placing increasing emphasis on measurable goals, metrics, and the performance of investments in development assistance. These efforts can help aid agencies develop well-targeted investments that assist vulnerable communities in responding to uncertain but potentially damaging global change issues such as water, food and energy security, and climate impacts.

In 2015, we re-evaluated four of the Research for Development (R4D) Alliance case studies, approximately one year after the projects were completed, to learn more about how evidence generated through research has assisted (or otherwise) with complex policy and planning challenges across multiple sectors within partner countries. Our approach concentrated on extending the evaluation framework developed during the previous Alliance activity to better:

  • Capture the range of project benefits that were not monetised;
  • Identify the ongoing fit of R4D research with the development and policy needs of in-country partners, DFAT and other stakeholders; and
  • Describe the impact to date of Australian aid investments in research and capacity building, and the ongoing sustainability of that impact.

By re-surveying the same key partners and stakeholders after the completion of the projects, we hoped to build a better understanding of the effectiveness of the R4D Alliance investment and of where future efforts may best be directed.

See below for more information on the 2014 impact evaluation study.

Findings

Overall, this study reinforces the findings from 2014. There is stronger evidence of the impact of our activities with government and non-government partners, particularly in the uptake of our work in planning and programming. However, one year after project completion, there remains limited evidence that the projects have influenced changes in policies at provincial or national government level. There is also limited evidence of impact on Australia’s aid programming to date. Internally, CSIRO is using both the findings and the approach in a range of our domestic R&D and international development activities. Key points are highlighted below.

Impact findings

  • Capacity building activities remain important. Where they targeted specific skill acquisition, such as better use of climate projection data or training in webGIS systems, the impact pathway is relatively clear, and it was captured and articulated during follow-up interviews. For example, as a direct result of training and new knowledge, in-country partners are able to make more informed decisions.
  • In general, respondents appear to be more realistic (less optimistic) about actual and potential impacts than in 2014, e.g. lower scores for enhanced beneficiary capacity (in three of the four cases). This should be interpreted as a correction rather than a reduction per se.
  • A key drawback identified by participants in the 2014 evaluation surveys was the lack of ongoing resources to extend work through the implementation of preferred adaptation strategies. We suggest that this may have clouded stakeholders’ views of the success of the project.

Methodological and programmatic findings

  • Having clearly identifiable and differentiated products (e.g. climate projections for Vietnam and the SUD Indonesia projects, webGIS and the Rainwater Harvesting Guidebook for SUD Vietnam, and the intercropping trial and alternative seaweed production methodology for the Climate Livelihoods Project) makes it significantly easier to attribute impact directly back to the projects, and easier for respondents to recall direct project successes, because respondents have a clearer line of sight to the lineage of impact.
  • The quality of interview data varies significantly across the sample. In a number of cases there were few or no qualitative statements to support the survey scores, which limited the level of analysis possible.
  • Relative to 2014, there appears to be greater polarisation amongst respondents on a number of key questions, including resourcing, the establishment of new projects, and institutional arrangements. We interpret this as reflecting a broad sample in which some participants are aware of, or have secured funding for, new or ongoing initiatives, whereas others have not. Generally, respondents were clearer and more confident answering questions related to ‘on-ground’ activities than questions related to management and institutional change.
  • Overall, the results suggest that respondents were somewhat more reassured about the collective pathway to impact than they were in 2014, and a number of results reinforce this. We anticipated that, over time, indicator scores for stage 1 (capacity building) would decline as knowledge passes through the system, with corresponding increases in scores for stages 2 (program and policy development) and 3 (implementation, adoption and scaling out); see the illustrative sketch below. This was borne out to some degree; however, the study has also provided new insights into the lag times from activity to impact, the many reasons behind them, and the manner in which impact pathways diverge.
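
The staged comparison described in the final point above can be illustrated with a short sketch. The Python example below is purely illustrative: the scores, stage labels and column names are assumptions, not the actual R4D Alliance survey data. It simply shows one way indicator scores from the two survey rounds could be aggregated per stage to check whether stage 1 scores decline while stages 2 and 3 rise.

    import pandas as pd

    # Hypothetical indicator scores -- illustrative only, not actual survey results.
    records = [
        {"year": 2014, "stage": "1: capacity building", "score": 4.2},
        {"year": 2014, "stage": "2: program and policy development", "score": 2.8},
        {"year": 2014, "stage": "3: implementation, adoption and scaling out", "score": 2.1},
        {"year": 2015, "stage": "1: capacity building", "score": 3.9},
        {"year": 2015, "stage": "2: program and policy development", "score": 3.3},
        {"year": 2015, "stage": "3: implementation, adoption and scaling out", "score": 2.6},
    ]
    scores = pd.DataFrame(records)

    # Mean indicator score per stage and survey year, and the change between rounds.
    by_stage = scores.pivot_table(index="stage", columns="year", values="score", aggfunc="mean")
    by_stage["change"] = by_stage[2015] - by_stage[2014]
    print(by_stage.round(2))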

2014 Impact evaluation study

In the Research for Development (R4D) sector, impacts often accrue some time after projects have been completed, as results are taken up and generate broader societal welfare benefits. This is of course problematic for evaluations that seek to accurately describe impact, or the pathway to impact, at project completion, and it has led to some justified criticism of a number of impact evaluation approaches and studies.

To best capture the benefits of the R4D Alliance portfolio of projects, we developed a novel approach to evaluating the impact of the R4D Alliance at project and program level, including practical guidance to support more effective project design and impact for future R4D investment. We did so by evaluating and quantifying the benefits of investment in the R4D projects, identifying common methodologies that can be replicated and lessons that can be shared. Our approach demonstrates that the impact from investments can be successfully tested for at project completion and that greater confidence in impact pathways can be achieved. Both are essential if confidence in the value of R4D investment is to be maintained, and will contribute to the ongoing quality and effectiveness of Australian aid.

Overall, the impact evaluation results reinforce many of the key findings coming out of the individual projects, and our analysis of R4D practice underscores the strength of the Alliance investment model for R4D. In particular, these findings provide evidence of the value of a mixed methods approach that is able to articulate the value (or otherwise) of projects through a variety of lenses, thus triangulating impact attribution and improving validity.

Publications