8 things we learned from our work on evidence this year

07 November 2017

In 2016, ALNAP launched a new webinar series, ‘Bridging the Evidence Gap’. Organised around key humanitarian challenges, the series looks at how leading thinkers are ‘bridging the gap’ between evidence and practice in order to improve humanitarian action. We wanted to take stock of the rise in high-quality research and better data collection in the humanitarian sector, and to explore how these activities connect with users and decision-makers.

We’ve discovered a lot from the five webinars produced over the last year and, in honour of Humanitarian Evidence Week, here’s what we’ve learned about the state of evidence and its use in humanitarian action today:

 

  1. Evidence is “in”
    From the Sphere handbook to national NGOs in Somalia, humanitarian actors are taking huge strides to improve the quality of the data they use and to strengthen the link between evidence and decision-making. It is great to witness this, and to make sure the trend continues we need better ways to link evidence producers with users. We explored this issue in our first webinar with the Humanitarian Evidence Programme, the IRC and 3ie, who are bringing together large bodies of evidence through gap mapping and evidence synthesis, making them easier for practitioners to access and use.
     
  2. Not everyone knows what we mean by evidence
    While many across the humanitarian sector recognise its importance, there is not always agreement on how to define high-quality evidence. To ALNAP, evidence is information that supports or contradicts a given proposition. Others understand evidence differently, using definitions imported from the health sector, where the emphasis is on controlled studies and demonstrating the efficacy of interventions. While we can have different interests in an evidence agenda, it is important to move towards more consistent language to avoid confusion and to lower the barriers to evidence use by practitioners.
     
  3. We must not forget about the basics of good data collection
    Throughout the year we heard from people who are trialling new, more rigorous research approaches to answer difficult questions, such as which types of programming work most effectively in different sectors, or how to build an accurate picture of humanitarian presence on the ground in conflict settings. Yet the answers to these questions can only be as good as the data that informs them. From the poor, inconsistent monitoring data collected by operational organisations to the lack of transparent, robust methods in humanitarian research and evaluations, the quality of our data is often far from satisfactory.
     
  4. Greater use of secondary data enables better decision-making
    It is no secret that humanitarian organisations prefer primary data – they can control how it is collected and they know where it has come from. Yet one of the key messages from the Bridging the Evidence Gap webinar series is the importance of secondary data, closer collaboration on data collection and better data sharing between humanitarian actors at every level, from donors to field staff. When each humanitarian actor rushes to collect its own primary data, there is a risk of information overload, and attention is pulled away from the crucial analysis tasks that make data usable for decision-making. In our second webinar, Development Initiatives, ECHO and DFID discussed how shared data and more closely aligned data-gathering activities would help donors make better, more complementary decisions.
     
  5. The gap still needs closing and there are at least two issues preventing that from happening
    As we touched on earlier, there is a definite gap between those who produce evidence and those who want to use it. Users don’t know where to find evidence, and producers don’t know what users expect. This gap is fuelled by at least two issues, which the next two points explore.
     
  6. There’s a need to communicate evidence on humanitarian action beyond the sector
    Humanitarians still have a long way to go in making evidence more accessible and digestible. This is true not only within the sector but also for the general public, whose opinions can sway donor governments’ agendas from one year to the next. Our default deliverable for evaluation processes is still a thick, hard-to-read report lacking even basic design. At a time of greater public scrutiny of humanitarian aid, we need to be ready to respond with evidence that is solid yet easy for the average taxpayer to consume and understand. With the growing number of tools and formats for presenting information (e.g. mobile video, podcasts, infographics, interactive websites), the presentation and delivery of compelling evidence should no longer be an afterthought.
     
  7. Humanitarian evaluations do not always help to paint a bigger picture
    ALNAP is currently conducting an evaluation synthesis for the State of the Humanitarian System 2018 report. In reviewing over 100 evaluations, it has become clear that their quality varies dramatically: many reports have shortcomings in methodology, in execution, or in both. But perhaps the more profound question is the value of evaluations for improving the humanitarian system as a whole. Understandably, most reports focus exclusively on the project or programme at hand, but this poses an important challenge when attempting to synthesise findings and draw conclusions about the performance of the sector.
     
  8. We can’t forget the political side of evidence
    We can easily fall into thinking about evidence as a technical issue: put more evidence in, get better decisions out. But getting higher-quality evidence shared and used is just as much about politics: who is asking the questions for which evidence is collected, and what are the incentives for collecting accurate, relevant data? This highlights the importance of ‘Clarity of Context and Method’, a criterion of evidential quality discussed in ALNAP’s 2014 paper on evidence. Stronger and more clearly communicated methodologies may help in the aggregation of evidence. Being more transparent about how research is selected, designed and funded can also help us understand who is directing the evidence agenda in the sector, and consider which voices or perspectives are being excluded from the process.


This blog was posted to coincide with Humanitarian Evidence Week 2017 (HEW17), an initiative led by Evidence Aid in collaboration with the Centre for Evidence-Based Medicine to promote a more evidence-based approach to humanitarian action. HEW17 will feature webinars, blogs and debates highlighting topics related to the generation, use and dissemination of evidence in the humanitarian sector. Find out more here.