Research components overview

This edition of the SOHS, like previous editions, was created by synthesising findings from separate research components that used distinct methods. To make this synthesis easier and more transparent, ALNAP developed a study matrix with indicators for each of the report’s core questions, which was used to ensure consistency across the different consultants and research components. The matrix helped to ensure that all key issues were addressed, and that the different components addressed them in the same way using a common set of questions. The study matrix is available in an online appendix.

Much of the data collection and analysis remains similar to the previous edition; however, there are some key changes in the 2022 SOHS:

  1. Reflecting the fact that the international humanitarian system is only one of multiple sources of support for people in crisis, we made a greater effort to describe and capture the ‘systems outside the system’ that individuals and communities draw on to survive and recover, and to assess how effectively the international humanitarian system takes these efforts into account.
  2. We front-loaded several of our interviews and focus group discussions with aid recipients in three response contexts to ask them what was most important to include in a report assessing the support they receive from the humanitarian system. We then made adjustments to our research questions and data collection on the basis of these findings, leading to a greater emphasis in this SOHS edition on targeting, anti-corruption, do no harm, and accountability to affected populations.
  3. While we still provide a longitudinal overview of system performance against the DAC criteria, this year’s report organises its findings as answers to a core set of policy-relevant questions that we routinely heard being asked of the system by aid recipients and practitioners over the study period, providing a more direct connection to decision-making.

In an effort to use more empirical evidence rather than perception-based data, the report: commissioned new studies on innovation and mortality; sought to gather more factual evidence on the use of different modalities by Cluster leads; and included more peer-reviewed academic journal articles in the literature review component.

Findings are drawn from ten research components, using a combination of primary data collection and secondary data synthesis:

Primary data collection and analysis

  • Country-level research: Focus group discussions and key informant interviews, along with relevant context-specific documentation and observations, were collected in Lebanon, Ethiopia, Yemen, Venezuela, the Democratic Republic of Congo (DRC) and Bangladesh (Cox’s Bazar).
  • Country studies on localisation: ALNAP commissioned two country-level studies specifically examining issues and progress in localisation, in Turkey and Somalia, featuring surveys and in-depth interviews with local and international actors.
  • Aid recipient survey: ALNAP conducted a phone survey, using SMS text messages and computer-assisted telephone interviews (CATI), of 5,487 aid recipients in six crisis contexts to elicit their assessment of humanitarian performance.
  • Practitioner and host government survey: A web-based survey with 436 completed responses was used to elicit the perceptions of humanitarian practitioners and host-government representatives on humanitarian performance.
  • Key informant interviews (KIIs): Humanitarian leaders and key thinkers were interviewed to assess performance and identify important trends. These interviews were also used to identify potential sources to address key evidence gaps.
  • Organisational mapping and analysis: Data was collected from individual organisations, as well as through a desk-based review, to provide an overall picture of the number of humanitarian staff and organisations worldwide.
  • Innovation and mortality: ALNAP commissioned original studies to assess the state of data and evidence on mortality in humanitarian settings and the impact of innovation funding in the humanitarian system over the past decade.

Synthesis of secondary data

  • Evaluation synthesis: A synthesis of findings from humanitarian evaluations published between 2018 and 2022 in the ALNAP HELP library. Over 500 humanitarian evaluations were assessed for inclusion, with over 130 chosen for more in-depth analysis.
  • Financial analysis: ALNAP worked with experts in humanitarian financing to produce and analyse statistics on humanitarian financing and compare this to previous SOHS Report periods.
  • Literature review: ALNAP reviewed over 250 research reports and academic work published within the study period on a set of 15 themes related to humanitarian policy and practice.

Primary data collection and analysis

Country-level research

The country research was managed by The Research People (TRP) and conducted by several local researchers in six crisis-affected countries. The researchers are not named in the report or methodology for their own safety, due to local security concerns.

The purpose of the field-level studies was to provide a more in-depth assessment of the performance of the humanitarian system in a number of crisis responses. The focus for this element of the research was on collecting detailed, qualitative, perception-based data from a range of respondents in specific locations, in order to build a rich and detailed picture of how the humanitarian system operates on the ground and performs in different crisis contexts. This component of the research additionally sought to understand how aid provided through the international humanitarian system relates to other sources and networks of support (including, for example, from local associations, family, diaspora or social figures).

Crisis contexts

The six crisis contexts were selected by ALNAP in consultation with TRP with the aim of achieving geographical diversity across regions, and diversity across the different types of crises that humanitarians respond to.
The contexts were:

  • Bangladesh
  • DRC
  • Ethiopia
  • Lebanon
  • Venezuela
  • Yemen

Data collection primarily took the form of key informant interviews (KIIs) and focus group discussions (FGDs) in each location, as well as the collation of relevant documentation and observations. Data collection was conducted by experienced local Research Associates (RAs) in each context.

Case studies were conducted using a staggered approach. Exploratory research was conducted in DRC, Venezuela and Yemen in mid-2020 and continued until autumn 2021. The research in Bangladesh, Ethiopia and Lebanon was informed by this initial country research, beginning in late 2020 or early 2021 and continuing until autumn 2021.

Exploratory research

A small amount of preliminary data collection was conducted in the DRC, Venezuela and Yemen. Two focus groups were conducted with aid recipients in each country with the aim of exploring priorities for the research. FGDs explored participants’ recent interactions with and perceptions of the aid system and their priorities for assessment. These focus groups highlighted recipients’ concerns about targeting, anti-corruption, do no harm, and accountability to affected populations. The findings of these FGDs were used to refine the SOHS research matrix and inform the focus of future data collection activities.

Data collection locations

Within each context the RAs collected data in one to three in-country locations (either in-person or remotely, depending on access and health – including COVID-19 restrictions – and safety considerations). The specific location for research in each context was decided using a stakeholder analysis and discussions with RAs on the best locations for the research. The discussions focussed on the Humanitarian Response Plan (HRP) documents and other situation updates. The in-country researchers had a significant amount of leeway to decide the final locations, with a strong emphasis on their own safety. The final research locations are outlined in the table below.

Table 1: Country focus, research locations and crisis type

| Country | Locations and data collection mode | Predominant crisis context in period |
|---|---|---|
| Bangladesh | Cox’s Bazar (in person, multiple camp locations) | Refugee and COVID-19 |
| DRC | Beni, North-Kivu (in person); Uvira-Fizi, South-Kivu (in person) | Ebola, conflict and flooding |
| Ethiopia | Tigray (remote) | Conflict and food security |
| Lebanon | Borj el Shamali and Shatila camps (in person); North and Akkar Governorate (in person); Beirut (in person) | Refugee and financial |
| Venezuela | Caracas and surrounding areas (remote) | Political and financial |
| Yemen | Aden; Lahj; Taiz (in person) | Conflict, displacement and food security |

The data collection was undertaken by in-country researchers, who were given leeway in deciding the best people to speak to in each context. In order to maintain a contextually relevant approach and appropriate representation of local actors, amongst other actors consulted, priority was given to local and national actors (including CSOs and NGOs, authorities, and other actors involved in the delivery of aid). In November 2021, 3–5 additional interviews were conducted in each context to ensure that the final sample more accurately reflected the intended sampling frame.
During the research we conducted 177 individual interviews across the six humanitarian crises identified. In addition, we conducted 37 FGDs with 264 participants across the contexts identified (excluding Ethiopia). Across KIIs and FGDs we therefore engaged with 441 respondents in total, across all sites.

Table 2: Respondents per type, per country

| Type of respondent | Target for KIIs | Target for FGDs |
|---|---|---|
| People affected by crisis | – | 30–36 |
| Government / National Disaster Management Authority (NDMA) | 1 | – |
| Local Government | 2 | – |
| National/local civil society actors involved in humanitarian action | 6 | – |
| National/local civil society actors involved in human rights and democratisation | 2 | – |
| UN humanitarian agencies | 2 | – |
| ‘Northern’-based International NGOs | 3 | – |
| Red Crescent Movement (ICRC, IFRC, National Societies) | 2 | – |
| Donor representatives, including non-DAC | 2 | – |
| HC/HCT/Cluster leaders | 2 | – |
| National/local academics and researchers | 2 | – |
| Private-sector representatives | 2 | – |
| Military representatives (where relevant) | 1 | – |
| Development/DRR/peacebuilding actors (Micro Finance Institutions (MFIs), United Nations Development Programme (UNDP), NGOs) | 3 | – |
| Total | 30 | 30–36 |


Table 3: Aid recipient respondents by age and gender

| Country | Female | Male | 18–34 | 35–54 | 55+ | Age not given | Total |
|---|---|---|---|---|---|---|---|
| Bangladesh | 19 | 22 | 19 | 15 | 4 | 3 | 41 |
| DRC | 28 | 23 | 22 | 15 | 7 | 7 | 51 |
| Lebanon | 32 | 33 | 13 | 18 | 12 | 22 | 65 |
| Venezuela | 6 | 25 | 5 | 19 | 6 | 1 | 31 |
| Yemen | 38 | 38 | 34 | 25 | 5 | 12 | 76 |
| Total | 123 | 141 | 93 | 92 | 34 | 45 | 264 |


Interviews and FGDs

KIIs and FGDs were semi-structured and followed a template that developed over time in response to the SOHS study matrix and emerging themes and gaps. These semi-structured tools consisted of questions relating to the indicators tagged to this component in the study matrix.
Different data collection tools were developed for different respondent groups – for example, slightly different tools were used for different key informants. In addition, to draw out more detailed, specific themes in each context, research tools also varied across locations. All tools were co-developed with RAs in each context and checked with ALNAP before use.
KIIs and FGDs were primarily focussed on gathering perception-based data in relation to each of the SOHS study criteria. However, FGDs with aid recipients additionally explored respondents’ sources of support in times of crisis (open-ended questions, exploring a wider range of actors and networks than those associated with the international humanitarian system). KIIs also explored respondents’ perspectives about which other humanitarian actors are important to them, as well as their perspectives on different sources of financial support (for example, from new international donors, private donors, local philanthropists and others).


The RAs each produced a three-page structured contextual analysis paper which captured their reflections on the local context and key findings relevant to key research questions. Summaries of the sources of support outside the international humanitarian system, and of aid recipients’ experiences of those sources, were also produced.
Transcripts were coded by a team of four researchers using MAXQDA, according to a shared coding matrix provided by ALNAP. The data was then analysed according to the strength of the evidence (strong/moderate/weak), based on the quantity and consistency of data on different issues across transcripts. Analysis was shared through a completed findings matrix. The matrix was updated twice: to incorporate additional gap-filling interviews, and to provide additional information in response to specific questions from the ALNAP team.
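The strong/moderate/weak evidence rating described above can be sketched as a simple rule. The function and thresholds below are our own illustrative assumptions: the report rates strength on the quantity and consistency of data across transcripts, but does not publish numeric cut-offs.

```python
def rate_evidence(mentions: int, agreeing: int) -> str:
    """Illustrative rating of evidence strength for one theme.

    mentions: number of transcripts in which the theme appears.
    agreeing: number of those transcripts pointing the same way.

    The thresholds below are hypothetical, chosen only to show how
    quantity and consistency could combine into a three-level rating.
    """
    if mentions == 0:
        return "no evidence"
    consistency = agreeing / mentions
    if mentions >= 10 and consistency >= 0.8:
        return "strong"
    if mentions >= 5 and consistency >= 0.6:
        return "moderate"
    return "weak"
```

For example, a theme raised consistently in 11 of 12 transcripts would rate as strong under these assumed thresholds, while one raised in only three would rate as weak regardless of consistency.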

Constraints and limitations

Convenience sampling was used to some degree due to the access limitations posed by COVID-19 and ongoing conflict in several contexts.
The sample of respondents was also influenced by the fact that not all people or organisations contacted for interview responded to those requests, and people within the pre-existing networks of the researchers and ALNAP tended to be more responsive. Finally, the security of RAs was a key concern throughout the project, and risks made work particularly challenging in some contexts. For example, planned FGDs with aid recipients and in-country KIIs led by a local RA had to be cancelled in Ethiopia due to the conflict and associated political concerns. Instead, the research for Ethiopia was conducted via remote calls by UK-based researchers in TRP’s team.

Country studies on localisation

As part of the SOHS report development, a consortium of partners led by NEAR was engaged by ALNAP to conduct primary data collection and analysis on the performance criterion related to locally led humanitarian action.
The research collected perception-based, outcome and process data in two humanitarian contexts: Somalia and Turkey. The researchers analysed the degree to which humanitarian action is locally led in each context and assessed changes – both positive and negative – in this area of performance over time, comparing between the previous SOHS research period and the current period.
The research included:

  • A review of locally led response in the selected two countries using NEAR’s Localisation Performance Measurement Framework.
  • Exploration of how the actions of key local and international humanitarian actors, including government, NGOs, faith-based organisations, and civil society, have contributed to supporting locally led response across the two countries, as well as exploring challenges.
  • Exploration of systemic/ pre-existing challenges and gaps within the sector that limit the opportunities for a locally led response, drawing on existing research and data from various actors.

The research was conducted by national research partners and consultants, with support from NEAR and the Humanitarian Advisor Group (HAG). The research in Somalia was conducted by Khalif Abdullahi Abdirahman and the research in Turkey was conducted by Support to Life. The researchers used a mixed methods approach through a survey in each of the countries, and targeted interviews.


A self-assessment survey of international and local organisations was conducted using an online platform, complemented by phone-call and hard-copy follow-ups where possible or required. The survey questionnaire was made available in local languages as appropriate. A total of 31 responses were received in Somalia and 42 in Turkey.

Table 4: Number of survey responses by category of respondent

| Category of respondent | Somalia | Turkey |
|---|---|---|
| National government | 5 | – |
| Local or provincial government | 1 | 5 |
| National or local NGO / Civil Society Organisation | 13 | 25 |
| Private sector | 1 | – |
| International NGO | 5 | 5 |
| Donor agency / Foreign mission | 2 | – |
| UN agency | 3 | 4 |
| Academic / Research body | 1 | – |
| IFRC | – | 1 |
| National Red Cross Society | – | 2 |
| Total | 31 | 42 |



Semi-structured KIIs were conducted with national and international organisations. NEAR developed a KII guide with relevant questions based on the SOHS study matrix, which was translated into local languages. Purposive sampling was used with the aim of gathering information from a range of actors both international and local. 18 interviews were conducted in Somalia and 18 in Turkey.

Table 5: Number of interviews by category of respondent

| Actor category | Somalia | Turkey |
|---|---|---|
| Local/national NGO or civil society | 9 | 9 |
| UN agencies | 1 | 2 |
| International NGO | 3 | 3 |
| Donor | 1 | 1 |
| Government | 3 | 3 |
| Research | 1 | – |
| Total | 18 | 18 |


Document review

A review of relevant documents was conducted to capture pre-existing data and to complement the primary data collected through the research.

Data coding and analysis

On completion of data collection, data from interviews was coded in MAXQDA according to the SOHS coding framework. Data from different sources was analysed to ensure the validity of the findings and to develop a more complete understanding of locally led humanitarian action in the two humanitarian settings.


While the sampling approach attempted to capture the experience and views of different organisations and individuals, the number of organisations and people targeted with the surveys and interviews means that the research does not represent the full picture of locally led action in these countries. The research also did not seek to capture the views of crisis-affected populations in these locations. Some of the data was also captured remotely, which may have influenced people’s interpretation of the questions and their engagement in the process.

Aid recipient survey

As in previous SOHS editions, a remote survey of aid recipients was conducted using mobile phones. The survey was conducted in six countries: Bangladesh, DRC, Ethiopia, Iraq, Lebanon and one conflict-affected context that cannot be named for security reasons.

Table 6: Aid recipient respondents by country, gender and age

| | Bangladesh | DRC | Ethiopia | Iraq | Lebanon | Other conflict |
|---|---|---|---|---|---|---|
| Number of respondents | 1000 | 1000 | 1000 | 510 | 1000 | 972 |
| Male | 50% | 57% | 50% | 49% | 50% | 55% |
| Female | 50% | 43% | 50% | 51% | 50% | 45% |
| 18–24 | 34% | 34% | 33% | 33% | 15% | 21% |
| 25–34 | 33% | 35% | 33% | 34% | 35% | 28% |
| 35 and over | 34% | 31% | 33% | 33% | 50% | 52% |



For the pooled analysis in the report, which summarises results across all the countries, the Bangladesh respondents were removed from the sample. This is because the majority of respondents in Bangladesh (84%) received aid from the Bangladesh government rather than the international humanitarian system, and were therefore largely assessing a different entity to the other contexts. In total, 5,487 recipients across the six contexts completed the survey, with the numbers per country shown in Table 6. The method for conducting research in the sixth, unidentified context is not discussed in this document due to security concerns for the enumerators and for the organisation conducting the data collection. This section of the methodology instead focuses on the data collected by GeoPoll in the five named countries.
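The pooling step can be sketched as a simple filter applied before any cross-country summary statistic is computed. The record fields and values below are hypothetical toy data, not the actual survey responses.

```python
# Illustrative sketch of the pooled-analysis step: Bangladesh respondents
# are removed before cross-country summaries are computed, because most
# were assessing government rather than international assistance.
# The record fields and values here are hypothetical.
records = [
    {"country": "Bangladesh", "satisfied": "yes"},
    {"country": "DRC", "satisfied": "partially"},
    {"country": "Lebanon", "satisfied": "no"},
    {"country": "Iraq", "satisfied": "yes"},
]

# Drop Bangladesh from the pooled sample.
pooled = [r for r in records if r["country"] != "Bangladesh"]

# Example pooled statistic: share of respondents answering 'yes'.
share_yes = sum(r["satisfied"] == "yes" for r in pooled) / len(pooled)
```

With this toy data, the pooled sample keeps three of the four records, and the ‘yes’ share is computed only over those.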

Selection of countries and participants across five crisis contexts

For this edition of The State of the Humanitarian System report, ALNAP commissioned GeoPoll to carry out telephone surveys in Bangladesh, DRC, Ethiopia, Iraq and Lebanon. These countries were chosen predominantly to represent humanitarian responses in a variety of geographical areas and contexts, including a range of crisis types. The selection was also partially influenced by the choice of countries in previous editions to allow some longitudinal comparison, by the aim of gathering further information related to the case study countries, and by the feasibility of conducting mobile surveys in those contexts.


The main eligibility criterion for survey respondents was for them or their family to have received humanitarian aid over the past two years. If that was not the case, the respondent would be thanked for their time and told they were ineligible. For each context, ALNAP used relevant HRPs to determine the locations in each context (at the first administrative level for each country) that contained the highest number of targeted affected populations. GeoPoll was asked to ensure that respondents to each survey were only located within these locations. This decision was designed to maximise the likelihood that respondents had received humanitarian assistance rather than other forms of ‘development’ support, knowing that aid recipients sometimes have a more holistic view of aid received that does not align with the international system’s classifications of development versus humanitarian support.

Participant selection

There were two main methods for participant selection that depended on the level of information GeoPoll already held about phone owners in each context. For some, the location registered to each phone number was already known and it was possible to stratify based on the locations requested by ALNAP. In other locations, GeoPoll used random digit dialling to call phone numbers and then to determine eligibility based upon the list of eligible locations. For Lebanon, the nature of the humanitarian crisis affected the majority of the country, meaning it was not possible to clearly identify areas that were more likely to have humanitarian versus development aid. In that case, anyone would be eligible for the survey, regardless of location, as long as they confirmed they had received humanitarian aid in the past two years. The locations and mode of participant selection are outlined per country in Table 7.

Table 7: Locations, modality and sampling by country

| Country | Eligible locations for survey | Modality of collection | Sampling method |
|---|---|---|---|
| Bangladesh | Dhaka, Mymensingh, Rajshahi, Rangpur and Sylhet | CATI | Random digit dialling |
| DRC | Kasai, Kasai Central, Kasai Oriental, Ituri, Lomami, Nord-Kivu and Sud-Kivu | SMS | Pre-stratification on location |
| Ethiopia | Oromia, Somali and Tigray | CATI | Pre-stratification on location |
| Iraq | Al-Anbar, Diyala, Dohuk, Kirkuk, Nineveh and Saladin | CATI | Pre-stratification on location |
| Lebanon | Akkar, Baalbek-Hermel, Bekaa, Beirut, Mount Lebanon, North, Nabatieh and South | CATI | Random digit dialling |


Aside from selecting on location, the GeoPoll team aimed for two other demographic quotas in the sample: gender parity in respondents across all contexts, and an age distribution split equally across three categories in each context (18–24, 25–34, 35 and up). This was roughly achieved in each context, as shown in Table 6 above. It was, however, difficult to get an equal number of men and women to respond to the text-based survey in DRC (more men than women replied to the SMS-based survey), and it was challenging to achieve the split between age categories in Lebanon (with 18–24 being the hardest category to reach).
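A quota check of this kind can be sketched as follows. The `quota_gaps` helper and the category labels are our own illustrative constructions, not GeoPoll’s actual tooling; the example reproduces the DRC-style outcome of 57% male against a 50% target.

```python
from collections import Counter


def quota_gaps(respondents, targets):
    """Compare achieved demographic shares against quota targets.

    respondents: list of category labels (e.g. gender or age band).
    targets: dict mapping category -> target share (0-1).
    Returns a dict of category -> achieved share minus target share.
    This helper is illustrative only.
    """
    counts = Counter(respondents)
    n = len(respondents)
    return {cat: counts.get(cat, 0) / n - share for cat, share in targets.items()}


# Hypothetical DRC-style outcome: 57 men and 43 women against 50/50 targets.
sample = ["male"] * 57 + ["female"] * 43
gaps = quota_gaps(sample, {"male": 0.5, "female": 0.5})
```

A positive gap indicates over-representation relative to the quota (here, roughly +7 percentage points for men), and a negative gap under-representation.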


In previous SOHS surveys, the majority of surveys were conducted using text messages, with the computer-assisted telephone interviewing (CATI) methodology used only in Iraq in 2018. For the 2021 survey, CATI was used in four of the five countries. This allowed the team to survey people in contexts where scripts are hard to capture in text message form. It also meant people who owned phones but were not literate had more opportunity to participate than when using the SMS modality alone. The SMS modality was used in DRC, while CATI was used in Bangladesh, Ethiopia, Iraq and Lebanon.

Questionnaire structure

ALNAP provided GeoPoll with the content of questions for the survey, which used the same or slightly modified questions from the 2012, 2015 and 2018 editions to provide consistent comparisons over time. Respondents were asked a series of questions on their demographics, whether they were refugees or displaced, the type of crisis they experienced, the type of aid required, the agency providing that aid, and a series of questions on the performance of that aid that largely followed the DAC criteria. In this edition, a targeting question asking whether aid went to the people who needed it most was added to the performance questions from previous years. The questions predominantly had closed answer options, with the performance questions all asking respondents whether they were satisfied with an aspect of performance, with answer options of ‘no’, ‘partially’ and ‘yes’.

Constraints and limitations

The methodology used for the aid recipient survey suffers from a number of potential biases.

  • Selection bias

As there is no overall, country-level list of aid recipients, it is not possible to conduct a probability sample specifically targeting all aid recipients. Rather, GeoPoll targeted the whole population in areas most likely to have received humanitarian aid in a country (sometimes using pre-stratification and sometimes using random digit dialling and screening out on location) and then screened out those who were not aid recipients. It can also be difficult to ensure a clear distinction between humanitarian aid and other types of aid; while we did attempt to target locations most likely to be receiving humanitarian aid, it is possible that in some contexts receiving multiple types of aid the sample included some development aid recipients.
The aid recipient survey only includes people with access to mobile phones. The degree to which this reflects the entire population will differ from country to country, depending on the proportion of the population who are mobile subscribers (see Table 8).

Table 8: Rates of mobile phone subscribers across 5 survey countries

| | Bangladesh | DRC | Ethiopia | Iraq | Lebanon |
|---|---|---|---|---|---|
| Rate of mobile phone subscribers per 100 people | 107 | 46 | 39 | 93 | 63 |


The fact that only individuals with access to a mobile phone were able to participate in the survey research introduces important selection biases when comparing respondents to the whole population. In general, those who have access to phones tend to be more urban, male, younger and of higher socio-economic status. While the survey sought to maintain a balance of gender and age groups, this was not possible in all contexts, with an over-representation of men in DRC (57% against an aim of 50%) and an under-representation of younger age groups in Lebanon (15% for 18–24 against an aim of 33%, although the Lebanon sample did achieve a 50/50 split above and below age 35). Where relevant, the report outlines differences in responses related to gender and age.

While using mobile surveys can help to access people in areas where it is unsafe or too costly for in-person enumerators to travel, collecting data by mobile phone alone may skew the responses, because the modality excludes people without access to phones, who may have particular characteristics that affect their experience of receiving humanitarian aid. To explore this effect, ALNAP commissioned Ground Truth Solutions to conduct a study that compared the answers given by different sub-groups of face-to-face respondents to see if they answered aid performance questions differently. The sub-groups included: people who had access to phones and were willing and able to answer SMS surveys; people who had access to phones but could only answer voice surveys; and people who had no access to phones and could only answer face-to-face surveys. The study was conducted in five crisis-affected countries. At a country level, the study found no significant and systematic differences in answers between phone users and people who did not use phones. There were some significant differences between people who were willing to complete an SMS survey and those who did not own a phone, but this applied only to some questions in some countries, and the direction of the effect (i.e. more positive or more negative responses) was not consistent. It is not possible to extrapolate from that study the extent or type of bias induced by the phone sampling in the SOHS mobile phone survey, but it is important to recognise that there is likely some deviation in responses between the SOHS sample of respondents and the broader population of aid recipients within and across the five SOHS survey countries.

  • Modality effects

The use of phone versus face-to-face surveys may also induce differences in responses. Again, the precise direction of those effects is not clear. For example, some people may be more honest in face-to-face surveys, while others may be subject to social desirability bias that makes them less likely to respond negatively to another person who is physically present. The direction and size of such effects require further study, but it is feasible that the results of the SOHS recipient survey may differ somewhat from data collected in person. Combining the aid recipient survey data with the data from the in-person focus group discussions conducted by The Research People during the synthesis stage of analysis may help to offset that bias in the synthesised report findings.

Response scale

The response scale for the performance questions was the same as in previous SOHS reports, to enable comparison of responses over time. However, the scale offers only three responses: Yes, Partially and No. While ‘Yes’ is considered a positive response and ‘No’ a negative one, it is not possible to determine whether a respondent answering ‘Partially’ is quite close to a Yes or quite close to a No, and the interpretation of that response will likely differ among individual respondents. As such, the majority of analysis in the SOHS report focuses on the unambiguous ‘Yes’ and ‘No’ categories.
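The analytical consequence of the three-point scale can be sketched as follows. The `satisfaction_summary` helper is our own illustrative construction: it reports the ‘yes’ and ‘no’ shares separately rather than folding ‘partially’ into either side, mirroring the report’s focus on the unambiguous categories.

```python
def satisfaction_summary(answers):
    """Summarise three-point satisfaction responses.

    Because 'partially' cannot be placed on either side of the scale,
    it is reported as its own share rather than folded into 'yes' or
    'no'. The answer labels follow the survey scale; the helper itself
    is illustrative.
    """
    n = len(answers)
    return {
        "yes": answers.count("yes") / n,
        "no": answers.count("no") / n,
        "partially": answers.count("partially") / n,
    }


# Toy example: four respondents answering one performance question.
summary = satisfaction_summary(["yes", "yes", "partially", "no"])
```

Reporting the three shares separately leaves the reader free to treat ‘partially’ as either lukewarm approval or mild dissatisfaction, rather than baking one interpretation into the statistic.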

Practitioner and host government survey

The practitioner and government surveys for this iteration of the SOHS were updated to ensure that the questions covered the areas in the study matrix, without sacrificing the comparability of the survey over time: the majority of previous performance questions and the answer scale were retained. The answer options for the majority of performance questions were: Poor / Fair / Good / Excellent.

The surveys were translated into French, Spanish and Arabic and uploaded to SurveyMonkey for dissemination. The ALNAP team prepared a dissemination plan mapping relevant networks and government officials so that the surveys could reach staff on the ground. The survey was also promoted through targeted social media campaigns, with adverts posted on ReliefWeb and disseminated via the ALNAP bulletin to ALNAP members. The surveys were open for five months (from July 2021 to November 2021) and were completed by 412 practitioners and 24 government representatives from a wide geographical spread.
A statistical expert was commissioned to clean the data and provide descriptive statistics.


The main limitations of this component were the response rate and the response scale. While a range of aid practitioners completed the survey, it was more challenging to obtain responses from host country governments, despite several attempts to engage individuals from crisis-affected contexts. The answer scale was designed to mimic the previous SOHS surveys; however, the scale itself is somewhat challenging to interpret. While ‘good’ and ‘excellent’ can be considered positive and ‘poor’ negative, it is harder to interpret ‘fair’ as either positive or negative, or to determine how individual respondents might have interpreted that word. As such, the analysis in the report focuses predominantly on the other answer categories.

Key informant interviews (KIIs)

In addition to the KIIs within the case study countries, ALNAP conducted key informant interviews with a view to understanding the global picture of the humanitarian system. These were conducted online, using either Zoom or Microsoft Teams. This edition of the SOHS took an iterative approach to these global KIIs. The core research team conducted a set of interviews at the beginning of the research in 2020 to understand key issues in humanitarian action for the study period and to ensure the most relevant questions and topics for the contemporary system were included in the study matrix that would inform all other data collection.
Twenty-one people were interviewed for this inception stage. The second set of key informant interviews was conducted in late 2021 and early 2022 to explore the findings emerging from the synthesis of all other SOHS research components, thereby testing hypotheses and delving further into key topics identified in different contexts to better understand those trends at the global level. Eighty-two individuals were interviewed at that stage. In addition to the interviews directly conducted for the SOHS based on the study matrix, the analysis also drew on data collected within a set of 13 interviews conducted in mid-2021 by research consultants to inform the background paper for ALNAP’s 2021 Annual Meeting, which focused predominantly on disruptions to the humanitarian system caused by COVID-19 and the decolonising aid debate. Permission to use the findings from that set of interviews for the SOHS analysis was sought from and granted by the relevant key informants.

Table 9: Breakdown of interviewees by type of agency

  Type of agency                 Number of individuals interviewed
  UN agency                      24
  National RC Societies          1
  International NGO              36
  National NGOs and networks     6
  Academic                       6
  Policy research/think tank     10
  Development banks              4
  Media                          1
  Donor                          23
  Total                          117


Interviewees included many of the key types of actor within the sector, including UN agencies, the Red Cross and Red Crescent Movement (RCRC), international NGOs, national NGOs and networks, donors, development banks and other multilaterals, think tanks, and academia. These interviews did not include crisis-affected populations, whose opinions were solicited within the country research and the remote phone survey (discussed above). The team also sought respondents at different levels of the system and of the organisations and bodies outlined above, from senior leaders to those working at functional, operational or coordination levels in humanitarian programmes. The team also used a snowball approach, asking interviewees to recommend people who had differing views, who represented a particular aspect of a discussion, or who had specific technical or geographic expertise. In all, 103 people were interviewed for the SOHS and a further 14 provided information via their interviews for the annual meeting background paper. The breakdown of interviewees by type of agency is given in Table 9.

Interviews were semi-structured and were based on a protocol derived from the common study framework. Interviewees generally took a global, rather than an operation-specific, view of the performance of the system. Interviews were conducted in English.


The main constraints affecting the global-level key informant interviews related to sampling and to perceptions. First, the research team sought to interview a range of different informants based on email invitations; however, prospective interviewees were easier to contact and more likely to respond where personal connections and relationships existed. Second, as with the country data collection and survey data, the information was largely perception-based. Where relevant, the researchers asked informants to provide documentation substantiating statements, but that was not always possible or appropriate. The timing of some of the final interviews in early 2022 also caused some challenges: it was understandably difficult for people directly involved in the emerging Ukraine crisis to make time for interviews.

Organisational mapping and analysis

Humanitarian Outcomes (HO) collected and analysed data on the composition of the humanitarian system for this report, conducting a humanitarian organisational mapping and gathering operational statistics. The data focused on four areas:

  1. Global humanitarian resources
    •  Number and relative sizes of organisations
    • Organisations’ humanitarian expenditure
    • Global estimate of humanitarian personnel (total, nationals, and internationals)
    • Recent changes and trends
  2. Diversity and inclusion within the humanitarian system and organisations
    • Operational differences between types of organisations in compensation and pay scales, both globally and within response contexts
    • Diversity of board and senior management
    • Inclusion of national actors in international coordinating bodies
  3. Trends in insecurity
    • Numbers, types, and locations of major attacks on aid workers
    • Numbers and types of aid worker victims
    • Trends in means of violence and perpetrator groups
    • Attacks on healthcare facilities
  4. Operational presence in humanitarian response contexts
    • Numbers of organisations and personnel operating in emergencies
    • Extent of evenness or disparities in coverage (operational presence relative to numbers of people in need) within and across emergencies

For the above queries, Humanitarian Outcomes used a combination of its pre-existing datasets within its Global Database of Humanitarian Organisations (GDHO) (updated for the study period), the Aid Worker Security Database (AWSD), and external data sources.

Global humanitarian resources

For data on global humanitarian resources, the report used Humanitarian Outcomes’ Global Database of Humanitarian Organisations (GDHO). The GDHO compiles basic organisational and operational information on humanitarian providers, including international non-governmental organisations (grouped by federation), national NGOs that deliver aid within their own borders, UN humanitarian agencies, and the International Red Cross and Red Crescent Movement. All the organisations included in the database have responded to humanitarian needs in at least one emergency context, individually or in partnership with other organisations, even if their stated mission is not strictly humanitarian.

The GDHO research team populates the database by pulling information from public sources and through direct email queries to organisations that have participated in humanitarian response efforts (identified via UN and INGO partner lists, 3Ws and OCHA FTS data), and updates the figures each year. For each organisation they collect or impute:

  • Overall programme expenditure (including non-humanitarian but excluding HQ costs)
  • Humanitarian expenditure (either as a direct figure or as a percentage of overall programme expenditure)
  • Overall staff (non-HQ)
  • International staff numbers (if applicable)
  • National staff numbers
  • Humanitarian staff numbers (calculated as a percentage of overall staff according to the organisation’s humanitarian programming percentage)

Within this report, ‘humanitarian organisations’ are classified as not-for-profit operational organisations that provide material, technical, financial or coordination assistance to people affected by humanitarian crises. They include dual- or multi-mandated organisations for which humanitarian assistance is only a part of their remit, but do not include strictly development, religious, human rights or advocacy organisations that do not play an operational role in humanitarian response. Among these core actors in the humanitarian system, HO differentiate between:

  • UN entities: the members of the Inter-Agency Standing Committee (IASC) (FAO, OCHA, UNDP, UNFPA, UNHCR, UNICEF, UN-Habitat, WFP and WHO) plus IOM and UNRWA
  • International NGOs: NGOs that operate programmes in one or more countries outside of their national HQ
  • National NGOs: including national, local, and community-based organisations that participate in the organised humanitarian response via coordination, funding, or partnership
  • International Movement of the Red Cross and Red Crescent: including the ICRC, IFRC and 192 National Societies.

‘Aid workers’ are employees and associated personnel of humanitarian organisations, as defined above, working in humanitarian response contexts. ‘Humanitarian staff’ are classified as paid employees of humanitarian organisations undertaking humanitarian work (as opposed to development, political, advocacy or other non-humanitarian activities).

The GDHO records the actual published figures for most of the largest humanitarian organisations (which in turn account for the majority of humanitarian financial and staffing resources), but for smaller organisations, published data in the form of annual reports and financial statements becomes increasingly rare. For NGOs where there is only partial information available, the GDHO algorithm imputes missing data based on the organisation’s historical ratios (for example, budget/staffing, national/ international staff, and percent humanitarian expenditure). For organisations where there is no information, it imputes missing figures based on averages of other organisations within the same size tier. These imputations allow HO to estimate totals for the NGO sector. The imputation algorithm has recently been refined for NNGOs, adding a geographic layer. Missing data are now imputed using averages from other similarly sized organisations by subregion rather than global averages.
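The imputation logic described above can be sketched in a few lines. This is an illustrative reconstruction only, not the GDHO's actual code: the field names, the historical-ratio field and the fallback order are assumptions.

```python
# Illustrative sketch of GDHO-style imputation: prefer an organisation's own
# historical ratios, then fall back to averages of similarly sized peers in
# the same subregion. Field names and structure are hypothetical.

def impute_missing(org, peers):
    """Fill missing figures for one organisation.

    org:   dict of figures, with missing values as None
    peers: list of organisations in the same size tier and subregion
    """
    # 1. Prefer the organisation's own historical ratios where partial data exists
    #    (e.g. a hypothetical staff-per-$1m-budget ratio).
    if org.get("total_staff") is None and org.get("budget") is not None:
        staff_per_budget = org.get("hist_staff_per_budget")
        if staff_per_budget is not None:
            org["total_staff"] = round(org["budget"] / 1e6 * staff_per_budget)

    # 2. Otherwise fall back to subregional averages of peers in the same tier.
    for field in ("budget", "total_staff", "pct_humanitarian"):
        if org.get(field) is None and peers:
            values = [p[field] for p in peers if p.get(field) is not None]
            if values:
                org[field] = sum(values) / len(values)

    # Humanitarian staff derived from the humanitarian programming percentage,
    # as described in the list of collected fields above.
    if org.get("total_staff") is not None and org.get("pct_humanitarian") is not None:
        org["humanitarian_staff"] = round(org["total_staff"] * org["pct_humanitarian"])
    return org
```

In this sketch, an organisation with no data at all inherits the peer averages, which is how the tier totals for the NGO sector can still be estimated.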

To estimate global humanitarian staff, HO sum the humanitarian staff numbers of the UN agencies, the International Movement of the Red Cross and Red Crescent, and the international NGOs in size tiers 1–5 (i.e., with budgets of $2 million and up). To this HO add an estimate of staff for the smallest (tier 6) organisations, which are mostly single-country or local and not continually operational in humanitarian response year upon year.

Note that the estimation of National Red Cross and Red Crescent Society expenditure reflects a partial change in the methodology used for calculating total estimates. In the previous SOHS report, HO summed both the staff numbers and the expenditures of the National Societies (available from the IFRC Federation-wide Databank and Reporting System, or FDRS) in all but the high-income countries, on the reasoning that those countries are unlikely to require an international humanitarian intervention in response to crisis, and that their disproportionately large staff sizes would inflate the global estimate of humanitarian workers. This remains the most logical way to estimate the contribution of National Society staff to the global personnel figures (and we have kept this methodology for staffing estimates), but because it artificially excludes the financial contributions of the high-income country National Societies to international humanitarian response, HO have adopted the following method of estimating expenditures for this report, in consultation with the IFRC data office: total expenditures are summed for all National Societies, less the amounts that smaller National Societies receive from the larger ones, to reduce double counting. This leaves us with a higher expenditure estimate than we have used previously, but one that is more reflective of the Movement’s contributions. It comes with its own caveat, however, emphasised by the IFRC: this must still be treated as an estimate rather than a precise calculation since, unlike with income, FDRS does not disaggregate expenditure to record funds transferred between National Societies. In other words, some double counting may still get through.

The organisations identified as the top five largest non-governmental humanitarian actors are those with the largest humanitarian expenditure. This group therefore does not include some organisations that have larger overall budgets but devote less of them to humanitarian activities (i.e., activities variously termed disaster relief, crisis response or emergency response). Where organisations did not explicitly differentiate and quantify humanitarian expenditure, the team used their descriptions of programming areas to make this determination.

Finally, despite the lengthy and painstaking process used to calculate estimates that are as close to accurate as possible, it bears reminding that these global figures are still just that: estimates. While HO have actual figures for most of the largest organisations (which represent the bulk of humanitarian resources), for the majority of NNGOs and CBOs working in humanitarian response the reverse is true, as most do not publish their data or have a web presence. As explained above, the subregion tier averages are based on a small number of organisations in each location that have provided their data, applied across the organisations listed as operating in the region in partnership lists, 3Ws, and various fora and rosters. However, HO believe that these averages are roughly representative and directionally correct, and that the total estimates are as close as it is possible to get. They therefore repeat their general caveat from previous contributions to the SOHS that, ‘while the model used produces rigorous, systematic estimates for the organisational mapping, they are still just estimates, and should be considered and cited as such.’

Diversity and inclusion within the humanitarian system and organisations

For data on diversity and inclusion within the humanitarian system and organisations, HO used internal information contributed by a sample set of organisations that supported the SOHS study by providing operational data from their headquarters and from three response contexts: Afghanistan (or Iraq), Bangladesh, and South Sudan. The organisations provided their information via a standard questionnaire, the responses to which were aggregated for the report. The sample comprised six UN humanitarian entities, six international NGOs, and 33 national NGOs.

Trends in insecurity

For data on trends in insecurity, the report used data from three sources:

  • The Global Database of Humanitarian Organisations provided data to calculate aid worker attack rates and aid worker fatality rates.
  • The Aid Worker Security Database provided data on numbers of major attacks in present and prior periods; numbers and types of aid worker victims and outcomes; highest-incident contexts; and trends in tactics and perpetrators. The database includes reports of major attacks against aid workers from 1997 to the present. Major attacks were defined as incidents of violence in which one or more aid workers were killed, kidnapped or seriously injured. The database records: date, country and specific location; number of aid workers affected (victims); gender of victims; institutional affiliation of victims (UN/Red Cross/NGO/other); type of staff (national or international); outcome of the incident (victims killed/wounded/kidnapped); means of violence (e.g., shooting, IED, airstrike); and context of attack (ambush, armed incursion, etc.).
  • The Safeguarding Health in Conflict Coalition (SHCC) provided data on attacks against health care facilities. This data is collected for the SHCC by Insecurity Insights and shared through the Humanitarian Data Exchange. Data is available for years 2017–2020 in separate files. The data includes incidents reported by media, partners, and network organisations. It also includes data from the Aid Worker Security Database (AWSD) for global data from international aid agencies coordinating health care programmes; Airwars; the Union of Medical Care and Relief Organisations (UOSSM); the Syrian Network for Human Rights (SNHR) for data on Syria; the Civilian Impact Monitoring Project (CIMP) for data on Yemen; and the Armed Conflict Location and Event Data Project (ACLED). This data was used to estimate the number of health facilities destroyed or damaged in attacks.
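To illustrate how the GDHO staffing figures serve as denominators for the AWSD incident counts, attack and fatality rates can be expressed per 100,000 aid workers. The function below is a hypothetical sketch of that normalisation; the actual rate definitions used by Humanitarian Outcomes may differ, and the numbers in the example are invented.

```python
# Hypothetical sketch: incidents or victims per 100,000 aid workers,
# combining AWSD-style counts with GDHO-style staff estimates.

def rate_per_100k(victims: int, aid_worker_population: int) -> float:
    """Victims (or incidents) per 100,000 aid workers in a given period."""
    if aid_worker_population <= 0:
        raise ValueError("population must be positive")
    return victims / aid_worker_population * 100_000

# Invented example: 480 victims against an estimated 600,000 aid workers.
print(rate_per_100k(480, 600_000))  # 80.0
```

Normalising by the estimated aid worker population allows year-on-year comparisons even as the size of the workforce changes.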

Operational presence in humanitarian response contexts

Data on operational presence in humanitarian response contexts was compiled from four different sources.

  • OCHA’s Humanitarian Response Information records operational and emergency-specific data, including Humanitarian Response Plans and Humanitarian Needs Overviews. The report used this information to estimate: 3Ws data (organisations operating by national/subnational locations); numbers of people in need (by county/province); and numbers of people targeted.
  • OCHA’s Global Humanitarian Overview website summarises information on all humanitarian responses and includes summary tables for 2015–2021. In the report, this database provided information on funding requirements and people in need/targeted, and informed the context on comparative needs in and resources for responses.
  • The Humanitarian Data Exchange (HDX) was used to access all available Excel-formatted data from OCHA’s country-specific pages. The report used a sample for the presence mapping from Afghanistan, CAR, Iraq, South Sudan and Yemen.
  • Consultations with national humanitarian organisations and networks. 

Innovation and mortality

ALNAP commissioned original studies to assess the state of data and evidence on mortality in humanitarian settings and the impact of innovation funding in the humanitarian system over the past decade. The mortality work was conducted by Francesco Checchi at LSHTM and comprised a review of the availability of mortality information in activated humanitarian crises, an exploration of methodological options for estimation, and a quantitative study using generalised propensity score methods to estimate the impact of the presence of humanitarian assistance on excess mortality, using datasets from north-east Nigeria, Somalia and South Sudan.

The innovation study was conducted by Catherine Komuhangi, Hazel Mugo, Lydia Tanner and Ian Gray. The research began with a desk review that included 43 papers that highlighted the major events and trends that have shaped humanitarian innovation in the past decade. They also collated data from eight funders, via direct funder submissions and desk-based searches of publicly available monitoring and evaluation data. Four case studies were also produced and incorporated into the report. More methodological details for these components are provided in the individual study reports.


Synthesis of secondary data

Evaluation synthesis

The evaluation synthesis is designed to condense and synthesise findings from the large number of evaluations conducted throughout the international humanitarian system each year, revealing a broader picture of overall system level performance. It summarises and highlights findings of evaluations undertaken between January 2018 and October 2021.

ALNAP’s M&E research team compiled documents from the ALNAP database of evaluations (HELP library), as well as other public sources, and recorded the findings for each using a specific analysis framework designed for the 2022 evaluation synthesis. The framework retained the same basic structure as the analysis matrix used in the SOHS 2018 report, including a rating system to weight evaluations for inclusion based on evaluation quality and relevance. The evaluation synthesis for the 2022 SOHS added an additional dimension by also weighting evaluations for inclusion with a greater emphasis on the scope of the evaluation and the ‘generalisability’ of its findings. The 2022 SOHS report also had a greater focus on specific thematic evaluation analysis, grouping select evaluations further into ‘thematic evaluation clusters’ using purposive sampling. The thematic clusters were chosen based on the overall parameters and emerging thematic topics judged most relevant for the humanitarian community and the SOHS 2022 report, using an iterative design approach. Thematic evaluation clusters included:

  • COVID-19 & humanitarian assistance
  • The HDP nexus
  • Innovation
  • Localisation of humanitarian action and barriers to localisation, etc.

Although the analysis was mostly qualitative, the framework helped to ensure the greatest possible degree of comparability across the findings, while clearly defining the thematic areas for purposive sampling.

The evaluation synthesis included the following two steps:

Step 1

Categorising and coding the (mostly qualitative) findings and recommendations from a selected purposive sample of evaluation reports reviewed in an evaluation synthesis analytical framework. The framework and evaluation scoring system built on the protocol used for the evaluation syntheses in the 2015 and 2018 SOHS reports, and was also informed by UNEG, OECD DAC and other guidance on evaluation quality. The scoring system included the following fields:


  • ID#
  • evaluation title
  • year
  • evaluator
  • published/unpublished
  • quality score
  • commissioning agency
  • evaluation type
  • theme, sector, category, etc.
  • scope and timeframe
  • subject area

Scoring criteria

  • evaluation quality
  • relevance of the evaluation to key topics in the SOHS 2022
  • scope of the evaluation & generalisability

Thematic topics

  • COVID-19 & humanitarian assistance
  • The HDP nexus
  • Food insecurity & famine
  • Forced displacement
  • Localisation of humanitarian action and barriers to localisation, etc.

Over 500 humanitarian evaluations were reviewed for inclusion in the 2022 SOHS analysis. The main selection of evaluations took into consideration three elements: 1) the scoring criteria (i.e., the average score across the three criteria); 2) whether evaluations looked at the defined thematic areas, particularly the HDP nexus and COVID-19, which were prioritised; and 3) how representative the evaluations were of the ALNAP membership (a balance between evaluations from UN, NGO, bilateral donor, Red Cross and other sources). These three considerations were used to rank the evaluations to be coded by priority level. Coding continued on an ongoing basis until approximately 130 evaluations were coded (saturation reached). The ranking system also allowed the team to continue to add and score additional evaluations uploaded to ALNAP’s HELP library database towards the end of the research period.

Step 2:

Synthesis of findings against each indicator in the SOHS Study Matrix/analysis framework. The synthesis findings were presented in a structure based on the analytical framework drawn from the evaluations’ key components (findings, conclusions, recommendations). The synthesis analysis took into account the strength of evidence for each finding on the basis of number, breadth and quality of evaluations supporting it.

The ALNAP team’s work included the following:

  1. Evaluation synthesis analytical framework
  2. Complete coding framework for MaxQDA (including emerging codes)
  3. Evaluation selection criteria, scoring matrix and method for collecting evaluations
  4. Two-page mid-point summary and short PowerPoint of findings to date
  5. Thematic analysis on a wide variety of key themes included in the SOHS 2022
  6. Final evaluation synthesis analysis (Word document following outline of 2022 SOHS)
  7. Findings and key messages organised according to the main research framework and questions, to provide rich analysis in all areas of the SOHS
  8. Standalone evaluation synthesis publications on COVID-19 and the HDP nexus (forthcoming)

Each evaluation included in the Evaluation Scoring Matrix for potential inclusion in the evaluation synthesis was given an evidence score. This was on a scale of 1–3, with 3 representing the strongest evidence. The scores were based on the judgement of the researchers against three parameters, each with its own criteria:

  • Evidence depth and relevance: the depth and extent of relevant analysis in the report (‘relevant’ here means that it relates to the themes highlighted in the coding system – see below). The related criteria include whether the work appears to add significantly to the existing evidence base. The score also relates to the subject and extent to which the evaluation covers key issues to be highlighted in the 2022 SOHS report.
  • Evidence quality: the quality of the analysis and the related evidence base. Here we will consider, in particular, how well argued and evidenced the evaluations are, and the rigour of the methods and approach, amount of data, triangulation and other quality parameters (see quality scoring note).
  • Evaluation scope: multi-sectoral, joint evaluations, response-wide evaluations, policy evaluations and other forms of evaluation that cover a wider variety and number of topics and higher-order topics (i.e., evaluations with greater potential for generalisability) will be weighted more heavily.

Each of the three parameters was scored 1–3, with the overall value score being the average of the three scores. Evaluations with the highest scores were prioritised for inclusion in the purposive sample and were coded against the SOHS indicator framework in MAXQDA.
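The scoring and ranking step can be expressed as a simple average over the three 1–3 parameter scores. The sketch below is illustrative only; the field names are assumptions, not the research team's actual tooling.

```python
# Illustrative sketch of the 1-3 evidence scoring and ranking described above.

def overall_score(depth: int, quality: int, scope: int) -> float:
    """Overall value score: the average of the three 1-3 parameter scores."""
    for s in (depth, quality, scope):
        if not 1 <= s <= 3:
            raise ValueError("each parameter is scored 1-3")
    return (depth + quality + scope) / 3

def rank_for_coding(evaluations):
    """Sort evaluations (dicts holding the three scores) highest-scoring first."""
    return sorted(
        evaluations,
        key=lambda e: overall_score(e["depth"], e["quality"], e["scope"]),
        reverse=True,
    )
```

For example, an evaluation scored 3, 2 and 1 on the three parameters receives an overall score of 2.0 and is ranked accordingly against the rest of the sample.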
Following the general selection of evaluations for coding against the SOHS 2022 analytical framework, selected evaluations were clustered into thematic evaluation clusters for the purpose of further analysis. The thematic clusters allowed the research team to further explore key topics in more depth. As in the previous SOHS report, the process was both inductive and iterative.


Attempts to conduct a comparative review of evidence from humanitarian evaluations across the sector are hampered by several factors. One is the variability in the object of evaluation: most of the evaluation material is response- and organisation-specific. Related to this is the difficulty of controlling for contextual variables. A third factor is the variability in the methods of investigation adopted in the evaluations, and the way in which results are recorded. Most of the available evidence is qualitative; where quantitative results are available, the factors noted above tend to make comparison difficult or impossible. Finally, as noted in the 2018 SOHS report, the sample is likely to be biased towards particular contexts, and some types of organisations may tend to have evaluations that score consistently higher on quality than others, making representative sampling based on objective criteria challenging. The time lag in conducting evaluations is also a challenge, which is particularly marked in relation to COVID-19. This relates to the ongoing humanitarian impact of COVID-19, both on humanitarian response and on the ability of organisations to continue to conduct high-quality evaluations. Due to this, ALNAP’s M&E team continued to code COVID-19 evaluations beyond the formal study period and continued analysis of these evaluations to capture emerging evidence that could be fed back into the SOHS 2022 report immediately before the final publication deadline.

Financial analysis

The analyses on humanitarian funding and people in need were compiled by Development Initiatives (DI). Figures in the report are presented with methodological points where they are important to understanding the analysis, and a more complete methodology outlining the process and relevant caveats is presented in this Annex for all figures where some form of original calculation or interpretation was required.

Total international humanitarian assistance

DI’s calculation of total international humanitarian assistance (IHA) is the sum of that from private donors and from government donors and EU institutions. Total IHA for governments and EU institutions is compiled using DI’s approach developed for the Global Humanitarian Assistance report, which takes the sum of:

  • ‘official’ humanitarian assistance (OECD DAC donors)
  • international humanitarian assistance from OECD DAC donors to countries not eligible for ODA from the FTS
  • international humanitarian assistance from government donors outside the OECD DAC using data from the FTS

DI’s ‘official’ humanitarian assistance calculation comprises:

  • the bilateral humanitarian expenditure of OECD DAC members, as reported to the OECD DAC database
  • the multilateral humanitarian assistance of OECD DAC members.

The multilateral humanitarian assistance of OECD DAC members consists of three elements.

  • The unearmarked ODA contributions of DAC members to 10 key multilateral agencies engaged in humanitarian response: the Food and Agriculture Organization, IOM, the UN Development Programme, UNFPA, UNHCR, UN OCHA, UNICEF, UNRWA, WFP and WHO, as reported to the OECD DAC and the CRS. We do not include all ODA to the Food and Agriculture Organization, IOM, the UN Development Programme, UNFPA, UNICEF, WHO and WFP but apply a percentage to take into account that these agencies also have a ‘development’ mandate. These shares are calculated using data on humanitarian expenditure as a proportion of the total received directly from each multilateral agency.
  • The ODA contributions of DAC members to some other multilateral organisations (beyond those already listed) that, although not primarily humanitarian-oriented, do report a level of humanitarian aid to OECD DAC. DI does not include all reported ODA to these multilateral organisations but just the humanitarian share of this.
  • Contributions to the UN Central Emergency Response Fund that are not reported under DAC members’ bilateral humanitarian assistance. DI takes this data directly from the UN Central Emergency Response Fund website.
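The multilateral element of the calculation amounts to applying an agency-specific humanitarian share to each unearmarked ODA contribution. A minimal sketch follows; the share values are placeholders, not DI's actual coefficients, which DI derives from each agency's own expenditure data.

```python
# Sketch of the multilateral humanitarian assistance calculation described
# above. The humanitarian shares are placeholder values for illustration.

HUMANITARIAN_SHARE = {
    "UNHCR": 1.0,    # primarily humanitarian mandate: counted in full
    "UNICEF": 0.4,   # placeholder share for a dual-mandate agency
    "FAO": 0.2,      # placeholder share
}

def multilateral_ha(contributions: dict) -> float:
    """Sum unearmarked ODA contributions weighted by each agency's
    humanitarian share; agencies with no share are excluded."""
    return sum(
        amount * HUMANITARIAN_SHARE.get(agency, 0.0)
        for agency, amount in contributions.items()
    )
```

So a donor reporting $100m of unearmarked ODA to UNHCR and $50m to UNICEF would, under these placeholder shares, count $120m towards multilateral humanitarian assistance.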

When reporting on the official humanitarian assistance of individual OECD DAC countries that contribute to the EU budget, an imputed calculation of their humanitarian assistance channelled through the EU institutions is included, based on their ODA contributions to the EU institutions. DI does not include this in total international humanitarian assistance and response calculations to avoid double counting.

DI’s estimate for IHA from governments in 2021 is derived from preliminary DAC donor reporting on humanitarian aid grants and multilateral ODA.

IHA by recipient country is calculated based on FTS data to be able to also analyse 2021 data, which will become available in the OECD DAC CRS in December 2022 or later. FTS data was downloaded 13 April 2022.

Private humanitarian funding

DI requests financial information directly from humanitarian delivery agencies (including NGOs, multilateral agencies and the Red Cross and Red Crescent Movement) on their income and expenditure to create a standardised dataset. Where direct data collection is not possible, DI uses publicly available annual reports and audited accounts. For the most recent year, the dataset includes:

  • a large sample of NGOs that form part of representative NGO alliances and umbrella organisations such as Save the Children International, and several large international NGOs operating independently
  • private contributions to IOM, UNHCR, UNICEF, UNRWA, WFP and WHO
  • the International Federation of Red Cross and Red Crescent Societies and the International Committee of the Red Cross.

DI’s private funding calculation comprises an estimate of total private humanitarian income for all NGOs, and the private humanitarian income reported by UN agencies with available data, the International Federation of Red Cross and Red Crescent Societies and the International Committee of the Red Cross. To estimate the total private humanitarian income of NGOs globally, DI calculates the annual proportion of total funding received that the NGOs in DI’s dataset represent of NGOs reporting to UN OCHA FTS. The total private humanitarian income reported to DI by the NGOs in DI’s dataset is then scaled up accordingly.

Data is collected annually, and new data for previous years may be added retrospectively. Due to limited data availability, detailed analysis, for instance on the source of funding, covers the period 2016 to 2020.

DI’s 2021 private funding calculation is an estimate based on data on eight organisations that, combined, receive a large share of global private humanitarian funding year on year, pending data from DI’s full dataset. These are: Médecins Sans Frontières, Plan International, Catholic Relief Services, the International Federation of Red Cross and Red Crescent Societies, the Danish Refugee Council, UNHCR, American Near East Refugee Aid and World Relief. DI calculates the average share that these eight organisations’ contributions represent in the private funding figure for the five previous years (50%, ranging between 47% and 52% over 2016–2020) and uses this to scale up the private funding figure gathered from these eight organisations to arrive at an estimated total for 2021.
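The scale-up is straightforward arithmetic: if the eight organisations have historically accounted for roughly 50% of global private humanitarian funding, the global total is estimated by dividing their combined figure by that share. A sketch, with an invented sample figure:

```python
def estimate_total_private_funding(sample_total: float, sample_share: float) -> float:
    """Scale a sample's combined funding up to a global estimate.

    sample_share is the fraction of global private funding the sample has
    historically represented (roughly 0.50 for DI's eight organisations).
    """
    if not 0 < sample_share <= 1:
        raise ValueError("share must be in (0, 1]")
    return sample_total / sample_share

# Invented example: if the eight organisations together report $3.1bn and
# their historical share is 50%, the estimated global total is $6.2bn.
print(estimate_total_private_funding(3.1e9, 0.50))  # 6200000000.0
```

The same logic underlies the broader NGO estimate described above, with the share instead derived from the proportion of FTS-reported NGO funding covered by DI's dataset.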

ODA funding from Multilateral Development Banks to humanitarian recipients

MDBs include the following organisations, which report to the OECD DAC CRS:

  • African Development Bank
  • African Development Fund
  • Asian Development Bank
  • Asian Infrastructure Investment Bank
  • Caribbean Development Bank
  • Council of Europe Development Bank
  • Development Bank of Latin America
  • Inter-American Development Bank
  • Islamic Development Bank
  • European Bank for Reconstruction and Development
  • International Investment Bank
  • IDB Invest
  • International Development Association
  • International Bank for Reconstruction and Development
  • International Finance Corporation

The 20 largest humanitarian recipients each year are identified based on total ODA disbursements by country across all flows under humanitarian purpose codes.
Note that the figures are total ODA disbursements across all sectors and only include funding reported to the OECD DAC Creditor Reporting System (CRS). Some MDBs do not report to the CRS, others report only partially, and some, such as the EBRD, also report their financial contributions as other official flows (OOFs), a flow type excluded from this analysis due to its lack of concessionality. The majority of ODA from MDBs to countries experiencing crisis is reported as development assistance and is not assigned to humanitarian sector codes. This means that funding in this figure cannot be called ‘humanitarian’.
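The ranking rule can be sketched as below; countries and amounts are hypothetical, and the real analysis draws on the OECD DAC CRS rather than a hard-coded dictionary.

```python
# Sketch of identifying the largest humanitarian recipients in a year:
# rank countries by total ODA disbursements under humanitarian purpose
# codes and keep the top 20. Countries and amounts are invented.

disbursements = {  # country -> humanitarian-coded ODA, US$ millions
    "Yemen": 180.0,
    "Ethiopia": 150.0,
    "Afghanistan": 140.0,
    "Somalia": 90.0,
}

# Sort country names by disbursement value, largest first, and truncate.
top_recipients = sorted(disbursements, key=disbursements.get, reverse=True)[:20]
print(top_recipients)  # ['Yemen', 'Ethiopia', 'Afghanistan', 'Somalia']
```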

Overall requirements and funding for UN-coordinated appeals

The term ‘UN-coordinated appeals’ is used to describe all humanitarian response plans and appeals wholly or jointly coordinated by UN OCHA or UNHCR, including strategic response plans, humanitarian response plans, flash appeals, joint response plans, regional refugee response plans and other plans also tracked by FTS.

Data is in current prices to accurately present the relationship of funding against requirements each year and over time.

Appeals funding and requirements data were extracted from the UNOCHA FTS, the UNHCR Refugee Funding Tracker and 3RP dashboards:

  • 3RP funding dashboards and annual reports were used for the Syria Regional Refugee and Resilience Plan (3RP), where available (2018–2021).
  • UNHCR Refugee Funding Tracker data was used for all other RRPs and for the Syria 3RP prior to 2018.
  • FTS data was used for all other response plans, including HRPs, flash appeals, Bangladesh Joint Response Plans, Venezuela RMRPs and other plans.

Requirements and funding per technical sector in UN-coordinated appeals

Total funding is based on funding flows by sector to appeals using data from UNOCHA FTS. Total requirements are based on appeal requirements by sector also using data from UNOCHA FTS.

Data is in current prices to accurately present the relationship of funding against requirements each year and over time.

For funding flows reported against multiple clusters, DI split the amount evenly across the reported clusters. This is a simplifying assumption, as the breakdown of funding across the multiple clusters is not provided in the reporting; FTS opts not to split such funding in the absence of that information. However, in 2021 around US$2.9 billion in funding was directed to multiple clusters, so excluding all of it would underrepresent the proportion of requirements met.
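DI's even-split assumption can be illustrated as follows; flow amounts and cluster names are hypothetical.

```python
# Sketch of splitting a multi-cluster funding flow evenly across its
# reported clusters, in the absence of a reported breakdown (US$).

flows = [
    {"amount": 9_000_000, "clusters": ["Health", "WASH", "Nutrition"]},
    {"amount": 4_000_000, "clusters": ["Health"]},
]

cluster_totals = {}
for flow in flows:
    # No breakdown is reported, so assume an even split across clusters.
    share = flow["amount"] / len(flow["clusters"])
    for cluster in flow["clusters"]:
        cluster_totals[cluster] = cluster_totals.get(cluster, 0.0) + share

print(cluster_totals)
# {'Health': 7000000.0, 'WASH': 3000000.0, 'Nutrition': 3000000.0}
```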

Non-standard sectors are aligned to UNOCHA global clusters based on DI mapping. This is to avoid inconsistencies such as the same field cluster being mapped to different global clusters across response plans. Because technical sectors are aligned to FTS’s global clusters based on DI mapping, totals do not match FTS overview figures.

Total funding for cash and voucher assistance

The global estimate of humanitarian assistance provided in the form of cash and voucher assistance (CVA) in 2021 is based on data collected from 27 organisations that implement humanitarian CVA. The data collection is carried out by DI with support from the CALP Network. Data is collected on:

  • Overall programming costs of implementing CVA, including transfer values
  • Transfer values of CVA, disaggregated by cash and voucher assistance if possible
  • Sub-grants provided or received from other implementing agencies for CVA. This information is used to avoid double-counting.

The survey data is complemented with data from FTS for organisations that did not respond to the survey, including all funding to projects that mostly or largely consist of CVA as per their description.

To calculate an approximate estimate of the percentage of total IHA provided as humanitarian cash and voucher assistance for 2018–2021, DI took the total global value of humanitarian CVA overall programming costs that forms part of the total international humanitarian response and divided it by the total IHA provided by public and private donors.
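A minimal sketch of this division, using invented figures:

```python
# Illustrative calculation of the CVA share of total international
# humanitarian assistance (IHA). Both figures are hypothetical (US$ billions).

cva_programming_costs = 6.7   # global CVA overall programming costs
total_iha = 31.3              # total IHA from public and private donors

cva_share = cva_programming_costs / total_iha
print(f"{cva_share:.1%}")  # 21.4%
```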

Populations in need, targeted and expected number reached in UN-coordinated appeals

UNOCHA’s HPC API was scraped for caseload data for all appeals with available data.

It should be noted that:

  • Data are for HRPs only due to data availability. A limited number of HRPs are missing caseload data before 2021.
  • Data for RRPs and Flash Appeals are excluded due to limited data availability for numbers of people targeted and expected reached for these appeal types.
  • Gender, age, and other disaggregations are not available due to lack of data consistency and availability across appeal data. Just 5 appeals in 2021 have some level of caseload disaggregation available as of early 2022, and these disaggregations are not standardised.
  • Expected reached caseload data for El Salvador, Honduras, and Guatemala are not final, and are not shown or included in calculations.

Funding versus requirements per person reached

UNOCHA’s HPC API was scraped for caseload data for all appeals with available data.

Funding and requirements for HRPs were scraped from UNOCHA’s FTS.

It should be noted that:

  • Data are for HRPs only due to data availability. A limited number of HRPs are missing complete caseload data in 2021.
  • Expected reached caseload data for El Salvador, Honduras, and Guatemala are not final, and are not shown or included in calculations.
  • Data for RRPs and Flash Appeals are excluded due to limited data availability for numbers of people targeted and expected reached for these appeal types. In addition, not all RRPs are hosted by UNOCHA.
  • Data on the number of persons expected to be reached does not exist before 2020.
  • Gender, age, and other disaggregations are not available due to lack of data consistency and availability across appeal data. Just 5 appeals in 2021 have some level of caseload disaggregation available as of early 2022, and these disaggregations are not standardised.

Earmarked funding

‘Earmarked’ funding comprises all non-core (‘other’) funding directed to multilateral organisations. Unearmarked funding may include softly earmarked contributions where this data was provided, for instance by region, to better reflect progress against the Grand Bargain commitment of providing more unearmarked and softly earmarked funding. DI’s definitions of different levels of earmarking used in the data collection reflect those in the annex of the Grand Bargain document.3

DI’s calculation of earmarking to nine UN agencies – the Food and Agriculture Organization, the International Organization for Migration (IOM), the UN Children’s Fund (UNICEF), the UN Development Programme, UNHCR, UN OCHA, UNRWA, the World Food Programme (WFP) and the World Health Organization (WHO) – is primarily based on data provided directly to DI by each agency from its internal reporting or extracted from annual reports.

Funding to local and national actors

DI’s analysis of international humanitarian funding to local and national actors draws on data from FTS and from UN OCHA’s CBPF Data Hub. FTS data is coded by DI according to the set of organisational categories provided below; CBPF data uses the funds’ own classifications of recipients, which may differ from these definitions. DI’s coding process relies on the following categories of local and national non-state actors and national and subnational state actors, as defined by the Inter-Agency Standing Committee Humanitarian Financing Task Team in its Localisation Marker Definitions Paper.4

  • National NGOs/civil society organisations (CSOs): NGOs/CSOs operating nationally in the aid-recipient country in which they are headquartered, working in multiple subnational regions, and not affiliated to an international NGO. This category can also include national faith-based organisations.
  • Local NGOs/CSOs: NGOs/CSOs operating in a specific, geographically defined, subnational area of an aid-recipient country, without affiliation to an international NGO/CSO. This category can also include community-based organisations and local faith-based organisations.
  • Red Cross/Red Crescent National Societies: national societies based in and operating within their own aid-recipient countries.
  • Local and national private sector organisations: organisations run by private individuals or groups as a means of enterprise for profit, based in and operating within their own aid-recipient countries and not affiliated to an international private sector organisation.
  • National governments: national government agencies, authorities, line ministries and state-owned institutions in aid-recipient countries, such as national disaster management agencies. This category can also include federal or regional government authorities.
  • Local governments: subnational government entities in aid-recipient countries exercising some degree of devolved authority over a specifically defined geographic constituency, such as local/municipal authorities.

Direct funding to the IFRC, ICRC and national societies operating internationally is recorded as funding to the ‘international RCRC movement’, as DI was unable to trace how funding would be shared between those actors and domestic national societies.

Depending on the analysis in this report, the emphasis is on direct funding only or on both direct and indirect funding, which are defined as follows:

  • Direct funding includes all funding to local and national actors directly from the original donor entity, e.g., governments and private donors. It therefore only draws on FTS data that is marked as ‘new money’ to the humanitarian system and represents first-level funding.
  • Indirect funding includes all funding to local and national actors from intermediaries, which could be pooled funds, NGOs, UN agencies or other institutions in receipt of humanitarian funding. It draws on allocations data from CBPFs of funding provided by those funds or as sub-grants under projects funded by CBPFs and on indirect funding as reported to FTS.

When combining data from CBPFs Data Hub and FTS, all CBPF allocations reported to FTS are excluded to avoid double-counting.
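The double-counting rule can be illustrated as below; the record structure and IDs are hypothetical, since the text does not specify how CBPF allocations are matched to FTS flows.

```python
# Sketch of avoiding double-counting when combining FTS flows with CBPF
# allocations: any CBPF allocation already reported to FTS is dropped from
# the FTS side. Record structures and IDs are invented.

fts_flows = [
    {"id": "fts-1", "amount": 2_000_000, "source": "Government donor"},
    {"id": "fts-2", "amount": 500_000, "source": "CBPF"},  # also in CBPF data
]
cbpf_allocations = [
    {"id": "cbpf-9", "amount": 500_000, "fts_id": "fts-2"},
    {"id": "cbpf-10", "amount": 300_000, "fts_id": None},  # not on FTS
]

# Exclude FTS flows that duplicate CBPF allocations.
reported_to_fts = {a["fts_id"] for a in cbpf_allocations if a["fts_id"]}
deduplicated_fts = [f for f in fts_flows if f["id"] not in reported_to_fts]

total = sum(f["amount"] for f in deduplicated_fts) + sum(
    a["amount"] for a in cbpf_allocations
)
print(total)  # 2800000
```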

It should be noted that, with the exception of CBPFs, timely, comprehensive and disaggregated reporting to FTS or other publicly accessible databases on indirect funding to local and national actors continues to be lacking. Improved reporting by UN agencies in particular, and by INGOs, of the funding they provide to local and national actors would greatly strengthen analysis of progress on localisation in the humanitarian system. This inconsistent reporting makes it difficult to estimate the exact volume of indirect flows and therefore to fully understand the balance of direct versus indirect funding to local and national actors.

Funding sources outside IHA

Government revenues for Bangladesh and Ethiopia are provided in terms of their fiscal years. Government revenue data for Bangladesh for 2019 is preliminary and for 2020 is an estimate, while data for DRC, Ethiopia, Yemen and Lebanon includes estimates from IMF staff and national authorities.

ODA towards humanitarian aid comprises gross ODA disbursements towards sector ‘700 Humanitarian Aid’ from ‘Official Donors’ as reported in the OECD CRS.

ODA excluding humanitarian aid comprises gross ODA disbursements towards all ODA sectors from ‘Official Donors’ as reported in the OECD CRS, excluding ODA towards sector ‘700: Humanitarian Aid’.

Peacekeeping financing values for DRC and Lebanon for the missions UNIFIL (United Nations Interim Force in Lebanon) and MONUSCO (United Nations Organization Stabilization Mission in the Democratic Republic of the Congo) have been transformed from financial year (July–June) to calendar year. Peacekeeping financing values for Lebanon consist of the combined values for the peacekeeping operation UNIFIL and the special political mission UNSCOL (United Nations Special Coordinator for Lebanon), while those for Yemen comprise the combined values for UNMHA (UN Mission to Support the Hudaydah Agreement) and OSESGY (Office of the Special Envoy of the Secretary-General for Yemen). There is a possibility of double counting of shared costs across the two missions in each of the two countries.
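The fiscal-to-calendar-year transformation can be sketched as follows, assuming an even six-month split between the two overlapping fiscal years (the exact weighting used is not specified in the text); figures are invented.

```python
# Sketch of converting July-June fiscal-year values to calendar years by
# pro-rating: calendar year Y combines half of the fiscal year ending in
# June of Y and half of the fiscal year starting in July of Y.
# Values are hypothetical, in US$ millions.

# Fiscal years keyed by the calendar year in which they start (July).
fiscal_year_values = {2018: 100.0, 2019: 120.0, 2020: 140.0}

def to_calendar_year(year: int) -> float:
    """Combine the second half of FY(year-1) and the first half of FY(year)."""
    return 0.5 * fiscal_year_values[year - 1] + 0.5 * fiscal_year_values[year]

print(to_calendar_year(2019))  # 110.0
print(to_calendar_year(2020))  # 130.0
```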

Data on remittances for Yemen is not available.

All data is in USD and constant 2020 prices. For Lebanon the deflator used for non-grant government revenue and remittances is based on the DEC effective exchange rate. Numbers have been rounded to the nearest tenth.

Literature review

Unlike in previous editions, the literature review was designed as a core research component from the beginning of the study and covered a broad range of topics. For the 2018 edition, the literature review was used mainly to provide information on a small number of specific areas not fully captured by other means in the Study Matrix, after data collection for other components had commenced or finished. In this 5th edition, the literature review was used to gather documented evidence on a broad range of topics across the study matrix that would not be included in the evaluation synthesis, in recognition that different types of evidence can shed light on those topics.

Selection of literature to review

The literature review focused on a broad range of topics identified at the inception phase and others added after the final data meeting following the identification of areas where hypotheses required further testing with additional evidence. Topics included:

  • coordination
  • counter-terrorism and corruption
  • cash transfers and vouchers
  • efficiency
  • global shifts and crises
  • health system support
  • humanitarian access, international law and humanitarian principles
  • impact of humanitarian system
  • inclusion
  • international development and crisis prevention
  • locally led humanitarian action
  • needs assessments
  • other forms of humanitarian assistance
  • protection
  • prevention of sexual exploitation and abuse

The reviewer from the ALNAP team constructed search strings for each of the topics in collaboration with the research co-leads. They used these to search several search engines, including Google Scholar, the ALNAP HELP Library and EBSCO. Search results were limited to work published in 2018–2022 and concerning current or recent humanitarian responses. Evaluations were excluded, but both grey and academic literature were included. The literature review component began in Spring 2021, with the bulk of the topics covered before Autumn 2021 and the final topics, identified during the component analysis meeting, completed in Spring 2022.

In addition to the search engines, the reviewer also searched relevant policy and practice websites, including:

  • Humanitarian Policy Group and Humanitarian Practice Network
  • Groupe URD
  • Feinstein International Center (Tufts)
  • Refugee Studies Centre
  • Chatham House
  • OCHA
  • IRIN
  • Other relevant publications, including NGO policy papers

From the search process described above, the researchers identified around 20 key sources to inform a thematic synthesis on the list of topics above. Where relevant, the reviewer also conducted forward and backward citation searches of included documents. In total, over 250 documents were reviewed.

Analysis of literature

The synthesis process involved several elements:

  1. Collating the material according to related findings on common topics.
  2. Identifying findings that appeared to be broadly common across a range of evidence from the source material.
  3. Identifying gaps and contradictions.

Constraints and limitations

The nature and availability of the evidence on different topics was variable, although the reviewer attempted to focus on research with high quality methods and considered academic peer review data where it was available and relevant. As with most literature review processes, there was also scope for reviewer bias in assessing the quality of evidence or determining the most useful information to include.

Synthesis of data from components

Using approaches common to mixed method studies,5 the SOHS research team identified general trends and findings through frequency, weighted by quality. For example, evaluations were assessed and included or excluded in the synthesis on the basis of their quality, and claims made by key informants in country-level research were triangulated with other perspectives. At the level of each research component, research leads identified findings by the volume of data points – for example, findings supported by a minimum number of evaluations or KIIs.

The SOHS research team then synthesised the findings from each component, prioritising those that are supported by two or more components – in cases where findings from separate components contradict one another, more follow-up and investigation was carried out to understand the reason for the discrepancy. 

To synthesise such a large volume of variable data, the SOHS used a shared coding framework across the research components, and employed hypothesis testing and an iterative approach in the analysis process. For the latter, ALNAP organised meetings throughout the data collection and analysis process, where emerging data was shared and gaps identified – through this, hypotheses were developed and further data collection was targeted to confirm or disconfirm these.
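A toy sketch of the prioritisation rule described above; component names, finding labels and verdicts are all invented.

```python
# Sketch of cross-component synthesis: prioritise findings supported by two
# or more research components; flag contradictions for follow-up.

findings = {
    "localisation-funding-static": {"evaluations": "supports", "kiis": "supports"},
    "aap-improved": {"evaluations": "supports", "survey": "contradicts"},
    "access-worsened": {"kiis": "supports"},  # single component only
}

prioritised, flagged = [], []
for name, components in findings.items():
    verdicts = list(components.values())
    if "supports" in verdicts and "contradicts" in verdicts:
        flagged.append(name)      # contradiction: needs follow-up investigation
    elif verdicts.count("supports") >= 2:
        prioritised.append(name)  # supported by two or more components

print(prioritised)  # ['localisation-funding-static']
print(flagged)      # ['aap-improved']
```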

Constraints and limitations

A cross-cutting constraint for the entire report is that identifying general trends and findings for so many humanitarian responses over a four-year period is inherently challenging, particularly given the absence of samples that can truly be considered ‘representative’, rather than illustrative, of the entirety of humanitarian action. Even when using a shared indicator framework, it is difficult to avoid the problem of data comparability that is common to mixed method approaches.6  

The foreword for the 2010 SOHS pilot study noted that ‘Almost as important as what the report says, is what it does not say’.7 Pervasive data gaps continue to limit this report’s ability to provide clear, definitive assessments on key performance issues – such as how many people are reached with humanitarian assistance each year, the degree to which needs are covered, whether humanitarian action saves lives and protects people from harm, or how cost-effective programmes and mechanisms are. For this edition of the report ALNAP went to greater lengths than previously to locate or generate this data, but it is clear that addressing these gaps requires more resources and effort than can be achieved for a single research project, even one as long-running and large in scope as the SOHS. 

There have been repeated calls on the system to improve its evidence base, in each edition of the SOHS as well as by many others in the system.8 The significant burdens stretching limited humanitarian funding described in this report are likely to mean that knowledge production, monitoring and evaluation and improving the quality and accessibility of data will continue to be deprioritised. This is at the system’s own peril. Better evidence could not only guide more effective improvements to performance, but also help to demonstrate the system’s value in the context of a potentially contracting global economy and the rising costs of conflicts and disasters.


International Telecommunication Union (ITU) World Telecommunication/ICT Indicators Database for subscribers in 2020.

Although it was more challenging in Lebanon to get younger people to respond to the survey than older people.

Grand Bargain, 2016. The Grand Bargain – a shared commitment to better serve people in need. Available at:


Inter-Agency Standing Committee Humanitarian Financing Task Team, 2018. HFTT localisation marker definitions paper 24. Available at:


Heyvaert et al. (2013). Mixed methods research synthesis: definition, framework, and potential; Yin (2013). Case Study Research: Design and Methods, 5th edition.


Heyvaert et al. (2013).


ALNAP (2010). The State of the Humanitarian System: Assessing Performance and Progress. A Pilot Study. ALNAP/ODI, p. 5.


E.g. Carden, F., Hanley, T. and Paterson, A. (2021). From knowing to doing: evidence use in the humanitarian sector. London: Elrha.