Evaluation of Humanitarian Action (EHA)
Project cycle management
Evaluation is part of Project Cycle Management, which consists of the following phases: Assessment - Design - Appraisal - Implementation - Monitoring - Completion - Evaluation. In this cycle, assessment, monitoring and evaluation activities each flow logically one into the other. Assessment is the basis for programming, while monitoring and evaluation analyse the outcomes. Although their objectives are similar, different people with different responsibilities might perform the stages, or scope the activities, slightly differently. Programme managers and decision-makers should view all of these activities as part of an integrated approach to information collection and processing, intended to improve the quality of services offered to members of the affected population. There are important differences between the project cycle adopted in a non-crisis context and the one that develops in crisis or unstable situations.
Purposes of evaluation of humanitarian action
An emphasis on learning in an evaluation helps organisations understand why particular aid activities are more or less successful; the aim is to improve future performance. Timeliness is an important factor, since the need and willingness exist to pass on lessons now, not in a year’s time when the final report is eventually published. An emphasis on accountability rests on the duty to provide an account (by no means necessarily a financial account) or reckoning of those actions for which one is held responsible. “Thus accountability involves two responsibilities or duties: the responsibility to undertake certain actions (or forbear from taking actions) and the responsibility to provide an account for those actions” (Gray, 1996). Organisations often undertake evaluations to demonstrate their accountability to donors. Traditional virtues of rigour, independence, replicability and efficiency tend to be the primary concerns in accountability-oriented evaluations. Evaluations designed primarily for lesson-learning will not necessarily provide sufficient information for accountability, and vice versa. The tension and balance between the two need to be explicitly addressed at the evaluation design stage: the two objectives can be incompatible, and their target audiences differ.
International Standards and Legal Framework
In the humanitarian sector there are now some major accountability initiatives. They fall into the following categories:
Legislative: international humanitarian law, human rights law, and local and national laws.
Voluntary standards and processes: Sphere, People in Aid, Codes of Conduct and Protocols, the People’s Panel, Evaluations.
Contractual agreements: Memorandum of Understanding, Joint Policy Agreements, Partnerships.
Cross-cutting themes underpin HA programming and therefore need to be considered during the evaluation process. Cross-cutting themes include: consideration of gender differences; promoting co-ordination among stakeholders; providing opportunities for participation of stakeholders, particularly beneficiaries; and attention to rights-based approaches, such as protection. The manner in which a programme addresses these themes is critical. If a programme was not designed with these in mind, it is appropriate for an evaluation to ask why.
Background to Sphere
The Sphere Project offers a model set of minimum standards grounded in international human rights and humanitarian law. The Humanitarian Charter and accompanying minimum standards provide the humanitarian community with a practical tool for more effective and accountable inter-agency response.
Types of Evaluation
There are a number of different types of evaluation, which vary in their primary objective, orientation (independent/self), themes and timing. Different types of evaluation are carried out at different stages of the project cycle. The traditional project-based approach is giving way to broader country programmes with thematic initiatives and sector-wide approaches. The boundaries of these are much harder to define and the process more complex, particularly when it comes to demonstrating aid effectiveness. The attraction of broader-based evaluations is that the lessons learned are likely to be more widely applicable, both in policy and operational terms.
Additional Information
On-going (real-time) evaluations are commissioned during project implementation; these are also referred to as interim evaluations. Ex-post evaluations take place after implementation has been completed. A major distinction is between independent evaluation and self-evaluation. Independent evaluation involves evaluators who have had no responsibility for, or involvement in, the activity being evaluated. Operational staff may well be involved in the process, but the primary purpose is to achieve an independent assessment. Self-evaluation involves operational staff and beneficiaries evaluating their own activities. The direction of the interactions in an evaluation is also important. Traditional evaluations are more common: they tend to be initiated by top management to serve its objectives, and they use independent outside judgement and technical expertise. They may or may not involve beneficiaries. Participatory evaluations include the stakeholders and take a more bottom-up approach; these stakeholders may participate fully in all phases of the evaluation, or only in some.
When evaluation briefs are broad and not everything can be done, there is a clear need to discuss with the organisation what its priorities are. To decide on a realistic allocation of time, the tasks involved should be broken down into: preparation and desk-study time; time in the field; and time for writing up. Initial contact with an organisation gives external consultants insight into how the organisation works. Notice the language used, how people treat insiders/outsiders, their willingness to discuss ‘real’ issues, etc.
Stakeholders
The planning process is a time for gaining cooperation from all stakeholders. Priority, however, should be given to primary stakeholders.
Stakeholders’ interests drive the evaluation. In addition to the decision-makers most directly involved, there is often a long list of parties with an interest in the evaluation, such as policy-makers, donors, operational partners, beneficiaries and various parts of the host government. The extent to which these groups may seek involvement in the evaluation is not always clear, but such aspects should be given consideration during consultations and planning.
What is a user-focused evaluation? No matter what the type or purpose, the evaluation should aim to be useful to the stakeholders. The philosophy of evaluation, in general, is moving away from “evaluate to control” and towards “evaluate to evolve”.
Most projects involve at least four categories of stakeholders, defined as those who have an interest in the outcome of the project and consequently in the orientation and interpretation of the evaluation: international organisations (donors, NGOs and research foundations); national and sectoral organisations (central government ministries, finance ministries, line ministries, local NGOs, and national consulting and research groups); project implementing agencies; and intended beneficiaries. Some or all of these will affect the fate of the evaluation.
Evaluators and evaluation managers will often find that many parties are involved in the use of evaluation results, and they need to recognise the value of exploring the interests and needs that underlie stated positions. Potential reactions to the findings should be borne in mind and cooperation sought. Evaluation managers need to appreciate what will make an EHA ‘relevant’ from a range of perspectives, and they need a strategic plan for reaching the stakeholders.
Content of TOR
TORs provide a formal record of agreement as to what will be done and should outline the obligations of all participants in the evaluation. The TOR should provide sufficient detail (including a contextual overview, the intervention objectives and the key stakeholders) to inform the evaluation’s analysis. It should, for the purpose of transparency, outline the evaluation budget, preferably as a percentage of the cost of the intervention being evaluated. The evaluator and the commissioning organisation need to work together to streamline the evaluation. The evaluator must examine the practicality of the planning process, the feasibility of meeting stakeholders’ needs, the strength of the TOR and the balance within the team, and take steps to influence these variables to make the evaluation more effective. Whether evaluators take part in the core planning activities or not, insight into the planning process is essential. The key components in planning include the identification of stakeholder interests, the budget and resources, feasibility and timing, development of the TOR, and formation of the team. Expending adequate time and effort in preparing a good TOR has big payoffs in terms of the resulting quality, relevance and usefulness.
The budget and other resources available influence the scope and methods of the evaluation.
Poorly planned evaluations often delegate the drafting of the more detailed TOR to the consultant/evaluator, who is seldom in a position to interpret the aims and issues the evaluation is meant to address; the work can then easily follow the interests of the consultant rather than the needs of the decision-makers.
Evaluation Criteria
Evaluation criteria provide a functional tool and checklist; their use establishes priorities for the evaluation. There are five main criteria: efficiency, effectiveness, impact, relevance/appropriateness, and connectedness (the impact of short-term projects on longer-term processes).
Evaluation criteria form the framework or logical approach to examination of the activities to be evaluated. Criteria developed by OECD DAC for development evaluation can be appropriate for humanitarian action.
There are other frameworks. A simple framework might ask: what is right, what is wrong, why, and what needs to be done? Projects evaluated against a logframe can ask: what was the planning target; what are the actual results; what are the weaknesses and the reasons for deviation; what positive experiences and unplanned results occurred; and what recommendations follow.
Team Composition
The following are important to consider when putting together an evaluation team: the origin of team members (internal/external/mixed); variations in team size (is it a team or a pair?); and variations in team structure (single consultant or team, core team, groups). Teams should contain a mix of skills and experience, including professional expertise relating to the issue being evaluated, knowledge of the country/region, and cross-disciplinary skills (social, economic and institutional). Representatives from the partner country or organisation will improve the quality and local credibility of the evaluation findings as well as build local capacity. Representatives, however, can find themselves in difficult positions, e.g. if the evaluation reflects negatively on their organisation or colleagues.
Team members need to be available for the whole period of the evaluation, and the team needs good leadership.
Preparations by evaluation team
Identifying objectives means establishing a set of actions to be taken by the evaluation team at the start of an evaluation. As a preparatory action, building adequate preparation time into the TOR enables the evaluation team to build relations with the centre and helps to test the TOR; in practice, enough time is rarely allowed. Briefing team members before fieldwork on their roles, expectations and ways of working leads to more effective use of time.
Key aspects of preparation include reviewing documents, planning country visits and fieldwork, and maintaining staff security.
To gain cooperation and foresee follow-up, the evaluator should know:
- What actions did the evaluation manager undertake prior to the team’s arrival on the scene?
- What decisions were made as to the timing and content of the evaluation?
- What political realities and contextual issues affect the process?
- Was cooperation gained from all stakeholders?
- Are stakeholders aware of how the evaluation will benefit them?
- Was the exercise painted as a contribution to dialogue and not a judgment?
- Were the stakeholders involved in planning – discussing goals, building consensus, planning the evaluation approach – and to what degree?
- Was the TOR submitted to stakeholders for their approval to help in gaining commitment to the evaluation?
- Were the constraints on the evaluation (security, lack of data, etc.) made clear to the stakeholders? Logistical constraints regarding access and security are often underestimated.
Source: ALNAP, ODI, www.alnap.org.uk