Sunday, June 11, 2006

Monitoring and Evaluation

M & E

Monitoring is a continuous internal process, conducted by managers (or those assigned responsibility for M&E), to check on the progress of development interventions (or in this case research programmes) against pre-defined objectives and plans – “keeping the ship on course”.

Evaluations ask what happened and why, and answer specific questions related to the relevance, effectiveness, efficiency, impact and sustainability of the programme’s outputs. The audiences and tools differ for monitoring, reviews and evaluations.
It is very difficult, if not impossible, to monitor or evaluate unless the research programme’s purpose and outputs are specified in a way that is clear to all parties involved and that can be assessed. It is similarly difficult if M&E is not built in at an early enough stage to allow ownership by key stakeholders, methods to be developed, baselines to be established, indicators to be defined and systems for reporting data to be built in.

Monitoring and Evaluation is used to make sure the research programme is on track towards achieving its outputs and purpose. Applying a mixture of tools means that the M&E system serves both lesson-learning and accountability functions. Learning is emphasised in every instance, not just in formal evaluations. Accountability refers both to financial accountability (leading to broader accountability to the public) and to accountability for the achievement of research programme outputs, which affect stakeholders and lead to poverty reduction. The overall message is that M&E needs to be understood as an integrated reflection and communication system that must be planned, managed and resourced; it is not simply a statistical task or an external obligation. 1

The logframe is a powerful participatory tool at the heart of the Logical Framework Approach. It is a matrix that details the logical steps for implementing the programme and organizes thinking (activities lead to outputs, outputs to purpose, purpose to goal). It relates activities and investment to expected results and allows responsibilities to be allocated. Logframes are used to identify what is to be achieved and to determine how well the planned activity fits into broader or higher-level strategies. The logframe also serves as a management tool that sets up a framework for M&E in which planned and actual results can be compared.
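To make the matrix concrete, below is a minimal sketch in Python of a logframe as a data structure. The field names and example rows are illustrative assumptions, not a standard DFID or IFAD schema.

```python
# A logframe row: one level of the hierarchy with its indicators,
# means of verification and assumptions (illustrative field names).
from dataclasses import dataclass, field

@dataclass
class LogframeRow:
    level: str                                            # "Goal", "Purpose", "Output" or "Activity"
    narrative: str                                        # what is to be achieved at this level
    indicators: list[str] = field(default_factory=list)   # how achievement will be measured
    means_of_verification: list[str] = field(default_factory=list)  # where the data will come from
    assumptions: list[str] = field(default_factory=list)  # external conditions that must hold

# The vertical logic reads bottom-up: activities deliver outputs,
# outputs achieve the purpose, and the purpose contributes to the goal.
logframe = [
    LogframeRow("Goal", "Poverty reduced in target districts"),
    LogframeRow("Purpose", "Research findings taken up by policy makers",
                indicators=["No. of policy documents citing programme research by year 3"],
                means_of_verification=["Policy document review"],
                assumptions=["Policy environment remains receptive"]),
    LogframeRow("Output", "New knowledge generated and communicated",
                indicators=["Three peer-reviewed papers and two policy briefs by year 2"],
                means_of_verification=["Programme reports"]),
    LogframeRow("Activity", "Conduct field research in four districts"),
]
```

Comparing planned and actual results then amounts to recording an actual value against each indicator and reviewing the gaps.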

To be a useful management tool, a logframe must have good indicators:

DFID defines an indicator as a “quantitative or qualitative factor or variable that provides a simple and reliable means to measure achievement, to reflect the changes connected to an intervention, or to help assess the performance of a development actor”.

It is important to note that different audiences have different needs from the same or similar indicators (moving treetops are an indicator of wind, but the fisherman wants to know the wind’s direction and the farmer its strength). Indicators should be SMART (see the criteria below). The critical issue in selecting good indicators is credibility, not precision of measurement. Indicators do not provide scientific “proof” or detailed explanations of change, but indicators that are carefully considered and shared among partners are much better than guesswork or individual opinion.

The criteria for a good indicator follow the SMART pattern: Specific (a precise meaning), Measurable (valid and practical to assess), Attainable (with a clear direction of change), Relevant, and Time-bound.

A research programme should be built around this logical flow from outputs through purpose to goal. The hierarchy of objectives moves from new knowledge acquired through the research, to communicating the outcomes to decision makers, and finally to making a persuasive argument for policy change.

The outputs and purpose can be verified by a number of indicators, which serve to monitor the progress of the programme throughout its implementation phases and provide an early warning system for possible shortfalls.
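As a worked illustration of such an early warning, the sketch below flags an indicator whose actual value lags behind a straight-line trajectory from baseline to target. The linear trajectory and the 10% tolerance are assumptions made for the example, not a prescribed M&E rule.

```python
# Early-warning check for an indicator: compare actual progress against
# a linear trajectory from baseline to end-of-programme target.
def expected_value(baseline: float, target: float, year: int, total_years: int) -> float:
    """Planned value at `year`, interpolated linearly between baseline and target."""
    return baseline + (target - baseline) * year / total_years

def shortfall_warning(actual: float, baseline: float, target: float,
                      year: int, total_years: int, tolerance: float = 0.10) -> bool:
    """Flag the indicator if the actual value lags more than `tolerance`
    (as a fraction of the planned value) behind the trajectory."""
    planned = expected_value(baseline, target, year, total_years)
    return actual < planned * (1 - tolerance)

# Example: a target of 40 trained researchers by year 4, from a baseline of 0.
# By year 2 the plan expects 20; an actual of 12 triggers the warning.
print(shortfall_warning(actual=12, baseline=0, target=40, year=2, total_years=4))  # True
```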

The development of capacity to use, carry out and communicate research is an important outcome. However, some programmes may wish to integrate capacity building within the logframe rather than have a specific output on capacity development. If there is no output on capacity development, it should be included as a verifying indicator against an output; that is, there must be an indication that the capacity-building component of the research programme is being monitored and evaluated. Research programmes must focus on the outputs of the research: what has actually been achieved, what the research programme has changed, and the evidence of where, when and how this change has happened.

Learning and reflection, as part of the monitoring process in research programmes, are important for identifying what is working well and what is not. Lessons arising from the research programme, and how they will be used to improve performance in future years, can be summarized under headings such as: Working with Partners, Good Practice/Innovation, Project/Programme Management, and Communication.

The Final Report must identify the main lessons learned in the research programme. These lessons feed into case studies, the design of future research programmes, future research strategy and other uses. Case studies and success stories are valuable sources of information and should be used regularly.


Oxfam GB monitors and evaluates its work in order to:

• Check progress against objectives and unexpected results
• Learn from experience and adapt projects to optimise their impact
• Provide information and learning to stakeholders and be accountable for our actions and the resources we manage

Seven questions about performance and impact

All processes of impact assessment should contribute to answering the following questions:

1. What significant changes have occurred in the lives of poor women, men and children?
2. How far has greater equity been achieved between women and men and between other groups?
3. What changes in policies, practices, ideas, and beliefs have happened?
4. Have those we hope will benefit and those who support us been appropriately involved at all stages and empowered through the process?
5. Are the changes which have been achieved likely to be sustained?
6. How cost-effective has the intervention been?
7. To what degree have we learned from this experience and shared the learning?


Proactive and participatory monitoring and evaluation

Development projects need to provide documented and unambiguous information about their impact on poverty, yet implementation completion reports rarely assess or systematically document project lessons effectively. Participatory methods for monitoring and evaluation provide rapid assessments and are often used as substitutes for thorough evaluation, but for the most part they are not quantitative, which stunts efforts to systematically trace a project’s impact on beneficiaries. Both quantitative and participatory mechanisms are therefore needed to track change and assess a project’s impact on poverty. The monitoring and evaluation strategy can include random sampling to document the impact of certain components, alongside a monitoring, evaluation, and information system that uses ongoing participatory methods to evaluate inputs and outputs; projects can also rely on systematic monitoring of the inputs and outputs flowing through the organizations implementing the project.
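The random-sampling step mentioned above can be as simple as drawing a reproducible household sample for the quantitative survey. The sampling frame and sample size below are invented for illustration.

```python
# Draw a simple random sample of households for a baseline/impact survey.
import random

household_frame = [f"HH-{i:04d}" for i in range(1, 2501)]  # frame of 2,500 households

random.seed(42)  # fixed seed so the sample can be reproduced and audited
survey_sample = random.sample(household_frame, k=250)      # 10% simple random sample

print(len(survey_sample), survey_sample[:3])
```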

Impact Evaluation can address:

• Does the program have impacts on participants?
• Are impacts stronger for particular participant groups?
• Is the program cost-effective relative to other options (see the sketch after this list)?
• What are the reasons for a program’s performance?
• How can the design or implementation be changed to improve performance?
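For the cost-effectiveness question, the arithmetic is a ratio of cost to outcome, compared across options. All figures below are hypothetical.

```python
# Cost per unit of outcome for two hypothetical programme options.
options = {
    "Option A": {"cost": 120_000, "children_reached": 3_000},
    "Option B": {"cost": 200_000, "children_reached": 4_000},
}
for name, o in options.items():
    print(f"{name}: {o['cost'] / o['children_reached']:.2f} per child reached")
# Option A: 40.00, Option B: 50.00 -> Option A is the more cost-effective choice.
```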

Combining a quantitative evaluation design with complementary, ongoing participatory monitoring and evaluation offers two clear benefits in the fight against poverty.

First, ongoing participatory evaluation enables just-in-time inputs into management decisions at the local and central levels. Such inputs promote better management and more responsive alignment of project inputs to achieve project objectives. The dynamic nature of most projects during implementation requires a responsive mechanism so that inputs are adjusted to changing environments—while also providing a means to verify impact on beneficiaries as it occurs.

Second, the quantitative methods used in household and community surveys are important for assessing a project’s impact and for verifying the determinants of that impact. Such assessment and verification is especially essential during a project’s midterm review, when inputs can be realigned as needed. Such efforts can also provide more information for the next phase of the project.
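One common quantitative design for this kind of survey-based impact assessment is a difference-in-differences comparison, shown below with invented numbers; the source does not prescribe this particular method.

```python
# Difference-in-differences: the change in the treatment group minus the
# change in the control group, netting out trends that affect both alike.
def diff_in_diff(treat_before: float, treat_after: float,
                 control_before: float, control_after: float) -> float:
    return (treat_after - treat_before) - (control_after - control_before)

# Mean household income at baseline and at the midterm review,
# in project villages (treatment) and comparison villages (control):
impact = diff_in_diff(treat_before=100, treat_after=130,
                      control_before=100, control_after=110)
print(impact)  # 20 -> estimated project impact on mean income
```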




1 Adapted from the IFAD M&E Guide, 2002; the full version can be downloaded from http://www.ifad.org/evaluation/guide/index.htm


Sources:

DFID research: http://www.dfid.gov.uk/research
IFAD M&E Guide: http://www.ifad.org/evaluation/guide/index.htm
UNDP, Selecting Indicators: http://www.undp.org/eo/documents/methodology/rbm/Indicators-Paperl.doc
UNDP evaluation methodologies: http://www.undp.org/eo/methodologies.htm
Oxfam GB evaluation: http://www.oxfam.org.uk/what_we_do/issues/evaluation/
The World Bank, Impact Evaluation Thematic Group, PREMnet (http://prem)
Case studies appear on the DFID website (www.dfid.gov.uk) and on R4D, DFID’s research portal (www.research4development.info).