Evaluation Handbook

II. A. Project/Programme Document


Snapshot of the Project Document:

3.4. Evaluation

Evaluation at UNODC is a process that systematically and objectively assesses the achievement of results and outcomes with regard to the project's relevance, efficiency, effectiveness, impact and sustainability. Evaluations should be conducted in line with the evaluation criteria contained in the Programmes and Operations Manual. Please consult IEU and refer to the Evaluation Policy and Evaluation Handbook for specific information.

Evaluation is part of the project/programme cycle and needs to be planned for at the design stage. Independent Project Evaluations are required for all projects. An Independent Project Evaluation can also take place (i) if the project has an innovative dimension, (ii) if it is a pilot project, or (iii) if there is a request from a Member State or a donor. Please consult IEU in any of the above cases.

Please describe the following:

1. Type of evaluation: (mid-term and/or final Independent Project Evaluation) and rationale for this choice.

2. Purpose of evaluation: indicate utilization of evaluation findings.

3. Timing for evaluation: plan for evaluation preparation and implementation.

4. Rescheduling of evaluation: indicate whether, in the case of project extension, initially planned evaluations are rescheduled.

5. Evaluation budget: specify in the budget table under budget line 5700, as well as within this paragraph (2-3% of the overall budget).

6. Evaluation management: clarify whether specific involvement of IEU is needed beyond evaluation rules.
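The 2-3% budget guideline in point 5 above can be sketched as a quick arithmetic check (the function name and the figures below are hypothetical illustrations, not part of the Handbook):

```python
def evaluation_budget_range(total_budget, low=0.02, high=0.03):
    """Return the suggested evaluation budget range (budget line 5700),
    assuming the 2-3% guideline applied to the overall project budget."""
    return (total_budget * low, total_budget * high)

# A hypothetical USD 1.5 million project would reserve roughly
# USD 30,000-45,000 for evaluation under this guideline.
lo, hi = evaluation_budget_range(1_500_000)
```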

1. Quality Criteria

Project Managers must build evaluation into the programme or project design. Most importantly, all newly designed programmes or projects (or phases) should integrate evaluation quality criteria [1]. These criteria are summarized below.

-    Project Managers should consider and include the relevant recommendations and lessons learned from previous evaluation exercises.

The programme or project document should also clearly:

-    State what type of evaluation will be conducted;
- Identify the evaluation purpose, e.g. the platform at which the evaluation findings will be presented;
- Specify the evaluation budget under budget line 5700;
- Indicate the proposed timing;
- Identify the evaluation stakeholders.

For further information on the evaluation purpose, scope, budget and the roles and responsibilities, please refer to Chapter II Sections C to F.

To ensure coherence among evaluations, programme and project documents should also state how coordination between evaluations of the different levels of UNODC programming is envisaged. The relationship between project, Sub-Programme or Global Programme evaluations and Country, Regional or Thematic Programme evaluations should be clearly laid out. In this regard, the Project Manager has the responsibility to consult existing Evaluation Plans and to provide annual updates to IEU. Evaluation Plans are produced by Project Managers and respective evaluation focal points and provided to IEU. Please see Chapter II Section B below for further information on Evaluation Plans.

2. Evaluability Assessment

A project or programme that is well designed with regard to evaluation is not thereby guaranteed to be evaluable. Project Managers can ensure that their projects/programmes are evaluable by undertaking an optional evaluability assessment.

An evaluability assessment examines the extent to which a project or a programme can be evaluated in a reliable and credible fashion [2]. An evaluability assessment calls for the early review of a proposed project or programme in order to ascertain whether its design is adequate and its results verifiable.

a) Purpose

An Evaluability Assessment answers the following questions:

-    Does the quality of the design of the Programme allow for an evaluation?
- Are the results of the programme verifiable based on the documentation that will be available?
- Would the evaluation be feasible, credible and useful?
- Based on the above, should the Programme be modified?

b) Timing

-    An evaluability assessment is typically done early in the programme cycle - when the programme is being designed but has not yet become operational. The assessment of the strength of the design and logic is most worthwhile at this early stage - when something can be done to remedy any weaknesses.
- However, an evaluability assessment can also be undertaken during the programme's implementation prior to an evaluation.

c) Roles and Responsibilities

Project Managers could undertake evaluability assessments by using the respective template provided in the Chapter tools.

Such an assessment examines the monitoring system, and in particular determines whether the project or programme indicators are adequately defined, whether the results are verifiable, and whether the baselines and performance indicators enable a credible evaluation to be undertaken in the future. It is therefore crucial to understand the role of results-based monitoring and evaluation systems, logical frameworks, performance indicators and baselines (please see paragraph 3 below).

3. Role of results-based monitoring and evaluation systems, logical frameworks, performance indicators, baselines, and targets

a) Results-based Monitoring

Results-based monitoring is the continuous process of collecting and analysing information on key performance indicators in order to measure progress towards results [3].

The following elements of results-based monitoring are essential preconditions for effective implementation and evaluation. Please also consult the Operational Manual and Guidelines contained in the Project Document.

The logical framework matrix of the project or programme, including performance indicators and targets.

A logical framework (logframe) is a management tool used to improve the design of interventions, most often at the project level. It involves identifying strategic elements (inputs, activities, outputs, outcomes, objectives) and their causal relationships, performance indicators, and the assumptions or risks that may influence success and failure. It thus facilitates planning, execution and evaluation of an intervention [4].

A performance indicator is a quantitative or qualitative variable that allows the verification of changes produced at the level of objectives, outcome(s) and outputs or shows results relative to what was planned. This variable is tracked systematically over time to indicate progress (or the lack thereof) toward a target.

As results cannot be measured directly, Project Managers must first translate them into a set of performance indicators that, when regularly measured, provide information about whether or not the results are being achieved.

It is the cumulative evidence of several indicators that Project Managers examine to see if their project is making progress. No result should be measured by just one indicator.

The effectiveness of evaluation depends on, among other factors, the quality of performance indicators as these provide the basis for monitoring data collection. The formulation of performance indicators is therefore of importance to evaluation. Performance indicators should:

(i) Be SMART: specific, measurable, achievable, relevant and time-bound;
(ii) Measure what is important, e.g. Project Managers should focus on a few key performance indicators;
(iii) Measure what they are intended to measure, e.g. Project Managers should ensure that indicators indicate progress towards the achievement of results;
(iv) Measure what it is possible to measure, e.g. Project Managers should ensure that they have the means to collect and analyse the data against each performance indicator.

A target is a quantifiable amount of change that is to be achieved over a specified time frame in a [performance] indicator [5].

Establishing targets is an integral step in building the results-based monitoring system.

Most results (outcomes and objectives) in UNODC are complex and are visible only over the long term. There is, therefore, a need to establish interim targets that specify how much progress toward a result can be achieved, in what time frame, and with what level of resource allocation. Measuring performance against these targets can involve both direct and proxy indicators, as well as the use of both quantitative and qualitative data.

When setting targets for performance indicators, Project Managers must have a clear understanding of the following:

-      The baseline starting point;
- The level of funding and personnel resources over the timeframe for the target;
- The amount of outside resources expected to supplement the current resources;
- The relevant political concerns;
- The organizational (especially managerial) experience in delivering projects and programmes.

Only one target should be set for each performance indicator. If the indicator has never been used, setting a range for the target would be preferable [6].
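As an illustration of how an indicator, its baseline and its target relate (the indicator name and all figures below are hypothetical), progress can be expressed as the share of the baseline-to-target distance covered by the latest measurement:

```python
from dataclasses import dataclass, field

@dataclass
class PerformanceIndicator:
    # All names and figures used with this sketch are hypothetical.
    name: str
    baseline: float          # first measurement, taken at or before the start
    target: float            # quantifiable change to achieve by the deadline
    measurements: list = field(default_factory=list)

    def record(self, value):
        """Add a periodic monitoring measurement."""
        self.measurements.append(value)

    def progress(self):
        """Share of the baseline-to-target distance covered so far."""
        if not self.measurements:
            return 0.0
        latest = self.measurements[-1]
        return (latest - self.baseline) / (self.target - self.baseline)

ind = PerformanceIndicator("trained officers", baseline=40, target=140)
ind.record(65)
ind.record(90)   # latest measurement is halfway between baseline and target
```

The baseline is fixed at the design stage, the target sets the end point, and periodic measurements in between are what the monitoring system actually collects.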

The baseline is the situation prior to a project or a programme, against which progress can be assessed or comparisons made. The baseline is the first measurement of the project's or programme's performance indicators.

The measurement of progress (or lack of it) towards results begins with the description and measurement of initial conditions, which is the baseline. A baseline provides information (qualitative or quantitative) about performance of indicators at the beginning of (or immediately before) the project or programme.

Collecting baseline data essentially means taking the first measurements of the performance indicators. The baseline data allow Project Managers, as well as all other stakeholders, to understand a given situation at the planning stage of projects/programmes.


One consideration in selecting performance indicators is the availability of baseline data which allow performance to be tracked relative to the baseline.

Project Managers' sources of baseline data can be either primary or secondary. Secondary data can come from within an organization, from the government, or from international data sources. Using such data can save money, as long as they really provide the information needed [7].

Primary data: gathered specifically for this measurement system

Secondary data: collected for another purpose

In the event that no baseline data were collected at the beginning of the project, baseline data can be reconstructed. Both Project Managers and Evaluators could consider the following techniques for the reconstruction of missing baseline data:

-     Using secondary data;
- Using individual recall/retrospective interviewing techniques (respondents are asked to recall the situation at around the time the project began);
- Using participatory group techniques to reconstruct the history of the community and to assess the changes that have been produced by the project;
- Undertaking interviews with key informants, preferably persons who know the target community, as well as other communities, and therefore, have a perspective on relative changes occurring over time [8].

4. Planning for Impact Evaluation

Impact is the positive and negative, primary and secondary long-term economic, environmental, social change(s) produced or likely to be produced by a project, directly or indirectly, intended or unintended, after the project was implemented.

For rigorous impact evaluation, planning needs to take place at the design stage of projects/programmes. The project/programme document should identify:

a) The impacts that are valid

-     Impact performance indicators should be identified, along with impact targets and baselines.

b) The resources needed to gather evidence of impact

Measuring impact has human, financial and methodological implications that should be considered at the design stage of a project.

-    A baseline should be collected against impact performance indicators prior to the implementation of the project;
- A monitoring system for collecting impact data on a regular basis should be built.

c) The causal context

Rigorous assessment of impact implies assessing the changes ''attributable'' to the project. To ensure attribution, methodologies should be put in place at the design stage of a project (e.g. creation of control and treatment groups).

To distinguish attribution from contribution, it is important to:

-    Understand and analyse the context in which the project is implemented;
- Recognize that UNODC is one part of a picture (attribution issue) by identifying key contextual factors that are beyond UNODC control.
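One common way to estimate the change attributable to a project when control and treatment groups have been set up at the design stage, as suggested above, is a difference-in-differences comparison. The sketch below is illustrative only, using hypothetical survey means:

```python
def difference_in_differences(treat_before, treat_after, ctrl_before, ctrl_after):
    """Estimate the change attributable to the project as the treatment
    group's change minus the control group's change over the same period."""
    return (treat_after - treat_before) - (ctrl_after - ctrl_before)

# Hypothetical survey means: the treatment communities improved by 12 points
# and the control communities by 4, so about 8 points of the change are
# attributable to the project rather than to the wider context.
effect = difference_in_differences(50, 62, 51, 55)
```

Subtracting the control group's change nets out the contextual factors beyond UNODC control that affect both groups alike.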

5. The Case of Joint Evaluation

With the objective of increasing harmonization and cohesion of United Nations support and reducing the burden on recipient countries, UNODC encourages Joint Evaluations with United Nations agencies, multilateral organizations or Member States.

Joint Evaluations should be planned for in project and programme documents as any other evaluation. Generally, the suggested steps in planning a Joint Evaluation are the same as for any other evaluation.

However, in planning for Joint Evaluations, it should be kept in mind that they tend to be lengthier in process and require greater coordination efforts. There are a number of issues specific to Joint Evaluations that warrant greater attention.

a) Deciding on a Joint Evaluation

It is important to assess whether the programme or project warrants a Joint Evaluation. To do so, ask the following questions:

-    Is the focus of the project or programme on an outcome that reaches across sectors and agencies?
- Is the project or programme co-financed by multiple partners?

b) Determining the Partners

In programme or project documents, it is essential to determine the partners of joint evaluations to ensure their involvement and ownership. The partners could be determined by:

-    Focusing on where the finances come from, who the implementing partners are; or
- Researching which other agencies are conducting similar work and thus may be contributing to the overall outcomes and objectives.

It is also important to assess the potential contributions of partners at an early stage, as it may not be suitable for some partners to get involved due to conflicting activities or constraints.

It is always important to discuss the objectivity that partners may or may not bring to the table, to ensure that the evaluation is independent and free from bias.

c) Overcoming Challenges

Joint evaluations can be characterized by a number of benefits and challenges, summarized below.

Benefits:

-    Strengthened evaluation harmonization and capacity development: shared good practice, innovations and improved programming;
- Reduced transaction costs and management burden (mainly for the partner country);
- Improved donor coordination and alignment: increased donor understanding of government strategies, priorities and procedures;
- Objectivity and legitimacy: enables a greater diversity of perspectives and requires that a consensus be reached;
- Broader scope: able to tackle more complex and wider-reaching subject areas;
- Enhanced ownership: greater participation;
- Greater learning: by bringing together a wider range of stakeholders, learning from evaluation becomes broader than organizational learning alone and also encompasses the advancement of knowledge in development.

Challenges:

-    More difficult subjects to evaluate (complex, many partners, etc.);
- Processes for coordinating large numbers of participants may make it difficult to reach consensus;
- A lower level of commitment by some participants.
Source: adapted from OECD, 'DAC Guidance for Managing Joint Evaluations', Paris, France, 2006; and O. Feinstein and G. Ingram, 'Lessons Learned from World Bank Experiences in Joint Evaluation', OECD, Paris, France, 2003.

To overcome some of the challenges of Joint Evaluations, it is recommended in the programme or project documents to:

-    Define roles and responsibilities clearly;
- Establish dispute resolution procedures;
- Designate a lead agency, keeping the governance arrangements clear and simple;
- Keep the scope of the Joint Evaluation focused;
- Share funding equally to reinforce a sense of ownership;
- Allow a long lead time for preparation;
- Ensure transparency in the decision-making processes.


[1] Please refer to the evaluation quality criteria checklist for projects and programmes in the Chapter 2 Tools.

[2] OIOS, op. cit.

[3] R. Rist and L. Morra, The Road to Results.

[4] OECD/DAC, Glossary of Key Terms in Evaluation and Results Based Management, 2002.

[5] R. Rist and L. Morra, The Road to Results.

[6] R. Rist and L. Morra, The Road to Results.

[7] R. Rist and L. Morra, The Road to Results.

[8] M. Bamberger, J. Rugh and L. Mabry, RealWorld Evaluation: Working Under Budget, Time, Data, and Political Constraints.

