
Monitoring and Evaluation for CSR Programs

Expectation Setting

Introduction

THE ESSENCE OF MONITORING & EVALUATION

To what extent are we making a difference? What are the results on the ground? How can we do better?

CONSIDERATIONS

OBJECTIVES

Purpose of M&E

1. Strategy and direction: ‘Are we doing the right thing?’

2. Management and governance: ‘Are we implementing the plan as effectively as possible?’

3. Outputs: ‘Are activities audience-appropriate and do they meet the required standards?’

4. Outcomes and impacts: ‘What kinds of effects or changes has the work contributed to?’

5. Context: ‘How does the changing political, economic, social and organisational climate affect plans and intended outcomes?’

M&E findings serve many uses:

  • Understanding the context
  • Deepening understanding
  • Building and sustaining trust
  • Lobbying and advocacy
  • Sensitising for action
  • Showcasing results
  • Justifying costs
  • Strengthening capacity
  • Improving operations
  • Readjusting strategy

Types of Monitoring


Results monitoring involves the periodic collection of data on the project’s actual accomplishment of results (outputs, outcomes, and impacts). It responds to the question: what results have been accomplished relative to what was planned (targeted)?

Project implementation monitoring requires constant documentation of data on project activities and operations such as tracking funds and other inputs, as well as processes. It includes field records of interventions, as well as recurrent checking of work plans and budgets.
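As an illustration of results monitoring, here is a minimal Python sketch, with hypothetical indicator names and figures, that compares actual accomplishments against planned targets and raises an alert on shortfalls:

```python
# Results monitoring sketch: compare actual results against targets.
# Indicator names and figures are hypothetical.

targets = {"teachers_trained": 120, "kits_distributed": 500}
actuals = {"teachers_trained": 95, "kits_distributed": 510}

for indicator, target in targets.items():
    actual = actuals.get(indicator, 0)
    achievement = actual / target * 100      # % of target achieved
    status = "on track" if achievement >= 90 else "ALERT"
    print(f"{indicator}: {actual}/{target} ({achievement:.0f}%) {status}")
```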

EVALUATION

RELEVANCE: The extent to which the objectives of an intervention are consistent with recipients’ requirements, country needs, global priorities and partners’ policies.

EFFECTIVENESS: The extent to which the intervention’s objectives were achieved, or are expected to be achieved, taking into account their relative importance.

EFFICIENCY: A measure of how economically resources/inputs (funds, expertise, time, equipment, etc.) are converted into results.

SUSTAINABILITY: The continuation of benefits from the intervention after major development assistance has ceased. Interventions must be both environmentally and financially sustainable.

IMPACT: Positive and negative primary and secondary long-term effects produced by the intervention, whether directly or indirectly, intended or unintended.
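To make the EFFICIENCY criterion concrete, here is a minimal arithmetic sketch, with hypothetical spend and output figures, expressing how economically inputs are converted into results as a cost per unit of output:

```python
# Efficiency sketch: cost per unit of result. Figures are hypothetical.

total_cost = 250_000        # programme spend
children_reached = 1_250    # verified output count

cost_per_child = total_cost / children_reached
print(f"Cost per child reached: {cost_per_child:.2f}")  # 200.00
```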

LEVELS OF M&E

Example: Nutrition Program for Mothers

Level 1:

  • Preparation (staff are hired and trained)
  • Needs Assessment (baseline conducted)
  • Planning (setting objectives)
  • Activities (villages visited)
  • Teaching (women attended classes)
  • Involvement (women groups formed and active)

Level 2:

  • Learning (women can tell the value of food)
  • Intermediate Action (women plant vegetable gardens)
  • Application (women feed their children better)

Level 3:

  • Impact (improved nutritional status)
  • Side Effects (unforeseen changes)
  • Objective achievement (status of meeting objectives)
  • Appropriateness (has this met community needs?)
  • Sustainability (can the community continue and multiply this “success” on its own?)
  • Efficiency (can another approach achieve these results with less cost and effort?)

THEORY OF CHANGE

(Explainer videos and a worked ToC example were shown at this point.)

"One of the great mistakes is to judge policies and programs by their intentions rather than their results."

- Milton Friedman

An M&E plan outlines:

1. Scope and objectives

2. Information needs

3. Methods and sources of information

4. Roles and responsibilities

5. Use of findings

6. Capacities and conditions

Conducting M&E

A good M&E framework depends on the rigour of all its components:

1. Sample selection (see the sketch after this list)

2. Research tools

3. Reliability of indicators

4. Quality of data collection

5. Depth of analysis

6. Reporting and utilisation
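Rigorous sample selection is easiest to audit when the random draw is reproducible. Below is a minimal Python sketch, assuming a hypothetical register of household IDs; the 10% sample size and the seed are illustrative choices, not prescriptions.

```python
# Simple random sampling sketch: draw a reproducible 10% sample of
# households from a beneficiary register. IDs are hypothetical.

import random

register = [f"HH-{i:04d}" for i in range(1, 801)]  # 800 households

rng = random.Random(42)              # fixed seed makes the draw auditable
sample = rng.sample(register, k=80)  # 10% sample, without replacement
print(len(sample), sample[:3])
```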

Checklist of Best Practices

  • Move from compliance to creating value
  • Use technologies for feedback loops and rigour
  • Gather baseline information
  • Establish a counterfactual (see the sketch after this list)
  • Maintain a two-way flow of information
  • Set specific targets
  • Involve all stakeholders
  • Analyse and report the results for corrective actions
  • Articulate a model that includes external influences
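A counterfactual answers what would have happened without the program. Here is a minimal difference-in-differences sketch in which a comparison group supplies the counterfactual trend; all survey figures are hypothetical.

```python
# Difference-in-differences sketch: the comparison group's change
# stands in for the counterfactual. Values are hypothetical means.

treat_before, treat_after = 54.0, 68.0  # % well-nourished, programme villages
comp_before, comp_after = 55.0, 59.0    # % well-nourished, comparison villages

effect = (treat_after - treat_before) - (comp_after - comp_before)
print(f"Estimated programme effect: {effect:.1f} percentage points")  # 10.0
```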

Stakeholder Involvement

A stakeholder involvement matrix records, for each stakeholder group: their relation to the program, the evaluation information they need, their intended use of that information, and the form of communication that suits them.

Reducing costs of data collection and analysis

  • Use self-administered questionnaires and direct observation
  • Reduce the length and complexity of instruments
  • Use participatory assessment methods
  • Look for reliable secondary sources
  • Use internal project records

Participatory Evaluation

Participatory processes enable people to see more clearly, and learn from, the complexity they are living and working amid. Participation can help people identify opportunities and strategies for action, and build solidarity to effect change.

Pitfalls that participatory approaches guard against:

  • FATIGUE: low and poor-quality data
  • EXTRACTIVE: hearing only what ‘WE’ want to hear
  • UPWARD MOBILITY: privileging the opinions of ‘outsiders’

Internal data sources

  • Feasibility/planning studies
  • Application/registration forms
  • Supervision reports
  • Management Information System (MIS) data
  • Meeting reports
  • Community and agency meeting minutes
  • Progress reports
  • Construction, training and other implementation records, including costs

USING RESULTS

1. Make Sense of Data

  • Is the social problem still a problem?
  • Are the root causes the same as before? Are they still valid?
  • Are the conditions the same as before?
  • What has changed since you started to offer your solution or product?

Frame the headline findings around three questions: how much changed, how long it will last, and how much of it can be attributed to the program (HOW MUCH? HOW LONG? ATTRIBUTION).

2. Reports should be accurate, relevant and complete

  • The scope of the report, setting out what activities and timespan are included
  • Purpose of the report, its audience and expected use
  • Plans for developing Impact Thinking
  • Changes made to activities based on data from stakeholders
  • Table of proposals to take to decision makers (your board or management committee) about how your activities should be scaled, changed or stopped
  • A set of recommendations about changing the targets (or justification for not changing them)

3. Developing Recommendations

  • Beneficiary exchange: discussing findings among beneficiaries to provide feedback
  • Chat rooms: setting up online spaces where findings can be discussed
  • Electronic democracy: using new and emergent forms of media to engage community members in influencing the decision-making process
  • External review: having external experts or anonymous reviewers provide feedback
  • Group critical reflection: facilitating a group stakeholder feedback session
  • Individual critical reflection: asking particular individual stakeholders for their independent feedback
  • Participatory recommendation screening: testing recommendations with key stakeholders

4. Support Use

  • Annual reviews: reviewing major evaluation findings and conclusions from evaluation studies completed during the preceding year
  • Action plans: guidelines describing the responsibilities and steps in following up on evaluation report recommendations
  • Data use calendar: guiding data collection and reporting requirements, and ensuring that analysis and evaluation data are actively used
  • Policy briefings: presenting evaluation findings and lessons learned in an accessible manner for target audiences, for follow-up by management and staff
  • Recommendations tracking: keeping a transparent record of the responses to, and actions arising from, recommendations
  • Social learning: focusing on how people learn through social interactions, such as modelling, making connections, sharing experiences and resources, collaboration and self-organization




Results-based Monitoring and Evaluation

The Power of Measuring Results

• If you do not measure results, you cannot tell success from failure.

• If you cannot see success, you cannot reward it.

• If you cannot reward success, you are probably rewarding failure.

• If you cannot see success, you cannot learn from it.

• If you cannot recognize failure, you cannot correct it.

• If you can demonstrate results, you can win stakeholder support.

Complementary Roles of Results-Based M&E

Monitoring

  • Clarifies program objectives
  • Links activities and their resources to objectives
  • Translates objectives into performance indicators and sets targets
  • Routinely collects data on these indicators, compares actual results with targets
  • Reports progress to managers and alerts them to problems

Evaluation

  • Analyzes why intended results were or were not achieved
  • Assesses specific causal contributions of activities to results
  • Examines the implementation process
  • Explores unintended results
  • Provides lessons, highlights significant accomplishments or program potential, and offers recommendations for improvement

Developing indicators for RBM

An indicator is a variable (its value changes) that measures (an objective calculation of value) key elements of a program or project: inputs, processes, outputs and outcomes. Indicators are signposts of change and a reference point for monitoring, decision-making and stakeholder consultations. They:

  • Demonstrate progress towards results
  • Clarify consistency between activities, outputs, outcomes and goals
  • Measure progress and achievements
  • Ensure a shared vision among stakeholders
  • Support assessment of project and staff performance

Indicators provide critical M&E data at every level (and stage) of program implementation.

EACH INDICATOR HAS FOUR PARTS

  • POPULATION: Who is changing?
  • TARGET: How many do we expect will succeed?
  • THRESHOLD: How much is good enough?
  • TIMELINE: By when does this outcome need to happen?
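The four parts map naturally onto a small data structure. Here is a minimal sketch; the class, field and method names and all example values are illustrative assumptions, not from any standard library.

```python
# Four-part indicator sketch: population, target, threshold, timeline.
# All example values are hypothetical.

from dataclasses import dataclass
from datetime import date

@dataclass
class Indicator:
    population: str   # who is changing?
    target: int       # how many do we expect will succeed?
    threshold: float  # how much is good enough? (share of target, 0-1)
    timeline: date    # by when does this outcome need to happen?

    def met(self, achieved: int, on: date) -> bool:
        """True if the 'good enough' share of the target is reached in time."""
        return achieved >= self.target * self.threshold and on <= self.timeline

ind = Indicator("mothers in programme villages", 400, 0.8, date(2023, 12, 31))
print(ind.met(achieved=350, on=date(2023, 11, 15)))  # True: 350 >= 320, in time
```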

Identifying indicators

1. Involve representatives from implementing agencies, government, beneficiaries, and other stakeholders. Be sure to include stakeholders and direct actors identified during the stakeholder analysis. A participatory approach to selecting indicators not only draws on stakeholders’ experience and knowledge, it also helps obtain their consensus and promotes ownership.

2. Brainstorm a general list of possible indicators for each objective and result (activities, outputs, outcomes, and so on). This initial list can consider all stakeholder perspectives, without yet worrying about how to measure them.

3. Assess each indicator on the initial list against a checklist of criteria for judging its suitability and effectiveness.

4. Select the "best" indicators that will provide useful information at an affordable cost; choose only a few, the minimum needed to characterize the most basic and important measures. (Steps 3 and 4 are sketched below.)
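A minimal sketch of steps 3 and 4: score each candidate indicator against a criteria checklist and keep the top few. The criteria, candidate names and scores are hypothetical illustrations.

```python
# Indicator screening sketch: rank candidates by total score against
# a criteria checklist, then keep only the minimum needed.

criteria = ["valid", "reliable", "affordable", "timely"]

candidates = {                              # scores 1 (poor) to 3 (good)
    "% of women attending classes": [3, 3, 3, 2],
    "child nutritional status":     [3, 2, 1, 1],
    "# of vegetable gardens":       [2, 3, 3, 3],
}

ranked = sorted(candidates.items(), key=lambda kv: sum(kv[1]), reverse=True)
for name, scores in ranked[:2]:             # keep only the best few
    print(name, dict(zip(criteria, scores)))
```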

Types of Indicators

Different types of indicators are required to assess progress towards results. Within the RBM framework, there are at least three types, also known as results indicators:

  • Output indicators, which assess progress against specific operational activities
  • Outcome indicators, which assess progress against specified outcomes
  • Situational (impact) indicators, which provide a broad picture of whether the developmental changes that matter are actually occurring

These indicators draw on five kinds of data:

1) USER DATA

Asks: Is your service effective at reaching the intended target group?

Establishes: The characteristics of your service users.

2) ENGAGEMENT DATA

Asks: How effective is your service at engaging your target users?

Establishes: The extent to which people use your service and how they use it.

3) FEEDBACK DATA

Asks: What do people think about the service?

Establishes: Whether your service gets the reaction you want, and whether it is beginning to work in the way intended.

4) OUTCOMES DATA

Asks: How have people been influenced by your service in the short term?

Establishes: The immediate resources, benefits or assets that your users gain from the service.

5) IMPACT DATA

Asks: Have the outcomes achieved helped people to change their lives for the better?

Establishes: The long-term difference achieved for individuals, families and communities.

1. Core Sector Indicators

A core sector indicator is an outcome or output indicator that can be measured and monitored at the project level and aggregated across projects and geographies for corporate reporting. Such indicators are developed and assessed on a global scale.

2. Proxy Indicators

A proxy indicator is substituted for another indicator that would be hard to measure directly. Proxy indicators may reveal performance trends and make managers aware of potential problems or areas of success.

Indicators take several forms:

  • Counts: # of teachers trained; # of teaching kits distributed
  • Thresholds: presence or absence relative to a pre-determined level or standard. A standard is a set of related indicators, benchmarks or indices which provide socially meaningful information regarding performance.
  • Indices and composite measures: an index is a set of related indicators intended to provide a means for meaningful and systematic comparisons of performance across programmes that are similar in content and/or have the same goals and objectives.
  • Calculations (percentages, rates, ratios): % of facilities with trained teachers; % of teachers who used the teaching kits. (See the sketch after this list.)
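A minimal sketch of the calculation and composite forms, with hypothetical counts, components and weights:

```python
# Percentage and composite-index sketch. All figures are hypothetical.

facilities_total = 40
facilities_with_trained = 28
pct = facilities_with_trained / facilities_total * 100
print(f"% of facilities with trained teachers: {pct:.0f}%")  # 70%

# Composite index: weighted average of components already scaled 0-1.
components = {"access": 0.70, "quality": 0.55, "use": 0.80}
weights    = {"access": 0.4,  "quality": 0.4,  "use": 0.2}
index = sum(components[k] * weights[k] for k in components)
print(f"Composite index: {index:.2f}")  # 0.66
```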

Good practice when developing indicators:

  • Recognise context
  • Consult stakeholders
  • Define purpose and relate indicators clearly to the targets
  • Use mixed methods
  • Account for human and financial resources
  • Involve experts
  • Develop a basket of indicators

RESOURCES

1. SurveyCTO

2. Tableau

3. Dimagi

4. SocialCops

5. ChartBlocks

6. Chartist.js

7. Canva

8. GanttPRO

9. Unsplash

10. Google Data Studio

11. RAW

12. Impact Matrix - https://www.goodfinance.org.uk/impact-matrix

13. Tool to Improve Impact - https://www.inspiringimpact.org/measuring-up/

14. IRIS - https://iris.thegiin.org/
