
Impact Evaluation for Evidence-Based Policy Making

Jan 23, 2016

Transcript
Page 1: Impact Evaluation for Evidence-Based Policy Making

Impact Evaluation for Evidence-Based Policy Making

Arianna Legovini
Lead, Africa Impact Evaluation Initiative
AFTRL

Page 2: Impact Evaluation for Evidence-Based Policy Making


Answer Three Questions

Why is evaluation valuable?

What makes a good impact evaluation?

How to implement evaluation?

Page 3: Impact Evaluation for Evidence-Based Policy Making


IE Answers: How do we turn this teacher…

Page 4: Impact Evaluation for Evidence-Based Policy Making


…into this teacher?

Page 5: Impact Evaluation for Evidence-Based Policy Making

Why Evaluate?

Need evidence on what works: allocate a limited budget; fiscal accountability

Improve programs and policies over time: operational research; managing by results

Information is key to sustainability: negotiating budgets; informing constituents and managing the press; informing donors

Page 6: Impact Evaluation for Evidence-Based Policy Making

What is different between traditional M&E and Impact Evaluation?

Monitoring tracks implementation efficiency (input to output); impact evaluation measures effectiveness (output to outcome)

[Diagram: $$$ → INPUTS → OUTPUTS → OUTCOMES. MONITOR EFFICIENCY spans inputs to outputs; EVALUATE EFFECTIVENESS spans outputs to outcomes, where BEHAVIOR links outputs to outcomes.]

Page 7: Impact Evaluation for Evidence-Based Policy Making

Question types and methods

Process Evaluation / Monitoring: descriptive analysis
Is the program being implemented efficiently?
Is the program targeting the right population?
Are outcomes moving in the right direction?

Impact Evaluation: causal analysis
What was the effect of the program on outcomes?
How would outcomes change under alternative program designs?
Does the program impact people differently (e.g. females, the poor, minorities)?
Is the program cost-effective?

Page 8: Impact Evaluation for Evidence-Based Policy Making

Which can be answered by traditional M&E and which by IE?

Are ITNs being delivered as planned? (M&E)

Does school-based delivery of malaria treatment increase school attendance? (IE)

What is the correlation between health coverage and under-fives receiving treatment within 24 hours of fever? (M&E)

Does the house-to-house approach increase the number of under-fives sleeping under ITNs relative to communities using other community-based approaches? (IE)

Page 9: Impact Evaluation for Evidence-Based Policy Making

Types of Impact Evaluation

Efficacy: proof of concept; a pilot under ideal conditions

Effectiveness: at scale, under normal circumstances and capabilities; lower or higher impact? higher or lower costs?

Page 10: Impact Evaluation for Evidence-Based Policy Making

So, use impact evaluation to…

Test innovations
Scale up what works (e.g. de-worming)
Cut or change what does not (e.g. HIV counseling)
Measure the effectiveness of programs (e.g. JTPA)
Find the best tactics to change people's behavior (e.g. coming to the clinic)
Manage expectations

Example: PROGRESA/OPORTUNIDADES (Mexico): transition across presidential terms; expansion to 5 million households; change in benefits; battle with the press

Page 11: Impact Evaluation for Evidence-Based Policy Making


Next question please

Why is evaluation valuable?

What makes a good impact evaluation?

How to implement evaluation?

Page 12: Impact Evaluation for Evidence-Based Policy Making

Assessing impact: examples

How much does an anti-malaria program lower under-five mortality?

What is the beneficiary's health status with the program compared to without the program?

Ideally, compare the same individual with and without the program at the same point in time

But we can never observe the same individual with and without the program at the same point in time

Page 13: Impact Evaluation for Evidence-Based Policy Making

Solving the evaluation problem

Counterfactual: what would have happened without the program

Need to estimate the counterfactual, i.e. find a control or comparison group

Counterfactual criteria:
Treated and counterfactual groups have identical initial characteristics on average
The only reason for the difference in outcomes is the intervention
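In potential-outcomes notation (a standard formulation added here for clarity; the notation is not from the slides), with Y_1 the outcome with the program, Y_0 the outcome without, and D = 1 for the treated:

\[
% potential-outcomes formulation (added for clarity; not in the original slides)
\text{Impact on the treated} = E[Y_1 \mid D=1] - E[Y_0 \mid D=1]
\]

E[Y_0 | D=1] is the counterfactual and is never observed; a control group is valid when E[Y_0 | D=0] = E[Y_0 | D=1], which is exactly the comparability criterion above.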

Page 14: Impact Evaluation for Evidence-Based Policy Making

Two "counterfeit" counterfactuals

Before and after: the same individual before the treatment

Non-participants: those who chose not to enroll in the program, or those who were not offered the program

Page 15: Impact Evaluation for Evidence-Based Policy Making

Before and After Example

Food aid: compare mortality before and after the program, and find an increase in mortality. Did the program fail?

"Before" was a normal year, but "after" was a famine year

Cannot separate (identify) the effect of food aid from the effect of the drought or epidemic

Page 16: Impact Evaluation for Evidence-Based Policy Making

Before and After

Compare Y before and after the intervention

[Figure: outcome Y plotted over time, from t-1 (before) to t (after treatment). A = observed outcome after treatment; B = the before-after counterfactual (the pre-treatment level carried forward); C = the true counterfactual after controlling for time-varying factors. A-B = estimated impact; A-C = true impact. Here A-B under-estimates the true impact.]
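To make this concrete, here is a minimal simulation sketch (illustrative Python with made-up numbers; not from the slides). A common negative shock, like the famine year above, pushes everyone's outcome down, so the before-after comparison under-estimates the program's true impact:

import numpy as np

rng = np.random.default_rng(0)
n = 10_000

baseline = rng.normal(50, 5, n)   # outcome at t-1 (point B in the figure)
shock = -2.0                      # time-varying factor hitting everyone (e.g. famine)
effect = 5.0                      # true program impact (made-up number)

y_after = baseline + shock + effect + rng.normal(0, 1, n)  # observed outcome (A)
counterfactual = baseline + shock                          # true counterfactual (C), never observed

print("Before-after estimate (A-B):", round(float((y_after - baseline).mean()), 2))   # about 3: biased down
print("True impact (A-C):", round(float((y_after - counterfactual).mean()), 2))       # about 5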

Page 17: Impact Evaluation for Evidence-Based Policy Making

Non-Participants…

Compare non-participants to participants

Counterfactual: non-participant outcomes

Problem: why did they not participate?

Page 18: Impact Evaluation for Evidence-Based Policy Making

Exercise: why might participants and non-participants differ?

Mothers who came to the health unit for ORT and mothers who did not?
(The child had diarrhea; access to the clinic)

Communities that applied for funds for IRT and communities that did not?
(Coastal versus mountain; epidemic versus non-epidemic)

Children who received ACT and children who did not?
(The child had fever; access to the clinic)

Page 19: Impact Evaluation for Evidence-Based Policy Making

Health program example

Treatment is offered. Who signs up? Those who are sick; areas with epidemics

They have lower health status than those who do not sign up

Healthy people and communities are a poor estimate of the counterfactual

Page 20: Impact Evaluation for Evidence-Based Policy Making

Health insurance example

Health insurance is offered. Who buys it? Who does not?

Compare health care utilization of those who bought insurance to those who did not

Cannot separately identify the impact of insurance and of utilization on health

Page 21: Impact Evaluation for Evidence-Based Policy Making

What's wrong?

Selection bias: people choose to participate for specific reasons

Often the reasons are directly related to the outcome of interest. Health insurance: health status and medical expenditures

Cannot separately identify the impact of the program from these other factors/reasons
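The same point in the potential-outcomes notation used earlier (a standard decomposition added for clarity; not from the slides): a naive comparison of participants and non-participants mixes the program's effect with selection bias.

\[
% standard decomposition (added for clarity; not in the original slides)
\underbrace{E[Y \mid D=1] - E[Y \mid D=0]}_{\text{naive comparison}}
= \underbrace{E[Y_1 - Y_0 \mid D=1]}_{\text{impact on participants}}
+ \underbrace{E[Y_0 \mid D=1] - E[Y_0 \mid D=0]}_{\text{selection bias}}
\]

If sicker people buy insurance, the selection-bias term is non-zero and the naive comparison is misleading.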

Page 22: Impact Evaluation for Evidence-Based Policy Making

Program placement example

The government offers a family planning program to villages with high fertility

Compare fertility in villages offered the program to fertility in villages not offered it

The program is targeted based on fertility, so treatment villages have high fertility and counterfactual villages have low fertility

Cannot separately identify the program impact from the geographic targeting criteria

Page 23: Impact Evaluation for Evidence-Based Policy Making

Need to know…

Why some get the program and others do not

How some get into the treatment group and others into the control group

The process by which the data are generated

If these reasons are correlated with the outcome, we cannot identify/separate the program impact from other explanations of the differences in outcomes

Page 24: Impact Evaluation for Evidence-Based Policy Making

Possible Solutions…

Guarantee comparability of the treatment and control groups, so that the ONLY remaining difference is the intervention

In this workshop we will consider:
Experimental design/randomization
Quasi-experiments: regression discontinuity, double differences (sketched below), instrumental variables
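Double differences remove the common time shock that biased the before-after example above, by subtracting the change observed in a control group. A minimal sketch (illustrative Python, made-up numbers; not from the slides):

import numpy as np

rng = np.random.default_rng(1)
n = 10_000
shock, effect = -2.0, 5.0   # same made-up common shock and true effect as before

treat_before = rng.normal(50, 5, n)
ctrl_before = rng.normal(50, 5, n)
treat_after = treat_before + shock + effect + rng.normal(0, 1, n)
ctrl_after = ctrl_before + shock + rng.normal(0, 1, n)

# Double difference: (change among treated) minus (change among controls)
did = (treat_after - treat_before).mean() - (ctrl_after - ctrl_before).mean()
print("Double-difference estimate:", round(float(did), 2))  # about 5, the true effect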

Page 25: Impact Evaluation for Evidence-Based Policy Making

These solutions all involve…

Randomization: give everyone an equal chance of being in the control or treatment group

This guarantees that all factors and characteristics will be equal, on average, between the groups; the only difference is the intervention (see the sketch below)

If randomization is not possible, we need transparent and observable criteria for who is offered the program
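A minimal sketch of what randomization buys you (illustrative Python, made-up numbers; not from the slides): random assignment balances baseline characteristics on average, so a simple difference in means recovers the program impact.

import numpy as np

rng = np.random.default_rng(2)
n = 10_000

health = rng.normal(50, 10, n)   # made-up heterogeneous baseline health status
treated = rng.random(n) < 0.5    # every individual has an equal chance of treatment

# Groups are balanced at baseline, on average
print("Baseline means:", round(float(health[treated].mean()), 2), round(float(health[~treated].mean()), 2))

# With a true effect of +5 for the treated, the difference in means recovers it
outcome = health + 5.0 * treated + rng.normal(0, 1, n)
print("Difference in means:", round(float(outcome[treated].mean() - outcome[~treated].mean()), 2))  # about 5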

Page 26: Impact Evaluation for Evidence-Based Policy Making


The Last Question

Why is evaluation valuable?

What makes a good impact evaluation?

How to implement evaluation?

Page 27: Impact Evaluation for Evidence-Based Policy Making

Implementation Issues

Political economy
Policy context
Finding a good control
Retrospective versus prospective designs
Making the design compatible with operations
Ethical issues
Relationship to "results" monitoring

Page 28: Impact Evaluation for Evidence-Based Policy Making

Political Economy

What is the policy purpose?
In the USA: derail from national policy, defend the budget
In RSA: answer to the electorate
In Mexico: allocate the budget to poverty programs
In an IDA country: pressure to demonstrate aid effectiveness and scale up
In a poor country: hard constraints and ambitious targets

Page 29: Impact Evaluation for Evidence-Based Policy Making

Political Economy

A cultural shift:

From retrospective evaluation: look back and judge

To prospective evaluation: decide what we need to learn; experiment with alternatives; measure and inform; adopt better alternatives over time

A change in incentives: rewards for changing programs that do not work; rewards for generating knowledge; separating job performance from knowledge generation

Page 30: Impact Evaluation for Evidence-Based Policy Making

The Policy Context

Address policy-relevant questions:
What policy questions need to be answered?
What outcomes answer those questions?
What indicators measure those outcomes?
How much of a change in the outcomes would determine success?

Example: teacher performance-based pay. Scale up the pilot? Criterion: need at least a 10% increase in test scores with no change in unit costs

Page 31: Impact Evaluation for Evidence-Based Policy Making

Prospective designs

Use opportunities to generate good control groups

Most programs cannot deliver benefits to all those eligible

Budgetary limitations: the eligible who get the program are potential treatments; the eligible who do not are potential controls

Logistical limitations: those who go first are potential treatments; those who go later are potential controls

Page 32: Impact Evaluation for Evidence-Based Policy Making

Who gets the program?

Eligibility criteria:
Are benefits targeted?
How are they targeted?
Can we rank the eligible by priority?
Are the measures good enough for fine rankings?

Who goes first? Roll-out:
Does everyone have an equal chance to go first, second, third?

Page 33: Impact Evaluation for Evidence-Based Policy Making

Ethical Considerations

Do not delay benefits: base the roll-out on budget and administrative constraints

Equity: equally deserving beneficiaries deserve an equal chance of going first

Use a transparent and accountable method: give everyone eligible an equal chance; if ranking is based on criteria, the criteria should be quantitative and public

Page 34: Impact Evaluation for Evidence-Based Policy Making

Retrospective Designs

Hard to find good control groups: must live with arbitrary or unobservable allocation rules

Administrative data must be good enough to show the program was implemented as described

Need a pre-intervention baseline survey, on both controls and treatments, with covariates to control for initial differences

Without a baseline it is difficult to use quasi-experimental methods

Page 35: Impact Evaluation for Evidence-Based Policy Making

Manage for results

Retrospective evaluation cannot be used to manage for results

Use resources wisely: do a prospective evaluation design
Better methods
More tailored policy questions
Precise estimates
Timely feedback and program changes
Improved results on the ground

Page 36: Impact Evaluation for Evidence-Based Policy Making

Monitoring Systems

Projects and programs regularly collect data for management purposes

Typical content:
Lists of beneficiaries
Distribution of benefits
Expenditures
Outcomes
Ongoing process evaluation

This information is needed for impact evaluation

Page 37: Impact Evaluation for Evidence-Based Policy Making

Evaluation uses this information to verify:
Who is a beneficiary
When they started
What benefits were actually delivered

A necessary condition for a program to have an impact: benefits need to reach the targeted beneficiaries

Page 38: Impact Evaluation for Evidence-Based Policy Making

Improve the use of monitoring data for IE

Program monitoring data are usually collected only in areas where the program is active

Collect baseline data for control areas as well

Very cost-effective, since there is little need for additional special surveys

Add a couple of outcome indicators

Most IEs use only monitoring data

Page 39: Impact Evaluation for Evidence-Based Policy Making

Overall Messages

Impact evaluation is useful for:
Validating program design
Adjusting program structure
Communicating to the finance ministry and civil society

A good evaluation design requires estimating the counterfactual: what would have happened to beneficiaries if they had not received the program

Need to know all the reasons why beneficiaries got the program and others did not

Page 40: Impact Evaluation for Evidence-Based Policy Making

Design Messages

Address policy questions: what is interesting is what the government needs and will use

Secure stakeholder buy-in

It is easiest to use prospective designs

Good monitoring systems and administrative data can improve IE and lower costs