Impact Evaluation for Evidence-Based Policy Making

Transcript
Page 1: Impact Evaluation  for Evidence-Based Policy Making

Impact Evaluation for Evidence-Based Policy Making

Arianna Legovini
Lead Specialist
Africa Impact Evaluation Initiative

Page 2: Impact Evaluation  for Evidence-Based Policy Making


How to turn this child…

Page 3: Impact Evaluation  for Evidence-Based Policy Making


…into this child

Page 4: Impact Evaluation  for Evidence-Based Policy Making


Why Evaluate?

• Fiscal accountability
  – Allocate limited budget to what works best

• Program effectiveness
  – Managing by results: do more of what works

• Political sustainability
  – Negotiate budget
  – Inform constituents

Page 5: Impact Evaluation  for Evidence-Based Policy Making


Traditional M&E and Impact Evaluation

• Monitoring to track implementation efficiency (input → output)

• Impact evaluation to measure effectiveness (output → outcome)

[Diagram: $$$ → INPUTS → OUTPUTS → OUTCOMES; "MONITOR EFFICIENCY" spans inputs to outputs, "EVALUATE EFFECTIVENESS" spans outputs to outcomes, with BEHAVIOR driving the output-to-outcome link]

Page 6: Impact Evaluation  for Evidence-Based Policy Making


Question types and methods

• M&E: monitoring & process evaluation → descriptive analysis
  ▫ Is the program being implemented efficiently?
  ▫ Is the program targeting the right population?
  ▫ Are outcomes moving in the right direction?

• Impact evaluation → causal analysis
  ▫ What was the effect of the program on outcomes?
  ▫ How would outcomes change under alternative program designs?
  ▫ Is the program cost-effective?

Page 7: Impact Evaluation  for Evidence-Based Policy Making


Answer with traditional M&E or IE?

• Are nets being delivered as planned? → M&E

• Do IPTs increase cognitive ability? → IE

• What is the correlation between HIV treatment and prevalence? → M&E

• How does HIV testing affect prevention behavior? → IE

Page 8: Impact Evaluation  for Evidence-Based Policy Making


Efficacy & Effectiveness

• Efficacy:
  – Proof of concept
  – Pilot under ideal conditions

• Effectiveness:
  – At scale
  – Normal circumstances & capabilities
  – Lower or higher impact?
  – Higher or lower costs?

Page 9: Impact Evaluation  for Evidence-Based Policy Making


Use impact evaluation to…

• Test innovations
• Scale up what works (e.g. de-worming)
• Cut/change what does not (e.g. HIV counseling)
• Measure effectiveness of programs (e.g. JTPA)
• Find best tactics to change people's behavior (e.g. bring children to school)
• Manage expectations

Page 10: Impact Evaluation  for Evidence-Based Policy Making


What makes a good impact evaluation?

Page 11: Impact Evaluation  for Evidence-Based Policy Making


Evaluation problem

• Ideally, compare the same individual with & without the program at the same point in time

• BUT we never observe the same individual both with and without the program at the same point in time

• Formally, the impact of the program is:

α = (Y | P=1) − (Y | P=0)

• Example: how much does an anti-malaria program lower under-five mortality?
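The missing-data nature of the problem can be seen in a minimal Python sketch (all numbers hypothetical): every child has two potential outcomes, with and without the program, but any real dataset records only one of them, so α can never be computed child by child.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10

# Hypothetical potential outcomes (illustrative numbers only):
# y0 = under-five mortality risk without the anti-malaria program,
# y1 = risk with it; the assumed true per-child impact is y1 - y0 = -0.02.
y0 = rng.normal(0.10, 0.02, n)
y1 = y0 - 0.02

# In reality each child is either treated (P=1) or not (P=0), so we
# observe only one member of each pair; the other is the missing
# counterfactual.
p = rng.integers(0, 2, n)
y_observed = np.where(p == 1, y1, y0)

for i in range(3):
    print(f"child {i}: P={p[i]}, observed Y={y_observed[i]:.3f}, counterfactual = ?")
```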

Page 12: Impact Evaluation  for Evidence-Based Policy Making


Solving the evaluation problem

• Counterfactual: what would have happened without the program

• Estimate the counterfactual
  – i.e. find a control or comparison group (a balance check is sketched below)

• Counterfactual criteria
  – Treated & counterfactual groups have identical initial average characteristics
  – The only reason for the difference in outcomes is the intervention
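One way to probe the first criterion is a baseline balance table. The sketch below assumes survey data in a pandas DataFrame with hypothetical column names (`treated`, `age`, `income`, `bednet_use`); large, statistically significant gaps in baseline means suggest the comparison group is not a credible counterfactual.

```python
import pandas as pd
from scipy import stats

def balance_table(df: pd.DataFrame, treat_col: str, covariates: list) -> pd.DataFrame:
    """Compare baseline means of the treated and comparison groups."""
    treated = df[df[treat_col] == 1]
    control = df[df[treat_col] == 0]
    rows = []
    for x in covariates:
        # Welch t-test for a difference in baseline means.
        _, p_value = stats.ttest_ind(treated[x], control[x], equal_var=False)
        rows.append({
            "covariate": x,
            "treated_mean": treated[x].mean(),
            "control_mean": control[x].mean(),
            "difference": treated[x].mean() - control[x].mean(),
            "p_value": p_value,
        })
    return pd.DataFrame(rows)

# Usage with a hypothetical baseline survey:
# print(balance_table(baseline, "treated", ["age", "income", "bednet_use"]))
```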

Page 13: Impact Evaluation  for Evidence-Based Policy Making


“Counterfeit” Counterfactuals

• Before and after:
  – The same individual, before the treatment

• Non-participants:
  – Those who choose not to enroll in the program, or
  – Those who were not offered the program

• Problem: we cannot determine why some are treated and some are not

Page 14: Impact Evaluation  for Evidence-Based Policy Making


Before and After Example

• Food aid
  – Compare mortality before and after
  – Observe that mortality increases
  – Did the program fail?
  – "Before" was a normal year, but "after" was a famine year
  – Cannot separate (identify) the effect of food aid from the effect of drought

Page 15: Impact Evaluation  for Evidence-Based Policy Making


Before & After

• Compare Y before & after the intervention
  – Before & after counterfactual = B; estimated impact = A − B

• Control for time-varying factors
  – True counterfactual = C; true impact = A − C
  – A − B under-estimates the true impact (simulated below)

[Figure: outcome Y plotted over time; treatment occurs between t−1 (before) and t (after); A = observed outcome after treatment, B = the before-and-after counterfactual, C = the true counterfactual]
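A simulation with made-up numbers makes the figure concrete, using the food-aid story: a drought pulls outcomes down between t−1 and t, so the before-and-after estimate A − B under-states the true impact A − C.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5_000

y_before = rng.normal(50.0, 5.0, n)  # outcome at t-1 (point B)
trend = -4.0                         # drought: outcomes fall even without aid
effect = 6.0                         # assumed true program impact

y_after = y_before + trend + effect  # observed outcome at t (point A)
counterfactual = y_before + trend    # unobserved point C

print(f"before-after estimate (A - B): {y_after.mean() - y_before.mean():+.2f}")
print(f"true impact (A - C):           {y_after.mean() - counterfactual.mean():+.2f}")
# The before-after estimate (~ +2) under-states the true impact (+6).
```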

Page 16: Impact Evaluation  for Evidence-Based Policy Making


Non-Participants…

• Compare non-participants to participants

• Counterfactual: non-participant outcomes

• Problem: why did they not participate?

• Estimated impact:

α_i = (Y_it | P=1) − (Y_kt | P=0)

• Hypothesis (the identifying assumption):

(Y_kt | P=0) = (Y_it | P=0)

  – i.e. non-participants' outcomes equal what participants' outcomes would have been without the program

Page 17: Impact Evaluation  for Evidence-Based Policy Making


Exercise: Why might participants and non-participants differ?

• Mothers who came to the health unit for ORT and mothers who did not?
  – Child had diarrhea; access to clinic

• Communities that applied for funds for IRS and communities that did not?
  – Coastal and mountain; epidemic and non-epidemic

• People who receive ART and people who do not?
  – People with HIV; access to clinic

Page 18: Impact Evaluation  for Evidence-Based Policy Making


Health program example

• Treatment offered

• Who signs up?
  – Those who are sick
  – Areas with epidemics

• They have lower health status than those who do not sign up

• Healthy people/communities are a poor estimate of the counterfactual

Page 19: Impact Evaluation  for Evidence-Based Policy Making


What's wrong?

• Selection bias: People choose to participate for specific reasons

• Often these reasons are directly related to the outcome of interest

• We cannot separately identify the impact of the program from these other factors/reasons (simulated below)
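A simulation with made-up numbers shows how badly this can mislead: the sick self-select into a health program that genuinely raises health by 5 points, yet the naive participant-vs-non-participant comparison comes out negative.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

# Hypothetical health program: the sick self-select in.
baseline_health = rng.normal(60.0, 10.0, n)
signs_up = baseline_health < 55        # only those in poor health enroll
effect = 5.0                           # assumed true program impact

outcome = baseline_health + np.where(signs_up, effect, 0.0)

naive = outcome[signs_up].mean() - outcome[~signs_up].mean()
print(f"true effect:    +{effect:.1f}")
print(f"naive estimate: {naive:+.1f}  # selection bias swamps the true impact")
```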

Page 20: Impact Evaluation  for Evidence-Based Policy Making


Need to know…

• Why some get assigned to the treatment group and others to the control group: if the reasons are correlated with the outcome, we cannot separately identify the program impact from these other "selection" factors

• The process by which the data are generated

Page 21: Impact Evaluation  for Evidence-Based Policy Making


Possible Solutions…

• Guarantee comparability of treatment and control groups

• ONLY remaining difference is the intervention

• How?
  – Experimental design/randomization
  – Quasi-experiments
    ▫ Regression discontinuity
    ▫ Double differences (sketched after this list)
  – Instrumental variables
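As a taste of the quasi-experimental toolkit, here is a sketch of the double-difference idea with made-up numbers: subtracting the comparison group's before-after change from the treated group's removes any time trend common to both.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 10_000

# Hypothetical panel: a common shock hits everyone between periods,
# and the program adds +3 for the treated group only.
treated = rng.random(n) < 0.5
y_pre = rng.normal(20.0, 4.0, n)
common_trend = 2.0
y_post = y_pre + common_trend + np.where(treated, 3.0, 0.0)

dd = ((y_post[treated].mean() - y_pre[treated].mean())
      - (y_post[~treated].mean() - y_pre[~treated].mean()))
print(f"double-difference estimate: {dd:+.2f}  # ~ +3, common trend removed")
```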

Page 22: Impact Evaluation  for Evidence-Based Policy Making


These solutions all involve…

• EITHER randomization
  – Give everyone an equal chance of being in the control or treatment group
  – Guarantees that all factors/characteristics will, on average, be equal between the groups (illustrated below)
  – The only difference is the intervention

• OR transparent & observable criteria for assignment into the program
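A sketch of that guarantee with hypothetical characteristics: under random assignment even traits the evaluator never measures balance on average, so a simple difference in mean outcomes recovers the (assumed) true effect.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100_000

# Characteristics the evaluator may not even observe:
wealth = rng.normal(0.0, 1.0, n)
health = rng.normal(0.0, 1.0, n)

# Random assignment: every unit has an equal chance of treatment.
treated = rng.random(n) < 0.5

print(f"wealth gap: {wealth[treated].mean() - wealth[~treated].mean():+.4f}")  # ~ 0
print(f"health gap: {health[treated].mean() - health[~treated].mean():+.4f}")  # ~ 0

# So the difference in means identifies the impact (assumed effect = 1.5):
outcome = 10.0 + 2.0 * wealth + np.where(treated, 1.5, 0.0)
print(f"difference in means: {outcome[treated].mean() - outcome[~treated].mean():+.4f}")
```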

Page 23: Impact Evaluation  for Evidence-Based Policy Making


Finding controls: opportunities

• Budget constraints:
  – Eligible who get it = potential treatments
  – Eligible who do not = potential controls

• Roll-out capacity:
  – Those who go first = potential treatments
  – Those who go later = potential controls

Page 24: Impact Evaluation  for Evidence-Based Policy Making


Finding controls: ethical considerations

• Do not delay benefits: Rollout based on budget/capacity constraints

• Equity: equally deserving populations deserve an equal chance of going first

• Transparent & accountable method
  – Give everyone eligible an equal chance
  – If ranking is based on criteria, the criteria should be measurable and public

Page 25: Impact Evaluation  for Evidence-Based Policy Making

25

Thank you