

povertyactionlab.org

Impact Evaluation Methods
April 9, 2012

Diva Dhar
CLEAR Regional Center

J‐PAL South Asia

a. Pre‐Post
b. Simple Difference
c. Differences‐in‐Differences
d. Multivariate Regression
e. Statistical Matching
f. Instrumental Variables
g. Regression Discontinuity
h. Randomized Evaluations

Impact evaluation methods

Example: Pratham’s Balsakhi Program

• Many children in 3rd and 4th standard were not even at the 1st standard level of competency

• Class sizes were large
• Social distance between teacher and some of the students was large

What was the problem?

• 124 Municipal Schools in Vadodara (Western India)

• 2002 & 2003: Two academic years

• ~ 17,000 children

• Pratham – “Every child in school and learning well”

• Works with most states in India reaching millions of children

Context and Partner

• Hire local women (balsakhis) from the community

• Train them to teach remedial competencies
– Basic literacy, numeracy

• Identify lowest performing 3rd and 4th standard students
– Take these students out of class (2 hours/day)
– Balsakhi teaches them basic competencies

Proposed solution

Pros

• Reduced social distance
• Reduced class size
• Teaching at appropriate level

• Improved learning for lower‐performing students

• Improved learning for higher‐performers

Cons

• Less qualified
• Teacher resentment
• Reduced interaction with higher‐performing peers
• Increased gap in learning
• Reduced test scores for all kids

Possible outcomes

What can we conclude?

We conduct a test at the end
• Balsakhi students score an average of 51%

1 ‐ Pre‐post (Before vs. After)
Average change in the outcome of interest before and after the programme

• Look at average change in test scores over the school year for the balsakhi children

1 ‐ Pre‐post (Before vs. After)

• QUESTION: Under what conditions can this difference (26.42) be interpreted as the impact of the balsakhi program?

1 ‐ Pre‐post (Before vs. After)

Average post‐test score for children with a balsakhi: 51.22
Average pretest score for children with a balsakhi: 24.80
Difference: 26.42
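As a minimal sketch, the pre‐post calculation uses only the treated group’s averages from the slides:

```python
# Pre-post (before vs. after): average change in the outcome for the
# treated group. Scores are the averages reported in the slides.
pre_score = 24.80   # average pretest score, children with a balsakhi
post_score = 51.22  # average post-test score, children with a balsakhi

pre_post_estimate = round(post_score - pre_score, 2)
print(pre_post_estimate)  # 26.42
```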

What would have happened without the balsakhi program?

Method 1: Before vs. After. Impact = 26.42 points?

[Chart: average test score (scale 0 to 75), 2002 vs. 2003, showing the 26.42‐point pre‐post gain]

Impact is defined as a comparison between:

1. the outcome some time after the program has been introduced

2. the outcome at that same point in time had the program not been introduced 

the “counterfactual”

How to measure impact?


Impact: What is it?

[Diagram: primary outcome plotted over time; impact is the gap, after the intervention, between the observed outcome and the counterfactual trajectory]

What else can we do to estimate impact?

• Limitations of the method: No comparison group, doesn’t take time trend into account

Pre‐Post:

• Example:
– programme is rolled out in phases, leaving a cohort for comparison, even though the assignment of the program is not random

2 ‐ Simple difference

A post‐programme comparison of outcomes between the group that received the programme and a “comparison” group that did not

2 ‐ Simple difference

Compare post‐program test scores of children who got balsakhi with test scores of children who did not get balsakhi.

• QUESTION: Under what assumptions can this difference (‐5.05) be interpreted as the impact of the balsakhi program?

2 ‐ Simple difference

Average score for children with a balsakhi: 51.22
Average score for children without a balsakhi: 56.27
Difference: ‐5.05
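The simple‐difference estimate can be sketched the same way, using the post‐test averages for the two groups:

```python
# Simple difference: post-program comparison of the treated group
# against a comparison group that did not receive the program.
treated_mean = 51.22     # children with a balsakhi
comparison_mean = 56.27  # children without a balsakhi

simple_difference = round(treated_mean - comparison_mean, 2)
print(simple_difference)  # -5.05
```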

3 – Difference‐in‐Differences

Compare gains in test scores of children who got balsakhi with gains in test scores of children who did not get balsakhi.

• Suitability:
– programme is rolled out in phases, leaving a cohort for comparison, even though assignment of treatment is not random

3 – Difference‐in‐Differences (or Double Difference)

Comparison of outcomes between a treatment and comparison group (1st difference), before and after the programme (2nd difference)

3 ‐ Difference‐in‐differences 

Group                          Pretest   Post‐test   Difference
Children with a balsakhi       24.80     51.22       26.42

Method 3: Difference‐in‐differences

What would have happened without balsakhi?

[Chart: average test score (scale 0 to 75), 2002 vs. 2003; the treated gain of 26.42 points, against an unknown counterfactual]

3 ‐ Difference‐in‐differences 

Group                          Pretest   Post‐test   Difference
Children with a balsakhi       24.80     51.22       26.42
Children without a balsakhi    36.67     56.27       19.60

Method 3: Difference‐in‐differences

What would have happened without balsakhi?

[Chart: test scores, 2002 vs. 2003; treated gain of 26.42 vs. comparison gain of 19.60, leaving 6.82 points?]

• QUESTION: Under what conditions can 6.82 be interpreted as the impact of the balsakhi program?
• Issues:
– failure of the “parallel trend assumption”, i.e. without the program the two groups would not have experienced the same change over time

3 ‐ Difference‐in‐differences

Group                          Pretest   Post‐test   Difference
Children with a balsakhi       24.80     51.22       26.42
Children without a balsakhi    36.67     56.27       19.60
Difference                                           6.82
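The double difference nets the comparison group’s gain out of the treated group’s gain; a minimal sketch with the averages above:

```python
# Difference-in-differences: (treated gain) - (comparison gain).
treated = {"pre": 24.80, "post": 51.22}     # children with a balsakhi
comparison = {"pre": 36.67, "post": 56.27}  # children without a balsakhi

treated_gain = treated["post"] - treated["pre"]           # 26.42
comparison_gain = comparison["post"] - comparison["pre"]  # 19.60
did_estimate = round(treated_gain - comparison_gain, 2)
print(did_estimate)  # 6.82
```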

[Scatter plot: post‐test score against income, with separate series for children with a balsakhi (post_tot_B) and without (post_tot_noB)]

4 ‐ Accounting for other factors
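Accounting for other factors amounts to a multivariate regression of the post‐test score on a treatment indicator plus control variables. A sketch on synthetic data (the variable names and all numbers below are made up for illustration; this is not the actual balsakhi dataset):

```python
import numpy as np

# Synthetic, illustrative data: lower performers get the (hypothetical)
# remedial program, and the true program effect is set to 2.0 points.
rng = np.random.default_rng(0)
n = 1000
pretest = rng.normal(30, 10, n)
income = rng.normal(50, 15, n)
treated = (pretest < 30).astype(float)
posttest = 20 + 0.8 * pretest + 0.05 * income + 2.0 * treated \
    + rng.normal(0, 5, n)

# OLS with controls: regress post-test on treatment, pretest, income.
X = np.column_stack([np.ones(n), treated, pretest, income])
beta, *_ = np.linalg.lstsq(X, posttest, rcond=None)
print(round(beta[1], 1))  # coefficient on treatment, near the true 2.0
```

A naive comparison of means here would be biased downward, because the treated group starts with lower pretest scores; conditioning on the controls recovers an estimate near the true effect.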

Impact of Balsakhi ‐ Summary

Method                          Impact Estimate
(1) Pre‐post                    26.42*
(2) Simple Difference           ‐5.05*
(3) Difference‐in‐Difference    6.82*
(4) Regression with controls    1.92

*: Statistically significant at the 5% level

• There are more sophisticated non‐experimental and quasi‐experimental methods to estimate program impacts:
– Multivariable Regression
– Matching
– Instrumental Variables
– Regression Discontinuity

• These methods rely on being able to “mimic” the counterfactual under certain assumptions

• Problem: Assumptions are not testable

5 – Other Methods

• Counterfactual is often constructed by selecting a group not affected by the program

• Non‐randomized:– Argue that a certain excluded group mimics the counterfactual. 

• Randomized:– Use random assignment of the program to create a control group which mimics the counterfactual.


Constructing the counterfactual

Randomized Evaluations


Treatment Group

Comparison Group

• Individuals, clients, firms, villages are randomly selected to receive the treatment, while other units serve as a comparison

GROUPS ARE STATISTICALLY IDENTICAL BEFORE PROGRAM

ANY DIFFERENCES AT ENDLINE CAN BE ATTRIBUTED TO PROGRAM

Basic set‐up of a randomized evaluation

[Diagram: Total Population → Target Population → Evaluation Sample (units not sampled are not in the evaluation) → Random Assignment → Treatment Group and Control Group]

Random sampling and random assignment

• Randomly sample from area of interest
• Randomly assign to treatment and control
• Randomly sample from both treatment and control
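A toy simulation of the random‐assignment step (pretest scores are simulated, standard library only): splitting a sample at random produces groups with very similar baseline averages, which is what makes endline differences attributable to the program.

```python
import random

# Random assignment: shuffle the evaluation sample, split it in half.
# Pretest scores here are simulated, purely for illustration.
random.seed(42)
evaluation_sample = [random.gauss(30, 10) for _ in range(10_000)]
random.shuffle(evaluation_sample)
treatment = evaluation_sample[:5_000]
control = evaluation_sample[5_000:]

def mean(xs):
    return sum(xs) / len(xs)

# Baseline means should be close before the program starts.
print(round(abs(mean(treatment) - mean(control)), 2))
```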

Impact of Balsakhi ‐ Summary

Method                          Impact Estimate
(1) Pre‐post                    26.42*
(2) Simple Difference           ‐5.05*
(3) Difference‐in‐Difference    6.82*
(4) Regression with controls    1.92
(5) Randomized Experiment       5.87*

*: Statistically significant at the 5% level

Bottom Line: Which method we use matters. Be aware of the assumptions and possible biases.


Method: Pre‐Post
Comparison: Program participants before the program
Works if: The program was the only factor influencing any changes in the measured outcome over time

Method: Simple Difference
Comparison: Individuals who did not participate (data collected after the program)
Works if: Non‐participants are identical to participants except for program participation, and were equally likely to enter the program before it started

Method: Differences in Differences
Comparison: Same as above, plus data collected before and after
Works if: If the program didn’t exist, the two groups would have had identical trajectories over this period

Method: Multivariate Regression
Comparison: Same as above, plus additional “explanatory” variables
Works if: Omitted variables (not measured or not observed) do not bias the results, because they are either uncorrelated with the outcome or do not differ between participants and non‐participants

Method: Propensity Score Matching
Comparison: Non‐participants with a mix of characteristics predicting they would be as likely to participate as participants
Works if: Same as above

Method: Randomized Evaluation
Comparison: Participants randomly assigned to the control group
Works if: Randomization “works”: the two groups are statistically identical on observed and unobserved characteristics

Conditions required

Qualitative vs Quantitative

• Focus Group Discussions
• Case Studies
• Interviews – semi‐structured, structured
• Participatory methods – Participatory Rural Appraisal (PRA), Rapid Rural Appraisal (RRA)
• Most Significant Change
• Observations

Qualitative methods
