Rochester City School District 2010 Symposium: Improving Student Achievement While Overcoming Adversity
Kent Gardner, PhD, President, Center for Governmental Research
Transcript
Page 1

Rochester City School District 2010 Symposium: Improving Student Achievement While Overcoming Adversity

Kent Gardner, PhD, President, Center for Governmental Research

Page 2

Practical Educational Program Evaluation
Challenges & Issues
Examples

2001 WIN Schools Evaluation
2005 Rochester Charter Schools
Harvard NYC Charter
Stanford National Charter
Middle College
Hillside Work-Scholarship Connection

Page 3

What’s the goal?

Middle College: College prep
Hillside Work-Scholarship Connection (HWSC): “Graduation is the Goal”

Who decides?
What if the endeavor has multiple goals?
Can you monitor progress by measuring intermediate or process goals?

Page 4

What does success look like?
Does the goal have a measurable outcome?

Graduation is relatively easy to measure
How do you measure college readiness?

Are there intermediate outcomes that are measurable?

Attendance
Credits accumulated
Which intermediate outcomes contribute most powerfully to the final outcome?
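
Where student-level data on these intermediate outcomes are available, their relative contribution to graduation can be explored with a simple model. Below is a minimal sketch in Python using simulated data; the column names and effect sizes are illustrative assumptions, not figures from any CGR study.

```python
# Sketch: which intermediate outcomes (attendance, credits accumulated)
# predict on-time graduation most strongly? All data are simulated.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "attendance": rng.uniform(0.5, 1.0, n),   # share of days attended
    "credits": rng.uniform(0.0, 7.0, n),      # credits accumulated
})
# Assumed (illustrative) relationship between intermediate and final outcomes.
true_logit = -6 + 5 * df["attendance"] + 0.5 * df["credits"]
df["graduated"] = rng.binomial(1, 1 / (1 + np.exp(-true_logit)))

# Standardize predictors so their coefficients are on a comparable scale.
Z = (df[["attendance", "credits"]] - df[["attendance", "credits"]].mean()) \
    / df[["attendance", "credits"]].std()
fit = sm.Logit(df["graduated"], sm.add_constant(Z)).fit(disp=False)
print(fit.params)   # larger standardized coefficient = stronger association
```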

Page 5

Data Pitfalls
Why were the data collected?

Unemployment insurance
NYS’s checkbook
School lunch

If you intend to adapt data to a new use, are they accurate enough for the new purpose?

Page 6

Page 7

Data Pitfalls
Bias/Fraud

High stakes tests: NYSED cut scores
Attendance
Suspensions

Consistency
Elementary grades across classes, schools
Coding across years
Coding across data systems: attendance can vary depending on how & when it is measured

Page 8

Assessing impact
Consider how the program affects outcomes. Ideally, we would compare how outcomes for individual students would have differed had they not participated. Instead, we compare outcomes for the “experimental” group (HWSC or Middle College participants, for example) to those of students who did not participate.
Challenges

What’s the comparison group? All others who might have participated?
Can you control for all differences?

Page 9

Matched Group Comparison
Experimental design is the “platinum standard”

Random assignment to either control or experimental group
“Double blind” to avoid placebo effect
Assignment from homogeneous population

Random assignment
Challenging: how do you find a context in which you can randomly select?
Costly: if you want to be sure of drawing from a homogeneous population, you need a big sample
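
Random assignment itself is mechanically simple; the hard parts are finding a context that allows it and drawing a large enough pool. A minimal sketch of the mechanics, with a hypothetical applicant pool and illustrative column names:

```python
# Sketch: randomly assign a hypothetical applicant pool to experimental
# vs. control groups, then check that observed characteristics balance.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
applicants = pd.DataFrame({
    "student_id": range(200),
    "prior_score": rng.normal(650, 40, 200),   # e.g., prior state test score
})

# Shuffle group labels so each applicant has an equal chance of either group.
applicants["group"] = rng.permutation(["experimental"] * 100 + ["control"] * 100)

# With a big enough sample, observed (and unobserved) characteristics
# should average out across the two groups.
print(applicants.groupby("group")["prior_score"].mean())
```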

Page 10

Fallback from random assignment
When random assignment is infeasible or too costly, revert to a “quasi-experimental” design: the “control group” is created by a process of selecting similar students

Case Control: match one to one based on common characteristics
Propensity Score Matching
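
A minimal sketch of one-to-one case control matching on common characteristics; the DataFrames and column names are hypothetical. (Propensity score matching is sketched after the next slide.)

```python
# Sketch: match each participant to one non-participant who shares the
# same observed characteristics. All data and columns are hypothetical.
import pandas as pd

participants = pd.DataFrame({
    "id": [1, 2, 3],
    "grade": [9, 9, 10],
    "sex": ["F", "M", "F"],
    "frpl": [1, 1, 0],          # free/reduced-price lunch flag
})
pool = pd.DataFrame({
    "id": [101, 102, 103, 104, 105],
    "grade": [9, 9, 10, 10, 9],
    "sex": ["F", "M", "F", "M", "F"],
    "frpl": [1, 1, 0, 1, 0],
})

matches = []
used = set()
for _, p in participants.iterrows():
    candidates = pool[(pool["grade"] == p["grade"]) &
                      (pool["sex"] == p["sex"]) &
                      (pool["frpl"] == p["frpl"]) &
                      (~pool["id"].isin(used))]
    if not candidates.empty:
        match_id = candidates.iloc[0]["id"]
        used.add(match_id)
        matches.append((p["id"], match_id))

print(matches)   # each participant paired with one comparison student
```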

Page 11

Propensity Score Matching
Sophisticated statistical technique: creates a statistical model that predicts group membership according to available characteristics of participants
“Retroactive” selection of control group: can employ large data sets, including demographic characteristics, test scores prior to program participation, etc., & guarantee a control group of a predetermined size
Students “in program” can be matched to multiple students not in program: 1:1, 1:3, 1:5 matching proportions possible depending on size of comparison population
Still can’t control for unseen factors (family characteristics, motivation, etc.) that may be consistently different in one group over the other
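
A minimal sketch of the technique just described: fit a model that predicts program membership from observed characteristics, then match each participant to the nearest-scoring non-participant. The data are simulated and the variable names are illustrative, not drawn from the HWSC or charter studies.

```python
# Sketch: propensity score matching = a logistic model of program
# membership plus nearest-neighbor matching on the estimated score.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 1000
df = pd.DataFrame({
    "prior_score": rng.normal(650, 40, n),
    "attendance": rng.uniform(0.5, 1.0, n),
})
# Assumed enrollment process that depends on observed characteristics.
p_enroll = 1 / (1 + np.exp(-(-10 + 0.01 * df["prior_score"] + 3 * df["attendance"])))
df["in_program"] = rng.binomial(1, p_enroll)

# 1. Estimate each student's propensity to be "in program".
X = sm.add_constant(df[["prior_score", "attendance"]])
df["pscore"] = sm.Logit(df["in_program"], X).fit(disp=False).predict(X)

# 2. Match each participant 1:1 to the nearest non-participant by score,
#    without replacement. 1:3 or 1:5 matching would keep the 3 or 5 nearest.
controls = df[df["in_program"] == 0].copy()
pairs = []
for idx, row in df[df["in_program"] == 1].iterrows():
    nearest = (controls["pscore"] - row["pscore"]).abs().idxmin()
    pairs.append((idx, nearest))
    controls = controls.drop(nearest)

# Unseen factors (motivation, family characteristics) remain uncontrolled.
print(len(pairs), "matched pairs")
```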

Page 12

‘01: Wegman Inner City Voucher (WIN)
98% of enrolled students in 6 inner city Catholic schools supported by WIN vouchers
Case control model matching WIN students with demographically comparable students from RCSD “schools of choice” (15, 20, 57, 58)

Intended to acknowledge motivational difference between Catholic & public school families
Matched on age, sex, race, F/RPL, mother’s education
Poverty higher at WIN schools

Comparisons?
Compared Iowa Test of Basic Skills trend performance against ITBS national norms
Common assessment across schools was 4th grade ELA & Math scores for both WIN and schools of choice
Couldn’t adjust for “starting point” as conversion from Stanford 9 to ITBS unreliable
Conclusion: WIN and students from schools of choice performed about the same on 4th grade ELA & Math

Page 13

‘05: Rochester Charter Schools
CGR engaged by Gleason Foundation to monitor performance of newly formed charter schools for first five years (beginning 2000)
Expect “selection bias” for charter lottery applicants? Motivation, prior achievement
Solution: Follow students not accepted by lottery

RCSD facilitated monitoring of state & local tests for students enrolled in charter schools & in lottery, but remaining in traditional schools
Created “value added” achievement using scores from year prior to enrollment for both groups
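
The “value added” comparison amounts to working with gains from each student’s pre-enrollment score rather than raw score levels. A minimal sketch with made-up numbers:

```python
# Sketch: a simple "value added" comparison that uses each student's
# score from the year prior to enrollment as the baseline. Numbers are
# hypothetical, not results from the Rochester charter study.
import pandas as pd

scores = pd.DataFrame({
    "group":      ["charter", "charter", "lottery_not_accepted", "lottery_not_accepted"],
    "prior_year": [640, 655, 642, 650],
    "current":    [658, 660, 661, 666],
})
scores["gain"] = scores["current"] - scores["prior_year"]

# Comparing average gains (rather than raw scores) takes each student's
# starting point into account.
print(scores.groupby("group")["gain"].mean())
```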

Findings
Attrition in both groups made comparisons difficult
Yet findings supported conclusion that two large charter schools (Edison & National Heritage) underperformed RCSD schools
Both schools were closed by NYS Charter Schools Institute

Page 14

Harvard School of Ed (Caroline Hoxby): New York City Charter Schools

Adopted same approach used by CGR in 2000: “lotteried in” v. “lotteried out”

All lottery participants more black (64% v. 34%), more poor (F/RPL 92% v. 72%) than all NYC public school students
Hispanic 29%/38%
ELL 4%/14%; SPED 11%/13%
Different in other ways?

Findings
“Lotteried out” students remained on grade level in traditional NYC public schools, outperforming NYC students similarly disadvantaged
“Lotteried in” did better

Key point: Studying only students who were part of a lottery “controls” for unseen factors like family motivation, etc.

Page 15

Stanford CREDO (Mackie Raymond): Multistate study

Employed state administrative records to create “pairwise comparison” of individual students in 15 states
Matched on grade level, gender, race/ethnicity, F/RPL, ELL, SPED, prior test score on state achievement tests
Profile

27% black, 30% Hispanic
7% ELL, 7% SPED
49% F/RPL

Page 16

Stanford CREDO (Mackie Raymond): Multistate study

Page 17

Stanford CREDO (Mackie Raymond): Multistate study

Page 18

Middle College
RCSD/RIT program aimed at “college readiness” for three Franklin high schools

Measurement problematic
How do you define college readiness?
How do you assess college readiness?

Agreement on goals and objectives varied across RCSD & RIT faculty
One measurement idea, “before and after” ACCUPLACER scores, proved unrealistic
CGR’s role evolved to be more about process than outcome

Page 19

Hillside Work-Scholarship Connection
Focus on critical output indicator: Graduation rates
Through 2005-06, CGR studies based on one-to-one match of HWSC participants to RCSD students

Matching conducted by individuals on Accountability staff
Matched on age, gender, race/ethnicity, F/RPL participation, grade, prior year GPA

Page 20

HWSC: Propensity Score Matching
New study for students whose “on time” graduation years were 2007, 2008 and 2009
Relied on very high level of cooperation w/ Accountability
HWSC participants matched to nonparticipants by age, gender, race/ethnicity, poverty status, disability, English language learner status, grade, school quality, prior year GPA, prior year attendance, prior year suspensions, prior year state test scores

Page 21

HWSC: Propensity Score Matching
Grouped students in two ways

By entry grade (8th, 9th, or 10th) & on-time graduation year (2007, 2008 or 2009) for NINE groups or “cohorts”: groups are more homogeneous and “graduation” has a consistent definition, BUT the groups are smaller
By enrollment year (02-03 through 06-07) across all grades for THREE cohorts: HWSC enrollment practices more consistent and groups are larger, BUT graduation standards will vary

Page 22

Propensity score matching complexity
Considered many variations

Matched 1:1, 1:3, & 1:5 RCSD student(s) to each HWSC student
Studied on-time, on-time + 1 yr graduation
2 probability distributions: logit v. probit
108 model “runs” (12 variants by 9 cohorts)
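
The 108 figure is simply the cross of three matching ratios, two graduation definitions, and two link functions (3 × 2 × 2 = 12 variants) with the nine cohorts. A sketch of enumerating that grid; run_one_model is a hypothetical placeholder, not an actual CGR routine.

```python
# Sketch: enumerate the 12 model variants x 9 cohorts = 108 runs.
from itertools import product

ratios = [1, 3, 5]                          # controls matched per HWSC student
outcomes = ["on_time", "on_time_plus_1yr"]  # graduation definition
links = ["logit", "probit"]                 # propensity model link
cohorts = [f"cohort_{i}" for i in range(1, 10)]

runs = list(product(ratios, outcomes, links, cohorts))
print(len(runs))   # 3 * 2 * 2 * 9 = 108

def run_one_model(ratio, outcome, link, cohort):
    """Hypothetical placeholder: match at the given ratio, fit the
    propensity model with the given link, and estimate the effect on
    the chosen graduation outcome for this cohort."""
    ...

# for ratio, outcome, link, cohort in runs:
#     run_one_model(ratio, outcome, link, cohort)
```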

95% confidence interval: The true value will lie within the interval 95% of the time

Page 23

Final statistical comments
Statistical significance

How often would this result occur by chance?
95% confidence interval: Given the size of the sample and an unbiased sampling procedure, the true “population parameter” will fall within this range 95 times out of 100
99% confidence interval: the true “population parameter” will fall within this range 99 times out of 100

“Effect size” or importance of result
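
A minimal sketch of the distinction between statistical significance and effect size, using hypothetical graduation counts for a matched comparison:

```python
# Sketch: significance (does the 95% CI for the difference in graduation
# rates exclude zero?) vs. effect size (how big is the difference?).
# All counts below are hypothetical.
import math

n_hwsc, grads_hwsc = 300, 165    # hypothetical HWSC participants
n_comp, grads_comp = 300, 135    # hypothetical matched comparison group

p1, p2 = grads_hwsc / n_hwsc, grads_comp / n_comp
diff = p1 - p2                                       # effect size: 10 points
se = math.sqrt(p1 * (1 - p1) / n_hwsc + p2 * (1 - p2) / n_comp)
ci = (diff - 1.96 * se, diff + 1.96 * se)            # 95% confidence interval

print(f"difference = {diff:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")
# If the interval excludes zero, the difference is statistically significant
# at the 5% level; whether a 10-point gap matters is a judgment about
# effect size, not significance.
```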

Page 24

Questions?