Issues in Selecting Assessments for Measuring Outcomes for Young Children
Dale Walker & Kristie Pretti-Frontczak
ECO Center and Kent State University
Presentation at OSEP Early Childhood Conference Washington, DC, December 2005
Why Assessment?
- Gather information about skills and capabilities to make decisions about practice
- To determine eligibility for services
- To determine if a child is benefiting from services or if changes need to be made
- To measure development over time
- To document outcomes
Purpose of Assessments – It's all about the question(s) you want to answer
- Screening – Is there a suspected delay? Does the child need further assessment?
- Eligibility determination – Is the child eligible for specialized services?
- Program planning – What content should be taught? How should content be taught?
- Progress monitoring – Are children making desired progress?
- Program evaluation/Accountability – Is the program achieving its intended outcomes and/or meeting required outcomes?
Assessment Options
- Norm-Referenced
- Criterion-Referenced
- Curriculum-Based
- Direct Observation
- Progress Monitoring
- Parent or Professional Report

The table consists of a review of 18 norm-referenced assessments. Information regarding each assessment is provided, including:
- Publisher information
- Areas of development assessed
- Test norms provided
- Scores produced
- Age range covered
Rating scale for each element: Unsatisfactory (0), Basic (1), Satisfactory (2), Excellent (3)

Element: Adaptable for Special Needs
- Unsatisfactory (0): No consideration of special needs
- Basic (1): Limited consideration of special needs through the assessment process, and the instrument does not allow for additional accommodations or modifications for special needs
- Satisfactory (2): Upfront considerations for special needs are not comprehensive, but the assessment allows for some accommodations and/or modifications for special needs
- Excellent (3): Considers and provides specific strategies and procedures for accommodating and/or modifying the assessment for special needs

Element: Aligns with Federal/State/Agency Standards and/or Outcomes
- Unsatisfactory (0): Does not align with Federal/State/Agency Standards and/or Outcomes
- Basic (1): Aligns with less than half of the big ideas or concepts from Federal/State/Agency Standards and/or Outcomes
- Satisfactory (2): Aligns with more than half of the big ideas or concepts from Federal/State/Agency Standards and/or Outcomes
- Excellent (3): Aligns with a clear majority or all of the big ideas or concepts from Federal/State/Agency Standards and/or Outcomes
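As an illustration only, a rubric like the one above could be represented as structured data so ratings can be totaled when comparing candidate assessments. The sketch below assumes that use; the element names come from the rubric, but the tool names and the ratings themselves are invented.

```python
# Hypothetical sketch: totaling rubric ratings (0-3) across elements when
# comparing candidate assessments. Element names come from the rubric above;
# the tool names and the ratings are invented for illustration.

RATING_LABELS = {0: "Unsatisfactory", 1: "Basic", 2: "Satisfactory", 3: "Excellent"}

ratings = {
    "Tool A": {
        "Adaptable for Special Needs": 3,
        "Aligns with Federal/State/Agency Standards and/or Outcomes": 2,
    },
    "Tool B": {
        "Adaptable for Special Needs": 1,
        "Aligns with Federal/State/Agency Standards and/or Outcomes": 3,
    },
}

for tool, element_ratings in ratings.items():
    total = sum(element_ratings.values())
    print(f"{tool}: total rubric score {total}")
    for element, score in element_ratings.items():
        print(f"  {element}: {RATING_LABELS[score]} ({score})")
```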
Progress Monitoring Pros/Cons

Pros:
- Used to monitor ongoing progress toward important outcomes over time
- Allows comparison to children of similar ages over time
- Repeatable measures for monitoring progress
- Standardized administration
- Standards for technical adequacy
- Efficient to administer
- May also be used as a screening tool

Cons:
- Indicators of progress may be viewed as not being comprehensive
- Not used for eligibility determination
- May not provide specific skills to teach, but rather indicators of important skills
Parent & Professional Report Pros/Cons

Pros:
- High social validity
- Provides a diverse perspective
- Important for informing intervention, the program, and the IFSP/IEP
- Parents and professionals know the child and the environments in which they interact

Cons:
- Collaboration requires time and effort to establish
- May not be reliable across time
- Does not permit comparison across children
- May include personal bias
Using Multiple Sources of Data or a Single Source to Measure Outcomes?
- Pros and cons
- Recommended practices
- Need to summarize the information generated
- Ways data can be used beyond reporting OSEP outcomes
Using Data Beyond OSEP Reporting

Good assessment data can be used to:
- Reveal patterns regarding children's strengths and emerging skills
- Develop functional and meaningful IFSPs/IEPs
- Inform program staff and families about strengths and weaknesses
- Guide the development of intervention
- Monitor children's progress to inform intervention
Ongoing work and challenges…
- Existing assessment tools were not developed to measure the three outcomes
- ECO's response: "cross-walking," or mapping, frequently used assessments to the outcomes
- Work with publishers and state staff to develop guidance for how to use assessment results to generate OSEP-requested data
Work with Publishers and Developers
- Finalizing crosswalks
- Alignment with OSEP outcomes
- How to determine what is "typical" performance
- Age-anchored benchmarks to measures
- How scores can be summarized using the ECO Summary Form
- Possible recalibration of scores in a way that maintains the integrity of different assessments
- Pilot studies with GSEG and interested states
- Data summary report forms that assist users with alignment of information from assessment to OSEP outcomes
Example of Developing a Validated Crosswalk
- First, align: on the face of it, which items appear to align/match which outcomes?
- Second, validate: do experts agree? Check for internal consistency (see the sketch below).
- Third, examine the sensitivity of the assessment in measuring child change.
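To make the first two steps concrete, here is a minimal, hypothetical sketch in Python. It assumes two expert raters have each mapped a handful of invented item names to one of the three OSEP child outcomes, and it computes simple percent agreement as one way to check whether the experts agree. This is not the ECO Center's actual validation procedure; internal-consistency and sensitivity analyses would require real assessment data.

```python
# Hypothetical sketch: mapping assessment items to the three OSEP child
# outcomes and checking whether two expert raters agree on the mapping.
# Item names and mappings below are invented for illustration only.

OUTCOMES = (
    "positive social-emotional skills",
    "acquisition and use of knowledge and skills",
    "use of appropriate behaviors to meet needs",
)

# Step 1 (align): each rater independently assigns items to an outcome index.
rater_a = {"item_01": 0, "item_02": 1, "item_03": 2, "item_04": 1}
rater_b = {"item_01": 0, "item_02": 1, "item_03": 1, "item_04": 1}

# Step 2 (validate): simple percent agreement between the two raters.
shared = set(rater_a) & set(rater_b)
agreements = sum(rater_a[i] == rater_b[i] for i in shared)
percent_agreement = agreements / len(shared)

print(f"Items rated by both experts: {len(shared)}")
print(f"Percent agreement: {percent_agreement:.0%}")
for item in sorted(shared):
    if rater_a[item] != rater_b[item]:
        print(f"  Disagreement on {item}: "
              f"{OUTCOMES[rater_a[item]]} vs. {OUTCOMES[rater_b[item]]}")
```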
Example of Interpreting the Evidence
- Standard scores
- Residual change scores
- Goal Attainment Scaling
- Number of objectives achieved / percent of objectives achieved
- Rate of growth
- Item Response Theory
- Proportional Change Index (see the worked example below)
- Stoplight model
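One commonly cited formulation of the Proportional Change Index compares the child's rate of development during the program to the rate implied by entry scores. The sketch below assumes that formulation; all ages and durations are invented, and the slide does not specify which formulation the presenters intend.

```python
# Hypothetical worked example of a Proportional Change Index (PCI).
# Assumed formulation:
#   PCI = (developmental gain / months in program)
#         / (entry developmental age / entry chronological age)
# All values below are invented for illustration.

entry_chronological_age = 30.0   # months of age at program entry
entry_developmental_age = 24.0   # developmental age (months) at entry
exit_developmental_age = 33.0    # developmental age (months) at exit
months_in_program = 10.0         # time between entry and exit assessments

# Rate of development during the program (developmental months gained per month enrolled).
intervention_rate = (exit_developmental_age - entry_developmental_age) / months_in_program

# Rate of development before the program (developmental age per month of life).
pre_intervention_rate = entry_developmental_age / entry_chronological_age

pci = intervention_rate / pre_intervention_rate
print(f"PCI = {pci:.2f}")  # values above 1.0 suggest faster growth during the program
```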
Interpreting the AEPS for Accountability

First administration (near entry):
- Is the child above or below a cutoff score?
  - If above – considered to be developing typically
  - If below – development is suspect
- Which level of the AEPS was administered?
  - Child is less than three and Level I is used
  - Child is less than three and Level II is used
  - Child is older than three and Level I is used
  - Child is older than three and Level II is used
Interpreting the AEPS for Accountability

Second administration (near exit):
- Use cutoff scores again
- Examine which level was used
- Look for:
  - Changes in area percent scores
  - Changes in scoring notes
  - Changes in which level was administered
Sample Cutoff Scores

Level            Age Interval (months)   Cutoff Score
Birth to three   25-30                    50
Birth to three   31-36                    60
Three to six     37-42                    20
Three to six     43-48                    30
Three to six     49-54                    40
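A minimal sketch of how cutoffs like these might be applied at the first administration: look up the cutoff for the child's level and age interval, then compare the child's score to it. The intervals and cutoff values mirror the sample table above; the child data are invented, and treating a score exactly at the cutoff as "developing typically" is an assumption, since the slide only distinguishes above versus below.

```python
# Hypothetical sketch of applying the sample cutoff scores above.
# Intervals and cutoffs mirror the sample table on this slide; the child
# data and the exact decision rule at the cutoff are illustrative only.

SAMPLE_CUTOFFS = {
    "Birth to three": [((25, 30), 50), ((31, 36), 60)],
    "Three to six":   [((37, 42), 20), ((43, 48), 30), ((49, 54), 40)],
}

def interpret_entry_score(level, age_in_months, score):
    """Return a rough interpretation of a first-administration score."""
    for (low, high), cutoff in SAMPLE_CUTOFFS[level]:
        if low <= age_in_months <= high:
            if score >= cutoff:
                return f"At or above cutoff ({cutoff}): considered to be developing typically"
            return f"Below cutoff ({cutoff}): development is suspect"
    return "No cutoff defined for this age interval in the sample table"

# Example: a 34-month-old assessed with the birth-to-three level, score of 55.
print(interpret_entry_score("Birth to three", 34, 55))
```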
Questions?

For more information, see: http://www.the-ECO-center.org