The Journal of Correctional Education 61 (1) • March 2010

Juvenile Correctional Schools: Assessment and Accountability Policies and Practices

Joseph C. Gagnon, Ph.D.
Todd Haydon, LCSW, Ph.D.
Paula Maccini, Ph.D.

Abstract
This study focused on school-level approaches to assessment and accountability policies and practices. A national random sample of 131 (34.22%) principals from juvenile correctional schools for committed youth (JC) responded to a mail and on-line survey. No statistically significant differences existed between respondent and nonrespondent schools. Results indicated that the majority of students with or without disabilities in JC schools participated in state assessments. The most common basis of school policies for assessment accommodations was state accommodation guidelines. For most JC schools there was no process for accountability for student participation and performance on state assessments. Almost half of principals did not know if their school made Adequate Yearly Progress. Other salient results, implications, and recommendations for future research are presented.
Introduction
In response to thirty years of concern with student achievement on both national and international assessments, the No Child Left Behind Act (NCLB, 2002) was developed to promote a rigorous education for all students. NCLB has led to an increased focus on establishing challenging standards, measuring student learning against those standards, and holding schools and local education agencies (LEAs) accountable for student achievement (Kohl, McLaughlin, & Nagle, 2006). The central components of NCLB ensure all students participate in state assessments, assessment accommodations are appropriately used, and assessment results are utilized as well as publicly reported. NCLB also incorporates accountability for student learning, which emphasizes student performance on state assessments as a basis of providing rewards and sanctions to schools (Yell, Shriner, & Katsiyannis, 2006).

The assessment and accountability requirements within NCLB (2002) were designed to promote a high-quality education for all youth, including students with disabilities, in a variety of educational settings. In fact, the Individuals with Disabilities Education Improvement Act regulations (IDEIA, 2004) are aligned with NCLB in an effort to ensure youth with disabilities are also included in the system of educational accountability.

Despite the intentions behind NCLB (2002) and IDEIA (2004), there are concerns that the promises of educational assessment and accountability do not extend to all youth. Limited evidence suggests that adherence to federal education mandates is a significant and longstanding issue of concern for youth with and without disabilities in exclusionary school settings (Gagnon & McLaughlin, 2004; Gagnon, Maccini, & Haydon, 2009). Juvenile correctional schools for committed youth (JC) are one exclusionary school setting with the worst record of adhering to federal education reform (Browne, 2003; Coffey & Gemignani, 1994; Leone, 1994). The mandates in NCLB do not specifically address the unique characteristics of youth (e.g., mental health, short length of stay) and systemic difficulties (e.g., security issues) within JC schools (Leone & Cutting, 2004). However, there is no legal justification for denying youth in JC schools with and without disabilities the same educational opportunities, participation in state assessments, and inclusion in accountability measures that are significant components of public schools. The sections that follow briefly discuss the federal mandates as they apply to JC schools and include (a) participation in state assessments, (b) assessment accommodations, and (c) accountability.

Participation In State Assessments
NCLB (2002, see PL 107-110 § 1001(4)) requires that students with and without disabilities participate in state assessments. Participation is viewed as critical to improving educational opportunities for students and providing information to schools and communities concerning student performance (Nagle, Yunker, & Malmgren, 2006; Thurlow, Lazarus, Thompson, & Morse, 2005). One of the primary ways policy makers aligned IDEIA with NCLB was to require that students with disabilities also be included in state- or district-wide assessments (IDEIA, 2004, see PL 108-446 § 612; NCLB, see PL 107-110 § 1001(1)). Over time, an increasing number of youth with disabilities have participated in state assessments (Ysseldyke, Dennison, & Nelson, 2004). However, the extent to which youth with or without disabilities in JC schools participate in state assessments is unclear.


Assessment Accommodations
Assessment accommodations are defined as altering assessment materials and/or procedures in a manner that eliminates the influence of the student disability and allows students to show their knowledge (Shriner & Ganguly, 2007). States are required to develop assessment accommodations that are considered valid and do not nullify student scores (NCLB, 2002, see PL 107-110 § 704(b)(4)(x); IDEIA, 2004, see PL 108-446 § 612(a)(16)(B) and § 614(d)(1)(A)(VI)(aa); Thompson, Johnstone, Thurlow, & Altman, 2005; Zenisky & Sireci, 2007). All states require schools to adhere to a state list of approved accommodations and, as such, school personnel must be aware of these policies (Thurlow, Lazarus, Thompson, & Morse, 2005). Currently, no information is available concerning the basis for JC school assessment accommodation policy, and adherence to policy has not been studied.

Accountability
High-stakes assessments are commonly used to place students in appropriate classrooms, decide whether students should be promoted to the next grade, and determine whether or not they should graduate from high school (Ysseldyke, Dennison, & Nelson, 2004). Currently, with regard to JC schools, there is no information that identifies common uses of state assessment results. Existing accountability requirements within NCLB (2002, see PL 107-110 § 1119(b)(1)(A) and (B)) also call for schools to publicly report student results on state assessments. However, no post-NCLB research exists that addresses JC school reporting of student scores on state assessments. As such, school-level approaches taken by JC schools to use and report state assessment data are unclear.

Rewards and sanctions. The emphasis on education accountability is a continuation of the Elementary and Secondary Education Act's (ESEA) original goal to close the achievement gap between disadvantaged students and their peers, as well as for all students to reach grade-level proficiency in reading and mathematics by 2014 (Kohl, McLaughlin, & Nagle, 2006). To achieve student proficiency, NCLB (2002) established state and school accountability that emphasizes student performance as a basis for rewards and sanctions (2002, see PL 107-110 § 1111(2)(A)(iii); Yell, Shriner, & Katsiyannis, 2006). Indicators for school accountability typically include student performance on state assessments, performance growth on state assessments, attendance rates, and dropout rates (Bolt, Krentz, & Thurlow, 2002). A significant component of the accountability system is Adequate Yearly Progress (AYP), which is based primarily on student scores on state assessments (NCLB, see PL 107-110 § 6161(1); Yell, Shriner, & Katsiyannis). Whether a school achieves AYP typically results in rewards or sanctions that commonly include providing or withholding money (Bolt, Krentz, & Thurlow).

AYP, as well as school rewards and sanctions, apply to schools across the continuum of services, including JC schools. However, it is currently unknown how JC schools are held accountable, which indicators are used by the state to determine positive and negative consequences for these schools, and what percentage of JC schools make AYP. For JC schools to be included in current accountability reform, it is critical that such questions are answered.

Currently, there is no research that identifies if students with and without disabilities in JC schools are included in current school-level accountability systems. There is a critical need to understand the extent to which students in JC schools are benefiting from the emphasis on participation in state assessments, appropriate use of assessment accommodations, and use of assessment results. Further, it is important to identify the extent to which these schools are reporting assessment results publicly and being held accountable for student learning.

Purpose
The purpose of the current national survey of public and private JC school principals was to identify policies, practices, and philosophies concerning (a) participation in state assessments, (b) assessment accommodations, (c) how respondent views toward participation of students in state assessments compare across achievement of AYP, and (d) school accountability.

Methods
Instrumentation
The current survey of principals was conducted throughout the U.S. and included four primary sections at the school level: (a) school, principal, and student characteristics; (b) curriculum policies and practices; (c) assessment policies and practices; and (d) accountability policies and practices. The current report focuses solely on assessment and accountability policies and practices. The other survey topics are discussed in a separate report by Gagnon, Barber, Van Loan, and Leone (2009).

Three procedures were completed to ensure reliability of survey data. First, surveys maintained the same format for both hard copy and online versions (Fink, 1995). Second, to ensure decisions were consistently followed, a codebook was developed and used during data entry (Litwin, 1995). Finally, reliability checks were conducted on data entered for 30% (n = 39) of the 131 returned surveys. Reliability was calculated by dividing the number of agreements by the number of agreements and disagreements and multiplying by 100%. Reliability for data entry was 99.97%.
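
The percent-agreement calculation described above can be sketched in a few lines of Python. The counts below are illustrative only; the article reports the resulting 99.97% figure but not the raw agreement counts:

```python
def percent_agreement(agreements: int, disagreements: int) -> float:
    """Percent agreement: agreements / (agreements + disagreements) * 100."""
    return agreements / (agreements + disagreements) * 100

# Hypothetical counts chosen only to illustrate the formula:
print(round(percent_agreement(3499, 1), 2))
```

Note that simple percent agreement does not correct for chance agreement; a statistic such as Cohen's kappa would, but the article describes only the percent-agreement approach.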

Specific methods were used to increase survey validity. To ensure content validity, research questions were developed based on a review of literature, consideration of current educational reform, personal expertise, and discussion with experts in the field of special education. Next, an advisory group reviewed and made recommendations regarding the survey and study methodology. Also, principal focus groups commented on the format and content of the surveys. The survey was modified based on the advisory group and focus group feedback.

The survey included questions concerning assessment (e.g., Should students in your school participate in state assessments? What percentage of students in your school participate in state assessments? If less than 95% of students participated, why did students not participate? What percentage of students with disabilities in your school participate in state assessments? What percentage of students participate in an assessment that is required by another state? What is the basis of school policies for assessment accommodations on state assessments? To what extent are state accommodations that students with EBD or LD have on their Individualized Education Programs (IEPs) used during classroom instruction?) and school accountability (e.g., Which describes how results of state assessments are used in your school? What is the basis for how your school uses state assessment results? Which indicators are used by the state to determine school consequences? How is your school held accountable for student participation in state assessments? How does your school report assessment results for students with disabilities? Did your school make AYP last school year?).

Sample and Participant Selection
The sample was initially identified using the description of the facilities' programs. Three criteria needed to be present in the description: (a) committed/adjudicated youth (court commitment); (b) closed/secure facility; and (c) education services provided on-site. Variations in JC schools for committed youth required delineation of specific programs that would not apply. Programs that did not qualify were juvenile: (a) community corrections (i.e., not a secure care facility); (b) probation; (c) parole; (d) detention; (e) accountability camps; (f) programs with no education component; (g) incomplete or inaccurate information in the database; and (h) program was closed or not yet open. As a secondary check of the universe, state websites were reviewed. As a result, 63 additional facilities were identified from the website information as possibly meeting the criteria for the study. These facilities were individually called to ensure they met the criteria for inclusion in the universe. Twelve additional programs were included based on phone call verification.

During the planning stages, both coverage error and sampling error were considered. Use of a comprehensive database is an important factor for reducing coverage error (Wei Wei, 2003). Therefore, the original universe of juvenile correctional schools for committed youth was taken from the most comprehensive database available. The 2003 Directory of Adult and Juvenile Correctional Departments, Institutions, Agencies, and Probation and Parole Authorities (American Correctional Association, 2003) was the basis of the universe. To minimize sampling error, the beginning of each survey included two questions to verify that participants were eligible for the study. The questions included: (a) Is your school a juvenile correctional school for committed youth? and (b) Does your school include students in any of grades 7-12?

Potential participants were directed to return the survey without completing it if they answered no to either of the initial questions.

A total of 483 schools met criteria for inclusion in the study. Based on concerns with a possible low response rate, 400 of the qualifying schools were randomly selected. Assuming an approximately 50% (n = 200) response rate, consistent with other exclusionary schools in national surveys (see Gagnon & McLaughlin, 2004), responses would approach the 214 needed for a 95% confidence level and 5% confidence interval. Upon return of the surveys, 17 schools were excluded from the database due to the following issues: (a) not a JC school (n = 14); (b) facility closed (n = 2); and (c) no grades 7-12 (n = 1). Thus, the total sample size was 383.
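
The target of 214 responses for a 95% confidence level and 5% confidence interval can be reproduced with Cochran's sample-size formula plus a finite-population correction. This is a sketch under the assumption that this standard formula is the one behind the figure; the article does not name its calculation:

```python
def required_sample(population: int, z: float = 1.96, e: float = 0.05, p: float = 0.5) -> int:
    """Sample size for estimating a proportion in a finite population."""
    n0 = (z ** 2) * p * (1 - p) / (e ** 2)  # infinite-population estimate (~384.16)
    n = n0 / (1 + (n0 - 1) / population)    # finite-population correction
    return round(n)

# For the universe of 483 qualifying JC schools:
print(required_sample(483))  # 214
```

With p = 0.5 (the most conservative assumption) and the 483-school universe, the corrected estimate lands at 214, matching the target stated above.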

Data Collection
Consistent with Heberlein and Baumgartner's (1978) recommendations, multiple contacts and reminders to participants included an introductory letter, five survey mailings, and follow-up phone calls that began after the second mailing. Respondents were able to respond via hard copies of the surveys or online. Additionally, the surveys all included consistent and specific directions, as well as notation of government sponsorship. Moreover, all principals in the sample received an incentive at the time of the first survey mailing in the form of a $2.00 bill. As suggested by Bourque and Fielder (1995), participants were also assured of confidentiality and anonymity, provided an estimate of the time needed to complete the survey, details concerning when and how to return the survey, and contact information in the event they wanted a summary of the final results.

Respondents and Nonrespondents
The response rate for the survey was 34.22% (131/383). One hundred twenty-one principals returned hard copies of surveys and another ten completed the survey online. Respondent and nonrespondent comparisons were conducted at the school level using information available from the 2003 Directory of Adult and Juvenile Correctional Departments, Institutions, Agencies, and Probation and Parole Authorities (American Correctional Association, 2003) and, for 12 schools, the state website. Comparisons were completed across U.S. Census Bureau region, security level (i.e., maximum, medium, medium/maximum, minimum, multiple), contract (i.e., private company) or non-contract status, and gender served (i.e., male, female, co-gender). Based on chi-square analysis, no significant differences were noted for any of the comparisons.

Respondents (n = 131) included principals from 41 states, and all U.S. Census regions were represented (n = 31 Midwest; n = 19 Northeast; n = 60 South; n = 21 West). Descriptive data were not available for all variables for all respondent schools. However, available information indicates that each security level was represented (n = 109: n = 38 maximum; n = 34 medium; n = 6 combination medium/maximum; n = 19 minimum; n = 12 multiple levels of security). Concerning the type of facility (n = 131), there were 32 contract facilities and 99 non-contract facilities. Also, with regard to the gender served at facilities (n = 125), 82 facilities were only for males, 12 facilities were only for females, and 31 were co-gender facilities.

Data Analysis
Data analysis included descriptive statistics and chi-square analysis. Descriptive statistics included frequency, percent, mean, standard deviation (SD), and sum, as appropriate. Chi-square analyses were conducted to identify principal views of student participation in state assessments (yes, no) across school achievement of AYP (yes, no, don't know). Cramér's V was used when calculating effect size for the comparison. A common alpha of .05 was maintained for all data analyses.
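
The chi-square test with Cramér's V effect size described above can be sketched as follows. The cell counts are hypothetical, since the article does not publish the raw view-by-AYP cross-tabulation:

```python
import math

def chi_square_and_cramers_v(table):
    """Pearson chi-square statistic and Cramér's V for an r x c count table."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n  # expected count under independence
            chi2 += (observed - expected) ** 2 / expected
    k = min(len(table), len(table[0]))       # smaller table dimension
    v = math.sqrt(chi2 / (n * (k - 1)))      # Cramér's V effect size
    return chi2, v

# Hypothetical 2 x 3 table: rows = principal view (yes/no),
# columns = school AYP status (yes / no / don't know).
chi2, v = chi_square_and_cramers_v([[30, 12, 40], [10, 8, 16]])
print(f"chi2 = {chi2:.3f}, Cramer's V = {v:.3f}")
```

In practice, a library routine such as scipy.stats.chi2_contingency would also supply the p-value needed to apply the .05 alpha criterion.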

For seven questions, respondents had the option to write in an Other response (e.g., If less than 95% of students participated, why did students not participate? What is the basis of school policies for assessment accommodations on state assessments? Which describes how results of state assessments are used in your school? What is the basis for how your school uses state assessment results? Which indicators are used by the state to determine school consequences? How is your school held accountable for student participation in state assessments? How does your school report assessment results for students with disabilities?). Analysis of data from open-ended responses was completed using the following procedures: (a) a graduate student identified preliminary categories through a review of responses and coded each response into one or more categories; (b) data were independently placed into categories by another graduate student and categories were modified, as needed; (c) the graduate students discussed areas of convergence and divergence; (d) categories were adjusted, added, or deleted, as needed; (e) each graduate student recoded the data; and (f) a final discussion and calculation of reliability was completed (Goetz & LeCompte, 1984; Lincoln & Guba, 1985).

Results
Participation in State Assessments
Principals answered questions concerning student participation in state assessments (see Table 1). Principals commonly asserted that students at their school should participate in state assessments (n = 89, 68.5%). Overall, respondents (n = 112) reported that a mean of 80.07% (SD = 34.67) of students participated in state assessments. Also, six principals reported that they did not know the percentage of students who participated in state assessments. Further, eight principals reported that participation in state assessments was not applicable. Concerning students with disabilities, a mean of 83.49% (SD = 33.56; n = 108) participated. Ten principals reported that they did not know the percentage of students with disabilities who participated. Another 10 principals noted that participation in state assessments was not applicable for students with disabilities. Respondents also reported the percentage of students that participate in a state assessment that is required by another state. Of the 121 respondents to this item, 94 reported that 0% of their students are served from outside the state. One principal reported that 100% of his/her students were from another state. In addition, 26 principals noted that they did not know if youth were served from states other than where the facility was located.

Respondents who reported that less than 95% of students participated in state assessments were asked to check all the reasons that applied for student lack of participation (see Table 1). The most common reasons for not participating in state assessments, listed under the Other category (n = 38), were that students did not participate due to student or school exemption (n = 14) and that students were not in a grade that required testing (n = 13). Also listed within the Other category were (a) student length of stay was too short for participation (n = 4), (b) safety reasons (n = 3), (c) students already graduated (n = 2), and (d) students were preparing for the GED (n = 2).

Table 1. Participation in State Assessments

Characteristics No. (%)

Should Students Participate in State Assessments in Your School?
Yes 89 (68.5%)
No 41 (31.5%)

If Less than 95% of Students Participated in Assessments, Why?
Not Applicable, All Students Participate 40 (--)
Emotional Distress 7 (--)
The Assessments are Too Difficult 7 (--)
Students are from Another LEA 3 (--)
Students are from Another State 0 (--)
Do Not Have a School Policy 1 (--)
I Don't Know 3 (--)
Other 38 (--)

Note: -- = Percentages not calculated because question asks respondents to "choose all that apply"; LEA = Local Education Agency

Assessment Accommodations
Principals were asked questions concerning the use of state assessment accommodations during instruction and the basis for these assessment accommodations (see Table 2). Principals most frequently reported that the accommodations were used To a Great Extent during instruction (n = 91, 71.7%). Overall, the average response on the four-point scale (1 = not at all; 4 = to a great extent) was 3.57 (SD = .79). The most common basis of school policies for assessment accommodations was state accommodation guidelines (n = 80, 67.2%). Additionally, for the nine principals who noted an Other basis, the responses were student IEPs (n = 4), students were exempt or the question was not applicable (n = 4), or the school used a combination of state and federal guidelines (n = 1).


Table 2. Assessment Accommodations

Characteristics No. (%)

Extent to Which State Accommodations that Students with EBD or LD have on their IEPs are Used During Classroom Instruction
Not at all 5 (3.9%)
Very Little 9 (7.1%)
Somewhat 22 (17.3%)
To a Great Extent 91 (71.7%)
Don't Know 0 (0%)

Basis for School Policies for Assessment Accommodations on State Assessments
LEA Accommodations Guidelines 17 (14.3%)
State Accommodations Guidelines 80 (67.2%)
School Developed Assessment Accommodations Guidelines 10 (8.4%)
No Identified Basis for Assessment Accommodations 1 (.8%)
Don't Know 2 (1.7%)
Other 9 (7.6%)

Note: LEA = Local Education Agency

Participation and Characteristics
Respondent views toward participation of students in state assessments were compared across achievement of AYP. No statistical significance was noted for achievement of AYP and whether or not principals asserted that students should participate in state assessments.

Accountability
Respondents reported all applicable uses of state assessments and the factors on which the uses of assessments are based (see Table 3). Most commonly, principals reported that state assessment results were used to adjust instruction or curriculum (n = 64) and to identify areas in which school performance is acceptable and where improvement is needed (n = 63). Where principals noted an Other use for state assessment results, responses included school is exempt/not applicable (n = 7), develop individual plans of instruction (n = 2), and obtain accreditation (n = 1). The most common basis for schools' use of state assessments was state guidelines (n = 54). The next most frequent response concerning the use of assessment results was school developed guidelines (n = 37). Of the respondents, 59 reported a basis other than State Education Agency (SEA) or Local Education Agency (LEA) guidelines. Some principals noted Other concerning the basis of using state assessments. Other assertions related to the use of state assessments included agency developed guidelines (n = 2) and the school was exempt from state assessments (n = 9).

Table 3. Basis and Use of Assessment Results

Characteristics No. (%)

How Results of State Assessments are Used in Your School
Adjust Instruction or Curriculum 64 (--)
Make Decisions Regarding Student Placement within the School 21 (--)
Make Decisions Regarding Student Grade-Level Promotion 18 (--)
Make Decisions Regarding Student Return to Public or Home School 12 (--)
Evaluate Teachers 13 (--)
Identify Areas in Which School Performance is Acceptable and Where Improvement Is Needed 63 (--)
Results are not used at the School Level 26 (--)
Don't Know 3 (--)
Other 12 (--)

Basis for How School Uses State Assessment Results
LEA Guidelines 29 (--)
State Guidelines 54 (--)
School Developed Guidelines 37 (--)
No Identified Guidelines 23 (--)
Don't Know 8 (--)
Other 12 (--)

Note: -- = Percentages not calculated because question asks respondents to "choose all that apply"; LEA = Local Education Agency

Concerning reporting of assessment results for students with disabilities, principals reported that the most common approaches were reporting results to the state as part of aggregate data (n = 63) and at the school level (n = 55). Other responses included that the school was exempt from reporting (n = 5) and that results were reported to an education liaison or case worker (n = 2).


Table 4. Reporting and Accountability

Accountability Issue No. (%)

How School Reports State Assessment Results for Students with Disabilities
Results Reported at the School Level 55 (--)
Results Reported to the LEA as Part of Aggregate Data 37 (--)
Results Reported to the State as Part of Aggregate Data 63 (--)
Individual Results Reported to Individual Parents/Guardians 43 (--)
Results Reported to Student's Home School and LEA 37 (--)
Results Not Reported 1 (--)
Don't Know 9 (--)
Other 16 (--)

Did School Make Adequate Yearly Progress?
Yes 40 (34.5%)
No 20 (17.2%)
Don't Know 56 (48.3%)

Indicators Used by State to Determine School Consequences
Student Participation Rates on State Assessments 57 (--)
Scores on State Assessments 48 (--)
Improvement on State Assessment Scores 43 (--)
Attendance 42 (--)
Drop Out Rates 21 (--)
Graduation Rates 29 (--)
Don't Know 28 (--)
Other 25 (--)

How School is Held Accountable for Student Participation in State Assessments
School Sanctions Based on Student Participation and Performance on State Assessments 22 (--)
Monetary Incentives Based on Student Participation and Performance on State Assessments 9 (--)
No Formal Process to Hold Schools Accountable for Student Participation and Performance on State Assessments 42 (--)
Don't Know 24 (--)
Other 37 (--)

Note: -- = Percentages not calculated because question asks respondents to "choose all that apply"; LEA = Local Education Agency

As shown in Table 4, respondents answered three questions concerning accountability for student learning: (a) whether the school made Adequate Yearly Progress (AYP) in the previous school year; (b) the methods used to hold schools accountable for participation in state assessments; and (c) the indicators used by the state to determine school consequences. While some respondents noted that their school did make AYP (n = 40, 34.5%), the most frequent response was that principals did not know if their school made AYP (n = 56, 48.3%). Concerning methods of holding schools accountable, most JC school principals reported there was no process for accountability for student participation and performance on state assessments (n = 42). An additional 24 principals did not know how their school was held accountable. For those respondents who reported Other methods of being held accountable, answers included: (a) LEA, home school, or district is accountable (n = 8); (b) state or Dept. of Juvenile Justice requirements (n = 7); (c) school is exempt (n = 13); (d) student portfolios (n = 1); (e) graduation requirements (n = 1); (f) quality review (n = 4); (g) AYP (n = 1); (h) state assessment (n = 1); and (i) achievement tests (n = 1).

The most common indicators used by the state to determine school consequences were student participation rates on state assessments (n = 57) and student scores on state assessments (n = 48). Also, 28 principals did not know the indicators used by the state. Few patterns were noted for Other indicators reported by respondents. Responses included: (a) consequences based on student classroom behavior (n = 1); (b) no indicators are used (n = 10); (c) another agency/home district determines consequences (n = 8); (d) percentage of students that complete a GED (n = 2); (e) the extent to which there is minority overrepresentation (n = 1); (f) report cards/progress notes (n = 1); and (g) reading/math improvement (n = 1).

Discussion

The results of the current investigation supply the first national picture of assessment and accountability policies and practices in JC schools. Results indicate that numerous concerns must be addressed to ensure youth with and without disabilities in JC schools are included in current assessment and accountability systems. The discussion focuses on three key areas concerning JC school policies and practices: (a) participation in state assessments, (b) assessment accommodations, and (c) accountability.

Participation

Only about 69% of principals asserted that students in their JC school should participate in state assessments. However, no statistically significant relationship was noted between achievement of AYP and whether principals asserted that students should participate in state assessments. Two conclusions can be made in light of the lack of significance. First, principals' personal views may not be a critical factor when school assessment policies and practices are developed and implemented. Alternatively, there could be a lack of LEA and SEA oversight of schools (see Gagnon, Barber, Van Loan, & Leone, 2009) that affects the validity of school attainment of AYP. The possibility of insufficient oversight is even more plausible when considering that almost half of principals in the current study did not know if their school achieved AYP. One could assume that, if JC schools were accountable for achieving AYP, principals would be aware of their school's progress.

In contrast to principal views of whether students should participate in state assessments, it was reported that a higher percentage of students in JC schools actually participated. However, it remains a concern that only slightly more than 80% of students with or without disabilities in JC schools participated in state assessments. Neither NCLB (2002) nor IDEIA (2004) contains provisions that specifically allow students with or without disabilities in JC schools to forego participation in state assessments. As such, it is expected that, consistent with NCLB mandates, at least 95% of students in JC schools would participate in state assessments. One possible complication of student participation could be that JC schools provide education to students from other states (Gagnon, Barber, Van Loan, & Leone, 2009). In the current study, however, only one JC school had students from another state. What is particularly interesting is that approximately one-fourth of principals did not know if youth at their facility were from other states. A lack of principal knowledge could inhibit student participation in another state's assessment, if needed.

Principals were asked why, if less than 95% of students participated in state assessments, some students did not participate. The most common answer was that all students participate in state assessments (n = 40). Principals reported varied reasons for lack of student participation. The most frequent response, by 17 principals, was that students did not participate due to an individual or school exemption. For the principals that cited exemptions, additional research is necessary to understand the context surrounding these exemptions and by whom the exemptions were given. Overall, remaining principal explanations were not ones that states typically accept as valid. For example, principals identified that students did not participate due to emotional distress, a reason for non-participation that is not valid in any state (Lazarus, Thurlow, Lail, Eisenbraun, & Kato, 2006). Principals reported other reasons for exemption, including that the assessments were too difficult, students were from another LEA, and safety reasons. The varied responses from principals indicate that there are several complications for JC schools concerning student participation in state assessments. What is clear is that principals of JC schools are in need of guidance and policies that maintain federal requirements and also take into consideration the unique attributes of JC schools.

Assessment Accommodations

Principals were asked about the use of assessment accommodations in class, as well as the basis of assessment accommodations. Only 71.1% of respondents answered that, for youth with EBD and LD, assessment accommodations were used in class To a Great Extent. When students are not provided opportunities to apply appropriate accommodations on a regular basis, they might not be able to demonstrate what they know and can do (Shriner & Ganguly, 2007). Slightly more than two-thirds of principals responded that the most common basis of school policies for assessment accommodations was state accommodation guidelines. Results also indicate that close to 20% of schools did not base accommodations on SEA or LEA guidelines. This raises concern because, when schools do not use LEA or SEA guidelines as a basis for accommodation decisions, the reporting of student scores may be affected (Thompson et al., 2005). Specifically, use of school-developed assessment accommodations may result in the use of unapproved assessment accommodations, which could invalidate student assessment scores (Malmgren, McLaughlin, & Nolet, 2005). However, it is not clear from the current data that those schools that did not use SEA or LEA guidelines necessarily used assessment accommodations that did not align with the district and/or state. Additional research is needed to provide a definitive statement of school decisions regarding assessment accommodation choices.

Accountability

Using assessment results. The two most common responses reported by principals for describing how results of state assessments are used in their schools were to adjust instruction or curriculum (n = 64) and to identify areas of acceptable school performance and needed improvement (n = 63). It is encouraging that principals are appropriately using assessment results. However, some principals reported using state assessments to place students within their school or to decide if a student should return to public school. In fact, classroom placement, at the secondary level, should be based on student age/grade and courses needed for graduation. Moreover, it is wholly inappropriate to rely on state assessment results to establish if a student should return to their regular public school upon release from a JC school. No provision exists within IDEIA (2004) or NCLB (2002) that links formerly incarcerated youths' success on state assessments with the right to attend public school upon release.


Additionally, it is a concern that some principals did not use assessment results. When state assessment results are not utilized, students are less likely to advance to the next grade or graduate from high school (Thurlow & Johnson, 2000; Ysseldyke, Dennison, & Nelson, 2004).

Principal responses varied with regard to the basis for their schools' use of state assessment results. The most frequent basis was state guidelines (n = 54), followed by school-developed guidelines (n = 37) and LEA guidelines (n = 29). A smaller number of principals reported that there were no guidelines used as a basis of their schools' use of assessments (n = 23). The use of school-developed guidelines raises questions as to whether school policies are aligned with approved and recommended LEA and SEA approaches to using state assessments. What is more alarming is that many JC schools have no identified guidelines for using state assessment results and that some principals were unaware of the existence of any guidelines on which to base their use of state assessments. The lack of guidelines runs contrary to the emphasis within NCLB (2002) on the standardization of school accountability.

Reporting assessment results. Most responding principals noted that, for students with disabilities, scores on state assessments were reported (a) to the state as part of aggregate data (n = 63), (b) at the school level (n = 55), and (c) to individual parents on their child (n = 43). Less than half of responding principals noted using any single method of reporting assessment results. The relatively few JC schools that report results in any particular manner raise concerns because reporting of student assessment results is a key component of accountability (Gagnon & McLaughlin, 2004). When schools do not report scores for some students on state assessments, the message is that certain students are less important and do not "count" (National Center on Educational Outcomes, 2008). In fact, all 50 states have a method for reporting proficiency of students with disabilities on state assessments (VanGetson & Thurlow, 2007).

It is true, however, that schools are not required to report the scores of students with disabilities if the number of students in a school does not meet state requirements (Leone & Cutting, 2004). The current study does not identify current levels of students with disabilities and whether they are exempt from reporting state assessment results. What is clear is that many JC schools are never held accountable because there is no public reporting of assessment results. The lack of public reporting makes it difficult to know if these schools are successfully educating our most troubled youth.

Holding schools accountable. Most principals (n = 42) reported there was no process for accountability for student participation and performance on state assessments. An additional 24 principals did not know how their school was held accountable. Also of significant concern is that only about 35% of JC schools reportedly made AYP and another 48% of principals did not know if their school made AYP. The lack of accountability, principal knowledge related to accountability, and school success in achieving AYP are all troubling issues. Concerns are amplified when considering that, nationally, 81% of JC schools report being accredited by their SEA (Gagnon, Barber, Van Loan, & Leone, 2009).

It is difficult to conjecture the reasons for the gaps in accountability and principals' understanding of accountability. However, some issues may shed light on the current status of JC schools. For example, recent research indicated that approximately one-third of JC principals are not certified as principals or administrators (Gagnon, Barber, Van Loan, & Leone, 2009). Although the link may be indirect, certification is one common measure of principal qualification (Gates et al., 2003). It is also possible that accountability for student learning in JC schools is affected by minimum subgroup sizes. Specifically, there may be a relatively high percentage of youth in a facility in special education, as compared to the total population of that facility (Gagnon, Barber, Van Loan, & Leone). However, if the subgroup of students classified as special education is still below the minimum subgroup size that is set by the state, this could affect whether these students are included in accountability measures. "In other words, if a school (or district) does not have the minimum number of students for a subgroup, that subgroup is treated as meeting AYP for the purposes of determining whether the school (or district) met AYP" (Cortiella, 2007, p. 18). In effect, an entire school with a small population and high percentage of students with disabilities could be overlooked with regard to AYP.

Principals reported two common indicators used by the state to determine school consequences: (a) student participation rates on state assessments (n = 57); and (b) student scores on state assessments (n = 48). However, 28 principals did not know the indicators used by the state. It may be that those principals who were unaware of the indicators used by the state were uninformed because such consequences do not apply to their school.

Limitations

Two limitations to the study existed. First, similar to other survey research (Crawford & Tindal, 2006; Donovan & Nickerson, 2007), the response rate of 34.22% was less than the typically accepted 50% rate for mail surveys (Weisberg, Krosnick, & Bowen, 1989). However, there was a concerted effort to ensure appropriate power and a randomized sample. In addition, the reliability of the results is indicated by the fact that no significant differences existed across responding and nonresponding schools. The second limitation was the possibility of principals reporting information that may not be accurate. However, principals made some rather unsettling admissions that included a lack of knowledge on several issues. For example, the most common response to whether their school made AYP was Do Not Know. The admission supports the idea that principals were willing to indicate their shortcomings, when appropriate. In light of these factors, it is appropriate to assert that the study results are representative of JC schools for committed youth throughout the U.S.

Implications

Future Research

The current study provides an initial and troubling snapshot of assessment and accountability policies and practices within JC schools. However, several issues require additional investigation using a variety of research methodologies within the central areas of participation in state assessments, assessment accommodations, reporting and using assessment results, and school accountability. For example, in-depth case study, observation, and interview research are needed to identify why principals often believe that many students should not participate and, in fact, do not participate in state assessments while in JC schools. Additional studies would also provide data concerning the situations under which JC schools can obtain an exemption or waiver for student participation in state assessments. As such, the reasons for and methods of obtaining exemptions are areas in which there is a great need for research. Moreover, data are needed concerning the extent to which LEAs, SEAs, and correctional education agencies in charge of education delineate policies and practices for youth with disabilities in JC schools that are consistent with NCLB (2002) and IDEIA (2004).

It is possible, given the complex academic, social, and emotional needs of youth with and without disabilities in JC schools, that a large number of students in this setting may participate in a state assessment that is based on modified achievement standards. The 2007 NCLB regulations, published since the current study was conducted, allow for such assessments. However, it is unclear how this provision will alter assessment of these volatile youth. Additional research is needed to monitor the potentially disproportional effects of this policy on youth in JC schools.

Results from the current study indicate a number of other future research directions for student assessment accommodations. Similar to other educational settings, there are concerns that students in JC schools may not be receiving the accommodations that are listed on their IEPs during the actual assessment (Bottsford-Miller, Thurlow, Stout, & Quenemoen, 2006; Shriner & DeStefano, 2003). Future research could evaluate fidelity of teacher implementation of accommodations by using a combination of student and teacher surveys, direct observations of teachers, and IEP review. Also, research is needed to discover the actual accommodations provided by JC schools, how teachers are using accommodations, and the types of students to whom accommodations are given. Additionally, Shriner and Ganguly (2007) noted the importance of providing assessment accommodations based on student-specific characteristics, as well as individual test items. Future research should assess the extent to which student characteristics and test items are considered when making assessment accommodation decisions, as well as "which accommodations are appropriate for which individuals" (Thurlow, Thompson, & Lazarus, 2006, p. 665).

The present research sets the stage for additional investigations concerning the use of state assessment results. For example, information is needed concerning the alignment of school-developed versus LEA or SEA guidelines for use of state assessment results. Moreover, additional research can lead to an understanding of why schools develop their own guidelines, what they used to develop guidelines, and what they do in lieu of either having or knowing guidelines to assist in the appropriate use of state assessment results.

Future research is also necessary to identify key accountability issues and the extent to which JC schools: (a) provide an assessment that is aligned with grade-level standards; (b) report results according to students that achieved at levels of basic, proficient, or advanced; (c) disaggregate data for youth with disabilities; (d) establish specific performance objectives for subgroups; and (e) measure AYP according to subgroups of students who achieve proficient levels (Malmgren, McLaughlin, & Nolet, 2005). For example, one of the main requirements of NCLB is public reporting of student performance on state assessments that is disaggregated by several groups, including students with disabilities (National Center on Educational Outcomes, n.d.). However, it remains unclear if JC schools are reporting disaggregated assessment results and, if they are not, what barriers exist that inhibit the appropriate reporting of results. Schools are not obligated to report the scores of students with disabilities if the number of students in a school does not meet state requirements (Leone & Cutting, 2004). Additional issues may exist in light of recent amendments to NCLB (see Title 1-Improving the Academic Achievement of the Disadvantaged; Individuals with Disabilities Education Act (IDEIA), 2007). It is unclear how requirements for minimum numbers of students and requirements of the same number of students in subgroups will affect JC schools, in which there are typically relatively small numbers of students. Conceivably, the low number of students in JC schools could exclude students from important accountability requirements and AYP. Future research should identify the number of students per school in subgroups identified by IDEIA in order to determine if results are appropriately reported.

In the current study, almost half of schools were not held accountable for student learning. Some states allow accountability measures for special schools that are different from public schools or allow these schools to choose their own accountability measure (Bolt, Krentz, & Thurlow, 2002). The allowed variation in accountability measures may explain some of the variability in how JC schools are held accountable and the reasons that many JC schools are not held accountable. While these variations may take specific JC school characteristics into consideration, this is not definitively known. Additional information is necessary to identify the reasons that JC schools are functioning outside of current accountability processes.

Conclusions

The education of youth in JC schools is a challenge that is often complicated by a high percentage of youth with disabilities (Quinn, Rutherford, Leone, Osher, & Poirier, 2005). As such, adherence to IDEIA (2004) and NCLB (2002) may be difficult (Leone & Cutting, 2004). However, the stakes are extremely high for youth with and without disabilities in JC schools, and academic failure may contribute to lifelong problems with recidivism and joblessness (Bureau of Labor Statistics, 2001; Katsiyannis & Archwamety, 1999; U.S. Department of Labor, 2003).

Unfortunately, results of the current study provide a disturbing picture of both a lack of principal knowledge and a lack of consistent assessment and accountability policies and practices in JC schools. The concerns are amplified when considering that over 80% of JC schools nationally are accredited by their State Department of Education (Gagnon, Barber, Van Loan, & Leone, 2009). Clearly, there is a need for principal professional development and communication with LEAs and SEAs concerning assessment and accountability policies. In fact, SEA Directors of Special Education have recognized the positive effects that increased communication can have on such issues as student participation in state assessments (Thompson et al., 2005). Moreover, a clear link between adherence to IDEIA and NCLB should be an integral part of school accreditation. It is through communication and holding schools accountable that youth in JC schools will be assured of access to a free and appropriate public education.


References

American Correctional Association. (2003). The 2003 directory of adult and juvenile correctional departments, institutions, agencies, and probation and parole authorities. Alexandria, VA: Author.

Bolt, S., Krentz, J., & Thurlow, M. (2002). Are we there yet? Accountability for the performance of students with disabilities (NCEO Technical Report 33). Minneapolis, MN: University of Minnesota, National Center on Educational Outcomes.

Bottsford-Miller, N., Thurlow, M. L., Stout, K. E., & Quenemoen, R. F. (2006). A comparison of IEP/504 accommodations under classroom and standardized testing conditions: A preliminary report from SEELS data (Synthesis Report 63). Minneapolis, MN: University of Minnesota, National Center on Educational Outcomes.

Bourque, L. B., & Fielder, E. P. (1995). How to conduct self-administered and mail surveys. Thousand Oaks, CA: Sage.

Browne, J. (2003). Derailed: The schoolhouse to jailhouse track. Washington, DC: Advancement Project.

Bureau of Labor Statistics. (2001). Employment experience of youths: Results from the first three years of a longitudinal survey. Washington, DC: U.S. Department of Labor.

Coffey, O. D., & Gemignani, M. G. (1994). Effective practices in juvenile correctional education: A study of the literature and research, 1980-1992. Washington, DC: U.S. Department of Justice, The National Office for Social Responsibility.

Cortiella, C. (2007). Rewards & roadblocks: How special education students are faring under No Child Left Behind. National Center for Learning Disabilities.

Crawford, L., & Tindal, G. (2006). Policy and practice: Knowledge and beliefs of education professionals related to the inclusion of students with disabilities in a state assessment. Remedial and Special Education, 27, 208-217.

Donovan, S. A., & Nickerson, A. B. (2007). Strength-based versus traditional social-emotional reports: Impact on multidisciplinary team members' perceptions. Behavioral Disorders, 32, 228-237.

Fink, A. (1995). The survey handbook (Vol. 1). Thousand Oaks, CA: Sage.

Gagnon, J. C., Barber, B. R., Van Loan, C. L., & Leone, P. E. (2009). Juvenile correctional schools: Characteristics and approaches to curriculum. Education and Treatment of Children, 32, 673-696.

Gagnon, J. C., Maccini, P., & Haydon, T. (2009). Secondary day treatment and residential schools: Assessment and accountability policies and practices. Unpublished manuscript.

Gagnon, J. C., & McLaughlin, M. J. (2004). Curriculum, assessment, and accountability in day treatment and residential schools. Exceptional Children, 70, 263-283.

Gates, S. M., Ringel, J. S., Santibañez, L., Ross, K. E., & Chung, C. H. (2003). Who is leading our schools? Santa Monica, CA: RAND.

Goetz, J. P., & LeCompte, M. D. (1984). Ethnography and qualitative design in educational research. Orlando, FL: Academic Press.


Heberlein, T. A., & Baumgartner, R. (1978). Factors affecting response rates to mailed questionnaires: A quantitative analysis of the published literature. American Sociological Review, 43, 447-462.

Individuals with Disabilities Education Improvement Act of 2004, Pub. L. No. 108-446, 118 Stat.2658 (2004).

Katsiyannis, A., & Archwamety, T. (1999). Academic remediation/achievement and other factors related to recidivism rates among delinquent youth. Behavioral Disorders, 24, 93-101.

Kohl, F. L., McLaughlin, M. J., & Nagle, K. (2006). Alternate achievement standards and assessments: A descriptive investigation of 16 states. Exceptional Children, 73, 107-123.

Lazarus, S. S., Thurlow, M. L., Lail, K. E., Eisenbraun, K. D., & Kato, K. (2006). 2005 state policies on assessment participation and accommodations for students with disabilities (Synthesis Report 64). Minneapolis, MN: University of Minnesota, National Center on Educational Outcomes.

Leone, P. E. (1994). Education services for youth with disabilities in a state-operated juvenile correctional system: Case study and analysis. The Journal of Special Education, 28(1), 43-58.

Leone, P. E., & Cutting, C. A. (2004). Appropriate education, juvenile corrections, and No Child Left Behind. Behavioral Disorders, 29, 260-265.

Lincoln, Y., & Guba, E. (1985). Naturalistic inquiry. Newbury Park, CA: Sage Publications.

Litwin, M. S. (1995). How to measure survey reliability and validity. Thousand Oaks, CA: Sage.

Malmgren, K. W., McLaughlin, M. J., & Nolet, V. (2005). Accounting for the performance of students with disabilities on statewide assessments. The Journal of Special Education, 39, 86-96.

Nagle, K., Yunker, C., & Malmgren, K. W. (2006). Students with disabilities and accountability reform: Challenges identified at the state and local levels. Journal of Disability Policy Studies, 17(1), 28-39.

National Center on Educational Outcomes. (n.d.). Special topics area: Participation of students with disabilities. Retrieved February 13, 2008, from http://cehd.umn.edu/nceo/topicareas/standards/standardsfaq.htm

No Child Left Behind Act of 2001, Pub. L. No. 107-110, 115 Stat. 1425 (2002).

Quinn, M. M., Rutherford, R. B., Leone, P. E., Osher, D. M., & Poirier, J. M. (2005). Youth with disabilities in juvenile corrections: A national survey. Exceptional Children, 71, 339-345.

Shriner, J. G., & DeStefano, L. (2003). Participation and accommodation in state assessment: The role of individualized education programs. Exceptional Children, 69, 147-161.

Shriner, J. G., & Ganguly, R. (2007). Assessment and accommodation issues under the No Child Left Behind Act and the Individuals with Disabilities Education Improvement Act. Assessment for Effective Instruction, 32, 231-243.

Thompson, S. J., Johnstone, C. J., Thurlow, M. L., & Altman, J. R. (2005). 2005 state special education outcomes: Five years into the decade means many changes. Minneapolis, MN: University of Minnesota, National Center on Educational Outcomes.


Thurlow, M. L., & Johnson, D. R. (2000). High-stakes testing of students with disabilities. Journal of Teacher Education, 51, 305-314.

Thurlow, M. L., Lazarus, S. S., Thompson, S. J., & Morse, A. B. (2005). State policies on assessment participation and accommodations for students with disabilities. The Journal of Special Education, 38, 232-240.

Thurlow, M. L., Thompson, S. J., & Lazarus, S. S. (2006). Considerations in the administration of tests to special needs students: Accommodations, modifications, and more. In S. M. Downing & T. M. Haladyna (Eds.), Handbook of test development. Mahwah, NJ: Lawrence Erlbaum Associates.

Title 1-Improving the Academic Achievement of the Disadvantaged: Individuals with Disabilities Education Act (IDEIA), 34 C.F.R. 72, § 200-30 (2007).

U.S. Department of Labor. (2003). Educational resources: So you are thinking about dropping out of school? Washington, DC: Author. Retrieved January 14, 2005, from http://www.doi.gov/asp/fibre/dropout.htm

VanGetson, G. R., & Thurlow, M. L. (2007). Nearing the target in disaggregated subgroup reporting to the public on 2004-2005 assessment results (Technical Report 46). Minneapolis, MN: University of Minnesota, National Center on Educational Outcomes.

Wei Wei, C. (2003). Reducing error in mail surveys. Practical Assessment, Research, & Evaluation, 8(18), 1-5. Retrieved January 2, 2006, from http://PAREonline.net/getvn.asp?v=8an=18

Weisberg, H. F., Krosnick, J. A., & Bowen, B. D. (1989). An introduction to survey research and data analysis (2nd ed.). Glenview, IL: Scott, Foresman and Company.

Yell, M. L., Shriner, J. G., & Katsiyannis, A. (2006). Individuals with Disabilities Education Improvement Act of 2004 and IDEIA Regulations of 2006: Implications for educators, administrators, and teacher trainers. Focus on Exceptional Children, 39(1), 1-24.

Ysseldyke, J., Dennison, A., & Nelson, R. (2004). Large-scale assessment and accountability systems: Positive consequences for students with disabilities (Synthesis Report 51). Minneapolis, MN: University of Minnesota, National Center on Educational Outcomes.

Zenisky, A. L., & Sireci, S. G. (2007). A summary of the research on the effects of test accommodations: 2005-2006 (Technical Report 47). Minneapolis, MN: University of Minnesota, National Center on Educational Outcomes.

Biographical Sketch

JOSEPH GAGNON is an Assistant Professor in the Special Education Department at the University of Florida. Dr. Gagnon's research includes a focus on curriculum, assessment, and accountability policies and practices in juvenile correctional schools for committed youth, as well as day treatment and residential psychiatric schools.

This research was funded by Grant #H324C030043, U.S. Department of Education, Office of Special Education & Rehabilitative Services (OSERS).

