
Using Instructional Sensitivity and Instructional Opportunities to Interpret Students’ Mathematics Performance

Marsha Ing
Stanford University

Journal of Educational Research & Policy Studies, Spring 2008, Volume 8, Number 1

Abstract

Within the context of a mathematics reform effort to implement algebra in elementary schools, there is pressure to provide results in the form of student achievement scores. However, widely used measures of student achievement may be unrelated to the ideas and instructional practices encouraged by the reform effort. The inappropriate use of student achievement scores often leads to inaccurate inferences about the quality of instruction. This study explores the validity of inferences about instructional quality using two measures of mathematics achievement: a measure of algebraic reasoning designed to closely relate to instructional activities and a measure of grade-level specific California content standards. This exploration includes multiple measures of classroom instruction to evaluate the instructional sensitivity of multiple measures of math achievement and applies an analytic method that makes it possible to relate student-level outcomes to teacher-level measures of instruction. Findings suggest that particular items measuring equality and relational thinking from the measure of algebraic reasoning were sensitive to instruction. The ability of these measures to determine the impact of instruction on student performance depends on the variables that define students' opportunities to learn and the characteristics of the student assessment items. These factors should be considered when evaluating the relationship between instructional quality and student performance. Instructional sensitivity provides a framework to interpret student performance by creating a link between instructional opportunities and performance on particular assessment items.

Introduction

Student performance on achievement tests is often used to make inferences about the content and quality of instruction students have received. If test scores are high or show improvement over time, instructional quality is assumed to be good or improving. Conversely, if test scores are low or decrease over time, instructional quality is assumed to be poor or declining. This assumption is at the heart of conclusions made about educational quality from test scores and is captured in the following statement by the executive director of Education Trust West in a Los Angeles Times article (Vaughn, 2005) concerning California's scores on the National Assessment of Educational Progress: “No matter how you look at this data, California is at the bottom. There is something systematically wrong with the way we are approaching educating all students in this state” (p. A20).

The assumption that something is wrong with education is based on sparse information about classroom instruction or little knowledge of the implementation of various reform efforts. This leads to imprecise evaluations of the quality of education and the effectiveness of educational reform. Such inferences about educational reform assume that test scores adequately reflect instructional content and quality (Airasian & Madaus, 1983; Airasian & Miranda, 2002; Amrein & Berliner, 2002; Moss, Pullin, Gee, & Haertel, 2005), that is, that test scores are sensitive to instruction. The term instructional sensitivity (Burstein, 1983, 1989; Haladyna & Roid, 1981; Miller & Linn, 1988; Popham, 2006; Ruiz-Primo, Shavelson, Hamilton, & Klein, 2002), then, describes a property of an assessment that addresses these assumptions and inferences.

A Study of Instructional Sensitivity

The purpose of this study is to use the concept of instructional sensitivity to explore the validity of inferences about instructional quality, and to explain student performance on two different assessments. One of the assessments was designed to measure algebraic reasoning (Carpenter, Franke, & Levi, 2003; Carpenter & Levi, 2004; Carpenter, Levi, Berman, & Pligge, 2005) and the other to measure California math content standards.

An item's classification as instructionally sensitive describes the degree to which it “reflects student knowledge/ability as the consequence of instruction” (Burstein, 1989, p. 5). Burstein (1983) emphasizes that:

An exact explanation of how a student responds to given test items is unanswerable under all but the most trivial circumstances. Nonetheless, it is reasonable to attempt to narrow the range of plausible explanations and to investigate the likelihood that particular instructional experiences activate processes that account for student responses. (p. 99)

While we will never know exactly why students respond to an item in a particular way, instructional sensitivity is a quality of an assessment that can be used to help explain performance of students with different instructional experiences.

Among students with similar ability levels, if students with one set of instructional opportunities perform better on an item than students with a different set of instructional opportunities, the item is considered to be sensitive to the effects of instruction. If students with similar ability levels who have different instructional experiences all perform the same on an item, the item is considered to be insensitive to the effects of instruction. In the latter case, student performance on an item does not depend on instructional experiences but depends instead on other factors such as student ability level or earlier achievement.
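To make this operational definition concrete, one simple check (a hypothetical sketch, not drawn from the study reported here; the column names are invented) is to band students by a prior-ability measure and compare, within each band, the proportion of instructed and uninstructed students answering the item correctly.

```python
# Hypothetical sketch: an item looks instructionally sensitive when, within
# prior-ability bands, students who received the relevant instruction answer it
# correctly more often than comparable students who did not.
import pandas as pd

def sensitivity_gap(df, item_col, ability_col="prior_ability",
                    group_col="instructed", n_bands=4):
    """Average within-band gap in proportion correct (instructed minus uninstructed)."""
    banded = df.assign(band=pd.qcut(df[ability_col], q=n_bands,
                                    labels=False, duplicates="drop"))
    rates = banded.groupby(["band", group_col])[item_col].mean().unstack(group_col)
    # Gaps near zero in every band suggest the item is insensitive to instruction;
    # consistently positive gaps suggest sensitivity.
    return (rates[True] - rates[False]).mean()
```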

Educators recognize that not all assessments are designed to have the same degree of instructional sensitivity. Some assessments are meant to be measures of general achievement not necessarily influenced by particular instructional opportunities while other assessments are designed as measures of student understanding of a specific lesson and are assumed to be influenced by the specific types of instructional opportunities provided. Furthermore, the instructional opportunities that predict performance on the assessment of a particular lesson (close proximity to instruction) might not predict performance on the assessment of general achievement (more distant from instruction).

In considering the instructional sensitivity of different kinds of assessments, it is useful to draw on the multilevel framework proposed by Ruiz-Primo et al. (2002). Their framework (Figure 1) considers student performance on assessments that vary in their proximity to the instruction provided.

It is possible to draw conclusions about student proficiency based on assessments that are closer to instruction. However, researchers have cautioned about focusing only on measures close to instruction (Cronbach, 1963; Koretz, 1996; Linn, 1983; Mehrens & Ebel, 1979; Ruiz-Primo et al., 2002; Shepard, 2004). Cronbach (1963) suggested that “an ideal evaluation would include measures of all the types of proficiency that might reasonably be desired in the area of question, not just the selected outcomes to which this curriculum directs substantial attention” (p. 680). Linn (1983) continued the argument that “such a tight link of the test to what is taught has considerable appeal” and is “apt to yield results that are more sensitive to instruction that is taking place” (p. 186) but concluded that doing so is at the “risk of reduction in the importance of learning to apply skill and knowledge to new problems” (p. 187). Thus, central to investigating the effects of educational reform is the inclusion of assessments at different proximities to instruction.

Figure 1. Characterization of Assessments Based on Proximity to Classroom Instruction, from Ruiz-Primo et al. (2002). [The figure arranges assessments along a continuum extending outward from classroom instruction: Immediate (science journals/notebooks and classroom tests), Close (“embedded” assessments from more advanced activities in the units), Proximal (same concept/principle, new assessment), Distal (large-scale assessment from a state/national curriculum framework), and Remote (standardized national science achievement tests), with depth of assessment probe and outside-school influences also represented.]

Literature Review

Instructional Sensitivity and Related Concepts

The increased use of test scores as measures of educational quality calls for greater attention to the types of measures used to make such judgments (National Research Council, 2001a, 2001b). Less attention has focused on how these different assessments can be used to create more comprehensive judgments about educational quality. Instructional sensitivity helps focus attention on the validity of these different measures by providing guidance about which items are sensitive to the effects of instruction. Instructional sensitivity is related to concepts such as alignment (e.g., Porter, 2002; Rothman, 2004; Webb, 1997, 2002), opportunity to learn (e.g., Anderson, 1985; Floden, 2002; Guiton & Oakes, 1995; Herman, Klein, & Abedi, 2000; Wang, 1998; Wiley & Yoon, 1995), and test preparation (e.g., Anastasi, 1981; DerSimonian & Laird, 1983; Herman & Ing, 2007; Messick, 1982), all of which seek to strengthen the validity of inferences about instructional quality based on test performance.

Compared to opportunity to learn (OTL) and alignment, however, instructional sensitivity is more narrowly defined and focused. As an example of this narrow definition, the term instructional validity is used by Yoon and Resnick (1998) to describe a quality of the assessment that “is systematically sensitive to differences in opportunity to learn” and “registers differences in the amount and kind of instruction to which students have been exposed” (p. 2).

Different Approaches to Measuring Instructional Sensitivity

Early approaches to measuring instructional sensitivity did not include information about instruction. A typical approach flagged response patterns that seemed unusual or unexpected (Donlon & Fischer, 1968; Hanna & Bennett, 1984; Harnisch, 1983; Kane & Brennan, 1980; Sato, 1975; Tatsuoka & Tatsuoka, 1980). An unusual response pattern was identified for profiles of responses with the same total score but different responses to particular items. For example, a large value of Sato's caution index (1975) indicated an unusual response pattern and served as a caution against the use of the total score as an accurate measure for a particular examinee. An examinee who answered 8 out of 10 items correctly was expected to answer all of the easy items correctly but miss the 2 most difficult items. However, if the examinee with a score of 8 out of 10 missed the 2 easy items but answered all of the other items correctly, Sato's caution index would be high. Studies used these unusual patterns to flag schools or students who might be randomly responding to items. Researchers suggested that random responses could be due to test anxiety or carelessness but might also indicate that respondents were not instructed on the test material. If respondents were not instructed in the material, their responses might be more random than those of instructed students and might not follow the usual pattern of responses.
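To illustrate the caution-index logic, the sketch below (hypothetical code, assuming the commonly cited covariance form of Sato's index) compares an examinee's observed response pattern with the Guttman pattern implied by item difficulties for the same total score; a value near zero indicates a typical pattern, while a large value flags an unusual one.

```python
# Hypothetical sketch of Sato's (1975) caution index, assuming the common
# covariance formulation: C = 1 - cov(observed, p) / cov(Guttman pattern, p).
import numpy as np

def caution_index(responses, p_values):
    """responses: one examinee's 0/1 vector; p_values: proportion correct per item."""
    u = np.asarray(responses, dtype=float)
    p = np.asarray(p_values, dtype=float)
    guttman = np.zeros_like(u)
    guttman[np.argsort(-p)[: int(u.sum())]] = 1  # same score, easiest items correct
    cov_obs, cov_gut = np.cov(u, p)[0, 1], np.cov(guttman, p)[0, 1]
    return 1 - cov_obs / cov_gut if cov_gut != 0 else 0.0

p_vals = np.linspace(0.95, 0.30, 10)       # ten items, ordered easy to hard
expected = [1] * 8 + [0, 0]                # 8/10, missing the two hardest items
unusual = [0, 0] + [1] * 8                 # 8/10, missing the two easiest items
print(caution_index(expected, p_vals))     # ~0: consistent with item difficulties
print(caution_index(unusual, p_vals))      # large: flagged as an unusual pattern
```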

Harnisch and Linn (1981) compared several of these unusual response pattern indices for seven different schools and hypothesized that these unusual response patterns might be a result of school variability in content coverage and emphasis as well as attendance patterns. The researchers did not, however, obtain measures of content coverage and emphasis, nor any other indicators of school variability pertaining to instructional quality or opportunities to learn.

This early approach to studying instructional sensitivity finds a parallel in a recent measurement approach to explaining differences in performance known as differential item functioning (Holland & Wainer, 1993) based on item response theory (e.g., Embretson & Reise, 2000; Hambleton & Swaminathan, 1985; Lord, 1980; Van der Linden & Hambleton, 1997). Item response theory is a psychometric approach to estimating an unobservable trait based on item responses. Differential item functioning occurs when respondents with similar trait levels have a different probability of responding correctly to a given item. The item that demonstrates differential item functioning is thought to be biased toward a particular group of students. In terms of instructional sensitivity, if students with the same trait level are given different instructional opportunities, an item would be biased toward students who were provided with instruction.
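In standard IRT notation (added here for clarity, not taken from the article), the probability that examinee i with trait level θ_i answers item j correctly can be written with a logistic item response function, and DIF corresponds to group-dependent item parameters at a fixed trait level:

P(X_ij = 1 | θ_i) = 1 / (1 + exp[-a_j(θ_i - b_jg)])

where a_j is the item's discrimination and b_jg its difficulty for group g. If, at the same θ, the difficulty parameter is lower for instructed students than for uninstructed students, the item functions differentially in exactly the way an instructionally sensitive item would.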

An example of this can be found in a study by Clauser, Nungester, and Swaminathan (1996), who used differential item functioning to detect unusual response patterns due to educational experiences. Their initial analysis of data from the National Board of Medical Examiners indicated differences between males and females with similar ability levels on over 130 items. The authors hypothesized that this gender bias might be explained by differential educational opportunities of the respondents. When the authors included information about the residency fields (internal medicine, surgery, pediatrics, or obstetrics-gynecology) in their analyses, they found a decrease in the number of items displaying differential item functioning. The authors suggested that gender is confounded with educational experiences because gender and choice of residency fields are related variables. Thus, attributing differences in item performance to gender alone is misleading. Although Clauser et al. (1996) did not measure instructional opportunities directly, their study suggests that instructional information can be used to identify unusual response patterns.

Other early approaches to measuring instructional sensitivity used a pretest-posttest design to compare instructed and non-instructed students (Brennan & Stolurow, 1971; Haladyna & Roid, 1981; Popham, 1971). As described in Haladyna and Roid (1981), Cox and Vargas (1972) calculated the difference between the difficulty level of the item at the time of the pretest and the posttest when given to students who were instructed and students who were not instructed. The difficulty level of the items on the post-test was expected to be lower for students who were provided instruction than for students who were not provided with any instruction. That is, items should appear easier for students who have experience with the material. The authors found differences in the item difficulty levels between instructed and non-instructed students and attributed these differences to instructional sensitivity of the items. While these early approaches seemed to confirm expectations about change in performance between groups of students with different instructional experiences, they did not include information about the actual instructional experiences.
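A minimal sketch of that pretest-posttest logic (hypothetical data layout and function names; the index is simply a difference in item difficulty expressed as proportion correct):

```python
# Hypothetical sketch in the spirit of Cox and Vargas (1972): an item whose
# proportion correct improves more for instructed students than for
# non-instructed students between pretest and posttest is treated as sensitive.
import numpy as np

def prepost_gain(pre, post):
    """pre, post: (students x items) arrays of 0/1 responses; per-item gain in p-value."""
    return np.asarray(post, float).mean(axis=0) - np.asarray(pre, float).mean(axis=0)

def item_sensitivity(pre_instr, post_instr, pre_ctrl, post_ctrl):
    # Positive values: the item became easier for instructed students relative
    # to the change observed for the non-instructed comparison group.
    return prepost_gain(pre_instr, post_instr) - prepost_gain(pre_ctrl, post_ctrl)
```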

Later studies of instructional sensitivity included more explicit measures of instruction, such as content coverage measured using student or teacher questionnaires about the opportunities provided to students (Miller & Linn, 1988; Muthén, 1989a, 1989b, 1994; Muthén, Huang, Khoo, Goff, Novak, & Shih, 1995; Muthén, Kao, & Burstein, 1991), textbook analysis (Mehrens & Phillips, 1986, 1987; Phillips & Mehrens, 1987), or assignment of students to particular instructional conditions (Hanson, McMorris, & Bailey, 1986).

For example, Muthén and colleagues (Muthén, 1989a, 1989b, 1994; Muthén et al., 1995; Muthén et al., 1991) used more explicit measures of instruction from the Second International Mathematics Study (SIMS) in their studies on instructional sensitivity. In a 1994 article, Muthén describes the instructional measures used in his modeling of eighth-grade student performance on the SIMS mathematics assessment. Teachers were asked two questions regarding each of the items on the SIMS mathematics assessment: (a) During this school year did you teach or review the mathematics needed to answer the item correctly; and (b) If in the school year you did not teach or review the mathematics needed to answer this item correctly, was it mainly because it had been taught prior to this school year; it will be taught later; it is not in the school curriculum at all; or for other reasons? Muthén (1994) found that these instructional questions had a smaller effect in predicting math achievement (as defined by performance on the SIMS math items) than did prior achievement and demographics such as gender and father's level of education. Given the framework presented by Ruiz-Primo et al. (2002) on measuring instructional sensitivity, the SIMS items would be considered remote items. By design, remote items are far removed from particular instructional experiences and are not easily influenced by instruction. Muthén's approach to measuring instructional sensitivity, however, was unique in the way that it included attention to content coverage and used this information to predict performance.

Ruiz-Primo et al. (2002) studied instructional sensitivity in the evaluation of a science reform effort by including multiple achievement measures at different proximities to instruction, using student work as a measure of instruction, and administering pre-tests and post-tests for the close and proximal achievement measures. For some curriculum units, changes between pre- and post-test total scores (not item-level scores) indicated that the close assessments were more sensitive to instruction than were the proximal assessments. Ruiz-Primo et al.'s use of a direct measure of instruction is an important feature. Science notebooks were collected from students in each classroom as a sample of student work. Students used their science notebooks as a record of class activities, which researchers then used as an indicator of content coverage. This direct measure of instruction was assumed to be reflective of the types of activities that actually occurred in the classrooms.

Whether experiences in particular classrooms influenced student performance could be explored by looking at performance for each classroom and taking into account the shared experiences of students in that classroom. Researchers have called for attention to the hierarchical structure of educational data (e.g., Burstein, 1980; De Boeck & Wilson, 2004; Raudenbush, 1988; Raudenbush & Bryk, 2002; Seltzer, 2004). Previous research on instructional sensitivity acknowledged the nested nature of the data but did not incorporate this concern into the analyses. For example, as described earlier, Harnisch and Linn (1981) used grouping variables to describe differences in performance between different schools or different regions. Effects of group membership were estimated separately, rather than simultaneously as can be carried out through multilevel analysis. The analysis of variance carried out by Harnisch and Linn compared overall group differences and did not provide information about what contributes to group differences, nor did it consider shared experiences of students within these groups that might explain group differences. Through a simultaneous multilevel analysis, the unique effects on performance of students nested within classrooms, or classrooms within districts, are estimated. In other words, questions about particular teacher-level characteristics could be used to address differences in student performance.

Previous studies investigating the instructional sensitivity of assessments suggest a link between the proximity of the assessment to instruction and the sensitivity of the assessment to instruction. Further clarifying this link depends on (a) having in-depth and detailed measures of instruction; (b) administering multiple measures of student performance; and (c) carrying out analyses that directly investigate the link between instruction and student performance on the assessments. The present study addressed these three issues by using a multilevel approach to account for performance of students nested within classrooms. At the first level, referred to as Level 1 or the student level, information about student general achievement was used to predict performance on the proximal and distal measures. At the second level, referred to as Level 2 or the teacher level, information about classroom instructional opportunities was used to predict student performance.
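Written out in conventional two-level notation (an illustration added here; the classroom predictor shown is the one retained in the final model for the equality items reported in the Results), the model takes the form:

Level 1 (student i in classroom j): Y_ij = β_0j + β_1j(prior achievement)_ij + r_ij

Level 2 (classroom j): β_0j = γ00 + γ01(awareness of student strategies)_j + u_0j; β_1j = γ10 + γ11(awareness of student strategies)_j + u_1j

Here γ01 indicates whether the teacher-level measure predicts average class performance, and γ11 whether it predicts the within-class prior-achievement slope; these are the fixed effects reported later in Table 2.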

Methods and Procedures

The data for this study were collected as part of a professional development effort in a large urban school district (Jacobs, Franke, Carpenter, Levi, & Battey, 2007). The current study focused on 24 third-grade teachers who participated in a year-long mathematics professional development program on algebraic reasoning. Algebraic reasoning was defined as “generalizing and formalizing patterns and regularities” (p. 259). Researchers involved in the professional development activities further conceptualized algebraic reasoning as “woven throughout the curriculum so that teachers viewed algebraic reasoning as pervading the mathematics curriculum rather than simply as one more topic to teach” (p. 260).

Measures

The two different student-level outcome measures (proximal and distal items) were collected at the end of the school year. In addition, general measures of student prior achievement collected at the end of the previous school year were used as covariates in the multilevel analyses. The proximal items were assumed to be close to instruction in that they matched the types of things that occurred in the classroom in terms of content emphasis and cognitive demand. In this study, proximal items were designed to closely relate to the ideas and processes advocated by the professional development program. Items from a district-wide assessment were considered distal items. These grade-specific assessments were administered to all students in the district and targeted particular state math content standards. These items were assumed to be less related to the particular instructional opportunities provided but similar to the proximal items in terms of content. The disattenuated correlations between the proximal items, distal items, and general measures of student achievement suggested that these measures were moderately related. Multilevel regression analyses were carried out separately for each of the two outcome measures.
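For reference, the disattenuated (correction-for-attenuation) correlation divides the observed correlation by the square root of the product of the two measures' reliabilities; this is the standard psychometric formula rather than anything specific to this study:

r_disattenuated = r_xy / √(r_xx · r_yy)

where r_xy is the observed correlation between the two measures and r_xx and r_yy are their reliabilities.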

The instructional opportunity variables collected from each teacher and the average classroom prior achievement were the classroom-level variables included in this study to predict performance on the outcome measures. For this study, the following kinds of information about instructional opportunities were collected from a written teacher assessment and an oral teacher interview: teacher perceptions of students' opportunities to learn algebraic reasoning, teacher algebraic reasoning content knowledge, teacher confidence in their own algebraic reasoning content knowledge, and teacher knowledge of their students' algebraic reasoning strategies. This information represented different but related dimensions of teacher practice that previous research suggested as important to capture. These measures of instructional opportunities served as proxies for the instructional opportunities that students actually experienced in each classroom. There was no evidence to validate the extent to which these measures captured differences in the instructional opportunities in each classroom, but this assumption was accepted as a risk in order to explore instructional sensitivity.

Sample

The initial sample for this study included 486 students across 24 third-grade classrooms (24 teachers) and eight schools. These eight schools were fairly similar in terms of their academic performance, percentage of students receiving free or reduced-price lunch, and percentage of students designated as English language learners. All of these schools served predominantly African American and Hispanic student populations.

Results

The correlation between prior student achievement and the outcome measures (Table 1) was highest for the distal items, r = .61, p < .01. The correlations between student prior achievement and the equality items from the proximal measure were moderate, r = .30, p < .01, but lower compared to the correlation between prior achievement and performance on the distal items, r = .44, p < .01. This suggests that the relationship between prior achievement and performance on the proximal items was weaker than the relationship between prior achievement and performance on the distal items.

Table 1
Correlations Between Prior Achievement, Instructional Opportunity Variables, and Performance on Proximal and Distal Items

Variable                                 Proximal    Distal
Student-level (n = 321)
  Prior student achievement                0.30*       0.61*
Classroom-level (n = 24)
  Opportunity to learn                     0.28        0.05
  Content knowledge                        0.29        0.24
  Confidence                               0.61*       0.23
  Awareness of student strategies          0.59*       0.30
  Prior average classroom achievement     -0.06        0.57*

Note. *p < .01.

The correlation between teacher awareness of students' strategies and performance on the proximal items was positive and moderate, r = .59, p < .01. The correlation between teacher awareness of students' strategies and the distal items was low and not significant, r = .30, p > .01. None of the other instructional opportunity variables correlated significantly with performance on the proximal or distal items.

Predicting Performance on the Proximal Items

The final conditional multilevel linear regression model included prior achievement at the student level and teacher awareness of students' relational thinking strategies at the classroom level. Average classroom prior achievement and the other classroom-level instructional opportunity variables were not included in the final conditional model because these variables were not significantly correlated with performance on the equality items and were not significant predictors of performance in the multilevel models. Teacher confidence correlated with performance on the equality items but was excluded because it was highly correlated with teacher awareness of student strategies, r = .78, p < .01. Teacher awareness of student strategies was a focal point of the professional development program, so this variable was selected for substantive reasons.

The fixed-effect results for the final model (Table 2) indicated that the effects of student-level prior achievement and teacher awareness of student strategies on performance on the equality items were small (less than one point) but significant. Teacher awareness of student strategies was not a significant predictor of the within-class achievement slopes. In other words, student prior achievement was an important consideration in student performance on the equality items. Teacher awareness of student strategies was also predictive of the average class means on the equality items but not of within-class differences.

Table 2
Fixed Effects for the Final Conditional Model of Student Performance on the Equality Items

Fixed Effect                               Coefficient     SE     t ratio
Model for classroom means
  Intercept (γ00)                              2.20       0.17       -
  Awareness of student strategies (γ01)        0.58       0.11      5.17*
Model for achievement slopes
  Intercept (γ10)                              0.71       0.11       -
  Awareness of student strategies (γ11)       -0.01       0.07     -0.15

Note. *p < .01.

A smaller percentage of the variance is attributed to classroom differences in the final model than in the unconditional model; this percentage shifted from 26% in the unconditional model to 24% in the final conditional model. The final results for grade 3 indicated that two variables helped explain average class performance on the equality items: student-level prior achievement and teacher awareness of students' relational thinking strategies. A majority of the variation in performance on these items was due to differences between students. A smaller, but still significant, proportion of the variation was due to differences between classrooms.
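The percentages quoted here are intraclass correlations from the multilevel model, i.e., the share of total variance that lies between classrooms (standard notation, added for clarity):

ρ = τ00 / (τ00 + σ²)

where τ00 is the between-classroom variance in intercepts and σ² is the student-level residual variance; by the figures reported above, ρ is roughly .26 in the unconditional model and .24 in the final conditional model.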

Predicting Performance on the Distal Items

The multilevel models predicting performance on the distal items did not include any of the classroom-level instructional opportunity variables because none of them significantly correlated with performance. Student prior achievement and average classroom achievement significantly predicted performance, but none of the instructional opportunity variables were predictive of performance on the distal items. The variance between students and the variance between classrooms did not indicate that these student outcome data were very different from other achievement outcomes (Raudenbush, Martinez, & Spybrook, 2007), but these results suggest that the same instructional opportunity variables were not equally predictive of performance on the different student outcomes.

Discussion and Conclusions

Mislevy, Wilson, Ercikan, and Chudowsky (2003) describe educational assessments as “data that becomes evidence in some analytic problem only when we have established their relevance to some conjecture we are considering” (p. 495). Educational assessments are often taken as evidence of instructional quality without attention to whether and how instruction and student performance are related. This study attempted to address this issue by exploring the link between instructional opportunities and performance on multiple measures of math achievement using the concept of instructional sensitivity. Popham (2006) proposed that “instructional sensitivity is best conceived of as a continuum rather than a dichotomy” and that “rarely will one encounter an accountability test that is totally sensitive or totally insensitive to instruction” (p. 3).

Limitations

This study addressed the notion that a particular test is better (or worse) than another for making judgments about instructional quality by exploring the power of subscores from each test to distinguish between different instructional opportunities. There was limited evidence to suggest that performance on items from the proximal measure was more influenced by instructional opportunities than performance on items from the distal measure. Evidence was limited to the third graders participating in this study, and instructional opportunities were narrowly defined as teacher awareness of students' relational thinking strategies. There are several possible reasons why the instructional opportunity variables, other than teacher awareness of relational thinking as a possible strategy students might use to solve problems, failed to significantly predict student performance on the proximal and distal items.

First, instructional opportunities may simply not matter in terms of student performance on these items. While this is certainly a possibility, one might argue that a premise or goal of instruction is to influence student performance over and above students' prior achievement. Second, measures of opportunities to learn the content of the items on the distal measure, beyond the instructional opportunity variables related to equality and relational thinking, were not available but should be included in future analyses. A final possible reason for the lack of relationships is that there were restricted ranges in the responses to the instructional opportunity variables. For example, all teachers included in this study identified relational thinking strategies their students might use to solve problems. This select sample of teachers might not represent the entire population of teachers in terms of these variables and thus would only represent the relationship between instructional opportunities and performance for a limited portion of the population. Broadening the sample of teachers to create a greater range of responses (beyond the teachers participating in the professional development program) might address the restricted range available in this study.

Future Directions

This study raised broader issues that push researchers to think more richly about how instructional opportunities are defined. Previous research has led us to assume that, given the best possible measures of instruction and the appropriate analytic methods, empirical evidence for the relationship between instruction and student performance would be clear. By necessity, quantitative measures of instruction will always be approximations of the actual instruction that occurs in each classroom (e.g., Burstein, McDonnell, Van Winkle, Ormseth, Mirocha, & Guitton, 1995). In other words, these large-scale measures will always be indirect measures of instruction and will be reductive in some way. Thus, the goal of creating these measures is to capture key features of instructional opportunities and to articulate the limitations of these particular measures. In a large-scale study of the relationship between instruction and student performance, it is rarely feasible, and logistically overwhelming, to observe key features of instruction in every classroom. Even if researchers observed instructional opportunities in every single classroom participating in a large-scale study, it would be difficult to make sense of all of this information in an efficient manner. Thus, creating quantitative measures of instruction is essential when conducting large-scale quantitative studies. The goal when creating these measures is to come as close as possible to describing the key features of instructional opportunities and to acknowledge the limitations of these approximations.

Acknowledgements

The author would like to thank Noreen Webb of the University of California, Los Angeles, for her comments on an earlier version of this article.

References

Airasian, P. W., & Madaus, G. F. (1983). Linking testing and instruction: Policy issues. Journal of Educational Measurement, 20(2), 103-118.

Airasian, P. W., & Miranda, H. (2002). The role of assessments in the revised taxonomy. Theory into Practice, 41(4), 249-254.

Amrein, A. L., & Berliner, D. C. (2002). High-stakes testing, uncertainty, and student learning. Education Policy Analysis Archives, 10(18). Retrieved October 1, 2007, from http://epaa.asu.edu/epaa/v10n18/

Anastasi, A. (1981). Coaching, test sophistication and developed abilities. American Psychologist, 36(10), 1086-1093.

Anderson, L. W. (1985). Opportunity to learn. In T. Husen & T. N. Postlethwaite (Eds.), The international encyclopedia of education research and studies (Vol. 6, pp. 3682-3686). New York: Pergamon Press.

Brennan, R. L., & Stolurow, L. M. (1971). An empirical decision process for formative evaluation (Research Memorandum No. 4). Cambridge, MA: Harvard University CAI Laboratory.

Burstein, L. (1980). The analysis of multilevel data in educational research and evaluation. Review of Research in Education, 8, 158-233.

Burstein, L. (1983). A word about this issue [Editor's note]. Journal of Educational Measurement, 20, 99-102.

Burstein, L. (1989). Conceptual considerations in instructionally sensitive assessment (CSE Technical Report 333). Los Angeles: Center for Research on Evaluation, Standards, and Student Testing.

Burstein, L., McDonnell, L. M., Van Winkle, J., Ormseth, T., Mirocha, J., & Guitton, G. (1995). Validating national curriculum indicators. Santa Monica, CA: RAND.

Carpenter, T. P., Franke, M. L., & Levi, L. (2003). Thinking mathematically: Integrating arithmetic and algebra in elementary school. Portsmouth, NH: Heinemann.

Carpenter, T. P., & Levi, L. (2004). Developing conceptions of algebraic reasoning in the primary grades (National Center for Improving Student Learning and Achievement in Mathematics and Science Research Report No. 00-2). Madison, WI: NCISLA, Wisconsin Center for Education Research.

Carpenter, T. P., Levi, L., Berman, P. W., & Pligge, M. (2005). Developing algebraic reasoning in elementary school. In T. A. Romberg, T. P. Carpenter, & F. Dremock (Eds.), Understanding mathematics and science matters (pp. 81-98). Mahwah, NJ: Lawrence Erlbaum Associates.

Clauser, B. E., Nungester, R. J., & Swaminathan, H. (1996). Improving the matching for DIF analysis by conditioning on both test score and an educational background. Journal of Educational Measurement, 33(4), 453-464.

Cox, R. C., & Vargas, J. (1972). A comparison of item selection techniques for norm-referenced and criterion-referenced tests. Paper presented at the annual meeting of the National Council on Measurement in Education, Chicago, IL.

Cronbach, L. J. (1963). Course improvement through evaluation. Teachers College Record, 64(8), 672-683.

De Boeck, P., & Wilson, M. (Eds.). (2004). Explanatory item response models: A generalized linear and nonlinear approach. New York: Springer-Verlag.

DerSimonian, R., & Laird, N. M. (1983). Evaluating the effect of coaching on SAT scores: A meta-analysis. Harvard Educational Review, 53, 1-15.

Donlon, T. F., & Fischer, F. E. (1968). An index of an individual's agreement with group determined item difficulties. Educational and Psychological Measurement, 28, 105-113.

Embretson, S. E., & Reise, S. P. (2000). Item response theory for psychologists. Mahwah, NJ: Lawrence Erlbaum Associates.

Floden, R. E. (2002). The measurement of opportunity to learn. In A. C. Porter & A. Gamoran (Eds.), Methodological advances in cross-national surveys of educational achievement (pp. 231-267). Washington, DC: National Research Council.

Guiton, G., & Oakes, J. (1995). Opportunity to learn and conceptions of educational equality. Educational Evaluation and Policy Analysis, 17(3), 323-336.

Haladyna, T. M., & Roid, G. (1981). The role of instructional sensitivity in the empirical review of criterion-referenced test items. Journal of Educational Measurement, 18(1), 39-53.

Hambleton, R. K., & Swaminathan, H. (1985). Item response theory: Principles and applications. Boston: Kluwer-Nijhoff Publishing.

Hanna, G. S., & Bennett, J. A. (1984). Instructional sensitivity expanded. Educational and Psychological Measurement, 44, 583-596.

Hanson, R. A., McMorris, R. F., & Bailey, J. D. (1986). Differences in instructional sensitivity between item formats and between achievement test items. Journal of Educational Measurement, 23(1), 1-12.

Harnisch, D. L. (1983). Item response patterns: Applications for educational practice. Journal of Educational Measurement, 20(2), 191-206.

Harnisch, D. L., & Linn, R. L. (1981). Analysis of item response patterns: Questionable test data and dissimilar curriculum practices. Journal of Educational Measurement, 18(3), 133-146.

Herman, J., & Ing, M. (2007). Test preparation. In K. M. Borman, S. E. Cahill, & B. A. Cotner (Eds.), The Praeger handbook of American high schools (pp. 416-420). Westport, CT: Praeger.

Page 18: Using Instructional Sensitivity and Instructional ...Using Instructional Sensitivity and Instructional Opportunities to Interpret Students’ Mathematics Performance arsha Ing Stanford

Journal of Educational Research & Policy Studies

40Herman, J. L., Klein, D. C., & Abedi, J. (2000). Assessment students’ oppor-

tunity to learn: Teacher and student perspectives. Educational �easure�ent: Issues and Practice, 19(4), 16-�4.

Holland, P. W., & Wainer, H. (Eds.). (1993). Differential ite� functioning: �heory and �ractice. Hillsdale, NJ: Lawrence Erlbaum Associates.

Jacobs, V. R., Franke, M. L., Carpenter, �. P., Levi, L., & Battey, D. (2007). Professional development focused on children’s algebraic reasoning in el-ementary schools. Journal �or Research in Mathe�atics Education, 38(3), 258-288.

Kane, M. �., & Brennan, R. L. (1980). Agreement coefficients as indices of dependability for domain-referenced tests. A��lied Psychological �easure-�ent, 4, 105-126.

Koretz, D. (1996). Using student assessments for educational accountability. In E. A. Hanushek, & D. W. Jorgenson (Eds.), I��roving A�erica’s schools: �he role of incentives (pp. 171-195). Washington, DC: National Academy Press.

Linn, R. L. (1983). Testing and instruction: Links and distinctions. Journal of Educational Measurement, 20(2), 179-189.

Lord, F. M. (1980). Applications of item response theory to practical testing problems. Mahwah, NJ: Erlbaum.

Mehrens, W. A., & Ebel, R. L. (1979). Some comments on criterion-referenced and norm-referenced achievement tests. NCME, Measurement in Education, 10(1), 1-8.

Mehrens, W. A., & Phillips, S. E. (1986). Detecting impacts of curricular differences in achievement test data. Journal of Educational Measurement, 23(3), 185-196.

Mehrens, W. A., & Phillips, S. E. (1987). Sensitivity of item difficulties to curricular validity. Journal of Educational Measurement, 24(4), 357-370.

Messick, S. (1982). Issues of effectiveness and equity in the coaching controversy: Implications for educational and testing practice. Educational Psychologist, 17(2), 67-91.

Miller, M. D., & Linn, R. L. (1988). Invariance of item characteristic functions with variations in instructional coverage. Journal of Educational Measurement, 25(3), 205-219.

Mislevy, R. J., Wilson, M., Ercikan, K., & Chudowsky, N. (2003). Psychometric principles in student assessment. In T. Kellaghan & D. L. Stufflebeam (Eds.), International handbook of educational evaluation (pp. 489-532). Dordrecht, The Netherlands: Kluwer Academic Press.

Moss, P. A., Pullin, D., Gee, J. P., & Haertel, E. H. (2005). The idea of testing: Psychometric and sociocultural perspectives. Measurement, 3(2), 63-83.

Muthén, B. O. (1989a). Using item-specific instructional information in achievement modeling. Psychometrika, 54(3), 385-396.

Muthén, B. O. (1989b). Latent variable modeling in heterogeneous populations. Psychometrika, 54(4), 557-586.

Muthén, B. O. (1994). Instructionally sensitive psychometrics: Applications to the Second International Mathematics Study. In I. Westbury, C. A. Ethington, L. A. Sosniak, & D. P. Baker (Eds.), In search of more effective mathematics instruction (pp. 293-324). Norwood, NJ: Ablex Publishing Corporation.

Muthén, B. O., Huang, L. C., Khoo, S. K., Goff, G. H., Novak, J. R., & Shin, J. C. (1995). Opportunity-to-learn effects on achievement: Analytical aspects. Educational Evaluation and Policy Analysis, 17(3), 371-403.

Muthén, B. O., Kao, C. F., & Burstein, L. (1991). Instructionally sensitive psychometrics: Application of a new IRT-based detection technique to mathematics achievement test items. Journal of Educational Measurement, 28, 1-22.

National Research Council. (2001a). Classroom assessment and the National Science Education Standards. Committee on Classroom Assessment and the National Science Education Standards. J. M. Atkin, P. Black, & J. Coffey (Eds.). Center for Education, Division of Behavioral and Social Sciences and Education. Washington, DC: National Academy Press.

National Research Council. (2001b). Knowing what students know: The science and design of educational assessment. Committee on the Foundations of Assessment. J. Pellegrino, N. Chudowsky, & R. Glaser (Eds.). Washington, DC: National Academy Press.

Phillips, S. E., & Mehrens, W. A. (1987). Curricular differences and unidimensionality of achievement test data: An exploratory analysis. Journal of Educational Measurement, 24(1), 1-16.

Popham, W. J. (1971). Criterion-referenced measurement. Englewood Cliffs, NJ: Educational Technology Publications.

Popham, W. J. (2006). Determining the instructional sensitivity of accountability tests. Paper presented at the annual Large-Scale Assessment Conference, Council of Chief State School Officers, San Francisco, CA.

Porter, A. C. (2002). Measuring the content of instruction: Uses in research and practice. Educational Researcher, 31(7), 3-14.

Raudenbush, S. W. (1988). Educational applications of hierarchical linear models: A review. Journal of Educational Statistics, 13(2), 85-116.

Raudenbush, S. W., & Bryk, A. S. (2002). Hierarchical linear models: Applications and data analysis methods (2nd ed.). Thousand Oaks, CA: Sage Publications.

Raudenbush, S. W., Martinez, A., & Spybrook, J. (2007). Strategies for im-proving precision in group-randomized experiments. Educational Evaluation and Policy Analysis, 29, 5-29.

Rothman, R. (2004). Benchmarking and alignment of state standards and assessments. In S. H. Fuhrman & R. F. Elmore (Eds.), Redesigning accountability systems for education (pp. 96-114). New York: Teachers College Press.

Ruiz-Primo, M. A., Shavelson, R. J., Hamilton, L. S., & Klein, S. (2002). On the evaluation of systemic science education reform: Searching for instructional sensitivity. Journal of Research in Science Teaching, 39(5), 369-393.

Sato, T. (1975). The construction and interpretation of S-P tables. Tokyo, Japan: Meiji Tosho.

Seltzer, M. H. (2004). The use of hierarchical models in analyzing data from experiments and quasi-experiments conducted in field settings. In D. Kaplan (Ed.), The handbook of statistical methods for the social sciences (pp. 259-280). Thousand Oaks, CA: Sage Publications.

Shepard, L. (2004). Curricular coherence in assessment design. In M. Wilson (Ed.), Toward coherence between classroom assessment and accountability (pp. 239-249). Chicago: National Society for the Study of Education.

Tatsuoka, K., & Tatsuoka, M. M. (1980). Detection of aberrant response patterns and their effects on dimensionality (Research Report 80-4). Urbana, IL: University of Illinois, Computer-based Education Research Laboratory.

Van der Linden, W. J., & Hambleton, R. K. (Eds.). (1997). Handbook of modern item response theory. New York: Springer.

Vaughn, E. (2005, October 20). California students are still struggling: Reading and math test scores for fourth- and eighth-graders rank near the bottom in the nation. Los Angeles Times, p. A20.

Wang, J. (1998). Opportunity to learn: The impacts and policy implications. Educational Evaluation and Policy Analysis, 20(3), 137-156.

Webb, N. L. (1997). Criteria for alignment of expectations and assessments in mathematics and science education. Madison, WI: National Institute for Science Education and Council of Chief State School Officers.

Webb, N. L. (2002). An analysis of the alignment between mathematics standards and assessments for three states. Paper presented at the meeting of the American Educational Research Association, New Orleans, LA.

Wiley, D. E., & Yoon, B. (1995). Teacher reports on opportunity to learn: Analyses of the 1993 California Learning Assessment System (CLAS). Educational Evaluation and Policy Analysis, 17(3), 355-370.

Yoon, B., & Resnick, L. B. (1998). Instructional validity, opportunity to learn and equity: New Standards examination of the California Mathematics Renaissance (CSE Technical Report 484). Los Angeles: University of California, National Center for Research on Evaluation, Standards and Student Testing.