Item Analysis in Language Testing Brown, CHAPTER 4 Recommended for Testing Course Offered By Dr. Sarkeshikian

Brown, chapter 4 By Savaedi

Jan 19, 2017

Transcript
Page 1: Brown, chapter 4 By Savaedi

Item Analysis in Language Testing: Brown, CHAPTER 4

Recommended for Testing Course Offered By Dr. Sarkeshikian

Page 2

In the Name of ALLAH

Page 3

By

Fatima Savaedi

Page 4

IF (item facility) is used to examine the percentage of students who correctly answer a given item.

To calculate IF, add up the number of students who correctly answered a particular item and divide that sum by the total number of students who took the test. This value is the percentage of correct answers for a given item.
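
The calculation above can be sketched in Python; the function name and the sample data are hypothetical, assuming each item response is scored 0 (wrong) or 1 (correct).

```python
# Item facility (IF): the proportion of students who answered an item correctly.

def item_facility(responses):
    """Return the proportion of correct (1) responses for one item."""
    return sum(responses) / len(responses)

# Hypothetical data: ten students, seven of whom answered this item correctly.
responses = [1, 1, 0, 1, 1, 1, 0, 1, 0, 1]
print(item_facility(responses))  # 0.7
```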

Page 5

ID (item discrimination) indicates the degree to which an item separates the students who performed well from those who did poorly on the test as a whole.

Page 6

ID Steps

1. Line up the students' names, their individual item responses, and total scores in descending order based on the total scores.

2. Divide the students into three groups to determine the upper and lower groups of scores.

3. Separately calculate the IF for the lower and upper group, then subtract the IF for the lower group from the IF for the upper group on each item.

This gives you an index of the contrasting performance of those students who scored "high" on the whole test with those who scored "low."

Those items that have a high ID are performing most like the total test scores and will probably be the best items for testing those abilities for NRT purposes.
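
The steps above can be sketched in Python; the student data and function name are hypothetical, assuming (total score, 0/1 item response) pairs and a split into thirds by total score.

```python
# Item discrimination (ID): IF of the upper group minus IF of the lower group,
# where the groups are the top and bottom thirds ranked by total test score.

def item_discrimination(students):
    """students: (total_score, item_response) pairs with 0/1 item responses."""
    ranked = sorted(students, key=lambda s: s[0], reverse=True)
    third = len(ranked) // 3
    upper = [item for _, item in ranked[:third]]   # top third by total score
    lower = [item for _, item in ranked[-third:]]  # bottom third by total score
    return sum(upper) / len(upper) - sum(lower) / len(lower)

# Hypothetical data: high scorers mostly got this item right; low scorers mostly missed it.
students = [(95, 1), (90, 1), (88, 1), (75, 1), (70, 0),
            (65, 1), (50, 0), (45, 0), (40, 1)]
print(round(item_discrimination(students), 2))  # 0.67
```

A high value like this suggests the item separates high and low scorers much as the total test does.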

Page 7

Basic Steps in Developing NRTs

(a) Pilot a relatively large number of test items on a group of students similar to the group that will ultimately be assessed with the test;

(b) Analyze the items using format analysis and statistical techniques;

(c) Select the best items to make up a shorter, more effective revised version of the test.

Page 8

Difference between NRTs and CRTs

NRTs are constructed to produce normal distributions, while CRTs do not necessarily do so. The item selection process for developing NRTs is designed to retain items that are well-centered (with IFs of .30 to .70) and spread students out efficiently (with IDs as high as you can get).

In contrast, CRTs are designed to measure student achievement, so the DI and/or B-index item analysis statistics are used instead of ID.
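
The NRT selection rule can be sketched in Python. The item names and pilot values are hypothetical, and the minimum ID cutoff (`min_id`) is an assumption for illustration: the source only says IDs should be as high as you can get.

```python
# Sketch of the NRT item-selection rule: retain items that are well-centered
# (IF between .30 and .70) and that discriminate well (high ID).

def select_nrt_items(items, if_low=0.30, if_high=0.70, min_id=0.40):
    """items maps item name -> (IF, ID); return names passing both checks."""
    return [name for name, (if_val, id_val) in items.items()
            if if_low <= if_val <= if_high and id_val >= min_id]

# Hypothetical pilot results.
items = {
    "item1": (0.90, 0.10),  # too easy and weakly discriminating: dropped
    "item2": (0.55, 0.65),  # well-centered, strong discriminator: kept
    "item3": (0.35, 0.50),  # kept
    "item4": (0.50, 0.15),  # centered but weakly discriminating: dropped
}
print(select_nrt_items(items))  # ['item2', 'item3']
```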

Page 9

Item Quality Analysis

• Item quality analysis determines the degree to which each item is measuring the content that it was designed to measure and the degree to which that content should be measured at all.

• From a teacher's perspective, content congruence may be more important. Teachers would be more interested in content applicability.

Page 10

Difference Index

DI indicates the degree to which an item is reflecting gain in knowledge or skill. The IF for the pre-test results (or non-masters) is subtracted from the IF for post-test results (or masters) to calculate the difference index.

Page 11

The difference index usage

• The difference index uses the intervention pre-test/post-test strategy and subtracts the pre-test results from the post-test results. The B-index uses differential group strategies and avoids the problem of two administrations of the CRT by comparing the IFs of those students who passed a test with the IFs of those who failed it.

Page 12

Calculating DI

• DI = IF post-test – IF pre-test. Indicates the percentage of increase or decrease in knowledge of a concept or skill after instruction.

• B-index = IF pass – IF fail. Indicates the degree to which students who passed the test outperformed the students who failed the test on each item.
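
Both formulas can be sketched in Python; the function names and response lists are hypothetical, assuming 0/1 scoring per item.

```python
# DI = IF(post-test) - IF(pre-test); B-index = IF(passers) - IF(failers).

def item_facility(responses):
    """Proportion of correct (1) responses for one item."""
    return sum(responses) / len(responses)

def difference_index(pretest, posttest):
    """Gain on one item between two administrations of the test."""
    return item_facility(posttest) - item_facility(pretest)

def b_index(passers, failers):
    """Contrast on one item between students who passed and who failed."""
    return item_facility(passers) - item_facility(failers)

# Hypothetical responses to one item.
pretest = [0, 0, 1, 0, 0]    # 20% correct before instruction
posttest = [1, 1, 1, 0, 1]   # 80% correct after instruction
print(round(difference_index(pretest, posttest), 2))  # 0.6

passers = [1, 1, 1, 0]  # responses of students who passed the CRT
failers = [0, 1, 0, 0]  # responses of students who failed
print(b_index(passers, failers))  # 0.5
```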

Page 13

In Selecting CRT Items: Calculating difference indexes (comparing pre-test and post-test results) would provide additional information about how sensitive each item was to instruction. Calculating B-indexes (for the post-test results) would help teachers understand how effective each item was for deciding who passed the test and who failed.

Page 14

Differences Between CRT And NRT Strategies

NRT item statistics such as item facility and item discrimination are used to determine which items were too easy or too difficult to demonstrate a spread of scores. Criterion-referenced item analysis techniques include the difference index and the B-index, which help determine which subsets of CRT items are most closely related to the instruction and learning in a course, or to the distinction between students who passed or failed the test.

Page 15

The END