Cut Points

Posted on 05-Jan-2016


ITE - 695

Section One: What are Cut Points?

I. Introduction

A. The more critical the issue (task), the more critical the cut point (example: programming a machine).
1. Interpretation of readouts.
2. Tolerances in measurement.

B. Assumption: the test has both of these:
1. Validity.
2. Reliability.

C. Select the instrument that best measures the action needed (performance vs. explanation).

Validity

Definition: The appropriateness, meaningfulness, and usefulness of the specific inferences made from test scores (Standards for Educational and Psychological Testing, 1985).

Types:
- Content - domain adequately represented.
- Construct - degree of ability in subject.
- Criterion-related - performance on different domains.

Reliability

Definition: The degree to which test scores are free from errors in measurement (S.E.P.T., 1985).

Types:
- Test-Retest - method of estimating reliability over a period of time.
- Internal Consistency - method of estimating reliability within the test instrument.
- Equivalent Forms - method of estimating reliability over different forms of the test instrument.
- Interrater Reliability - establishing consistency among different raters.
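As a concrete illustration, test-retest reliability is commonly estimated as the Pearson correlation between two administrations of the same test to the same group. A minimal sketch, using hypothetical scores (all numbers are made up for illustration):

```python
# Hypothetical scores for the same eight test-takers on two administrations.
first = [70, 82, 65, 90, 75, 88, 60, 78]
second = [72, 80, 68, 91, 74, 85, 63, 80]

n = len(first)
mean_x = sum(first) / n
mean_y = sum(second) / n

# Pearson correlation: covariance divided by the product of the
# standard deviations; values near 1.0 indicate stable scores over time.
cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(first, second)) / n
sd_x = (sum((x - mean_x) ** 2 for x in first) / n) ** 0.5
sd_y = (sum((y - mean_y) ** 2 for y in second) / n) ** 0.5
reliability = cov / (sd_x * sd_y)
```

The closer the coefficient is to 1.0, the more consistent the scores across administrations.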

II. Types

A. Norm-Referenced Testing (NRT)

1. Significance

- Accepted reliability & validity

2. Measurement

a. Common Averages:

- mode

- median

- mean

II. Types (cont.)

b. Variability:

- range

- quartile deviation

- standard deviation

3. Reliability

- Historical acceptance
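The common averages and variability measures listed above can be computed directly with Python's standard library; a small sketch with made-up NRT scores:

```python
import statistics

# Hypothetical raw scores for ten test-takers (illustrative only).
scores = [61, 70, 72, 78, 85, 85, 85, 88, 90, 94]

# Common averages (measures of central tendency).
mode = statistics.mode(scores)      # most frequent score
median = statistics.median(scores)  # middle score
mean = statistics.mean(scores)      # arithmetic average

# Variability (measures of scatter).
score_range = max(scores) - min(scores)
q1, _, q3 = statistics.quantiles(scores, n=4)
quartile_deviation = (q3 - q1) / 2   # half the interquartile range
std_dev = statistics.pstdev(scores)  # population standard deviation
```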

II. Types (cont.)

B. Criterion-Referenced Testing (CRT)

1. Significance

a. Testing

b. Distribution

2. Measurement

a. Judgements

b. Variables

II. Types (cont.)

3. Reliability

a. Criterion not based on normal distribution.

b. Data dichotomous, mastery/non-mastery.

NORM-REFERENCED TESTING

1. Test items separate test-takers from one another.

2. Seek a Normal Distribution Curve.

MEASURES OF CENTRAL TENDENCY
- MODE
- MEDIAN
- MEAN

MEASURES OF VARIABILITY (SCATTER)
- RANGE
- QUARTILE DEVIATION
- STANDARD DEVIATION

CRITERION-REFERENCED TESTING

1. Test items based on specific objectives.

2. Mastery Curve

Standard normal curve with standard deviations

SEE HANDOUT

CRITERION-REFERENCED TESTING

1. Test Compares to Objectives

2. Mastery Distribution

Norm-Referenced Testing vs. Criterion-Referenced Testing

                       Norm-Referenced Testing    Criterion-Referenced Testing
GOALS                  Test Achievement           Test Performance Mastery
RELIABILITY            Usually High               Usually Unknown
VALIDITY               Instruction Dependent      Usually High
ADMINISTRATION         Standard                   Variable
STANDARD               Averages-Based             Performance Levels Based
MOTIVATION             Avoidance of Failure       Likelihood of Success
COMPETITION            Student to Student         Student to Criterion
INSTRUCTIONAL DOMAIN   Low Level Cognitive        Cognitive or Psychomotor

Comparison Models

Model for NRT Construction:
INPUT (Instruction) -> PRODUCT (NRT Results)

Model for CRT Construction:
DESIGN TEST -> INPUT (Instruction) -> PRODUCT (CRT Results) -> MODIFY? (Test, Objectives, or Instruction) - if YES, loop back to DESIGN TEST; if NO, done.

Mastery curve

SEE HANDOUT

Frequency distributions with standard deviations of various sizes

SEE HANDOUT

Section Two: Establishing Cut Points

Three Primary Procedures for Establishing a Cut-Point:

1. Informed Judgment

2. Conjecture Method

3. Contrast Group Method

I. Informed Judgment

A. Significance: Separates mastery from non-mastery.

B. Procedure:
1. Analyze the consequences of misclassification (political, legal, or operational).
2. Gather previous test-taker data.
3. Ask other stakeholders.
4. Make the decision.

II. Conjecture Method

A. Significance: The "Angoff-Nedelsky Method"; the most useful approach.

B. Procedure:
1. Select three informed judges.
2. Each judge estimates the probability of a correct response.
3. The chosen cut-off is the average of the three judges' estimates.
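The conjecture procedure can be sketched numerically. In an Angoff-style rating, each judge estimates, item by item, the probability that a minimally competent test-taker answers correctly; a judge's implied cut score is the sum of those probabilities, and the final cut-point is the average across judges. A minimal sketch with hypothetical estimates:

```python
# Hypothetical probability estimates from three informed judges,
# one entry per test item (values are illustrative only).
judge_estimates = [
    [0.90, 0.70, 0.60, 0.80, 0.50],  # judge 1
    [0.80, 0.60, 0.70, 0.90, 0.40],  # judge 2
    [0.85, 0.65, 0.60, 0.85, 0.50],  # judge 3
]

# Each judge's implied cut score is the sum of their item probabilities.
judge_cuts = [sum(items) for items in judge_estimates]

# The chosen cut-off is the average of the three judges' cut scores.
cut_point = sum(judge_cuts) / len(judge_cuts)
```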

III. Contrast Group Method

A. Significance: The single strongest technique; should still use human judgment.

B. Procedure:
1. Select judges to identify mastery/non-mastery.
2. Select equal groups (15 minimum, 30 optimum).
3. Administer the mastery/non-mastery test to both groups.
4. Plot scores on a distribution chart.
5. Set the critical cut-off where the two distributions intersect.
6. Adjust the score to fall between the highest non-master and the lowest master score.
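Steps 5 and 6 can be sketched for the simple case where the two score distributions do not overlap, so the cut-point falls midway between the highest non-master and the lowest master. The scores below are hypothetical; a real application would plot both distributions and locate their intersection.

```python
# Hypothetical scores from judge-identified groups (step 2: 15 minimum each).
masters = [78, 82, 85, 88, 90, 91, 93, 76, 84, 87, 89, 92, 80, 86, 95]
non_masters = [55, 60, 62, 65, 68, 70, 72, 58, 63, 66, 69, 71, 74, 61, 67]

highest_non_master = max(non_masters)
lowest_master = min(masters)

# Step 6: place the cut-off between the highest non-master
# and the lowest master score.
cut_point = (highest_non_master + lowest_master) / 2
```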

Establishing A Criterion Cut-Point

Mastery Level (separates master from non-master):

1. Informed Judgment

2. Conjecture Approach

3. Contrast Groups


Contrast group method of cut-off score selection chart.

SEE HANDOUT

Section Three: Reliability

I. Types

A. Internal Consistency

1. Kuder-Richardson Method.

2. Computer Statistical Package.

3. Problem: Lack of variance.

4. Problem: Excludes items that measure unrelated objectives.

B. Test-Retest Score Consistency.
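The Kuder-Richardson method mentioned above (KR-20) estimates internal consistency for dichotomously scored items from the item difficulties and the variance of the total scores. A minimal sketch with a hypothetical response matrix (1 = correct, 0 = incorrect):

```python
# Hypothetical item responses: rows are test-takers, columns are items.
responses = [
    [1, 1, 1, 0, 1],
    [1, 0, 1, 0, 0],
    [1, 1, 0, 1, 1],
    [0, 1, 1, 0, 1],
    [1, 1, 1, 1, 1],
    [0, 0, 1, 0, 0],
]
k = len(responses[0])  # number of items
n = len(responses)     # number of test-takers

# Variance of the total scores (population variance).
totals = [sum(row) for row in responses]
mean_total = sum(totals) / n
var_total = sum((t - mean_total) ** 2 for t in totals) / n

# Sum of p*q over items, where p = proportion correct and q = 1 - p.
pq_sum = 0.0
for i in range(k):
    p = sum(row[i] for row in responses) / n
    pq_sum += p * (1 - p)

# KR-20: (k / (k - 1)) * (1 - sum(p*q) / total-score variance).
kr20 = (k / (k - 1)) * (1 - pq_sum / var_total)
```

Note the "lack of variance" problem from the outline: if all test-takers score alike, `var_total` approaches zero and the coefficient becomes unstable, which is exactly why internal-consistency estimates can be troublesome for mastery tests.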

Review

Types of Validity:
1. Content
2. Construct
3. Criterion-related

Methods of Establishing Cut-Points:
1. Informed Judgment
2. Conjecture Method
3. Contrast Group Method

Types of Reliability:
1. Test-Retest
2. Internal Consistency
3. Equivalent Forms
4. Interrater Reliability

Section Four: Review Questions

1. Validity cannot exist without reliability. (True or False)

2. Since CRT relies on judgment rather than a normal distribution for scoring, how is reliability assured?

3. If it becomes necessary for you to establish a cut-point for your training program, which of the three methods would you use, and why? (Informed judgment, Conjecture method, or Contrast group method)
