Reliability & Validity

Jan 16, 2016

Transcript
Page 1: Reliability & Validity

Reliability & Validity

the Bada & Bing of YOUR tailored survey design

Page 2: Reliability & Validity

Pre-Presentation credit

• This presentation has been influenced
    not at all
    a little bit
    CONSIDERABLY
  by the work & wisdom of Dan Koretz. Thanks!

Page 3: Reliability & Validity

Core concept of validity

• You wish to measure a construct, but can never know the true score for sure (e.g., 6th grade math proficiency, self-esteem)

• You must draw an inference about the construct based on a sample or indicator of behavior--something you can actually "touch" (e.g., a 6th grade math test, a self-esteem survey)

• VALIDITY describes how well performance on your indicator justifies your inference about the construct

Page 4: Reliability & Validity

Error: Validity's arch-nemesis

• Sampling error: occurs from sampling units of observation (i.e., populations of humans)

• Measurement error (M.E.): occurs across instances of measurement from each unit (i.e., individual humans)

Page 5: Reliability & Validity

More on measurement error

• Outcome_i = True score + M.E.

• Measurement error can be…
  systematic (does not "wash out" across repeated measurements), or
  random (does "wash out" across repeated measurements)
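The "wash out" idea can be made concrete with a short simulation. This is a hedged sketch, not from the slides: the true score of 50, the bias of -8, and the noise level are invented purely for illustration.

```python
# Minimal sketch of Outcome_i = True score + M.E. (all numbers are illustrative).
import numpy as np

rng = np.random.default_rng(0)

true_score = 50.0     # the (unknowable) true score
n_repeats = 10_000    # hypothetical repeated administrations

# Random error: zero-mean noise that "washes out" across repetitions
random_error = rng.normal(loc=0.0, scale=5.0, size=n_repeats)
outcomes_random = true_score + random_error

# Systematic error: a constant bias (e.g., the survey is written in Latin)
# that does NOT wash out, no matter how many administrations
systematic_bias = -8.0
outcomes_systematic = true_score + systematic_bias + random_error

print(outcomes_random.mean())      # ~50: converges on the true score
print(outcomes_systematic.mean())  # ~42: stays biased away from the true score
```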

Page 6: Reliability & Validity

Systematic error

• Chris repeatedly takes a self esteem survey written in Latin

Outcome repeatedly affected by a factor not relevant to the construct being measured

"Construct-irrelevant variance"

**Note: the only Latin that Chris speaks involves inesway**

Page 7: Reliability & Validity

Systematic error cont’d

• "Construct underrepresentation"

If the construct is poorly represented, repeated measurements will not converge on the true score with regard to the entire construct

e.g., a global self-esteem survey asks only about Chris' confidence on the golf course

Page 8: Reliability & Validity

Random error

• Chris repeatedly takes self esteem survey

Sometimes mood = [good-mood image]

Sometimes mood = [bad-mood image]

• Over time, outcomes will converge on Chris’ true score

Page 9: Reliability & Validity

Some details

• Validity is an attribute of your inference, not of the instrument itself; an instrument may support more valid inferences for some uses and/or populations than others (e.g., the self-esteem survey in Latin).

• Validity is not an all-or-nothing phenomenon, but a matter of degree, and we must piece together evidence suggesting how valid our inferences may or may not be.

Page 10: Reliability & Validity

Types of validity/validity evidence
(note: other terms exist, but this will be the focus of S-015)

• Content Validity
• Convergent Evidence
• Discriminant Evidence
• Construct Validity

Page 11: Reliability & Validity

Types of validity/validity evidence

Construct Validity:

• How well does performance on our instrument justify inferences about the construct?

• “Validity”

• What we’re ultimately shooting for

Page 12: Reliability & Validity

Types of validity/validity evidence

Content-based evidence (a/k/a content validation study):

• Compare your instrument to your very thoroughly defined construct…

• Does the instrument adequately represent the construct?

• Harder than it seems (constructs can be messy)

Page 13: Reliability & Validity

Types of validity/validity evidence

Convergent-discriminant evidence:

• Measures of similar constructs should converge.

• Measures of less similar constructs should diverge.

(e.g., two math tests should correlate more strongly with each other than a math test does with a reading test)

Page 14: Reliability & Validity

Multitrait-Multimethod Matrix (MTMM)

• A fun way to display convergent-discriminant validity (or not)

Pass out handouts: Now

Page 15: Reliability & Validity

But alas, complications abound…

• What constitutes “similar”

• What constitutes “less similar”

• What constitutes “convergence”

• What constitutes “divergence”

????

Page 16: Reliability & Validity

Plausible toy correlations

         Math 1
Math 1    1.00
Math 2     .82
Read 1     .74
Read 2     .70
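To make the convergent-discriminant pattern concrete, here is a hedged sketch (not from the slides): it simulates scores whose target correlations echo the toy values above, then recovers the observed correlation matrix. All values are invented for illustration.

```python
# Sketch of a convergent-discriminant correlation pattern (toy numbers only).
import numpy as np

rng = np.random.default_rng(1)

measures = ["Math 1", "Math 2", "Read 1", "Read 2"]

# Target correlations: same-construct pairs (math-math, read-read) are higher
# than cross-construct pairs (math-read).
target = np.array([
    [1.00, 0.82, 0.74, 0.70],
    [0.82, 1.00, 0.72, 0.71],
    [0.74, 0.72, 1.00, 0.84],
    [0.70, 0.71, 0.84, 1.00],
])

# Simulate 500 test takers and compute the observed correlation matrix
scores = rng.multivariate_normal(mean=np.zeros(4), cov=target, size=500)
observed = np.corrcoef(scores, rowvar=False)

print(measures)
print(np.round(observed, 2))  # math-math and read-read entries should stand out
```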

Page 17: Reliability & Validity

Closing thoughts on validity

• We must piece together evidence that is often murky and incomplete to reach a judgment.

• An instrument that is fairly valid for one use, inference, or population may not be valid for others.

Page 18: Reliability & Validity

Oh, BTW…

• The more reliable your instrument, the better your chance of drawing fairly valid inferences.

(Old Faithful)

Page 19: Reliability & Validity

Core concept of reliability

• Reliability is consistency of results across repeated measurements

(e.g. assuming no interventions or natural attitudinal shifts in between, a subject taking a highly reliable survey would perform quite similarly each time s/he took it.)

Page 20: Reliability & Validity

Some details

• Reliability is also a matter of degree, often expressed as a coefficient ranging from 0 to 1.

• A test or survey may be more reliable for some populations than others

(e.g. surveys tend to be more reliable among older/more educated populations.)

Page 21: Reliability & Validity

POP QUIZ

• True or false…

1.) An instrument that allows us to draw a reasonably valid inference must be reasonably reliable.

2.) A reasonably reliable instrument must allow us to draw a reasonably valid inference.

Page 22: Reliability & Validity

POP QUIZ cont’d

Regarding R & V, how might one describe…

Page 23: Reliability & Validity

A few (of many) ways to assess reliability

• Assess internal consistency

Assuming a survey taps one and only one construct, the results from the first half should correlate highly with results from the second half; the odd items should correlate highly with the evens, etc. (split-half correlations)
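A minimal sketch of the odd-even split-half check, assuming simulated item-level data (respondent counts, item counts, and noise levels are all illustrative):

```python
# Split-half (odd vs. even items) check of internal consistency on toy data.
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical survey: 200 respondents, 10 items tapping a single construct
latent = rng.normal(size=(200, 1))
items = latent + rng.normal(scale=0.8, size=(200, 10))

odd_total = items[:, 0::2].sum(axis=1)   # items 1, 3, 5, ... (odd positions)
even_total = items[:, 1::2].sum(axis=1)  # items 2, 4, 6, ... (even positions)

split_half_r = np.corrcoef(odd_total, even_total)[0, 1]
print(round(split_half_r, 2))  # a high correlation suggests internal consistency
```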

Page 24: Reliability & Validity

Coefficient Alpha

• a/k/a Cronbach’s alpha

The average of all possible split-half correlations in a given sample

generally preferred to a single split-half correlation
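A hedged sketch of computing coefficient alpha directly from an item-score matrix (rows = respondents, columns = items), using the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of the total score); the data below are simulated for illustration.

```python
# Coefficient (Cronbach's) alpha from an item-score matrix (toy data).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score)"""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

rng = np.random.default_rng(3)
latent = rng.normal(size=(200, 1))                     # single underlying construct
items = latent + rng.normal(scale=0.8, size=(200, 8))  # 8 noisy items
print(round(cronbach_alpha(items), 2))                 # closer to 1 = more internally consistent
```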

Page 25: Reliability & Validity

A few (of many) ways to assess reliability

• Test-retest

Assuming no interventions or natural shifts in attitude, a reasonably reliable survey will yield similar results from the same person across repeated administrations
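A minimal sketch of a test-retest check under the slide's assumption of no intervening change; the simulated scores and noise levels are invented for illustration.

```python
# Test-retest reliability: correlate the same respondents across two administrations.
import numpy as np

rng = np.random.default_rng(4)

true_scores = rng.normal(loc=50, scale=10, size=150)   # stable underlying attitudes
time_1 = true_scores + rng.normal(scale=4, size=150)   # first administration
time_2 = true_scores + rng.normal(scale=4, size=150)   # repeat administration

test_retest_r = np.corrcoef(time_1, time_2)[0, 1]
print(round(test_retest_r, 2))  # a high r suggests consistent results over time
```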

Page 26: Reliability & Validity

WARNING

• A test or survey with a high reliability coefficient does not guarantee that your results will be highly reliable.

(e.g., differences in administrative conditions can affect the consistency of your results across repeated administrations.)

Page 27: Reliability & Validity

Questions???

Page 28: Reliability & Validity

References, etc

Linn, R. L., & Gronlund, N. E. (2000). Measurement and Assessment in Teaching (8th ed.). New Jersey: Prentice-Hall.

See diagram displayed on page 75 of the reference textbook.

A very cool link that covers a TON of stuff on all forms of social research…

http://www.socialresearchmethods.net/kb/index.php