Educational Research, Chapter 5: Selecting Measuring Instruments (Gay and Airasian)
Dec 20, 2015
Collecting Data
The collection of data is an extremely important part of all research endeavors, for the conclusions of a study are based on what the data reveal.
As a result, the kind(s) of data to be collected, the method(s) of collection to be used, and the scoring of the data need to be considered with care.
“Data” – In this chapter:
Define data
Present several types of instruments that can be used to collect data in a research study
Describe the different properties that scores are assumed to possess
Objectives: By the end of this chapter you should be able to:
1) Explain what is meant by the term “data”
2) Explain what is meant by the term “instrumentation”
3) Name three ways in which data can be collected by researchers
4) Explain what is meant by the term “data-collection instrument”
5) Describe five types of researcher-completed instruments used in educational research
6) Describe five types of subject-completed instruments used in educational research
Objectives (continued):
Explain what is meant by the term “unobtrusive measures” and give two examples of such measures
Name four types of measurement scales and give an example of each
Name three different types of scores used in educational research and give an example of each
Objectives (continued):
Describe briefly the difference between norm-referenced and criterion-referenced instruments
Describe how to score, tabulate, and code data for analysis
Flow of Activities in Collecting Data
Identify the variable
Operationally define the variable
Locate data (measures, observations, documents with questions and scales)
Collect data on instruments yielding numeric scores
Flow of Activities Example
Variable: self-efficacy for learning from others
Operational definition: level of confidence that an individual can learn something by being taught by others
Instrument: 13 items on a self-efficacy attitudinal scale from Bergin (1989)
Scores: each item ranged from 0 to 10, with 10 being “completely confident.”
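The last step of the flow (collecting data on instruments yielding numeric scores) can be sketched in code. This is a minimal tabulation sketch: the item responses below are invented, since the actual Bergin (1989) items and data are not shown here.

```python
# Tabulating one respondent's total on a 13-item self-efficacy scale.
# Item values below are invented for illustration only.
responses = [7, 8, 6, 9, 10, 5, 7, 8, 6, 9, 7, 8, 10]

assert len(responses) == 13
assert all(0 <= r <= 10 for r in responses)   # each item is scored 0-10

total = sum(responses)              # possible range: 0 to 130
mean_item = total / len(responses)  # average confidence per item
print(f"total = {total}, mean item score = {mean_item:.2f}")
```

The total (or the mean item score) is the numeric value that would then be carried into the analysis.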
Data Collection
Scientific and disciplined inquiry requires the collection, analysis, and interpretation of data.
Data – the pieces of information that are collected to examine the research topic.
Issues related to the collection of this information are the focus of this chapter.
Data Collection
Terminology related to data:
Constructs – abstractions that cannot be observed directly but are helpful when trying to explain behavior
Examples: intelligence, teacher effectiveness, self-esteem
Identify Data Options: Specify Variables
Independent variables
Dependent variables
Intervening variables
Control, moderating, and confounding variables
Identify Data Options: Operationalize Variables
Operational definition: the specification of how the variable will be defined and measured; typically based on the literature and often found in reports under “definition of terms.” Sometimes the researcher must construct it.
Some Times When Operational Definitions Would Be Helpful
Figure 2.2
Which of the Following Definitions Are Operational?
Page 34
1. As shown by enthusiasm in class
2. As judged by the student’s math teacher using a rating scale she developed
3. As measured by the “Math Interest” questionnaire
4. As shown by attention to math tasks in class
5. As reflected by achievement in mathematics
6. As indicated by records showing enrollment in mathematics electives
7. As shown by effort expended in class
8. As demonstrated by number of optional assignments completed
9. As demonstrated by reading math books outside class
10. As observed by teacher aides using the “Mathematics Interest” observation record
Data Collection – Data terminology (continued)
Operational definition – the ways by which constructs are observed and measured
Examples: Wechsler IQ test, Virgilio Teacher Effectiveness Inventory, Tennessee Self-Concept Scale
Variable – a construct that has been operationalized and has two or more values
WHAT ARE DATA?
The term "data" refers to the kinds of information researchers obtain on the subjects of their research.
The term "instrumentation" refers to the entire process of collecting data in a research investigation.
KEY QUESTIONS
An important consideration in the choice of an instrument to be used in a research investigation is validity:
the extent to which results from it permit researchers to draw warranted conclusions about the characteristics of the individuals studied.
CONDITIONS
It involves not only the selection or design of the instruments but also the conditions under which the instruments will be administered.
1. Where? (location)
2. When? (time)
3. How often? (frequency)
4. Who? (administration of the instruments)
How you answer these questions may affect the data obtained!
Good Instruments?
The data provided by any instrument may be affected by any or all of the preceding considerations:
If administered incorrectly or disliked
Noisy or inhospitable conditions
Subjects who are exhausted
Every instrument, if it is of any value, must allow researchers to draw accurate conclusions about the capabilities or other characteristics of the people being studied.
VALIDITY, RELIABILITY, AND OBJECTIVITY
1) Validity: the extent to which results from an instrument permit researchers to draw warranted conclusions about the characteristics of the individuals studied.
Reliability and Objectivity
2) A reliable instrument is one that gives consistent results.
3) Whenever possible, researchers try to eliminate subjectivity from the judgments they make about the achievement, performance, or characteristics of subjects. That is, the researchers try to be objective.
USABILITY
Is it easy to use? How long will it take to administer? Are directions clear? Is it appropriate for the ethnic or other groups to whom it will be administered? How easy is it to score? To interpret the scores?
Practical Questions: How much does it cost? Do equivalent forms exist? Have any problems been reported? Does evidence of its reliability and validity exist?
Save time, energy and headaches!!!
Who Provides the Information?
Research instruments can be classified in many ways. Some of the more common are in terms of who provides the data, the method of data collection, who collects the data, and what kind of response they require from the subjects.
Data Obtained
Research data are data obtained by directly or indirectly assessing the subjects of the study.
Self-report data are data provided by the subjects of the study themselves.
Informant data are data provided by other people about the subjects of a study.
Researcher Instruments
Many types of researcher-completed instruments exist.
Some of the more commonly used are rating scales, interview schedules, tally sheets, flowcharts, performance checklists, anecdotal records, and time-and-motion logs.
Subject Instruments
The types of items or questions used in subject-completed instruments can take many forms, but they all can be classified as either selection or supply items.
Subject Instruments
There are also many types of instruments that are completed by the subjects of a study rather than the researcher. Some of the more commonly used of this type are questionnaires; self-checklists; attitude scales; personality inventories; achievement, aptitude, and performance tests; projective devices; and sociometric devices.
Subject Instruments (cont.)
Examples of selection items include true-false items, multiple-choice items, matching items, and interpretive exercises. Examples of supply items include short-answer items and essay
questions.
Where Did the Instruments Come From?
1) Find and administer a previously existing instrument of some sort, or
2) administer an instrument the researcher personally developed or had developed by someone else.
An excellent source for locating already available tests is the ERIC Clearinghouse on Assessment and Evaluation.
Data Collection Measurement scales
Nominal – categories (gender, ethnicity, etc.)
Ordinal – ordered categories (rank in class, order of finish, etc.)
Interval – equal intervals (test scores, attitude scores, etc.)
Ratio – absolute zero (time, height, weight, etc.)
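The scale type constrains which summary statistics are meaningful: only counts and the mode for nominal data, order and the median for ordinal data, differences and means for interval data, and ratios only when there is a true zero. A small sketch (all example data invented, not from the chapter):

```python
# Which summaries are meaningful depends on the measurement scale.
from statistics import mean, median, mode

gender = ["F", "M", "F", "F", "M"]        # nominal: only counts and mode make sense
finish_order = [1, 2, 3, 4, 5]            # ordinal: order and median make sense
test_scores = [72, 85, 90, 65, 88]        # interval: differences and means make sense
heights_cm = [150.0, 162.5, 175.0]        # ratio: true zero, so ratios make sense

print(mode(gender))                   # F
print(median(finish_order))           # 3
print(mean(test_scores))              # 80
print(heights_cm[2] / heights_cm[0])  # "one sixth taller" is meaningful only here
```

Computing a mean of nominal codes (e.g., averaging gender coded 1/2) would run without error but would be statistically meaningless, which is why identifying the scale comes before choosing the analysis.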
Four Types of Measurement Scales
Figure 7.25
SCALE      EXAMPLE
Nominal    Gender
Ordinal    Position in race
Interval   Temperature (in Fahrenheit)
Ratio      Money
An Ordinal Scale: The Winner of a Horse Race
Figure 7.27
Data Collection – Types of variables
Categorical or continuous:
Categorical variables reflect nominal scales
Continuous variables reflect ordinal, interval, or ratio scales
Independent or dependent:
Independent variables are the purported causes
Dependent variables are the purported effects
Measurement Instruments – Types of instruments (continued)
Affective (continued): scales used for responding to items on affective tests: Likert, semantic differential, Thurstone, Guttman, rating scales
Examples of Items from a Likert Scale Measuring Attitude toward Teacher Empowerment
Figure 7.14
Instructions: Circle the choice after each statement that indicates your opinion.
1. All professors of education should be required to spend at least six months teaching at the elementary or secondary level every five years.
Strongly agree (5)   Agree (4)   Undecided (3)   Disagree (2)   Strongly disagree (1)
2. Teachers’ unions should be abolished.
Strongly agree (1)   Agree (2)   Undecided (3)   Disagree (4)   Strongly disagree (5)
3. All school administrators should be required by law to teach at least one class in a public school classroom every year.
Strongly agree (5)   Agree (4)   Undecided (3)   Disagree (2)   Strongly disagree (1)
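Scoring these items requires reverse-coding item 2, whose point values run opposite to items 1 and 3 because agreeing with it reflects a negative attitude toward teacher empowerment. A scoring sketch (the function name and the sample responses are mine, for illustration):

```python
# Scoring the three Likert items above. "Strongly agree" earns 5 points on
# items 1 and 3 but only 1 point on reverse-scored item 2.
CHOICES = ["Strongly agree", "Agree", "Undecided", "Disagree", "Strongly disagree"]
POSITIVE = dict(zip(CHOICES, [5, 4, 3, 2, 1]))  # items 1 and 3
REVERSED = dict(zip(CHOICES, [1, 2, 3, 4, 5]))  # item 2

def attitude_score(answers):
    """answers: one choice per item, in item order (1, 2, 3)."""
    keys = [POSITIVE, REVERSED, POSITIVE]
    return sum(key[a] for key, a in zip(keys, answers))

print(attitude_score(["Strongly agree", "Strongly disagree", "Agree"]))  # 5 + 5 + 4 = 14
```

After reverse-coding, a higher total consistently means a more favorable attitude, so item scores can be summed.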
Example of the Semantic Differential
Figure 7.15
Instructions: Listed below are several pairs of adjectives. Place a checkmark (✓) on the line between each pair to indicate how you feel. Example: Hockey:
exciting :_____:_____:_____:_____:_____:_____:_____:_____: dull
If you feel that hockey is very exciting, you would place a check in the first space next to the word “exciting.” If you feel that hockey is very dull, you would place a checkmark in the space nearest the word “dull.” If you are sort of undecided, you would place a checkmark in the middle space between the two words. Now rate each of the activities that follow [only one is listed]:
Working with other students in small groups
friendly :_____:_____:_____:_____:_____:_____:_____:_____: unfriendly
happy :_____:_____:_____:_____:_____:_____:_____:_____: sad
easy :_____:_____:_____:_____:_____:_____:_____:_____: hard
fun :_____:_____:_____:_____:_____:_____:_____:_____: work
hot :_____:_____:_____:_____:_____:_____:_____:_____: cold
good :_____:_____:_____:_____:_____:_____:_____:_____: bad
laugh :_____:_____:_____:_____:_____:_____:_____:_____: cry
beautiful :_____:_____:_____:_____:_____:_____:_____:_____: ugly
Measurement Instruments – Issues for cognitive, aptitude, or affective tests
Bias – distortions of a respondent’s performance or responses based on ethnicity, race, gender, language, etc.
Responses to affective test items: socially acceptable responses, accuracy of responses, response sets
Problems inherent in the use of self-report measures and the use of projective tests
Criterion-Referenced vs. Norm-Referenced Evaluation Instruments
Page 158
Criterion-referenced: A student . . .
• spelled every word in the weekly spelling list correctly.
• solved at least 75 percent of the assigned problems.
• achieved a score of at least 80 out of 100 on the final exam.
• did at least 25 push-ups within a five-minute period.
• read a minimum of one nonfiction book a week.
Norm-referenced: A student . . .
• scored at the 50th percentile in his group.
• scored above 90 percent of all the students in the class.
• received a higher grade point average in English literature than any other student in the school.
• ran faster than all but one other student on the team.
• and one other in the class were the only ones to receive A’s on the midterm.
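The contrast above can be sketched in code: a criterion-referenced interpretation compares a score to a fixed standard, while a norm-referenced interpretation compares it to the performance of the group. The cutoff, function names, and comparison group below are invented for illustration:

```python
# The same raw score interpreted two ways.

def meets_criterion(score, cutoff=80):
    """Criterion-referenced: did the student reach a fixed standard?"""
    return score >= cutoff

def percentile_rank(score, group):
    """Norm-referenced: percentage of the group scoring below this score."""
    below = sum(1 for s in group if s < score)
    return 100 * below / len(group)

group = [55, 62, 70, 74, 78, 81, 85, 88, 92, 95]
print(meets_criterion(78))         # False: short of the 80-point criterion
print(percentile_rank(78, group))  # 40.0: yet above 40 percent of the group
```

The same score of 78 "fails" under the criterion-referenced reading but looks middling under the norm-referenced one, which is exactly why the two interpretations must not be conflated.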
Selection of a Test – Designing your own tests
Get help from others with experience developing tests
Item-writing guidelines:
Avoid ambiguous and confusing wording and sentence structure
Use appropriate vocabulary
Write items that have only one correct answer
Give information about the nature of the desired answer
Do not provide clues to the correct answer
See “Writing Multiple Choice Items”
Selection of a Test – Test administration guidelines
Plan ahead
Be certain that there is consistency across testing sessions
Be familiar with any and all procedures necessary to administer a test
Identify Data Options: Select Scales of Measurement
Nominal (categorical): categories that describe traits or characteristics participants can check
Ordinal: participants rank-order a characteristic, trait, or attribute
Identify Data Options: Select Scales of Measurement (continued)
Interval: provides “continuous” response possibilities to questions with assumed equal distances
Ratio: a scale with a true zero and equal distances among units
Record and Administer Data Collection: Locate or Develop an Instrument
Develop your own instrument Locate an existing instrument Modify an existing instrument
Record and Administer Data Collection: Obtain Reliable and Valid Data
Validity: the ability to draw meaningful and justifiable inferences from the scores about a sample or a population
Types of validity:
Content (representative of all possible questions that could be asked)
Criterion-referenced (scores are a predictor of an outcome or criterion they are expected to predict)
Construct (determination of the significance, meaning, purpose, and use of the scores)
Record and Administer Data Collection: Develop Administration Procedures for Data Collection
Develop standard written procedures for administering an instrument
Train researchers to collect observational data
Obtain permission to collect and use public documents
Respect individuals and sites during data gathering
Illustration of Types of Evidence of Validity
Figure 8.1
Reliability and Validity
Figure 8.2
Methods of Checking Validity and Reliability
Table 8.2, page 180
VALIDITY (“TRUTHFULNESS”)
Method                       Procedure
Content-related evidence     Expert judgment
Criterion-related evidence   Relate to another measure of the same variable
Construct-related evidence   Assess evidence on predictions made from theory
RELIABILITY (“CONSISTENCY”)
Method                    Content     Time Interval   Procedure
Test-retest               Identical   Varies          Give identical instrument twice
Equivalent forms          Different   None            Give two forms of instrument
Equivalent forms/retest   Different   Varies          Give two forms of instrument, with time interval between
Internal consistency      Different   None            Divide instrument into halves and score each, or use KR
Observer agreement        Identical   None            Compare scores obtained by two or more observers
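The internal-consistency row (divide the instrument into halves and score each) can be sketched as an odd-even split whose half scores are correlated and then stepped up with the Spearman-Brown correction. The helper names and the item data below are mine, invented for illustration:

```python
# Split-half reliability: odd-even item split, Pearson correlation between
# half scores, Spearman-Brown correction to full test length.

def pearson_r(x, y):
    """Pearson correlation between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def split_half_reliability(item_matrix):
    """item_matrix: one row of item scores per examinee (odd-even split)."""
    odd = [sum(row[0::2]) for row in item_matrix]
    even = [sum(row[1::2]) for row in item_matrix]
    r = pearson_r(odd, even)
    return 2 * r / (1 + r)  # Spearman-Brown: estimate for the full-length test

# Five examinees answering a four-item quiz (1 = correct, 0 = incorrect)
scores = [[1, 1, 0, 0], [1, 1, 1, 0], [1, 1, 1, 1], [0, 0, 0, 0], [1, 0, 1, 0]]
print(round(split_half_reliability(scores), 3))  # 0.637
```

The correction is needed because the half-test correlation understates the reliability of the full-length instrument; Kuder-Richardson (KR) formulas generalize this idea across all possible splits.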
More About Research: Threats to Internal Validity in Everyday Life
Box 9A, page 199
Consider the following commonly held beliefs:
• Because “failure” often precedes “suicide,” it is therefore the cause of “suicide.” (probable history and mortality threats)
• Boys are genetically more talented in mathematics than are girls. (probable subject attitude and location threats)
• Girls are genetically more talented in language than are boys. (probable location and subject attitude threats)
• Minority students are less academically able than students from the dominant culture. (probable subject characteristics, subject attitude, location, and instrumentation threats)
• People on welfare are lazy. (probable subject characteristics, location, and history threats)
• Schooling makes students rebellious. (probable maturation and history threats)
• A policy of temporarily expelling students who don’t “behave” improves a school’s test scores. (probable mortality threat)
• Indoctrination changes attitude. (probable testing threat)
• So-called miracle drugs cure intellectual retardation. (probable regression threat)
• Smoking marijuana leads eventually to using cocaine and heroin. (probable mortality threat)
Illustration of Threats to Internal Validity
Figure 9.2
Note: We are not implying that any of these statements are necessarily true; our guess is that some are and some are not.
*This seems unlikely.
†If these teacher characteristics are a result of the type of school, then they do not constitute a threat.
General Techniques for Controlling Threats to Internal Validity
Table 9.1, page 202
Techniques: Standardize Conditions / Obtain More Information on Subjects / Obtain More Information on Details / Choose Appropriate Design

Threat                    Applicable techniques (marked X in the table)
Subject characteristics   X X
Mortality                 X X
Location                  X X X
Instrumentation           X X
Testing                   X
History                   X X
Maturation                X X
Subject attitude          X X X
Regression                X X
Implementer               X X X
Technical Issues
Validity (continued)
Consequential – to what extent are the consequences that occur from the test harmful?
Estimated by empirical and expert judgment
Factors affecting validity:
Unclear test directions
Confusing and ambiguous test items
Vocabulary that is too difficult for test takers
Technical Issues – Validity (continued)
Factors affecting validity (continued):
Overly difficult and complex sentence structure
Inconsistent and subjective scoring
Untaught items
Failure to follow standardized administration procedures
Cheating by the participants or someone teaching to the test items
Technical Issues – Validity: the extent to which interpretations made from a test score are appropriate
Characteristics:
The most important technical characteristic
Situation specific
Does not refer to the instrument but to the interpretations of scores on the instrument
Best thought of in terms of degree
Technical Issues – Validity (continued)
Four types:
Content – to what extent does the test measure what it is supposed to measure?
Item validity
Sampling validity
Determined by expert judgment