Research Methods

Content Areas:
• Researchable Questions
• Research Design
• Measurement Methods
• Sampling
• Data Collection
• Statistical Analysis
• Report Writing
Assessment of Observation (Measurement)
Observed Score = True Score + Error
Error component may be either:
• Random Error = variation due to unknown or uncontrolled factors
• Systematic Error = variation due to systematic but irrelevant elements of the design
• Concern of scientific research is management of the error component
• Number of criteria by which to evaluate success
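The decomposition above (Observed Score = True Score + Error) can be illustrated with a short simulation; all numbers here are hypothetical. Random error averages out across many observations, while systematic error biases every one of them:

```python
import random

random.seed(42)

TRUE_SCORE = 100.0       # the quantity we are trying to measure
SYSTEMATIC_ERROR = 5.0   # constant bias, e.g. a miscalibrated instrument

def observe():
    """One observation: Observed = True + Systematic + Random."""
    random_error = random.gauss(0, 3)  # unknown/uncontrolled factors
    return TRUE_SCORE + SYSTEMATIC_ERROR + random_error

scores = [observe() for _ in range(10_000)]
mean = sum(scores) / len(scores)

# Averaging many observations cancels the random component,
# but the mean remains biased by the systematic component (~5 points).
print(round(mean, 1))
```

This is why replication controls random error but not systematic error: only a better design (or calibration) removes the constant bias.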
1. Reliability
• Does the measure consistently reflect changes in what it purports to measure?
• Consistency or stability of data across:
• Time
• Circumstances
• Balance between consistency and sensitivity of measure
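One common way to quantify this consistency is test-retest reliability: administer the same measure twice and correlate the two sets of scores. A minimal sketch with hypothetical scores for five subjects:

```python
# Test-retest reliability as a Pearson correlation (hypothetical data).
def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

# Same five subjects measured twice, e.g. one week apart.
time1 = [12, 15, 9, 20, 17]
time2 = [13, 14, 10, 19, 18]

# A high r indicates a consistent (reliable) measure.
print(round(pearson_r(time1, time2), 2))
```

Note the trade-off mentioned above: a measure so stable that it never changes is also insensitive to real change in what it measures.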
2. Validity
• Does the measure actually represent what it purports to measure?
• Accuracy of the data (for what?)
• A number of different types:
A. Internal Validity
(Slide images: Semmelweis, Pasteur, Lister)
• Effects of an experiment are due solely to the experimental conditions
• Extent to which causal conclusions can be drawn
• Dependent upon experimental control
• Trade-off between high internal validity and generalizability of results
B. External Validity
• Can the results of an experiment be applied to other individuals or situations?
• Extent to which results can be generalized to broader populations or settings
• Dependent upon sampling subjects and occasions
• Trade-off between high generalizability and internal validity
C. Construct Validity
• Whether or not an abstract, hypothetical concept exists as postulated
• Examples of constructs:
• Intelligence
• Personality
• Conscience
Based on:
• Convergence = different measures that purport to measure the same construct should be highly correlated (similar) with one another
• Divergence = tests measuring one construct should not be highly correlated (similar) to tests purporting to measure other constructs
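Convergence and divergence can both be checked with simple correlations. A sketch using hypothetical scores: two intelligence tests should correlate highly with each other (convergence) but only weakly with a scale measuring a different construct (divergence):

```python
# Convergent/divergent validity check (all data hypothetical).
def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

iq_test_a = [98, 110, 105, 120, 92, 115]
iq_test_b = [101, 108, 107, 118, 95, 112]   # purports to measure the same construct
extraversion = [28, 16, 27, 29, 16, 16]     # purports to measure a different one

r_conv = pearson_r(iq_test_a, iq_test_b)    # should be high
r_div = pearson_r(iq_test_a, extraversion)  # should be low
print(f"convergent r = {r_conv:.2f}, divergent r = {r_div:.2f}")
```

This pattern of high convergent and low divergent correlations is the logic behind the multitrait-multimethod approach to construct validation.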
D. Statistical Conclusion Validity
• The extent to which a study has used appropriate design and statistical methods to enable it to detect the effects that are present
• The accuracy of conclusions about covariation made on the basis of statistical evidence
• Can have a reliable, but invalid, measure.
• If a measure is valid, then it is necessarily reliable.
3. Utility
• Usefulness of methods gauged in terms of:
A. Efficiency
B. Generality
A. Efficient Methods provide:
• Precise, reliable data with relatively low costs in:
• time
• materials
• equipment
• personnel
B. Generality
• Refers to the extent to which a method can be applied successfully to a wide range of phenomena
• a.k.a. Generalizability
Threats to Validity
• Numerous ways validity can be threatened
• Related to Design
• Related to Experimenter
Related to Design
1. Threats to Internal Validity (Cook & Campbell, 1979)
A. History = specific events occurring to individual subjects
B. Testing = repeated exposure to the testing instrument
C. Instrumentation = changes in the scoring procedure over time
D. Regression = reversion of scores toward the mean or toward less extreme scores
E. Mortality = differential attrition across groups
F. Maturation = developmental processes
G. Selection = differential composition of subjects among samples
H. Selection by Maturation interaction
I. Ambiguity about causal direction
J. Diffusion of Treatments = information spread between groups
K. Compensatory Equalization of Treatments = lack of treatment integrity
L. Compensatory Rivalry = “John Henry” effect among nonparticipants
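The regression threat in the list above is easy to see in simulation (hypothetical numbers): subjects selected for extreme pretest scores drift back toward the mean on retest even with no treatment at all, which can masquerade as a treatment effect:

```python
import random

random.seed(0)

# Each observed score = true score + random measurement error.
true_scores = [random.gauss(50, 10) for _ in range(10_000)]
pretest = [t + random.gauss(0, 8) for t in true_scores]
posttest = [t + random.gauss(0, 8) for t in true_scores]

# Select an 'extreme' group: the lowest 10% of pretest scores.
cutoff = sorted(pretest)[len(pretest) // 10]
selected = [(pre, post) for pre, post in zip(pretest, posttest) if pre <= cutoff]

mean_pre = sum(p for p, _ in selected) / len(selected)
mean_post = sum(q for _, q in selected) / len(selected)

# With no treatment, the extreme group's mean still rises toward 50,
# because part of their low pretest scores was just random error.
print(round(mean_pre, 1), round(mean_post, 1))
```

This is why studies that select subjects for extreme scores need a comparison group selected the same way.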
2. Threats to External Validity (LeCompte & Goetz, 1982)
A. Selection = results sample-specific
B. Setting = results context-specific
C. History = unique experiences of sample limit generalizability
D. Construct effects = constructs are sample-specific