Dec 13, 2015




Slide 1: Objectives
To enable you to recognize:
* Forms of bias
* Threats to validity, internal and external
* Sampling errors

Slide 2: The Scientific Method

Slide 3: Research**
* A systematic process to answer questions and generate knowledge
* Formal standards and conditions guide the procedure
* Can be applied to many situations rather than one situation
* Seeks to confirm existing knowledge or discover new knowledge
* Reproducible

Slide 4: The scientific method has three steps***
* Observation and description of a phenomenon or group of phenomena
* Formulation of a hypothesis to explain the phenomena
* Experimental tests of the hypothesis's predictions, properly performed by several independent experimenters

Slide 5: Scientific Method - Rules of Testing**
* Operational definition
* Generality
* Controlled observation
* Confirmation
* Consistency

Slide 6: Operational Definition**
* Descriptive statements made in a research study should be carefully defined
* These relate to either observations or measurements made
* Forces the researcher to define concepts so they can be tested and retested

Slide 7: Generality**
* Findings must be able to explain more than the specific items/subjects being studied

Slide 8: Controlled Observation**
* A change in variable A produces a change in variable B
* Only if all variables other than A can be discounted can you show A as the cause
* Controls are established to account for other factors that may produce the change

Slide 9: Cause & Effect Relationships**
* Koch's postulates: developed in the 19th century by Robert Koch
* Conditions that needed to be fulfilled before a microorganism could be considered the cause of a disease

Slide 10: Koch's Postulates**
* If the cause is present, the effect is present: the cause is sufficient to produce the effect
* If the cause is absent, the effect is absent: the cause is necessary to produce the effect
* Useful in studying conditions with a single cause
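Read as logic, the two postulates say the cause must be both sufficient (cause present implies effect present) and necessary (cause absent implies effect absent). A minimal sketch in Python, assuming observations are recorded as (cause_present, effect_present) pairs; the function name and data shape are illustrative, not from the slides:

```python
def satisfies_postulates(observations):
    """Check Koch's two conditions over (cause_present, effect_present) pairs.

    Sufficient: every observation with the cause shows the effect.
    Necessary:  every observation without the cause lacks the effect.
    """
    sufficient = all(effect for cause, effect in observations if cause)
    necessary = all(not effect for cause, effect in observations if not cause)
    return sufficient and necessary

# Toy data: effect tracks the cause exactly -> both conditions hold
satisfies_postulates([(True, True), (False, False)])   # True
# Effect appears without the cause -> cause is not necessary
satisfies_postulates([(True, True), (False, True)])    # False
```

As the next slide notes, this strict all-or-nothing form only fits conditions with a single cause; a contributory cause would fail one of the two checks.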
* Contributory cause: a less rigid standard, useful for multiple causes or chronic conditions

Slide 11: Repeated Observation***
* A single observation does not guarantee something is true, so repeated observations are necessary

Slide 12: Confirmation**
* Once an explanatory statement is made, other events can also be explained
* One unsuccessful prediction and the statement is disproved

Slide 13: Consistency**
* If two explanatory statements are contradictory, then one must be false

Slide 14
"The great tragedy of science, the slaying of a beautiful hypothesis by an ugly fact." (Thomas Henry Huxley)

Slide 15: Purpose of Experimental Research Design**
* To help the researcher answer the research question
* To control for possible rival hypotheses or extraneous variables that might compete with the independent variable as an explanation for the cause-effect relationship

Slide 16: Goal of Experimental Research
* All experimental research should attempt to control ALL the threats to internal validity
* Research should try to control for as many threats to external validity as possible
* The best way to ensure the validity of an experiment is for the researcher (or another researcher) to replicate it

Slide 17: Hypotheses***
* Research hypothesis: a general prediction of results
* Null hypothesis: a difference does not exist between experimental groups
* Alternative hypothesis: a difference does exist between experimental groups
* Rival hypothesis: another explanation for the outcome of the study

Slide 18: Errors in Testing the Null Hypothesis***
* Type I error: rejecting H0 when it is true (based on statistical tests), i.e., claiming an effect exists when it does not
* Type II error: accepting H0 when it is not true (based on statistical tests), i.e., failing to detect an effect that exists

Slide 19: Subjects/Sampling
The sampling process includes two steps:
* Choosing the subjects to be included in the study
* Determining which subjects receive treatment

Slide 20: Sample**
* Must define the POPULATION.
The more a definition is limited, the less applicability to the general population.
* A SAMPLE is a portion of the population investigated in order to draw conclusions about the entire population
* Sampling is used because it is not practical to study the entire population

Slide 21: Sampling Bias**
* Occurs when the two (or three) study groups differ in one or more variables that would affect the outcome of the study
* Choosing a sample that is not reflective of the target population of the treatment

Slide 22: Reporting of age data in clinical trials of arthritis: deficiencies and solutions (Arch Intern Med 1993;153:243-8)***
* Review of 73 studies covering 9,664 patients; 2.1% of the sample was over the age of 65
* 62% of NSAIDs are consumed by the population over 65
* Older people are more likely to have adverse reactions; to keep reported side effects minimal, older people were excluded

Slide 23: Types of Samples**
* Convenience
* Random: each member of the population has an equal opportunity to be included in the study

Slide 24: Types of Random Samples***
* Simple random sampling
* Stratified random sampling
* Cluster sampling

Slide 25: Simple Random Sampling***
* The purest form of sampling, but not necessarily the best
* After defining the population, each individual is randomly assigned to a group

Slide 26: Stratified Random Sampling***
* Used when certain characteristics of a population exhibit established proportions

Slide 27: Cluster Sampling**
* Gives everyone within a population an equal chance of being chosen for the study
* But subject to sampling error at each stage of clustering

Slide 28: Random Sampling Methods*
* Table of random numbers
* Computer-generated numbers
* Draw straws, marbles, etc.
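As a sketch of how the "computer-generated numbers" method can implement the simple and stratified schemes above, using only Python's standard library; the population data and function names here are invented for illustration:

```python
import random

def simple_random_sample(population, n, seed=None):
    """Draw n members so that every member has an equal chance of inclusion."""
    rng = random.Random(seed)
    return rng.sample(population, n)

def stratified_random_sample(population, stratum_of, n, seed=None):
    """Sample each stratum in proportion to its share of the population."""
    rng = random.Random(seed)
    strata = {}
    for member in population:
        strata.setdefault(stratum_of(member), []).append(member)
    sample = []
    for members in strata.values():
        k = round(n * len(members) / len(population))
        sample.extend(rng.sample(members, k))
    return sample

# Hypothetical population: 70 subjects under 65 and 30 aged 65 and over,
# echoing the age-representation problem from Slide 22.
people = ([("under65", i) for i in range(70)]
          + [("over65", i) for i in range(30)])
subset = stratified_random_sample(people, stratum_of=lambda p: p[0],
                                  n=10, seed=1)
# The stratified draw yields 7 under-65 and 3 over-65 subjects,
# preserving the 70/30 population proportions.
```

A plain `simple_random_sample(people, 10)` would usually land near the 70/30 split but can drift from it by chance; the stratified version fixes the proportions by design, which is exactly the point of Slide 26.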
Slide 29: Sample Size***
* The larger the sample, the easier it is to measure small but significant effects
* If the effects of the study are expected to be large, a smaller sample size can be used
* If using parametric tests, you need at least 30 subjects in each subgroup

Slide 30: Bias*
* Webster's Dictionary: a one-sided inclination of the mind
* In research: the systematic disposition of certain trial designs to produce results consistently better or worse than other trial designs

Slide 31: Areas Where Bias Can Be Introduced**
* Selection bias
* Reporting quality
* Blinding
* Duplication
* Geography
* Size of sample
* Statistics
* Language
* Publication

Slide 32: Importance of Randomization***
Bandolier, "Does TENS Work?", Mar 1997;37-3:
* In randomized studies, TENS was found to be effective in 2 and ineffective in 15
* In inadequately randomized or non-randomized studies, TENS was found to be effective in 17 and ineffective in 2
* Non-randomized or poorly randomized trials increase the apparent effect by 30 to 41% (JAMA 1995, 273:408-12)

Slide 33: Importance of Blinding***
* 17% increase in effect (JAMA 1995, 273:408-12)
* Completely different results in blind and non-blind studies (Arch Int Med 1998, 158:2235-2241)

Slide 34: Importance of Quality Reporting***
* Overall quality includes randomizing, blinding, dropout rate, threats to validity, etc.
* Increases efficacy 25% (Arch Int Med 1996, 156:661-6 and Lancet 1998, 352:609-13)

Slide 35: Importance of Duplication (Covert)**
* Results of some trials are reported more than once
* Effect on meta-analysis: increases efficacy 20% (BMJ 1997, 315:635-40)

Slide 36: Importance of Geography***
* Of particular importance to alternative therapies
* Acupuncture is almost universally positive when conducted in Asia, but positive only 50% of the time when conducted in Western countries
* Therapies other than acupuncture are overwhelmingly positive when conducted in China, Taiwan, Japan, or Russia, much more so than in other parts of the world
* Control Clin Trials 1998, 19:159-166

Slide 37: Importance of Sample Size***
* Small trials may overestimate treatment effects by 30% (BMJ 1998, 316:33-8 and Pain 1998, 78:217-220)
* Some researchers feel trials with fewer than 10 subjects should be ignored

Slide 38: Importance of Statistics**
Statistical mistakes:
* Data presented as statistically significant when it is not
* Fishing (data trawling), where a single statistical significance is obtained and a paper is written around it
* Power of words: even when there is no statistical significance, wording can make the test sound as if it was successful; especially apparent in abstracts
* Data manipulation

Slide 39: Importance of Language and Publication Bias***
* Search strategies often limit themselves to the English language; positive findings are more likely to appear in English-language journals and negative findings in non-English-language journals (Lancet 1997, 350:326-29)
* There is a greater likelihood for positive trials than negative trials to be published

TEST
Slide 40: Pick a number: 1 2 3 4
Slide 41: All wild and wicked party people choose 3

Slide 42: Threats to Validity
* Statistical conclusion threats
* Threats to construct validity
* Internal
* External

Slide 43: Statistical Conclusion Threats***
* Inadequate power of the statistical test used (parametric vs. nonparametric)
* Fishing
* Use of unreliable measures
* Unreliable implementation of treatment

Slide 44: Threats to Construct Validity***
* One must clearly define the independent and dependent variables
* Without clear definitions, the study cannot be generalized

Slide 45: Internal vs. External Validity***
* Internal validity refers to the causal relationship: the effect between the independent and dependent variables
* External validity refers to how representative the subjects in the study were, and whether one can generalize the findings to other populations, settings, treatments, etc.

Slide 46: Threats to Internal Validity**
* Maturation
* Testing
* Instrumentation
* Mortality
* Selection
* Compensatory rivalry

Slide 47: Threats to Internal Validity (1)**
* Maturation: not only do events (history) around the subject change during the course of a research study, but the subje