Source: nunez/COGS14B_W17/W7.pdf
Analysis of Variance (ANOVA)
! Treatment effect
! The existence of at least one difference between the population means categorized by the independent variable
! Random error
! The combined effects of all uncontrolled factors (on the scores of subjects)
Analysis of Variance (ANOVA)
! The more the variability between groups exceeds the variability within groups, the less likely the null hypothesis is to be true
! F ratio:
F = Variability between groups / Variability within groups
Analysis of Variance (ANOVA)
! F test:
! It is based on the notion that if the null hypothesis really is true, the numerator and the denominator of the F ratio will tend to be similar
! But if the null hypothesis really is false, the numerator will tend to be larger than the denominator
F = Variability between groups / Variability within groups
Analysis of Variance (ANOVA)
! If the H0 really is true:
F = random error / random error
! If the H0 really is false:
F = (treatment effect + random error) / random error
Analysis of Variance (ANOVA)
! This whole ANOVA business looks pretty complicated and tedious …
! If we want to analyze more than two population means, couldn’t we simply perform several t tests comparing pairs of population means?
Analysis of Variance (ANOVA)
! The answer is NO, because that would increase the Type I error rate:
1 – (1 – α)^c
• α = level of significance of each separate test
• c = number of independent t tests
! Therefore, we have to use a test that deals with more than two population means while keeping the Type I error rate low: ANOVA
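The inflation formula above is easy to verify numerically. A minimal sketch (the function name is ours, not from the slides): with five groups, comparing every pair of means takes 5·4/2 = 10 t tests.

```python
# Familywise Type I error rate across c independent tests,
# each run at significance level alpha:
#   P(at least one Type I error) = 1 - (1 - alpha)^c

def familywise_error(alpha: float, c: int) -> float:
    """Probability of at least one Type I error in c independent tests."""
    return 1 - (1 - alpha) ** c

# 5 groups -> 10 pairwise t tests at alpha = .05:
print(round(familywise_error(0.05, 10), 3))  # 0.401
```

So the chance of at least one false rejection climbs from 5% for a single test to roughly 40% across the ten pairwise tests, which is why a single omnibus ANOVA is preferred.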
2.- Between-groups design
! Assumptions:
! The observations are random and independent samples from the populations
! The distributions of the populations from which the samples are taken are normal
! The variances of the distributions in the populations are equal (homoscedasticity)
Between-groups design ! Example
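The F ratio for a between-groups design can be computed directly from the sums of squares. A minimal sketch using hypothetical data (the slides' own numerical example is not reproduced here, and the helper name is ours):

```python
# One-way between-groups ANOVA F ratio, stdlib only.
# F = (SS_between / df_between) / (SS_within / df_within)

def one_way_anova(groups):
    """Return (F, df_between, df_within) for a list of samples."""
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n_total
    # Between-groups SS: each group mean's squared deviation from the
    # grand mean, weighted by group size.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    # Within-groups SS: squared deviations of scores from their own
    # group mean (the random-error term).
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                    for g in groups)
    df_between, df_within = k - 1, n_total - k
    f = (ss_between / df_between) / (ss_within / df_within)
    return f, df_between, df_within

groups = [[2, 3, 4], [5, 6, 7], [8, 9, 10]]  # hypothetical scores
f, dfb, dfw = one_way_anova(groups)
print(round(f, 2), dfb, dfw)  # 27.0 2 6
```

Here the group means (3, 6, 9) are far apart relative to the spread inside each group, so the between-groups variability dwarfs the within-groups variability and F is large.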
3.- Effect size
! A most straightforward estimate:
! Proportion of explained variance, η²
! η² = SSbetween / SStotal
! Proportion of variance in the dependent variable that can be explained by or attributed to the independent variable
Cohen’s guidelines for η²:
η²    Effect
.01   Small
.09   Medium
.25   Large
In the case of the previous numerical example:
η² = 150.51 / 237.94 = 0.63
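Checking that arithmetic against Cohen's cutoffs, using only the sums of squares quoted on the slide:

```python
# Effect size eta-squared = SS_between / SS_total,
# with the sums of squares given for the numerical example.
ss_between = 150.51
ss_total = 237.94
eta_sq = ss_between / ss_total
print(round(eta_sq, 2))  # 0.63
```

Since 0.63 far exceeds the .25 cutoff, this counts as a large effect by Cohen's guidelines.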
4.- Multiple comparisons
! The possible comparisons whenever more than two populations are involved
! As we saw already, the t test is not appropriate because it increases the probability of a Type I error
! Tukey’s HSD Test
Tukey’s HSD Test
! HSD: ‘Honestly Significant Difference’ test
! Can be used to test all possible differences between pairs of means, and yet the cumulative probability of at least one Type I error never exceeds the specified level of significance
• HSD = q √(MSwithin / n)
• n: sample size in each group
• q: ‘Studentized Range Statistic’
Tukey’s HSD Test
! HSD: ‘Honestly Significant Difference’ test
• HSD = q √(MSwithin / n)
• n: sample size in each group
• q: ‘Studentized Range Statistic’ (Table G: α, k, dfwithin)
• For the previous numerical example, with α = .05, k = 5, dfwithin = 26:
HSD = 4.17 √(3.36 / 6.2) = 3.0698
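The HSD computation above is one line of arithmetic. A minimal sketch with the slide's numbers (q = 4.17 is read from a Studentized Range table at α = .05, k = 5, dfwithin = 26; the non-integer n = 6.2 on the slide suggests unequal group sizes, and the function name is ours):

```python
import math

# Tukey's HSD: any pair of means differing by more than this
# threshold is declared significantly different.
#   HSD = q * sqrt(MS_within / n)

def tukey_hsd(q: float, ms_within: float, n: float) -> float:
    """Critical difference between two group means."""
    return q * math.sqrt(ms_within / n)

# Values from the slide: q = 4.17, MS_within = 3.36, n = 6.2
print(round(tukey_hsd(4.17, 3.36, 6.2), 4))  # 3.0698
```

In practice q comes from software rather than a table; the point is that one threshold covers all pairwise comparisons while the familywise Type I error stays at α.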