Transcript
Slide 1
Slide 2
Significance and Meaningfulness: Effect Size & Statistical Power
Slide 3
1. Effect Size: How meaningful is a significant difference?
Slide 4
KNR 445 Statistics: Effect sizes
Significance vs. meaningfulness
As sample size increases, the likelihood of a significant difference increases. Because the sample size n sits in the denominator of the standard error inside the test statistic, as n → ∞, p → 0. So if your sample is big enough, it will generate significant results.
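The claim above can be checked numerically. The sketch below (not from the slides; the mean difference of 0.1 SD units and the sample sizes are illustrative) shows the two-tailed p-value of a simple two-sample z-test shrinking toward zero as n grows, even though the underlying difference is tiny and fixed.

```python
# Illustrative sketch: for a fixed, tiny mean difference, the p-value
# of a two-sample z-test falls toward 0 as the per-group n grows.
from math import sqrt
from statistics import NormalDist

def two_sample_p(mean_diff, sd, n):
    """Two-tailed p-value for a two-sample z-test, equal n and equal SD."""
    se = sd * sqrt(2 / n)                  # standard error of the difference
    z = mean_diff / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# A difference of 0.1 SD units, tested at ever-larger sample sizes:
for n in (10, 100, 1000, 10000):
    print(n, round(two_sample_p(0.1, 1.0, n), 4))
```

With these numbers the p-value only crosses .05 somewhere around n = 1000 per group: the difference never changed, only the sample size did.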
Slide 5
Significance vs. meaningfulness (continued)
As sample size increases, the likelihood of a significant difference increases. So a statistical difference does not always mean an important difference. What to do about this? Calculate a measure of the difference that is standardized, expressed in terms of the variability of the two samples but independent of sample size: the EFFECT SIZE.
Effect size from SPSS
Using Appendix B, data set 2, and submitting the DV salary to a test of difference across gender gives the following output (squashed here to fit): T-Test
Slide 8
Effect size from SPSS (T-Test output): note the two SDs to pool and the mean difference to use.
Slide 9
Effect size from SPSS
So: pool the two SDs, SD_pooled = sqrt( ((n1 − 1)·SD1² + (n2 − 1)·SD2²) / (n1 + n2 − 2) ), and divide the mean difference by the result: d = (M1 − M2) / SD_pooled ≈ .25.
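The pooling-then-dividing step can be written as a short function. The group summaries below are hypothetical stand-ins (the slides' SPSS salary-by-gender numbers are not reproduced in this transcript), chosen so the result matches the d of .25 discussed next.

```python
# Sketch of Cohen's d from group summary statistics: the mean
# difference divided by the pooled standard deviation.
from math import sqrt

def cohens_d(m1, s1, n1, m2, s2, n2):
    """Effect size d = (M1 - M2) / pooled SD."""
    pooled_sd = sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Hypothetical (mean, SD, n) summaries for two salary groups:
print(round(cohens_d(52000, 8000, 20, 50000, 8000, 20), 2))  # → 0.25
```

Note that n appears only in the weighting of the two SDs, not as a divisor of the difference itself, which is why d stays stable as samples grow while p does not.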
Effect Size
From Cohen (1988): d = .20 is small, d = .50 is moderate, d = .80 is large. So our effect size of .25 is small, and on this occasion concurs with the non-significant result. The finding is both non-significant and small (a pathetic, measly, piddling little difference of no consequence whatsoever; trivial and beneath us).
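Cohen's benchmarks are conventions, not laws, but they are easy to encode. A small helper (the "trivial" label for |d| below .20 is my addition, not Cohen's) might look like:

```python
# Rule-of-thumb labels for |d| per Cohen (1988); the cut-offs are
# conventions, and the "trivial" bucket below .20 is an assumption here.
def label_effect(d):
    d = abs(d)
    if d < 0.2:
        return "trivial"
    if d < 0.5:
        return "small"
    if d < 0.8:
        return "moderate"
    return "large"

print(label_effect(0.25))  # → small
```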
Slide 13
2. Statistical Power: Maximizing the likelihood of significance
Slide 14
Statistical Power
The likelihood of getting a significant result when you should (i.e., when there is a relationship in reality). Recall from the truth table: power = 1 − β.
Truth Table (reality is unknown):
- Null true, accept null: correct decision
- Null true, reject null: Type I error (α)
- Null false, accept null: Type II error (β)
- Null false, reject null: correct decision; Power = 1 − β
Slide 15
Factors Affecting Statistical Power
The big ones:
- Effect size (a bit obvious): select samples such that the difference between them is maximized. This combines the effects of the sample SDs (which need to decrease) and the mean difference (which needs to increase).
- Sample size (most commonly discussed): as n increases, the standard error of the mean (SE_M) decreases, and the test statistic then increases.
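The sample-size effect can be made concrete with a Monte Carlo sketch (the setup is assumed, not from the slides): fix a true effect of d = 0.5, simulate many experiments at each n, and estimate power as the proportion that reach significance. A normal approximation stands in for the full t-test to keep the code stdlib-only.

```python
# Monte Carlo estimate of power vs. per-group sample size, for a fixed
# true effect of d = 0.5. Uses a two-tailed z criterion at alpha = .05
# as an approximation to the t-test.
import random
from math import sqrt
from statistics import NormalDist, mean, stdev

random.seed(1)
CRIT = NormalDist().inv_cdf(0.975)   # two-tailed critical value, alpha = .05

def significant(n, d):
    """Simulate one two-group experiment; True if it reaches p < .05."""
    a = [random.gauss(0.0, 1.0) for _ in range(n)]
    b = [random.gauss(d, 1.0) for _ in range(n)]
    se = sqrt(stdev(a)**2 / n + stdev(b)**2 / n)
    return abs(mean(b) - mean(a)) / se > CRIT

for n in (10, 30, 60, 100):
    power = sum(significant(n, 0.5) for _ in range(2000)) / 2000
    print(n, round(power, 2))
```

The effect size never changes across the rows; only n does, and the estimated power climbs accordingly.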
Slide 16
Factors Affecting Statistical Power
The others:
- Level of significance: smaller α, less power; larger α, more power.
- 1-tailed vs. 2-tailed tests: with good a priori information (i.e., from the research literature), selecting a 1-tailed test increases power.
- Dependent samples: correlation between the samples reduces the standard error, and thus increases the test statistic.
Slide 17
Calculating sample size a priori
1. Specify the effect size.
2. Set the desired level of power.
3. Enter the values for effect size and power into the appropriate table, and generate the required sample size.
Applet for calculating sample size based on the above: http://www.stat.uiowa.edu/~rlenth/Power/
Applets for seeing power acting (and interacting) with sample size, effect size, etc.:
http://statman.stat.sc.edu/~west/applets/power.html
http://acad.cgu.edu/wise/power/powerapplet1.html
http://www.stat.sc.edu/%7Eogden/javahtml/power/power.html
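The three steps above can also be sketched in code using the standard normal-approximation shortcut n ≈ 2 · ((z_(1−α/2) + z_power) / d)² per group, rather than a table or applet. This approximation slightly undercounts relative to exact t-based calculations, so treat the result as a floor.

```python
# A-priori sample size per group via the normal-approximation formula:
# specify effect size d, alpha, and desired power, then solve for n.
from math import ceil
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    z = NormalDist().inv_cdf
    return ceil(2 * ((z(1 - alpha / 2) + z(power)) / d) ** 2)

# Detecting a moderate effect (d = .50) with 80% power at alpha = .05:
print(n_per_group(0.50))  # → 63 per group
```

For a small effect (d = .20) at the same alpha and power, the same formula asks for roughly 393 per group, which is exactly why underpowered studies of small effects so often come up non-significant.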