NORMALITY TEST
Definition of Normality Test
A normality test checks whether the residual values are normally distributed. A good regression model has normally distributed residuals, so the normality test is performed not on each variable but on the residuals. A common error is to run the normality test on each variable separately. Doing so is not prohibited, but what the regression model requires is normality of the residuals, not of each study variable.
The notion of "normal" can be illustrated with a school class. In a typical class, only a few students perform very poorly or very well, and most fall into the moderate or average category. If every member of the class performs poorly, the class is not normal; it belongs to a special school. Conversely, if the high performers greatly outnumber the low performers, the class is also not normal; it is an excellent class. Observations of normal data show only a few extremely low and extremely high values, with most observations piled up in the middle. Likewise, the mode, mean, and median are relatively close to one another.
A normality test can be carried out with a histogram, a normal P-P plot, a chi-square test, skewness and kurtosis, or the Kolmogorov-Smirnov test. No single method is best or most appropriate. The drawback of graphical methods is that they often produce differences in perception among observers, so a statistical test of normality leaves less room for doubt, although there is no guarantee that a statistical test is better than a graphical one.
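As a concrete illustration, here is a minimal Python sketch using simulated placeholder data (the document's raw data are not reproduced) that fits a simple regression and then applies the Kolmogorov-Smirnov and Shapiro-Wilk tests to the residuals rather than to the raw variables:

```python
# Hedged sketch: test normality of regression RESIDUALS, not raw variables.
# The data here are simulated placeholders, not the document's data set.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(size=100)
y = 2.0 * x + rng.normal(size=100)        # hypothetical regression data

slope, intercept = np.polyfit(x, y, 1)    # simple linear regression fit
residuals = y - (slope * x + intercept)   # the values the normality test uses

# Kolmogorov-Smirnov on standardized residuals, and Shapiro-Wilk directly
ks_stat, ks_p = stats.kstest(stats.zscore(residuals), "norm")
sw_stat, sw_p = stats.shapiro(residuals)
print(f"K-S Sig. = {ks_p:.3f}, Shapiro-Wilk Sig. = {sw_p:.3f}")
# Sig. > 0.05 in both tests means normality of residuals is not rejected
```

Testing `x` or `y` themselves would be exactly the mistake the text warns against.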
If the residuals are not normal but the result is close to the critical value (for example, a Kolmogorov-Smirnov significance of 0.049), other methods can be tried that may provide justification for treating the data as normal. However, if the result is far from normal, several steps can be taken: transforming the data, trimming outlying observations, or adding data. The transformation can take the form of the natural logarithm, the square root, the inverse, or other forms, depending on the shape of the curve: skewed to the left or to the right, concentrated in the middle, or spread out on both sides.
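The transformation options named above can be sketched in Python; the lognormal sample below is a hypothetical stand-in for right-skewed data:

```python
# Hedged sketch of the transformations named above, on simulated skewed data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
data = rng.lognormal(mean=0.0, sigma=1.0, size=200)  # right-skewed, all positive

log_t = np.log(data)     # natural logarithm: for strong right skew
sqrt_t = np.sqrt(data)   # square root: for moderate right skew
inv_t = 1.0 / data       # inverse: for severe right skew

# The log transform pulls a lognormal sample back toward symmetry.
print("skewness before:", round(stats.skew(data), 3))
print("skewness after log:", round(stats.skew(log_t), 3))
```

Which form helps depends, as the text says, on the direction and severity of the skew; the square root and inverse candidates would be checked the same way.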
Testing homogeneity of variance, the hypothesis is

H0 : σ1² = σ2²

B = (log s²) Σ(ni − 1) = (1.67412)(27) = 45.201

χ² = (ln 10){B − Σ(ni − 1) log si²}

χ² = (ln 10){46.81874 − 45.201} = 1.6173

With α = 0.05, the chi-square distribution table with df = 1 gives

χ²(0.95; 1) = 3.841

Since 1.6173 < 3.841, the hypothesis is accepted at the 0.05 significance level, so it can be said that both data sets come from populations with homogeneous variance.
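The Bartlett calculation above can be checked with SciPy; this is a sketch on two simulated samples shaped to resemble the summary statistics (the original raw scores are not listed in the document, so the numbers will not match exactly):

```python
# Hedged sketch: Bartlett's test for homogeneity of variance (two groups).
# The samples are simulated stand-ins, not the document's real data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
female = rng.normal(loc=87.7, scale=7.3, size=15)
male = rng.normal(loc=86.8, scale=6.4, size=14)

chi2_stat, p_value = stats.bartlett(female, male)
critical = stats.chi2.ppf(0.95, df=1)   # df = k - 1 = 1 for two groups
print(f"chi-square = {chi2_stat:.4f}, critical value = {critical:.3f}")
# Accept H0 (equal variances) when the statistic is below the critical value
```

Note that `stats.chi2.ppf(0.95, df=1)` recovers the tabulated critical value 3.841 used in the comparison above.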
Steps for testing with SPSS 16
Results of testing with SPSS
Case Processing Summary

                          Cases
                  Valid           Missing         Total
Gender            N    Percent    N    Percent    N    Percent
Value   Female    15   100.0%     0    .0%        15   100.0%
        Male      14   100.0%     0    .0%        14   100.0%
Descriptives (Value by Gender)

Statistic                          Female           Male
Mean                               87.6667          86.7857
Std. Error of Mean                 1.88140          1.70706
95% CI for Mean, Lower Bound       83.6315          83.0978
95% CI for Mean, Upper Bound       91.7019          90.4736
5% Trimmed Mean                    88.2407          87.2619
Median                             90.0000          87.5000
Variance                           53.095           40.797
Std. Deviation                     7.28665          6.38723
Minimum                            70.00            70.00
Maximum                            95.00            95.00
Range                              25.00            25.00
Interquartile Range                10.00            5.00
Skewness (Std. Error)              -.955 (.580)     -1.307 (.597)
Kurtosis (Std. Error)              .872 (1.121)     2.865 (1.154)
Tests of Normality

                  Kolmogorov-Smirnov(a)        Shapiro-Wilk
Gender            Statistic   df   Sig.        Statistic   df   Sig.
Value   Female    .176        15   .200*       .872        15   .037
        Male      .247        14   .020        .859        14   .029
a. Lilliefors Significance Correction
*. This is a lower bound of the true significance.
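The per-group Shapiro-Wilk column of the table can be reproduced with SciPy; this is a hedged sketch with hypothetical score lists standing in for the unpublished raw data (SPSS's Kolmogorov-Smirnov column additionally applies the Lilliefors correction, which plain `scipy.stats.kstest` does not):

```python
# Hedged sketch of the Shapiro-Wilk test per gender group.
# These score lists are hypothetical placeholders, not the document's data.
from scipy import stats

female = [70, 80, 80, 85, 85, 90, 90, 90, 90, 90, 92, 93, 95, 95, 95]  # n = 15
male = [70, 80, 82, 85, 86, 87, 88, 88, 90, 90, 91, 92, 94, 95]        # n = 14

for label, scores in (("Female", female), ("Male", male)):
    w_stat, sig = stats.shapiro(scores)
    verdict = "normal" if sig > 0.05 else "not normal"
    print(f"{label}: W = {w_stat:.3f}, Sig. = {sig:.3f} -> {verdict} at alpha = 0.05")
```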
Test of Homogeneity of Variance

                                         Levene Statistic   df1   df2      Sig.
Value   Based on Mean                    .587               1     27       .450
        Based on Median                  .354               1     27       .557
        Based on Median and with
          adjusted df                    .354               1     26.414   .557
        Based on trimmed mean            .532               1     27       .472
Test criterion: accept H0 if Sig. > 0.05. In the SPSS output table, every Sig. value exceeds 0.05, so H0 is accepted; in other words, it can be concluded that the score populations for female and male students have the same variance.
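The Levene table can be reproduced with `scipy.stats.levene`, which supports the same three centering choices SPSS reports; the score lists below are hypothetical stand-ins for the raw data:

```python
# Hedged sketch of Levene's test with the three centerings shown in the table.
# The score lists are hypothetical placeholders, not the original data.
from scipy import stats

female = [70, 80, 80, 85, 85, 90, 90, 90, 90, 90, 92, 93, 95, 95, 95]
male = [70, 80, 82, 85, 86, 87, 88, 88, 90, 90, 91, 92, 94, 95]

for center in ("mean", "median", "trimmed"):
    stat, sig = stats.levene(female, male, center=center)
    print(f"Based on {center}: Levene statistic = {stat:.3f}, Sig. = {sig:.3f}")
# Accept H0 (equal variances) wherever Sig. > 0.05
```

The "Based on Median and with adjusted df" row of the SPSS output has no direct SciPy equivalent; `center="median"` corresponds to the unadjusted median-based row.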