Transcript
Page 1: spss

Preliminary Analysis

-- Descriptive Statistics.
-- Checking the reliability of a scale.

Page 2: spss

Statistical Analysis Using SPSS 12.0

By Pallant

Page 3: spss

Descriptive Statistics

-- Categorical Variables
-- Continuous Variables
-- Assessing Normality
-- Checking for Outliers

Page 4: spss

Procedure for Categorical Variables

From the menu at the top of the screen click on: Analyze, then click on Descriptive Statistics, then Frequencies.

Choose and highlight the categorical variables you are interested in. Move these variables into the Variable(s) box.

Click on the Statistics button. In the Dispersion section tick Minimum and Maximum. Click on Continue and then OK.
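
These menu choices can also be run from a syntax window. A minimal sketch of the equivalent FREQUENCIES command follows; the variable names sex and marital are placeholders for your own categorical variables.

* Frequency tables for categorical variables, with minimum and maximum.
FREQUENCIES VARIABLES=sex marital
  /STATISTICS=MINIMUM MAXIMUM.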

Page 5: spss

Procedure for Continuous Variables

From the menu at the top of the screen click on: Analyze, then click on Descriptive Statistics, then Descriptives.

Click on all the continuous variables that you wish to obtain descriptive statistics for. Click on the arrow button to move them into the variables box.

Click on the Options button. Click on Mean, Standard deviation, Minimum, Maximum, Skewness, Kurtosis.

Click on Continue, and then OK.
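
The equivalent DESCRIPTIVES syntax, sketched here with hypothetical variable names age and income, would be:

* Descriptive statistics for continuous variables.
DESCRIPTIVES VARIABLES=age income
  /STATISTICS=MEAN STDDEV MIN MAX SKEWNESS KURTOSIS.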

Page 6: spss

Interpretation of Descriptive output

This information may be needed if these variables are to be used in parametric statistical techniques.

Positive skewness values indicate that scores are clustered to the left, at the low end. Negative skewness values indicate clustering of scores at the high end. Positive kurtosis values indicate that the distribution is rather peaked, with long thin tails. Kurtosis values below 0 indicate a distribution that is relatively flat.

Page 7: spss

Procedure for Assessing Normality

From the menu at the top of the screen click on: Analyze, then click on Descriptive Statistics, then Explore.

Click on the variable/s you are interested in. Click on the arrow button to move them into the Dependent List box.

Click on any independent or grouping variables that you wish to split your sample by. Click on the arrow button to move them into the Factor List box.

Page 8: spss

Procedure for Assessing Normality (Cont)

In the Display section make sure that Both is selected. This displays both the plots and statistics generated.

Click on the Plots button. Under Descriptive, click on Histogram. Click on Normality plots with tests.

Click on Continue. Click on the Options button. In the Missing Values section click on Exclude cases pairwise. Click on Continue and then OK.
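
For reference, the Explore dialog pastes an EXAMINE command along these lines; totscore and group are hypothetical names standing in for your dependent and grouping variables.

* Assess normality: histogram plus normal Q-Q plots and normality tests.
EXAMINE VARIABLES=totscore BY group
  /PLOT BOXPLOT HISTOGRAM NPPLOT
  /STATISTICS DESCRIPTIVES
  /MISSING PAIRWISE.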

Page 9: spss

Interpretation of output from Assessing Normality

By comparing the original mean with the new trimmed mean, you can see whether some of your more extreme scores are having a strong influence on the mean.

Skewness and kurtosis values are also provided as part of this output.

The Tests of Normality table gives the result of the Kolmogorov-Smirnov statistic. A non-significant result (Sig. value of more than .05) indicates normality.

Page 10: spss

Interpretation of output from Assessing Normality (Cont)

The Detrended Normal Q-Q plots displayed in the output are obtained by plotting the actual deviations of the scores from the straight line.

The final plot that is provided in the output is a box plot of the distribution of scores for two groups.

Page 11: spss

Procedure For Identifying Outliers

From the menu at the top of the screen click on: Analyze, then click on Descriptive Statistics, then Explore.

In the Display section make sure Both is selected. This provides both statistics and plots.

Click on your variable and move it into the Dependent List box.

Click on ID from your variable list and move it into the Label Cases by box. This will give you the ID number of any outlying case.

Page 12: spss

Procedure For Identifying Outliers (Cont)

Click on the Statistics button. Click on Outliers. Click on Continue.

Click on the Plots button. Click on Histogram. Click on the Options button. Click on Exclude cases pairwise. Click on Continue and then OK.
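
A syntax sketch of the same outlier check, assuming a hypothetical scale score totscore and a case-identifier variable id:

* Flag extreme cases, labelling them by ID in the boxplot and Extreme Values table.
EXAMINE VARIABLES=totscore
  /ID=id
  /PLOT BOXPLOT HISTOGRAM
  /STATISTICS DESCRIPTIVES EXTREME
  /MISSING PAIRWISE.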

Page 13: spss

Interpretation of output from Outliers

First, look at the histogram, paying particular attention to the tails of the distribution.

Second, inspect the boxplot, to check whether each outlier’s score is genuine and not just an error. The value you are interested in is the 5% Trimmed Mean.

Finally, inspect the Extreme Values table.

Page 14: spss

Checking the Reliability of a Scale

It is necessary to check that each of your scales is reliable with your particular sample.

Ideally, the Cronbach’s alpha coefficient of a scale should be above .7.

Page 15: spss

Procedure for checking Reliability

From the menu at the top of the screen click on: Analyze, then click on Scale, then Reliability Analysis.

Click on all of the individual items that make up the scale. Move these into the box marked Items.

In the Model section, make sure Alpha is selected. Click on the Statistics button. In the Descriptives for section, click on Item, Scale, and Scale if item deleted. Click on Continue and then OK.
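
The corresponding RELIABILITY syntax, sketched here for a hypothetical five-item scale (item1 to item5):

* Cronbach's alpha with item, scale and item-total statistics.
RELIABILITY
  /VARIABLES=item1 item2 item3 item4 item5
  /MODEL=ALPHA
  /STATISTICS=DESCRIPTIVE SCALE
  /SUMMARY=TOTAL.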

Page 16: spss

Interpretation of the output from reliability

In this example the alpha coefficient is .89. This value is above .7, so the scale can be considered reliable.

Low item-total correlations (less than .3) indicate that the item is measuring something different from the scale as a whole.

If your scale’s overall alpha is too low (less than .7) you may need to consider removing items with low item-total correlations.

Page 17: spss

Statistical Techniques

-- Correlation (Pearson)

-- Multiple regression

-- Factor analysis

-- T-tests

-- One-way analysis of variance

-- Two-way between-groups ANOVA

-- Multivariate analysis of variance (MANOVA)

-- Chi-square

-- Spearman’s Rank Order Correlation

-- Kruskal-Wallis Test

-- Friedman Test

-- Conjoint analysis

Page 18: spss

Correlation (Parametric test)

Pearson correlation is used when you want to explore the strength of the relationship between two continuous variables.

Page 19: spss

Summary for Correlation

Example: Is there a relationship between the amount of control people have over their internal states and their levels of perceived stress?

Do people with high levels of perceived control experience lower levels of perceived stress?

What you need: Two variables: both continuous, or one continuous and the other dichotomous (two values).

What it does: Correlation describes the relationship between two continuous variables, in terms of both the strength of the relationship and the direction.

Page 20: spss

Summary for Correlation (Cont)

Assumptions: Level of measurement, Related pairs, Independence of Observations, Normality, Linearity, Homoscedasticity.

Non-parametric alternative: Spearman’s Rank Order Correlation.

Page 21: spss

Procedure for generating a Scatter plot

From the menu at the top of the screen click on: Graphs, then click on Scatter.

Click on Simple. Click on the Define button. Click on the first variable and move it into the Y-axis box. Click on the second variable and move it into the X-axis box.

If you would like to add a title, click on Titles. Type in a title. Click on Continue and then OK.
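
A minimal GRAPH syntax sketch for the same scatterplot, using hypothetical variables control (X axis) and stress (Y axis):

* Simple bivariate scatterplot with a title.
GRAPH
  /SCATTERPLOT(BIVAR)=control WITH stress
  /TITLE='Perceived stress by perceived control'.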

Page 22: spss

Steps of output from scatter plot:

Step 1: Checking for outliers- Data points that are out on their own, either very high or very low, or away from the main cluster of points.

Step 2: Inspecting the distribution of data points- A widely scattered cloud of points suggests a very low correlation, while a tight, cigar-shaped cluster suggests a strong correlation.

Step 3: Determining the direction of the relationship between the variables- The scatter plot can tell you whether the relationship between your variables is positive or negative.

Page 23: spss

Steps of output from scatter plot (Cont)

An upward trend indicates a positive relationship, high scores on X associated with high scores on Y.

A downward line suggests a negative correlation, low scores on X associated with high scores on Y.

Page 24: spss

Procedure for Calculating Pearson Correlation

From the menu at the top of the screen click on: Analyze, then click on Correlate, then on Bivariate.

Select your two variables and move them into the box marked variables.

Check that the Pearson box and the Two-tailed option are selected.

Click on the Options button. Click on the Exclude cases pairwise box.

Click OK.
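
The Bivariate dialog pastes a CORRELATIONS command like the sketch below; control and stress are again placeholder names.

* Pearson correlation, two-tailed significance, pairwise missing-value handling.
CORRELATIONS
  /VARIABLES=control stress
  /PRINT=TWOTAIL NOSIG
  /MISSING=PAIRWISE.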

Page 25: spss

Steps of output from Correlation

Step 1: Checking the information about the sample- The first thing to look at in the table labeled Correlations is the N. If a case was missing information on either of these variables it would have been excluded from the analysis.

Step 2: Determining the direction of the relationship- Is there a negative sign in front of the r value? This means there is a negative correlation between two variables.

Page 26: spss

Steps of output from Correlation (Cont)

Step 3: Determining the strength of the relationship- The r value can range from -1.00 to 1.00. This value indicates the strength of the relationship between your two variables.

r = .10 to .29 (or -.10 to -.29): small

r = .30 to .49 (or -.30 to -.49): medium

r = .50 to 1.0 (or -.50 to -1.0): large

Step 4: Calculating the coefficient of determination- All you need to do is square your r value. A correlation of r=.5, for example, means 25% shared variance (.5 x .5 = .25).

Page 27: spss

Steps of output from Correlation (cont)

Step 5: Assessing the significance level: The significance of r is strongly influenced by the size of the sample. In a small sample (N=30), you may have moderate correlations that do not reach significance at the traditional p<.05 level.

Page 28: spss

Multiple regression (Parametric test)

Multiple regression is used to explore the relationship between one continuous dependent variable and a number of independent variables or predictors.

Page 29: spss

Summary for multiple regression

Example: 1) How well do the two measures of control predict perceived stress?

2) Which is the best predictor of perceived stress: control of external events, or control of internal states?

What you need: One continuous dependent variable and two or more continuous independent variables.

What it does: It tells you how much of the variance in your dependent variable can be explained by your independent variables.

Page 30: spss

Procedure for multiple regression

From the menu at the top of the screen click on: Analyze, then click on Regression, then Linear.

Click on your continuous dependent variable and move it into the Dependent box.

Click on your independent variables and move them into the Independent(s) box.

For Method, make sure Enter is selected. Click on the Statistics button.

Page 31: spss

Procedure for multiple regression (Cont)

Click on the Options button. In the Missing Values section click on Exclude cases pairwise.

Click on the Plots button. Click on the Save button. Click on OK.
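
A syntax sketch of this standard (Enter) regression, assuming a hypothetical dependent variable stress and two predictors control1 and control2:

* Standard multiple regression with all predictors entered in one block.
REGRESSION
  /MISSING PAIRWISE
  /STATISTICS COEFF OUTS R ANOVA
  /DEPENDENT stress
  /METHOD=ENTER control1 control2.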

Page 32: spss

Steps of output from multiple regression

Step 1: Checking the assumptions: Check that your independent variables show at least some relationship with your dependent variable (above .3 preferably)

Step 2: Evaluating the model: To assess the statistical significance of the result it is necessary to look in the table labelled ANOVA. This tests the null hypothesis that multiple R in the population equals 0.

Page 33: spss

Steps of output from multiple regression (Cont)

Step 3: Evaluating each of the independent variables: To compare the different variables it is important that you look at the standardised coefficients, not the unstandardised ones.

If the Sig. value is less than .05 (.01, .0001, etc.), then the variable is making a significant unique contribution to the prediction of the dependent variable. If it is greater than .05, then you can conclude that the variable is not making a significant unique contribution to the prediction of your dependent variable.

Page 34: spss

Factor Analysis (Parametric test)

Factor analysis allows you to condense a large set of variables or scale items down to a smaller, more manageable number of dimensions or factors.

Page 35: spss

Summary for Factor analysis

Example: What is the underlying factor structure of the Positive and Negative Affect Scale? Is the structure of the scale in this study, using a community sample, consistent with previous research?

What you need: A set of correlated continuous variables.

What it does: Factor analysis attempts to identify a small set of factors that represents the underlying relationships among a group of related variables.

Page 36: spss

Summary for Factor analysis (Cont)

Assumptions:1) Sample Size: Ideally the overall sample size should be 150+

2) Factorability of the correlation matrix: It should show at least some correlations of r=.3 or greater.

3) Linearity: It is assumed that the relationship between the variables is linear.

4) Outliers among cases: Factor analysis is sensitive to outliers, so check for these among your cases first.

Page 37: spss

Procedure for factor analysis

From the menu at the top of the screen click on: Analyze, then click on Data Reduction, then on Factor.

Select all the required variables. Click on the Descriptive button.

--In the section marked Correlation Matrix, select the options Coefficients and KMO and Bartlett’s test of sphericity.

Click on continue.

Page 38: spss

Procedure for factor analysis (Cont)

Click on the Extraction button.

--- In the Method section make sure Principal components is listed.

--- In the Analyze section make sure the Correlation matrix option is selected.

--- In the Display section click on Screeplot and make sure the Unrotated factor solution option is also selected.

--- In the Extract section make sure the Eigenvalues over 1 option is selected.

Page 39: spss

Procedure for factor analysis (Cont)

--- Click on Continue. Click on the Options button.

--- In the Missing Values section click on Exclude cases pairwise.

--- In the Coefficient Display Format section click on Sorted by size and Suppress absolute values less than ___. Type the value of .3 in the box.

Click on Continue and then OK.
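
Gathered into syntax, the whole procedure looks roughly like this; item1 TO item20 stands in for your own set of scale items.

* Principal components extraction, Kaiser criterion, screeplot,
* loadings sorted by size with values below .3 suppressed.
FACTOR
  /VARIABLES=item1 TO item20
  /MISSING PAIRWISE
  /PRINT INITIAL CORRELATION KMO EXTRACTION
  /FORMAT SORT BLANK(.3)
  /PLOT EIGEN
  /CRITERIA MINEIGEN(1)
  /EXTRACTION PC
  /ROTATION NOROTATE.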

Page 40: spss

Steps of Output from Factor Analysis

Step 1: Look for correlation coefficients of .3 and above. You should also check that the Kaiser-Meyer-Olkin (KMO) value is .6 or above. The Bartlett’s Test of Sphericity value should be significant (Sig. value should be .05 or smaller).

Step 2: We are interested only in components that have an eigenvalue of 1 or more.

Page 41: spss

Steps of Output from Factor Analysis (Cont)

Step 3: Using the Kaiser criterion (eigenvalues above 1), you may find that too many components are extracted, so it is important to also look at the screeplot provided by SPSS.

Step 4: You need to use the list of eigenvalues provided in the Total Variance Explained table.

Step 5: The final table we need to look at is the Component Matrix. This shows the loadings of each of the items on the extracted components.

Page 42: spss

T-test (Parametric test)

Independent-sample t-test, used when you want to compare the mean scores of two different groups of people or conditions; and

Paired-sample t-test, used when you want to compare the mean scores for the same group of people on two different occasions.

Page 43: spss

Summary for independent-sample t-test

Example: Is there a significant difference in the mean self-esteem scores for males and females?

What you need: Two variables:

-- One categorical, independent variable

-- One continuous, dependent variable.

What it does: You are testing the probability that the two sets of scores came from the same population.

Page 44: spss

Procedure for independent-samples t-test

From the menu at the top of the screen click on: Analyze, then click on Compare Means, then on Independent-Samples T Test.

Move the dependent variable into the area labelled Test Variable(s).

Move the independent variable into the section labelled Grouping Variable.

Click on Define Groups and type in the numbers used in the data set to code each group.

Page 45: spss

Procedure for independent-samples t-test (Cont)

In the current data file 1=males, 2=females; therefore, in the Group 1 box, type 1, and in the Group 2 box, type 2.

Click on Continue and then OK.
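
As syntax, the test is a one-liner; sex (coded 1 and 2) and esteem are placeholder names for the grouping and test variables.

* Independent-samples t-test comparing males and females on self-esteem.
T-TEST GROUPS=sex(1 2)
  /VARIABLES=esteem.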

Page 46: spss

Steps of Output from independent-samples t-test

Step 1: Checking the information about the groups: It gives you the mean and standard deviation for each of your groups.

Step 2: Checking assumptions: If your Sig. value for Levene’s test is larger than .05, you should use the first line in the table, which refers to Equal variances assumed.

If the significance level of Levene’s test is p=.05 or less, this means that the variances for the two groups are not the same, and you should use the second line of the table (Equal variances not assumed).

Page 47: spss

Steps of Output from independent-samples t-test (Cont)

Step 3: Assessing differences between the groups: If the value in the Sig. (2-tailed) column is equal to or less than .05, then there is a significant difference between the two groups.

If the value is above .05, there is no significant difference between the two groups.

Page 48: spss

Summary for paired-samples t-test

Example: Is there a significant change in participants’ fear of statistics scores following participation in an intervention designed to increase students’ confidence in their ability to successfully complete a statistics course?

What you need: One set of subjects. Each person must provide both sets of scores.

Two variables: one categorical variable and one continuous variable.

What it does: A paired-samples t-test will

Page 49: spss

Summary for paired-samples t-test (Cont)

tell you whether there is a statistically significant difference in the mean scores for Time 1 and Time 2.

Assumptions: The difference between the two scores obtained for each subject should be normally distributed.

With sample sizes of 30+, violation of this assumption is unlikely to cause problems.

Page 50: spss

Procedure for Paired-samples t-test

From the menu at the top of the screen click on Analyze, then click on Compare Means, then on Paired Sample T-test.

Click on the two variables that you are interested in comparing for each subject.

With both of the variables highlighted, move them into the box labelled Paired Variables by clicking on the arrow button. Click on OK.
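
Equivalent syntax, assuming two hypothetical measurements fear1 (Time 1) and fear2 (Time 2):

* Paired-samples t-test on the same subjects at two time points.
T-TEST PAIRS=fear1 WITH fear2 (PAIRED).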

Page 51: spss

Steps of output from paired t-test

Step 1: Determining overall significance: Check the Sig. (2-tailed) value. If this value is less than .05, then there is a significant difference between your two scores.

Step 2: Comparing mean values: The next step is to find out which set of scores is higher. To do this, look at the mean scores for each of the two sets of scores.

Page 52: spss

One-way analysis of variance (ANOVA)

It involves one independent variable (referred to as a factor), which has a number of different levels.

Page 53: spss

Summary for One-way between-groups ANOVA

Example: Is there a difference in optimism scores for young, middle aged and old subjects?

What you need: Two variables:

-- One categorical independent variable with three or more distinct categories.

-- One continuous dependent variable.

What it does: One-way ANOVA will tell you whether there are significant differences

Page 54: spss

Summary for One-way between-groups ANOVA (Cont)

in the mean scores on the dependent variable across the three groups. Post-hoc tests can then be used to find out where these differences lie.

Page 55: spss

Procedure for One-way between groups

From the menu at the top of the screen click on: Analyze, then click on Compare Means, then on One-way ANOVA.

Click on your dependent variable. Move this into the box marked Dependent List by clicking on the arrow button.

Click on your independent, categorical variable. Move this into the box labelled Factor.

Click on the Options button and click on Descriptive,

Page 56: spss

Procedure for One-way between groups (Cont)

Homogeneity of variance test, Brown-Forsythe, Welch and Means plot.

For Missing values, make sure there is a dot in the option marked Exclude cases analysis by analysis. If not, click on this option once. Click on Continue.

Click on the button marked Post Hoc. Click on Tukey.

Click on Continue and then OK.
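
The same analysis as a ONEWAY command, sketched with hypothetical variables optimism (dependent) and agegroup (factor):

* One-way between-groups ANOVA with Tukey post-hoc comparisons.
ONEWAY optimism BY agegroup
  /STATISTICS DESCRIPTIVES HOMOGENEITY BROWNFORSYTHE WELCH
  /MISSING ANALYSIS
  /POSTHOC=TUKEY ALPHA(.05).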

Page 57: spss

Interpretation of output from One-way between groups ANOVA

Test of homogeneity of variances: This tests whether the variance in scores is the same for each of the three groups. If the Sig. value is greater than .05, then you have not violated the assumption.

ANOVA: If the sig. value is less than or equal to .05 then there is a significant difference somewhere among the mean scores on your dependent variable for the three groups.

Page 58: spss

Interpretation of output from One-way between groups ANOVA (Cont)

Multiple Comparisons: If you find an asterisk, this means that the two groups being compared are significantly different from one another at the p<.05 level.

Means Plot: It provides an easy way to compare the mean scores for the different groups.

Page 59: spss

Two-way between-groups ANOVA

Two-way means that there are two independent variables, and between-groups indicates that different people are in each of the groups.

Page 60: spss

Summary for Two-way ANOVA

Example: What is the impact of age and gender on optimism? Does gender moderate the relationship between age and optimism?

What you need: Three variables:

-- Two categorical independent variables.

-- One continuous dependent variable (e.g. total optimism).

What it does: Two-way ANOVA allows you to simultaneously test for the effect of each of your independent variables on the dependent variable.

Page 61: spss

Procedure for two-way ANOVA

From the menu at the top of the screen click on: Analyze, then click on General Linear Model, then on Univariate.

Click on your dependent, continuous variable and move it into the box labelled Dependent Variable.

Click on your two independent, categorical variables and move these into the box labelled Fixed Factors.

Click on the Options button.

Page 62: spss

Procedure for two-way ANOVA (Cont)

--- Click on Descriptive Statistics, Estimates of effect size and Homogeneity tests.

--- Click on Continue. Click on the Post Hoc button. Click on the Plots button.

--- In the Horizontal box put the independent variable that has the most groups.

--- In the box labelled Separate Lines put the other

Page 63: spss

Procedure for two-way ANOVA (Cont)

independent variable.

--- Click on Add.

--- In the section labelled Plots you should now see two variables listed.

Click on Continue and then OK.
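
For reference, the Univariate dialog pastes a UNIANOVA command roughly like this; optimism, sex and agegroup are placeholder names.

* Two-way between-groups ANOVA with main effects and interaction.
UNIANOVA optimism BY sex agegroup
  /METHOD=SSTYPE(3)
  /POSTHOC=agegroup(TUKEY)
  /PLOT=PROFILE(agegroup*sex)
  /PRINT=DESCRIPTIVE ETASQ HOMOGENEITY
  /DESIGN=sex agegroup sex*agegroup.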

Page 64: spss

Interpretation of Output from two-way ANOVA

Descriptive Statistics: These provide the mean scores, standard deviation and N for each subgroup. Check that these values are correct.

Levene’s Test of Equality of Error Variances: The value you are most interested in is the Sig. level. You want this value to be greater than .05, and therefore not significant. A significant result (Sig. value less than .05) suggests that the variance of your dependent variable across the groups is not equal.

Page 65: spss

Multivariate analysis of variance (MANOVA)

Multivariate analysis of variance is used when you want to compare your groups on a number of different, but related, dependent variables. MANOVA can be used with one-way, two-way and higher factorial designs involving one, two, or more independent variables.

Page 66: spss

Summary for one-way MANOVA

Example: Do males and females differ in terms of overall wellbeing?

What you need: One-way MANOVA:

-- One categorical, independent variable.

-- Two or more continuous, dependent variables.

MANOVA can also be extended to two-way and higher-order designs involving two or more categorical, independent variables.

What it does: Compares two or more groups in terms of their means on a group of dependent variables.

Page 67: spss

Procedure for MANOVA

From the menu at the top of the screen click on: Analyze, then click on General Linear Model, then on Multivariate.

In the Dependent Variables box enter each of your dependent variables.

In the Fixed factors box enter your independent variable.

Click on the Model button. Make sure that the

Page 68: spss

Procedure for MANOVA (Cont)

Full factorial button is selected in the Specify Model box.

Down the bottom, in the Sum of squares box, Type III should be displayed. This is the default method of calculating sums of squares. Click on Continue.

Click on the Options button. In the section labelled Factor and Factor interactions click on your independent variable. Move it into the box

Page 69: spss

Procedure for MANOVA (Cont)

marked Display Means for. In the Display section of this screen, put a tick in

the boxes labelled Descriptive Statistics, estimates of effect size and Homogeneity tests.

Click on Continue and then OK.
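
A minimal GLM syntax sketch for a one-way MANOVA, assuming hypothetical dependent variables wellb1, wellb2 and wellb3 and the independent variable sex:

* One-way MANOVA with descriptives, effect sizes and homogeneity tests.
GLM wellb1 wellb2 wellb3 BY sex
  /METHOD=SSTYPE(3)
  /EMMEANS=TABLES(sex)
  /PRINT=DESCRIPTIVE ETASQ HOMOGENEITY
  /DESIGN=sex.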

Page 70: spss

Interpretation of output from MANOVA

Descriptive statistics: Check that the information is correct. Check that the N values correspond to what you know about your sample.

Box’s Test: If the Sig. value is larger than .001, then you have not violated the assumption of homogeneity of variance-covariance matrices.

Levene’s Test: If the value in the Sig. column is less than .05, this indicates that you have violated the assumption of equality of variance for that variable.

Page 71: spss

Interpretation of output from MANOVA (Cont)

Multivariate tests: These indicate whether there are statistically significant differences among the groups on a linear combination of the dependent variables.

Wilks’ Lambda: If the significance level is less than .05, then you can conclude that there is a difference among your groups.

Between-subjects effects: If you obtain a significant

Page 72: spss

Interpretation of output from MANOVA (Cont)

result on the multivariate test of significance, this gives you permission to investigate further in relation to each of your dependent variables.

Significance: Move down to the third set of values, in the row labelled with your independent variable.

Effect size: Partial Eta Squared represents the proportion of the variance in the dependent variable that can be explained by the independent variable.

Page 73: spss

Interpretation of output from MANOVA (Cont)

Comparing group means: To find this out, refer to the output table provided in the section labelled Estimated Marginal Means.

Page 74: spss

Chi square (Non-parametric Statistics)

The chi-square test for independence is used to determine whether two categorical variables are related.

Page 75: spss

Summary for chi-square

Example: Are males more likely to be smokers than females? Is the proportion of males that smoke the same as the proportion of females? Is there a relationship between gender and smoking behavior?

What you need: Two categorical variables, with two or more categories in each, e.g. Gender (male/female) and Smoker (yes/no).

Assumptions: The lowest frequency in any cell should

Page 76: spss

Summary for chi-square (Cont)

be 5 or more. At least 80 per cent of cells should have expected frequencies of 5 or more. If you have a 1 by 2 or a 2 by 2 table, it is recommended that the expected frequency be at least 10.

Page 77: spss

Procedure for chi-square

From the menu at the top of the screen click on: Analyze, then click on Descriptive Statistics, then on Crosstabs.

Click on one of your variables to be your row variable, and click on the arrow to move it into the box marked Row(s).

Click on the other variable to be your column variable, click on the arrow to move it into the box marked Column(s).

Page 78: spss

Procedure for chi-square (Cont)

Click on the Statistics button. Choose Chi-square. Click on continue.

Click on the Cells button. In the Counts box, click on the Observed and Expected boxes.

In the Percentages section click on the Row, Column and Total boxes. Click on Continue and then OK.
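
The equivalent CROSSTABS syntax, with placeholder variables sex and smoker:

* Chi-square test for independence with observed/expected counts and percentages.
CROSSTABS
  /TABLES=sex BY smoker
  /STATISTICS=CHISQ
  /CELLS=COUNT EXPECTED ROW COLUMN TOTAL.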

Page 79: spss

Interpretation of Output from chi-square

Assumptions: The first thing you should check is whether you have violated one of the assumptions of chi-square concerning the ‘minimum expected cell frequency’, which should be 5 or greater.

Chi-square tests: To be significant, the Sig. value needs to be .05 or smaller. If, for example, the value is .56, then it is larger than the alpha value of .05, so we can conclude that our result is not significant.

Page 80: spss

Spearman’s Rank Order Correlation (Non-parametric test)

It is used to calculate the strength of the relationship between two continuous variables.

Page 81: spss

Summary for Spearman’s Rank Order

Example: How strong is the relationship between control of internal states and perceived stress?

What you need: Two continuous variables.

Page 82: spss

Procedure for Spearman’s Rank Order

From the menu at the top of the screen click on: Analyze, then click on Correlate, then on Bivariate.

Click on your two variables and move them into the box marked Variables.

In the section labelled correlation coefficients click on the option labelled Spearman. Click on OK.
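
In syntax form (control and stress again standing in for your variables):

* Spearman's rho, two-tailed.
NONPAR CORR
  /VARIABLES=control stress
  /PRINT=SPEARMAN TWOTAIL NOSIG
  /MISSING=PAIRWISE.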

Page 83: spss

Interpretation of output from Spearman’s Rank order

The output from Spearman’s Rank Order Correlation can be interpreted in the same way as the output obtained from the Pearson product-moment correlation.

Page 84: spss

Kruskal-Wallis Test (Non-parametric test)

Kruskal-Wallis allows you to compare the scores on some continuous variable for three or more groups. This is a ‘between groups’ analysis, so different people must be in each of the different groups.

Page 85: spss

Summary for Kruskal-Wallis Test

Example: Is there a difference in optimism levels across three age levels?

What you need: Two variables- 1) One categorical independent variable with three or more categories.

2) One continuous dependent variable.

Parametric Alternatives: One-way between groups analysis of variance.

Page 86: spss

Procedure for Kruskal-Wallis test

From the menu at the top of the screen click on: Analyze, then click on Non-parametric tests, then on K Independent Samples.

Click on your continuous variable and move it into the Test Variable List box.

Click on your categorical variable and move it into the Grouping Variable box.

Click on the Define range button. Type in the first value of your categorical variable in the minimum

Page 87: spss

Procedure for Kruskal-Wallis test (Cont)

box. Type the largest value for your categorical variable in the maximum box. Click on Continue.

In the Test Type section make sure that the Kruskal-Wallis H box is ticked. Click on OK.
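
A syntax sketch, assuming optimism as the continuous variable and agegroup coded 1 to 3:

* Kruskal-Wallis H test across three groups.
NPAR TESTS
  /K-W=optimism BY agegroup(1 3)
  /MISSING ANALYSIS.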

Page 88: spss

Interpretation of output from Kruskal-Wallis test

If this significance level is a value less than .05, then you can conclude that there is a statistically significant difference in your continuous variable across the three groups.

You can then inspect the Mean rank for the three groups presented in your first output table.

Page 89: spss

Friedman Test (Non-parametric test)

It is used when you take the same sample of subjects or cases and you measure them at three or more points in time, or under three different conditions.

Page 90: spss

Summary for Friedman test

Example: Is there a change in fear of statistics scores across three time periods?

What you need: One sample of subjects, measured on the same scale at three different time periods, or under three different conditions.

Parametric alternatives: Analysis of variance.

Page 91: spss

Procedure for Friedman Test

From the menu at the top of the screen click on: Analyze, then click on Non-parametric Tests, then on K Related Samples.

Click on the variables that represent the three measurements.

In the Test Type section check that the Friedman option is selected. Click on OK.
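
The equivalent NPAR TESTS syntax, with fear1, fear2 and fear3 as hypothetical repeated measurements:

* Friedman test for three related measurements.
NPAR TESTS
  /FRIEDMAN=fear1 fear2 fear3
  /MISSING LISTWISE.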

Page 92: spss

Interpretation of output from Friedman test

The results of this test indicate whether there are significant differences in the fear of statistics scores across the three time periods.