Page 1: Analysis of Variance

   

Analysis of Variance - One Way

I. Introduction
II. Logic
III. Notation
IV. Terminology
V. Partitioning the Variance
VI. The F Test
VII. Formal Example
   1. Research Question
   2. Hypotheses
   3. Assumptions
   4. Decision Rules
   5. Computation - [Minitab]
   6. Decision
VIII. Comparisons Among Means - [Minitab] [Spreadsheet]
IX. Relation of F to t

Homework

I. Introduction

The ANalysis Of VAriance (or ANOVA) is a powerful and common statistical procedure in the social sciences. It can handle a variety of situations. We will talk about the case of one between groups factor here and two between groups factors in the next section.

The example that follows is based on a study by Darley and Latané (1969). The authors were interested in whether the presence of other people influences whether a person will help someone in distress. In this classic study, the experimenter (a female graduate student) had the subject wait in a room with either 0, 2, or 4 confederates. The experimenter announced that the study would begin shortly and walked into an adjacent room. A few moments later, the person(s) in the waiting room heard her fall and complain of ankle pain. The dependent measure was the number of seconds it took the subject to help the experimenter.


How do we analyze this data? We could do a bunch of between groups t tests. However, this is not a good idea for three reasons.

1. The amount of computational labor increases rapidly with the number of groups in the study, since p groups yield p(p - 1)/2 pairs of means:

Number of Groups    Number of Pairs of Means
       3                       3
       4                       6
       5                      10
       6                      15
       7                      21
       8                      28

2. We are interested in one thing -- is the number of people present related to helping behavior? -- thus it would be nice to be able to do one test that would answer this question.

3. The type I error rate rises with the number of tests we perform.
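To see how quickly point 3 bites, suppose each test is run at α = .05 and (for simplicity) that the tests are independent; the chance of at least one Type I error across k comparisons is then 1 - (1 - α)^k. A quick sketch (the independence assumption is an idealization; pairwise t tests on the same data are not strictly independent):

```python
def familywise_error(k, alpha=0.05):
    """P(at least one Type I error) across k independent tests at level alpha."""
    return 1 - (1 - alpha) ** k

# Number of pairwise comparisons for p groups is p(p - 1) / 2.
for p in (3, 4, 5, 8):
    k = p * (p - 1) // 2
    print(f"{p} groups, {k} pairs: familywise rate = {familywise_error(k):.3f}")
```

With only 3 groups the familywise rate is already about .14, nearly three times the nominal .05.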

II. Logic

The reason this analysis is called ANOVA rather than multi-group means analysis (or something like that) is that it compares group means by comparing variance estimates. Consider:


We draw three samples. Why might these means differ? There are two reasons:

1. Group membership (i.e., the treatment effect or IV).
2. Differences not due to group membership (i.e., chance or sampling error).

The ANOVA is based on the fact that two independent estimates of the population variance can be obtained from the sample data. A ratio is formed for the two estimates, where:

one is sensitive to treatment effect & error (the between groups estimate),
and the other to error alone (the within groups estimate).

Given the null hypothesis (in this case H0: μ1 = μ2 = μ3), the two variance estimates should be equal. That is, since the null assumes no treatment effect, both variance estimates reflect error and their ratio will equal 1. To the extent that this ratio is larger than 1, it suggests a treatment effect (i.e., differences between the groups).

It turns out that the ratio of these two variance estimates is distributed as F when the null hypothesis is true.


Note:

1. F is an extended family of distributions, which varies as a function of a pair of degrees of freedom (one for each variance estimate).

2. F is positively skewed.

3. F ratios, like the variance estimates from which they are derived, cannot have a value less than zero.

Using the F, we can compute the probability of the obtained result occurring due to chance. If this probability is low (p ≤ α), we will reject the null hypothesis.

III. Notation

We already knew that:

i = any score
n = the last score (or the number of scores)

What is new here is that:

j = any group
p = the last group (or the number of groups)

Thus:


Group:     1      2     ...     j     ...     p
         X11    X12           X1j           X1p
         X21    X22           X2j           X2p
         ...    ...           ...           ...
         Xi1    Xi2           Xij           Xip
         ...    ...           ...           ...
         Xn1    Xn2           Xnj           Xnp
Totals:   T1     T2            Tj            Tp
Sizes:    n1     n2            nj            np

And:

1. Tj = Σi Xij (the total of the scores in group j)

2. Mj = Tj/nj (the mean of group j)

3. T = Σj Tj (the grand total)

4. N = Σj nj (the total number of scores)

5. MG = T/N (the grand mean)
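These quantities can be computed mechanically; the sketch below uses the scores from the formal example in section VII:

```python
# Scores for the three groups (0, 2, and 4 people present) from section VII.
groups = [[25, 30, 20, 32], [30, 33, 29, 40, 36], [32, 39, 35, 41, 44]]

T_j = [sum(g) for g in groups]               # group totals T1..Tp
n_j = [len(g) for g in groups]               # group sizes n1..np
means = [t / n for t, n in zip(T_j, n_j)]    # group means
T = sum(T_j)                                 # grand total
N = sum(n_j)                                 # total number of scores
grand_mean = T / N                           # grand mean

print(T_j, n_j, T, N)  # [107, 168, 191] [4, 5, 5] 466 14
```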

IV. Terminology

Since we are talking about the analysis of the variance, let's review what we know about it.

So the variance is the mean of the squared deviations about the mean (MS), or the sum of the squared deviations about the mean (SS) divided by the degrees of freedom:

variance = MS = SS/df = Σ(X - M)²/(n - 1)

V. Partitioning the Variance

As noted above, two independent estimates of the population variance can be obtained. Expressed in terms of the Sum of Squares:

SSTotal = SSBetween + SSWithin

To make this more concrete, consider a data set with 3 groups and 4 subjects in each. Thus, the possible deviations for the score X13 are as follows:


As you can see, there are three deviations, and the total deviation is the sum of the between groups and within groups deviations:

(X13 - MG)  =  (M3 - MG)  +  (X13 - M3)
   total      between groups  within groups

To obtain the Sum of the Squared Deviations about the Mean (the SS), we can square these deviations and sum them over all the scores.


Thus we have:

SSBetween = Σj nj(Mj - MG)²
SSWithin = Σj Σi (Xij - Mj)²
SSTotal = Σj Σi (Xij - MG)²

Note: the nj in the formula for SSBetween means that each group's squared deviation is counted once for each score in that group.

VI. The F Test

It is simply the ratio of the two variance estimates:

F = MSBetween / MSWithin = (SSB/dfB) / (SSW/dfW)


As usual, the critical values are given by a table. Going into the table, one needs to know the degrees of freedom for both the between and within groups variance estimates, as well as the alpha level.

For example, if we have 3 groups and 10 subjects in each, then:

dfB = p - 1 = 3 - 1 = 2

dfW = p(n - 1) = 3(10 - 1) = 27, or with unequal n's: dfW = Σj(nj - 1) = N - p

dfT = N - 1 = 30 - 1 = 29

Note that the df add up to the total and, with α = .05, Fcrit = 3.35.

VII. Formal Example

1. Research Question Does the presence of others influence helping behavior?

2. Hypotheses

      In Symbols        In Words
H0:   μ1 = μ2 = μ3      The presence of others does not influence helping.
HA:   not H0            The presence of others does influence helping.

3. Assumptions
1) The subjects are sampled randomly.
2) The groups are independent.
3) The population variances are homogeneous.
4) The population distribution of the DV is normal in shape.
5) The null hypothesis is true.

4. Decision Rules
Given 3 groups with 4, 5, and 5 subjects, respectively, we have (3 - 1 =) 2 df for the between groups variance estimate and (3 + 4 + 4 =) 11 df for the within groups variance estimate. (Note that it is good to check that the df add up to the total.) Now with an α level of .05, the table shows a critical value of F of 3.98. If Fobs ≥ Fcrit, reject H0; otherwise do not reject H0.

5. Computation - [Minitab]

Here is the data (i.e., the number of seconds it took for folks to help):

 

# people present
      0      2      4
     25     30     32
     30     33     39
     20     29     35
     32     40     41
            36     44
T:     107    168    191
n:       4      5      5
mean: 26.8   33.6   38.2


A good way to describe this data would be to plot the means: 

For the analysis, we will use a grid as usual for most of the calculations:

 

0 X2 2 X2 4 X2

 

25 625 30 900 32 1024

30 900 33 1089 39 1521

20 400 29 841 35 1225

32 1024 40 1600 41 1681

  36 1296 44 1936

107 

168 

191 

=466 T

4 5 5 =14 N

Page 12: Analysis of Variance

26.8 33.6 38.2    

  2949   5726   7387 =16062 II

2862.25 5644.8 7296.2 =15803.25 III

Now we need the grand totals and the three intermediate quantities:

I = T²/N = 466²/14 = 15511.14

II = ΣΣX² = 16062

III = Σj(Tj²/nj) = 15803.25

And:

SSB = III - I = 15803.25 - 15511.14 = 292.11
SSW = II - III = 16062 - 15803.25 = 258.75
SST = II - I = 16062 - 15511.14 = 550.86

Thus:

Source      SS       df    MS        F      p
Between    292.11     2   146.054   6.21   <.05
Within     258.75    11    23.523
Total      550.86    13

6. Decision Since Fobs (6.21) is > Fcrit (3.98), reject Ho and conclude that the more people present, the longer it takes to get help.
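The hand computations above are easy to check with a short script that follows the text's I/II/III shortcut formulas:

```python
# One-way ANOVA for the helping data via the computational (I, II, III) route.
groups = [[25, 30, 20, 32], [30, 33, 29, 40, 36], [32, 39, 35, 41, 44]]

scores = [x for g in groups for x in g]
N, p, T = len(scores), len(groups), sum(scores)

I = T ** 2 / N                                   # (grand total)^2 / N
II = sum(x ** 2 for x in scores)                 # sum of all squared scores
III = sum(sum(g) ** 2 / len(g) for g in groups)  # sum of Tj^2 / nj

SSB, SSW, SST = III - I, II - III, II - I
MSB, MSW = SSB / (p - 1), SSW / (N - p)
F = MSB / MSW

print(round(SSB, 2), round(SSW, 2), round(SST, 2))  # 292.11 258.75 550.86
print(round(F, 2))                                  # 6.21
```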

VIII. Comparisons Among Means

In the formal example presented above, we rejected the null and asserted that the groups were drawn from different populations. But which groups are different from which? A "comparison" compares the means of two groups. There are two kinds of comparisons that we can perform: "preplanned" and "post hoc". These are outlined below. Which approach is used should be based on our goals. In practice, the post hoc approach is the one most often taken.

Preplanned:
- We have a theory (or some previous research) which suggests certain comparisons.
- In this case, we might not even compute the omnibus F (this approach is somewhat analogous to a one-tailed test).

Post hoc:
- We have a significant overall (or omnibus) F and then want to localize the effect.
- These are more commonly used than preplanned comparisons.

In addition, there are "simple" (involving two means) and "complex" (involving more than two means) comparisons. With three groups (Groups 1, 2 & 3), the following 6 comparisons are possible.

Simple Complex

1 vs. 2 (1 + 2) vs. 3

1 vs. 3 1 vs. (2 + 3)

2 vs. 3 (1 + 3) vs. 2

As the number of groups increases, so does the number of comparisons that are possible. Some of these can tell us about trend (a description of the form of the relationship between the IV and DV).

The problem with post hoc tests is that the Type I error rate increases the more comparisons we perform. This is a somewhat controversial area, and there are a number of methods currently in use to deal with this problem. We will consider one of the simpler methods below.

The protected t test - [Minitab] [Spreadsheet]

It is performed only when the omnibus F is significant. The technique is "protected" in the sense that the significant omnibus F tells us there is at least one comparison between means that is significant; in other words, we are not just shooting in the dark.

It uses a more stable estimate of the population variance than the t test (i.e., MSWithin instead of the pooled variance of just the two groups being compared), and as a result the df is greater.

The formula is:

F = (Mj - Mk)² / [MSW(1/nj + 1/nk)]

(Where the df's are 1 for the numerator and dfW for the denominator.)

So, for our example the critical value of F is 4.84 (from the table) and:
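The worked values for the three comparisons do not survive in this transcript, but they can be reconstructed from the group means and MSWithin (using the exact first-group mean 26.75 rather than the rounded 26.8):

```python
from itertools import combinations

# Protected t test in its F form: F = (Mj - Mk)^2 / (MSW * (1/nj + 1/nk)),
# with df = (1, dfW). Values come from the ANOVA in section VII.
means = {1: 26.75, 2: 33.6, 3: 38.2}   # group means (groups 1, 2, 3)
n = {1: 4, 2: 5, 3: 5}                 # group sizes
MSW = 258.75 / 11                      # within groups variance estimate
F_crit = 4.84                          # F(1, 11) at alpha = .05

results = {}
for j, k in combinations(means, 2):
    F = (means[j] - means[k]) ** 2 / (MSW * (1 / n[j] + 1 / n[k]))
    results[(j, k)] = F
    verdict = "significant" if F >= F_crit else "not significant"
    print(f"group {j} vs group {k}: F = {F:.2f} ({verdict})")
```

Only the first vs. third comparison exceeds 4.84, which matches the conclusion stated in the text.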


Thus, the only comparison that is significant is that between the first and third groups. 

IX. Relation of F to t

Since the F test is just an extension of the t test to more than two groups, they should be related and they are.

With two groups, F = t² (and this applies to both the critical and observed values).


For example, consider the critical values for df = (1, 15) with α = .05:

Fcrit(1, 15) = tcrit(15)²

Obtaining the values from the tables, we can see that this is true:

4.54 = 2.131²

Copyright © 1997-2009 M. Plonsky, Ph.D. Comments? [email protected].


Dundas Chart for Windows Forms

Anova Formula

Overview

An Anova test is used to determine the existence or absence of a statistically significant difference between the mean values of two or more groups of data.

Applying the Anova Test Formula

All statistical formulas are calculated using the Statistics class. The following table describes how to use its Anova method to perform an Anova test.

 

Formula Name: Anova

Parameters:
1. probability: the alpha value (probability).
2. inputSeriesNames: the names of two or more input series. Each series must exist in the series collection at the time of the method call, and have the same number of data points.

Call: Statistics.Anova(probability, inputSeriesNames)

Return: An AnovaResult object, which has the following members:
- DegreeOfFreedomBetweenGroups
- DegreeOfFreedomTotal
- DegreeOfFreedomWithinGroups
- FCriticalValue
- FRatio
- MeanSquareVarianceBetweenGroups
- MeanSquareVarianceWithinGroups
- SumOfSquaresBetweenGroups
- SumOfSquaresTotal
- SumOfSquaresWithinGroups

 

Note: Make sure that all data points have their XValue property set, and that their series' XValueIndexed property has been set to false.

Statistical Interpretation

The purpose of an Anova test is to determine the existence or absence of a statistically significant difference amongst several group means. Anova actually uses variances to help determine if the various means are equal or not.

To perform an Anova test, three basic assumptions must be fulfilled:

1. Each group from which a sample is taken is normal.
2. Each group is randomly selected and independent.
3. The variables from each group come from a distribution with approximately equal standard deviation.

The Null and Alternative Hypotheses

The null hypothesis is simply that all group population means are the same. The alternate hypothesis is that at least one pair of means is different.

Calculation

1. Calculate the sample average for each group.

2. Calculate the average of all the averages.

3. Calculate the sample variance of the averages.

4. Calculate the sample variance for each group.

5. Calculate the average of all of the sample variances.

6. Calculate the value of the F Statistic.
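The six steps can be sketched in a few lines of Python (a hypothetical illustration of the procedure described, not the actual Dundas implementation; it assumes equal group sizes, as steps 3-6 imply):

```python
def sample_var(xs):
    """Unbiased sample variance."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def anova_f(groups):
    """F statistic via the variance-of-the-means route (equal group sizes n)."""
    n = len(groups[0])                                 # common group size
    means = [sum(g) / len(g) for g in groups]          # step 1: group averages
    grand = sum(means) / len(means)                    # step 2: average of averages
    var_of_means = sum((m - grand) ** 2
                       for m in means) / (len(means) - 1)  # step 3
    group_vars = [sample_var(g) for g in groups]       # step 4: per-group variances
    avg_var = sum(group_vars) / len(group_vars)        # step 5: average variance
    return (n * var_of_means) / avg_var                # step 6: F ratio

print(anova_f([[1, 2, 3], [2, 3, 4], [6, 7, 8]]))  # 21.0
```

Note that n times the variance of the group means equals the between-groups mean square only when all groups have the same size n, which is why the equal-count requirement appears in the Parameters table above.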

 

Example

This example demonstrates how to perform an Anova Test, using Series1, Series2, and Series3 for the input series. The results are returned in an AnovaResult object.

Visual Basic

Imports Dundas.Charting.WinControl
...
Dim result As AnovaResult = Chart1.DataManipulator.Statistics.Anova(.05, "Series1,Series2,Series3")

C#

using Dundas.Charting.WinControl;
...
AnovaResult result = Chart1.DataManipulator.Statistics.Anova(.05, "Series1,Series2,Series3");

 

Example

This example demonstrates how to perform an Anova Test, using Series1 and Series2 for the input series. The results are returned in an AnovaResult object. The object values are then added as titles to a separate chart. We assume that series data was added at design-time. Further, we assume a "DundasBlue" template was applied for appearance purposes.

 


Figure 1: Two Charts; One containing Series data (left), and the other containing the AnovaResult object (right). 

 

Visual Basic

Imports Dundas.Charting.WinControl
...
' The Anova Result Object is Created
Dim result As AnovaResult = Chart1.DataManipulator.Statistics.Anova(.9, "Series1,Series2")
' Add Title to the second chart.
Chart2.Titles.Add("AnovaResult Object at Probability: 90%")
' Change Appearance properties of the first title.
Chart2.Titles(0).BackColor = Color.FromArgb(255, 65, 140, 240)
Chart2.Titles(0).Font = New Font("Trebuchet", 12, FontStyle.Bold)
Chart2.Titles(0).Color = Color.White
Chart2.Titles(0).Style = TextStyle.Shadow
' Add All AnovaResult data to the title collection, using the ToString() method.
Chart2.Titles.Add("Degree Of Freedom Between Groups: " + result.DegreeOfFreedomBetweenGroups.ToString())
Chart2.Titles.Add("Degree Of Freedom Total: " + result.DegreeOfFreedomTotal.ToString())
Chart2.Titles.Add("Degree Of Freedom Within Groups: " + result.DegreeOfFreedomWithinGroups.ToString())
Chart2.Titles.Add("F Critical Value: " + result.FCriticalValue.ToString())
Chart2.Titles.Add("FRatio: " + result.FRatio.ToString())
Chart2.Titles.Add("Mean Square Variance Between Groups: " + result.MeanSquareVarianceBetweenGroups.ToString())
Chart2.Titles.Add("Mean Square Variance Within Groups: " + result.MeanSquareVarianceWithinGroups.ToString())
Chart2.Titles.Add("Sum Of Squares Between Groups: " + result.SumOfSquaresBetweenGroups.ToString())
Chart2.Titles.Add("Sum Of Squares Total: " + result.SumOfSquaresTotal.ToString())
Chart2.Titles.Add("Sum Of Squares Within Groups: " + result.SumOfSquaresWithinGroups.ToString())

C#

using Dundas.Charting.WinControl;
...
// The Anova Result Object is Created
AnovaResult result = Chart1.DataManipulator.Statistics.Anova(.9, "Series1,Series2");
// Add Title to the second chart.
Chart2.Titles.Add("AnovaResult Object at Probability: 90%");
// Change Appearance properties of the first title.
Chart2.Titles[0].BackColor = Color.FromArgb(255, 65, 140, 240);
Chart2.Titles[0].Font = new Font("Trebuchet", 12, FontStyle.Bold);
Chart2.Titles[0].Color = Color.White;
Chart2.Titles[0].Style = TextStyle.Shadow;
// Add All AnovaResult data to the title collection, using the ToString() method.
Chart2.Titles.Add("Degree Of Freedom Between Groups: " + result.DegreeOfFreedomBetweenGroups.ToString());
Chart2.Titles.Add("Degree Of Freedom Total: " + result.DegreeOfFreedomTotal.ToString());
Chart2.Titles.Add("Degree Of Freedom Within Groups: " + result.DegreeOfFreedomWithinGroups.ToString());
Chart2.Titles.Add("F Critical Value: " + result.FCriticalValue.ToString());
Chart2.Titles.Add("FRatio: " + result.FRatio.ToString());
Chart2.Titles.Add("Mean Square Variance Between Groups: " + result.MeanSquareVarianceBetweenGroups.ToString());
Chart2.Titles.Add("Mean Square Variance Within Groups: " + result.MeanSquareVarianceWithinGroups.ToString());
Chart2.Titles.Add("Sum Of Squares Between Groups: " + result.SumOfSquaresBetweenGroups.ToString());
Chart2.Titles.Add("Sum Of Squares Total: " + result.SumOfSquaresTotal.ToString());
Chart2.Titles.Add("Sum Of Squares Within Groups: " + result.SumOfSquaresWithinGroups.ToString());

See Also

Formulas Overview
Financial Formulas
Statistical Formulas
Using Statistical Formulas
Statistical Formula Listing

 

 

Copyright © 2001 - 2009, Dundas Data Visualization, Inc. and others.

©2009. All Rights Reserved.
