Page 1: Chapter 26:  Comparing counts of categorical data

CHAPTER 26: COMPARING COUNTS OF CATEGORICAL DATA

Objective: To test claims and make inferences about counts for categorical variables.

Page 2: Chapter 26:  Comparing counts of categorical data

Goodness-of-Fit

A test of whether the distribution of counts in one categorical variable matches the distribution predicted by a model is called a goodness-of-fit test.

As usual, there are assumptions and conditions to consider…

Page 3: Chapter 26:  Comparing counts of categorical data

Assumptions & Conditions

Counted Data Condition: Check that the data are counts for the categories of a categorical variable.

Independence Assumption: The counts in the cells should be independent of each other.

Randomization Condition: The individuals who have been counted and whose counts are available for analysis should be a random sample from some population.

Page 4: Chapter 26:  Comparing counts of categorical data

Assumptions & Conditions (cont.)

Sample Size Assumption: We must have enough data for the methods to work.

Expected Cell Frequency Condition: We should expect to see at least 5 individuals in each cell (for expected counts).

This is similar to the condition that np and nq be at least 10 when we tested proportions.

Page 5: Chapter 26:  Comparing counts of categorical data

Calculations

Since we want to examine how well the observed data reflect what would be expected, it is natural to look at the differences between the observed and expected counts (Observed – Expected).

These differences are actually residuals, so we know that adding all of the differences will result in a sum of 0. That’s not very helpful.

Page 6: Chapter 26:  Comparing counts of categorical data

Calculations (cont.)

We’ll handle the residuals as we did in regression, by squaring them.

To get an idea of the relative sizes of the differences, we will divide each squared difference by the expected count for that cell.

Page 8: Chapter 26:  Comparing counts of categorical data

Calculations (cont.)

The test statistic, called the chi-square statistic, is found by adding up the squared deviations between the observed and expected counts, each divided by its expected count:

χ² = Σ over all cells of (Obs – Exp)² / Exp
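As a concrete illustration, here is a minimal Python sketch of this sum; the observed counts and null-model proportions are hypothetical, not from the slides:

```python
# Minimal sketch: chi-square statistic as the sum over cells of (Obs - Exp)^2 / Exp.
# The observed counts and null-model proportions below are hypothetical.
observed = [44, 56, 30, 70]
proportions = [0.25, 0.25, 0.25, 0.25]   # proportions claimed by the null model
n = sum(observed)

expected = [p * n for p in proportions]                             # expected count per cell
components = [(o - e) ** 2 / e for o, e in zip(observed, expected)]
chi_square = sum(components)
print(chi_square)
```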

Page 9: Chapter 26:  Comparing counts of categorical data

Calculations (cont.)

The chi-square models are actually a family of distributions indexed by degrees of freedom (much like the t-distribution).

The number of degrees of freedom for a goodness-of-fit test is n – 1, where n is the number of categories.

Page 10: Chapter 26:  Comparing counts of categorical data

One-Sided or Two-Sided?

The chi-square statistic is used only for testing hypotheses, not for constructing confidence intervals, since there is no single parameter to estimate. You will see this when we set up our hypotheses later!

If the observed counts don’t match the expected, the statistic will be large—it can’t be “too small.”

So the chi-square test is always one-sided.

If the calculated P-value is small enough, we’ll reject the null hypothesis.
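A quick sketch of this upper-tail P-value calculation in Python; the statistic and degrees of freedom shown are hypothetical, and scipy is assumed to be available:

```python
from scipy.stats import chi2

chi_square = 7.815   # hypothetical test statistic
df = 3               # hypothetical degrees of freedom (number of categories minus 1)

# One-sided P-value: the area under the chi-square model to the right of the statistic.
p_value = chi2.sf(chi_square, df)
print(p_value)       # about 0.05 for these particular values
```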

Page 11: Chapter 26:  Comparing counts of categorical data

One-Sided or Two-Sided? (cont.)

The mechanics may work like a one-sided test, but the interpretation of a chi-square test is in some ways many-sided.

There are many ways the null hypothesis could be wrong.

There’s no direction to the rejection of the null model—all we know is that it doesn’t fit.

Page 12: Chapter 26:  Comparing counts of categorical data

The Chi-Square Calculation

1. Find the expected values: Every model gives a hypothesized proportion for each cell. The expected value is the total number of observations times this proportion.

2. Compute the residuals: Once you have an expected value for each cell, find the residuals, Observed – Expected.

3. Square the residuals.

Page 13: Chapter 26:  Comparing counts of categorical data

The Chi-Square Calculation (cont.)

4. Compute the components. Now find the component for each cell:

(Observed – Expected)² / Expected

5. Find the sum of the components (that’s the chi-square statistic).

Page 14: Chapter 26:  Comparing counts of categorical data

The Chi-Square Calculation (cont.)

6. Find the degrees of freedom. It’s equal to the number of cells minus one.

7. Test the hypothesis. Use your chi-square statistic to find the P-value. (Remember, you’ll always have a one-sided test.)

Large chi-square values mean lots of deviation from the hypothesized model, so they give small P-values.
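Putting steps 1–7 together, here is a sketch that leans on scipy's chisquare function, which carries out steps 4–7 once the expected counts are supplied; the counts and proportions shown are hypothetical:

```python
from scipy.stats import chisquare

observed = [18, 22, 39, 21]               # hypothetical observed counts
proportions = [0.2, 0.2, 0.4, 0.2]        # hypothetical proportions from the null model

# Step 1: expected value for each cell = total observations * hypothesized proportion.
expected = [p * sum(observed) for p in proportions]

# Steps 2-5 and 7: chisquare sums (Obs - Exp)^2 / Exp and finds the upper-tail P-value,
# using len(observed) - 1 degrees of freedom (step 6) by default.
stat, p_value = chisquare(observed, f_exp=expected)
print(stat, p_value)
```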

Page 15: Chapter 26:  Comparing counts of categorical data

I Believe the Null is True!

Goodness-of-fit tests are likely to be performed by people who have a theory of what the proportions should be and who believe their theory to be true.

Unfortunately, the only null hypothesis available for a goodness-of-fit test is that the theory is true.

As we know, the hypothesis testing procedure allows us only to reject or fail to reject the null.

Page 16: Chapter 26:  Comparing counts of categorical data

I Believe the Null is True! (cont.)

We can never confirm that a theory is in fact true.

At best, we can point out only that the data are consistent with the proposed theory.

Remember, it’s that idea of “not guilty” versus “innocent.” We can never prove someone is innocent; we simply have no evidence to prove them guilty, so the verdict is “not guilty.”

Page 17: Chapter 26:  Comparing counts of categorical data

Steps for Chi-Square GOF Inference Tests

1. Check Conditions and show that you have checked these!

Counted Data Condition: Check that the data are counts for the categories of a categorical variable.

Randomization Condition: The individuals who have been counted and whose counts are available for analysis should be a random sample from some population.

Expected Cell Frequency Condition: We should expect to see at least 5 individuals in each cell (for expected counts).

Page 18: Chapter 26:  Comparing counts of categorical data

Steps for Chi-Square GOF Inference Tests (cont.)

2. State the test you are about to conduct: Chi-Square Goodness-of-Fit (GOF) Test

3. Set up your hypotheses:
H0: the proportions given are correct
HA: at least one of the proportions is incorrect

4. Calculate your test statistic:

χ² = Σ over all cells of (Obs – Exp)² / Exp

5. Draw a picture of your desired area under the chi-square model, and calculate your P-value.

Page 19: Chapter 26:  Comparing counts of categorical data

Steps for Chi-Square GOF Inference Tests (cont.)

6. Make your conclusion.

Low P-value: Reject H0. There is sufficient evidence to conclude HA in context.

High P-value: Fail to reject H0. There is not sufficient evidence to conclude HA in context.

Page 20: Chapter 26:  Comparing counts of categorical data

Example: Chi-Square Goodness of Fit

Biologists wish to mate two fruit flies, each having genetic makeup RrCc, indicating that each has one dominant gene (R) and one recessive gene (r) for eye color, along with one dominant (C) and one recessive (c) gene for wing type. Each offspring will receive one gene for each of the two traits from each parent. The following table, often called a Punnett square, shows the possible combinations of genes received by the offspring.

Any offspring receiving an R gene will have red eyes, and any offspring receiving a C gene will have straight wings. So based on this Punnett square, the biologists predict a ratio of 9 red-eyed, straight-wing (x) : 3 red-eyed, curly-wing (y) : 3 white-eyed, straight-wing (z) : 1 white-eyed, curly-wing (w) offspring. To test their hypothesis about the distribution of offspring, the biologists mate the fruit flies. Of 200 offspring, 101 had red eyes and straight wings, 42 had red eyes and curly wings, 49 had white eyes and straight wings, and 10 had white eyes and curly wings. Do these data differ significantly from what the biologists have predicted?

Page 21: Chapter 26:  Comparing counts of categorical data

Example: Chi-Square Goodness of Fit (cont.)
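A sketch of the computation in Python, using the counts from the previous slide (scipy assumed):

```python
from scipy.stats import chisquare

# Observed counts from the example: red/straight, red/curly, white/straight, white/curly.
observed = [101, 42, 49, 10]
ratio = [9, 3, 3, 1]                              # predicted 9:3:3:1 ratio
n = sum(observed)                                 # 200 offspring
expected = [r / sum(ratio) * n for r in ratio]    # [112.5, 37.5, 37.5, 12.5]

stat, p_value = chisquare(observed, f_exp=expected)   # df = 4 - 1 = 3
print(stat, p_value)   # statistic about 5.74, P-value about 0.12

# With a P-value this large we fail to reject H0: these counts are consistent
# with the biologists' 9:3:3:1 prediction.
```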

Page 22: Chapter 26:  Comparing counts of categorical data

TI Tips

The χ²-Test in the STAT TESTS menu will not calculate the GOF statistic for you. Enter the observed counts into L1 and the expected percentages or fractions into L2.

Convert the percentages to expected counts by multiplying each by the total number of observations (i.e., L2 * sum(L1)).

Choose D: χ² GOF-Test from STAT TESTS (only available on the TI-84 models or newer).

Specify the lists where you stored the observed and expected counts and enter the degrees of freedom.

Page 23: Chapter 26:  Comparing counts of categorical data

Two-Way Tables

Page 24: Chapter 26:  Comparing counts of categorical data

Comparing Observed Distributions

A test comparing the distribution of counts for two or more groups on the same categorical variable is called a chi-square test of homogeneity.

A test of homogeneity is actually the generalization of the two-proportion z-test.

Page 25: Chapter 26:  Comparing counts of categorical data

Comparing Observed Distributions (cont.)

The statistic that we calculate for this test is identical to the chi-square statistic for goodness-of-fit.

In this test, however, we ask whether choices are the same among different groups (i.e., there is no model).

The expected counts are found directly from the data and we have different degrees of freedom.

Page 26: Chapter 26:  Comparing counts of categorical data

Assumptions & Conditions for Two-Way Tables

The assumptions and conditions are (almost) the same as for the chi-square goodness-of-fit test:

Counted Data Condition: The data must be counts of two or more categorical variables.

Randomization Condition and 10% Condition: As long as we don’t want to generalize about a larger population, we don’t have to check these conditions.

Expected Cell Frequency Condition: The expected count in each cell must be at least 5.

Page 27: Chapter 26:  Comparing counts of categorical data

Calculations for Two-Way Tables

To find the expected counts, we multiply the row total by the column total and divide by the grand total.

We calculate the chi-square statistic as we did in the goodness-of-fit test:

χ² = Σ over all cells of (Obs – Exp)² / Exp

In this situation we have (R – 1)(C – 1) degrees of freedom, where R is the number of rows and C is the number of columns. We’ll need the degrees of freedom to find a P-value for the chi-square statistic.
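A sketch of the expected-count and degrees-of-freedom calculation in Python; the 2x3 table of counts below is hypothetical, not from the slides:

```python
import numpy as np

# Hypothetical 2x3 table of observed counts (rows = groups, columns = categories).
observed = np.array([[30, 45, 25],
                     [20, 35, 45]])

row_totals = observed.sum(axis=1, keepdims=True)
col_totals = observed.sum(axis=0, keepdims=True)
grand_total = observed.sum()

# Expected count for each cell = (row total * column total) / grand total.
expected = row_totals * col_totals / grand_total

chi_square = ((observed - expected) ** 2 / expected).sum()
df = (observed.shape[0] - 1) * (observed.shape[1] - 1)   # (R - 1)(C - 1)
print(chi_square, df)
```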

Page 28: Chapter 26:  Comparing counts of categorical data

Chi-Square Test for Homogeneity (Two-Way Tables) Example

Chronic cocaine users need the drug to feel pleasure. Perhaps giving them medication that fights depression will help them stay off cocaine. A 3-year study compared an anti-depressant called desipramine with lithium (a standard treatment for cocaine addiction) and a placebo. The subjects were 72 chronic cocaine users who wanted to break their drug habit. Twenty-four of the subjects were randomly assigned to each treatment. Are the proportions of cocaine addicts who avoid relapse the same across all treatments?

Page 29: Chapter 26:  Comparing counts of categorical data

Chi-Square Test for Homogeneity (Two-Way Tables) Example (cont.)
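A sketch of how this homogeneity test can be run in Python with scipy; the relapse counts below are placeholders only (each treatment row totals the 24 assigned subjects) and would be replaced by the values from the study's table:

```python
from scipy.stats import chi2_contingency

# Placeholder counts: rows = desipramine, lithium, placebo; columns = no relapse, relapse.
# Each row totals the 24 subjects assigned to that treatment; replace with the study's table.
observed = [[14, 10],
            [ 6, 18],
            [ 4, 20]]

stat, p_value, df, expected = chi2_contingency(observed)
print(stat, p_value, df)   # df = (3 - 1)(2 - 1) = 2
```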

Page 30: Chapter 26:  Comparing counts of categorical data

Examining the Residuals

When we reject the null hypothesis, it’s always a good idea to examine the residuals.

For chi-square tests, we want to work with standardized residuals, since we want to compare residuals for cells that may have very different counts.

To standardize a cell’s residual, we just divide by the square root of its expected value:

c = (Obs – Exp) / √Exp
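A short sketch of this standardized-residual calculation in Python, reusing the same hypothetical 2x3 table as the earlier expected-count sketch:

```python
import numpy as np

# Hypothetical observed counts and their matching expected counts for a 2x3 table.
observed = np.array([[30.0, 45.0, 25.0],
                     [20.0, 35.0, 45.0]])
expected = (observed.sum(axis=1, keepdims=True)
            * observed.sum(axis=0, keepdims=True) / observed.sum())

# Standardized residual for each cell: (Obs - Exp) / sqrt(Exp).
std_resid = (observed - expected) / np.sqrt(expected)
print(np.round(std_resid, 2))   # + means more cases than expected, - means fewer
```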

Page 31: Chapter 26:  Comparing counts of categorical data

Examining the Residuals (cont.)

These standardized residuals are just the square roots of the components we calculated for each cell, with the + or the – sign indicating whether we observed more cases than we expected, or fewer.

The standardized residuals give us a chance to think about the underlying patterns and to consider the ways in which the distribution might not match what we hypothesized to be true.

Page 32: Chapter 26:  Comparing counts of categorical data

Examining the Residuals (cont.)

Look at the table of residuals for a university’s class:

What do you notice?

Page 33: Chapter 26:  Comparing counts of categorical data

Independence

Page 34: Chapter 26:  Comparing counts of categorical data

Independence

Contingency tables categorize counts on two (or more) variables so that we can see whether the distribution of counts on one variable is contingent on the other.

A test of whether the two categorical variables are independent examines the distribution of counts for one group of individuals classified according to both variables in a contingency table.

A chi-square test of independence uses the same calculation as a test of homogeneity.

Page 35: Chapter 26:  Comparing counts of categorical data

Assumptions & Conditions for the Chi-Square Test for Independence

We still need counts and enough data so that the expected values are at least 5 in each cell.

If we’re interested in the independence of variables, we usually want to generalize from the data to some population.

In that case, we’ll need to check that the data are a representative random sample from that population (the Randomization Condition).

Page 36: Chapter 26:  Comparing counts of categorical data

Examine the Residuals

Each cell of a contingency table contributes a term to the chi-square sum.

It helps to examine the standardized residuals, just like we did for tests of homogeneity.

Page 37: Chapter 26:  Comparing counts of categorical data

Homogeneity vs. Independence

In the test of independence, all subjects/units are collected at random from a population, and two categorical variables are observed for each unit.

In the test of homogeneity, the data are collected by randomly sampling from each sub-group separately. (Say, 100 blacks, 100 whites, 100 American Indians, and so on.) The null hypothesis is that each sub-group shares the same distribution of another categorical variable. (Say, "chain smoker", "occasional smoker", "non-smoker".)

The difference between these two tests is subtle yet important.

Page 38: Chapter 26:  Comparing counts of categorical data

Example: Chi-Squared Test for Independence or Association

In a study of heart disease in male federal employees, researchers classified 356 volunteer subjects according to their socioeconomic status (SES) and their smoking habits. There were three categories of SES: high, middle, and low. Individuals were asked whether they were current smokers, former smokers, or had never smoked, producing three categories for smoking habits as well. Here is the two-way table that summarizes the data:

Are SES and smoking independent?

Page 39: Chapter 26:  Comparing counts of categorical data

Example: Chi-Squared Test for Independence or Association (cont.)

Page 40: Chapter 26:  Comparing counts of categorical data

TI-Tips for Testing Homogeneity or Independence

Enter the data as a matrix:
Press MATRIX (2nd Matrix), choose EDIT, and select matrix [A].
Specify the dimensions of the table: rows x columns.
Enter the appropriate counts, one cell at a time.

Do the test:
Choose STAT, TESTS, χ²-Test.
The TI recognizes that you put the observed counts into [A] and tells you that it stores the expected counts in [B].
Choose Calculate for the test mechanics.

Page 41: Chapter 26:  Comparing counts of categorical data

Chi-Square & Causation

Chi-square tests are common, and tests for independence are especially widespread.

We need to remember that a small P-value is not proof of causation.

Since the chi-square test for independence treats the two variables symmetrically, we cannot determine the direction of any possible causation, even if one existed.

And, there’s never any way to eliminate the possibility that a lurking variable is responsible for the lack of independence.

Page 42: Chapter 26:  Comparing counts of categorical data

Chi-Square & Causation (cont.)

In some ways, a failure of independence between two categorical variables is less impressive than a strong, consistent, linear association between quantitative variables.

Two categorical variables can fail the test of independence in many ways.

Examining the standardized residuals can help you think about the underlying patterns.

Page 43: Chapter 26:  Comparing counts of categorical data

What Can Go Wrong?

Don’t use chi-square methods unless you have counts. Just because numbers are in a two-way table doesn’t make them suitable for chi-square analysis.

Beware large samples. With a sufficiently large sample size, a chi-square test can always reject the null hypothesis.

Don’t say that one variable “depends” on the other just because they’re not independent. Association is not causation.

Page 44: Chapter 26:  Comparing counts of categorical data

Recap

We’ve learned how to test hypotheses about categorical variables.

All three methods we examined look at counts of data in categories and rely on chi-square models.

Goodness-of-fit tests compare the observed distribution of a single categorical variable to an expected distribution based on a theory or model.

Tests of homogeneity compare the distribution of several groups for the same categorical variable.

Tests of independence examine counts from a single group for evidence of an association between two categorical variables.

Page 45: Chapter 26:  Comparing counts of categorical data

Recap (cont.)

Mechanically, these tests are almost identical.

While the tests appear to be one-sided, conceptually they are many-sided, because there are many ways that the data can deviate significantly from what we hypothesize.

When we reject the null hypothesis, we know to examine standardized residuals to better understand the patterns in the data.

Page 46: Chapter 26:  Comparing counts of categorical data

Assignments: pp. 642 – 648

Day 1: # 3, 4, 5

Day 2: # 7, 8

Day 3: # 14, 16, 18, 20

Day 4: # 1, 24, 26

Day 5: # 2, 27, 30

Day 6: # 9, 28