
Page 3:

Chapter 21: More About Tests and Intervals

Page 4:

Null Hypothesis

To perform a hypothesis test, the null must be a statement about the value of a parameter for a model.

We then use this value to compute the probability that the observed sample statistic—or something even farther from the null value—might occur.

You cannot prove a null hypothesis true. Use what you want to show as the alternative.

Page 5:

Example: The diabetes drug Avandia was approved to treat Type 2 diabetes in 1999. But in 2007 an article in the New England Journal of Medicine raised concerns that the drug might carry an increased risk of heart attack. This study combined results from a number of separate studies to obtain an overall sample of 4485 diabetes patients taking Avandia. People with Type 2 diabetes are known to have about a 20.2% chance of suffering a heart attack within a seven-year period. According to the article's author, the risk found in the NEJM study was equivalent to a 28.9% chance of heart attack over seven years. The FDA is the government agency responsible for relabeling Avandia to warn of the risk if it is judged to be unsafe. Although the statistical methods they used are more sophisticated, we can get an idea of their reasoning with the tools we have learned. What null hypothesis and alternative hypothesis about the seven-year heart attack risk would you test? Explain.
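
To make the setup concrete, here is a minimal sketch (in Python, using scipy) of the one-proportion z-test these hypotheses call for. Only the hypotheses come from the slide; the function is illustrative, and, as the slide notes, the actual NEJM analysis pooled many trials with more sophisticated methods, so a naive test like this would not reproduce the published figures.

    import math
    from scipy.stats import norm

    # Hypotheses from the slide:
    #   H0: p = 0.202  (7-year heart-attack risk is the baseline 20.2%)
    #   HA: p > 0.202  (the risk is higher for patients taking Avandia)
    def one_prop_ztest(p_hat, p0, n):
        se = math.sqrt(p0 * (1 - p0) / n)  # SD of p-hat computed under H0
        z = (p_hat - p0) / se
        return z, norm.sf(z)               # upper-tail (one-sided) P-value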

Page 6:

How to Think About P-Values

A P-value is a conditional probability: P(observed statistic given that the null hypothesis is true). The P-value is NOT the probability that the null hypothesis is true. It’s not even the conditional probability that the null hypothesis is true given the data.
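
A quick way to see the conditioning is to simulate: generate many samples assuming H0 is true and ask how often the statistic comes out at least as extreme as the one observed. A minimal sketch with hypothetical numbers (a null proportion of 0.20 and 31 observed successes in 100 trials; these are not the Avandia data):

    import numpy as np

    rng = np.random.default_rng(0)            # seeded for reproducibility
    p0, n, observed = 0.20, 100, 31           # hypothetical data
    sims = rng.binomial(n, p0, size=100_000)  # success counts drawn assuming H0 is true
    p_value = (sims >= observed).mean()       # P(statistic this extreme | H0 true)
    print(p_value)                            # small (around 0.005): rare if H0 were true

Nothing in this calculation touches P(H0 | data); the simulation fixes H0 as true from the start.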

Page 7:

Example: A NEJM paper reported that the seven-year risk of heart attack in diabetes patients taking the drug Avandia was increased from the baseline of 20.2% to an estimated 28.9%, and said the P-value was 0.03. How should the P-value be interpreted in this context?

Page 8:

What to Do with a High P-Value

When we see a small P-value, we could continue to believe the null hypothesis and conclude that we just witnessed a rare event. But instead, we trust the data and use it as evidence to reject the null hypothesis.

However, big P-values just mean that what we observed isn’t surprising. That is, the results are in line with our assumption that the null hypothesis models the world, so we have no reason to reject it.

A big P-value doesn’t prove that the null hypothesis is true, but it certainly offers no evidence that it is not true.

Thus, when we see a large P-value, all we can say is that we “don’t reject the null hypothesis.”

Page 9:

Example: The question of whether Avandia increased the risk of heart attack was raised by a study in the NEJM. This study estimated the seven-year risk of heart attack to be 28.9% and reported a P-value of 0.03 for a test of whether this risk was higher than the baseline seven-year risk of 20.2%. An earlier study had estimated the seven-year risk to be 26.9% and reported a P-value of 0.27. Why did the researchers in the earlier study not express alarm about the increased risk they had seen?

Page 10:

Alpha Levels

Sometimes we need to make a firm decision about whether or not to reject the null hypothesis.

When the P-value is small, it tells us that our data are rare given the null hypothesis.

We can define “rare event” arbitrarily by setting a threshold for our P-value. If our P-value falls below that point, we’ll reject H0. We call such results statistically significant.

The threshold is called an alpha level, denoted by α.

Page 11:

Alpha Levels (cont.)

Common alpha levels (alpha is also called the significance level) are 0.10, 0.05, and 0.01. Consider your alpha level carefully and choose an appropriate one for the situation.

When we reject the null hypothesis, we say that the test is “significant at that level.”

What can you say if the P-value does not fall below α? You should say that “the data have failed to provide sufficient evidence to reject the null hypothesis.” Don’t say that you “accept the null hypothesis.”

In a jury trial, if we do not find the defendant guilty, we say the defendant is “not guilty”; we don’t say that the defendant is “innocent.”

Page 12:

Alpha Levels (cont.)

The P-value gives the reader far more information than just stating that you reject or fail to reject the null.

In fact, by providing a P-value to the reader, you allow that person to make his or her own decision about the test. What you consider to be statistically significant might not be the same as what someone else considers statistically significant.

There is more than one alpha level that can be used, but each test will give only one P-value.
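
A sketch of that point, using the P-value of 0.03 reported in the Avandia example and the three common alpha levels from the previous slide: one P-value, but the decision depends on the reader's alpha.

    # One test gives one P-value, but readers may hold different alpha levels.
    p_value = 0.03  # the P-value reported in the NEJM Avandia example
    for alpha in (0.10, 0.05, 0.01):
        decision = "reject H0" if p_value < alpha else "fail to reject H0"
        print(f"alpha = {alpha:.2f}: {decision}")

The result is significant at the 0.10 and 0.05 levels but not at 0.01.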

Page 13:

Significant vs. Important

What do we mean when we say that a test is statistically significant? All we mean is that the test statistic had a P-value lower than our alpha level.

Don’t be lulled into thinking that statistical significance carries with it any sense of practical importance or impact.

Page 14:

Significant vs. Important (cont.)

For large samples, even small, unimportant (“insignificant”) deviations from the null hypothesis can be statistically significant.

On the other hand, if the sample is not large enough, even large, financially or scientifically “significant” differences may not be statistically significant.

It’s good practice to report the magnitude of the difference between the observed statistic value and the null hypothesis value (in the data units) along with the P-value on which we base statistical significance.

Page 15:

Confidence Intervals and Hypothesis Tests

Confidence intervals and hypothesis tests are built from the same calculations. They have the same assumptions and conditions.

You can approximate a hypothesis test by examining a confidence interval: just ask whether the null hypothesis value is consistent with a confidence interval for the parameter at the corresponding confidence level.

Page 16:

Confidence Intervals and Hypothesis Tests (cont.)

Because confidence intervals are two-sided, they correspond to two-sided tests. In general, a confidence interval with a confidence level of C% corresponds to a two-sided hypothesis test with an α-level of (100 – C)%.

The relationship between confidence intervals and one-sided hypothesis tests is a little more complicated: a confidence interval with a confidence level of C% corresponds to a one-sided hypothesis test with an α-level of ½(100 – C)%.

Page 17:

Example: The baseline seven-year risk of heart attack for diabetics is 20.2%. In 2007 a NEJM study reported a 95% confidence interval equivalent to (20.8%, 40.0%) for the risk among patients taking Avandia. What did this confidence interval suggest to the FDA about the safety of the drug?
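
A sketch of the confidence-interval reasoning, using the numbers from this example (the bounds are as reported; the α correspondence follows the previous slide):

    # Reported 95% CI for the 7-year risk among Avandia patients.
    lo, hi = 0.208, 0.400
    p0 = 0.202  # baseline risk: the null hypothesis value

    inside = lo <= p0 <= hi
    print("null value inside the interval:", inside)  # False: 0.202 < 0.208
    # A 95% CI corresponds to a two-sided test at alpha = 5%, or a
    # one-sided test at alpha = 2.5%. Since the whole interval lies
    # above the baseline, the data are inconsistent with H0: p = 0.202.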

Page 18:

*A 95% Confidence Interval for Small Samples

When the Success/Failure Condition fails, all is not lost: add four phony observations, two successes and two failures. So instead of p̂ = y/n, we use the adjusted proportion

p̃ = (y + 2)/(n + 4)

Page 19:

*A Better Confidence Interval for Proportions (cont.)

Now the adjusted interval is

p̃ ± z* √( p̃(1 − p̃)/(n + 4) )

The adjusted form gives better performance overall and works much better for proportions near 0 or 1.

It has the additional advantage that we no longer need to check the Success/Failure Condition.

Page 20:

Example: Surgeons examined their results to compare two methods for a surgical procedure used to alleviate pain on the outside of the wrist. A new method was compared with the traditional “freehand” procedure. Of 45 operations using the “freehand” method, three were unsuccessful, for a failure rate of 6.7%. With only 3 failures, the data don’t satisfy the Success/Failure Condition, so we can’t use a standard confidence interval. What is the 95% confidence interval using the “plus-four” method?
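
Here is a minimal sketch of the plus-four calculation for these data (3 failures in 45 operations); the function simply implements the adjusted proportion and interval from the two previous slides.

    import math
    from scipy.stats import norm

    def plus_four_interval(y, n, conf=0.95):
        """Plus-four CI for a proportion: add two successes and two failures."""
        p_tilde = (y + 2) / (n + 4)
        se = math.sqrt(p_tilde * (1 - p_tilde) / (n + 4))
        z_star = norm.ppf(1 - (1 - conf) / 2)   # 1.96 for a 95% interval
        return p_tilde - z_star * se, p_tilde + z_star * se

    lo, hi = plus_four_interval(3, 45)  # 3 failed freehand operations out of 45
    print(f"95% CI for the failure rate: ({lo:.3f}, {hi:.3f})")  # about (0.017, 0.187)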

Page 21:

Making Errors

When we perform a hypothesis test, we can make mistakes in two ways:

I. The null hypothesis is true, but we mistakenly reject it. (Type I error)

II. The null hypothesis is false, but we fail to reject it. (Type II error)

Page 22:

Making Errors (cont.)

Which type of error is more serious depends on the situation at hand. In other words, the gravity of the error is context dependent.

Here’s an illustration of the four situations in a hypothesis test:

                         H0 is true          H0 is false
    Reject H0            Type I error        Correct decision
    Fail to reject H0    Correct decision    Type II error

Page 23:

Making Errors (cont.)

How often will a Type I error occur? Since a Type I error is rejecting a true null hypothesis, the probability of a Type I error is our α level.

When H0 is false and we reject it, we have done the right thing. A test’s ability to detect a false hypothesis is called the power of the test.
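
The claim that the Type I error rate equals α can be checked by simulation: generate many samples with H0 actually true and count how often a one-sided test rejects. A minimal sketch with hypothetical values:

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(1)
    p0, n, alpha, trials = 0.50, 200, 0.05, 50_000  # hypothetical setup; H0 is true
    crit = norm.ppf(1 - alpha)                      # one-sided critical z value

    p_hats = rng.binomial(n, p0, size=trials) / n   # sample proportions under H0
    z = (p_hats - p0) / np.sqrt(p0 * (1 - p0) / n)
    print((z > crit).mean())  # close to alpha = 0.05 (up to binomial discreteness)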

Page 24:

Making Errors (cont.)

When H0 is false and we fail to reject it, we have made a Type II error. We assign the letter β to the probability of this mistake.

It’s harder to assess the value of β because we don’t know what the value of the parameter really is.

There is no single value for β; we can think of a whole collection of β’s, one for each incorrect parameter value.

Page 25:

Making Errors (cont.)

One way to focus our attention on a particular β is to think about the effect size. Ask “How big a difference would matter?”

We could reduce β for all alternative parameter values by increasing α. This would reduce β but increase the chance of a Type I error. This tension between Type I and Type II errors is inevitable.

The only way to reduce both types of errors is to collect more data. Otherwise, we just wind up trading off one kind of error against the other.

Page 26:

Example: Back to Avandia. The issue of the NEJM in which that study appeared also included an editorial that said, in part, “A few events either way might have changed the findings for myocardial infarction or for death from cardiovascular causes. In this setting, the possibility that the findings were due to chance cannot be excluded.” What kind of error would the researchers have made if, in fact, their findings were due to chance? What could be the consequences of this error?

Page 27:

Power

The power of a test is the probability that it correctly rejects a false null hypothesis.

When the power is high, we can be confident that we’ve looked hard enough at the situation.

The power of a test is 1 – β.

Page 28:

Power (cont.)

Whenever a study fails to reject its null hypothesis, the test’s power comes into question.

When we calculate power, we imagine that the null hypothesis is false.

The value of the power depends on how far the truth lies from the null hypothesis value. The distance between the null hypothesis value, p0, and the truth, p, is called the effect size.

Power depends directly on effect size.
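
A sketch that makes the dependence explicit: approximate the power of a one-sided, one-proportion test as the effect size (the gap between p0 and the true p) grows. The 20.2% baseline is borrowed from the Avandia discussion; the true-p values and sample size are hypothetical.

    import math
    from scipy.stats import norm

    def power_one_prop(p0, p_true, n, alpha=0.05):
        """Approximate power of a one-sided test of H0: p = p0 vs HA: p > p0."""
        crit = p0 + norm.ppf(1 - alpha) * math.sqrt(p0 * (1 - p0) / n)  # rejection cutoff for p-hat
        z = (crit - p_true) / math.sqrt(p_true * (1 - p_true) / n)
        return 1 - norm.cdf(z)   # P(p-hat lands past the cutoff | p = p_true)

    for p_true in (0.22, 0.25, 0.289):  # growing effect sizes above p0 = 0.202
        print(p_true, round(power_one_prop(0.202, p_true, n=400), 3))

Power climbs quickly as the effect size grows, which matches the slide: larger effects are easier to detect.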

Page 29:

Example: The study of Avandia combined results from 47 different trials, a method called meta-analysis. The drug’s manufacturer issued a statement that pointed out, “Each study is designed differently and looks at unique questions. By combining data from many studies, meta-analyses can achieve a much larger sample size.” How could this larger sample size help?

Page 30:

A Picture Worth 1/P(z > 3.09) Words

The larger the effect size, the easier it should be to see it.

Obtaining a larger sample size decreases the probability of a Type II error, so it increases the power.

It also makes sense that the more we’re willing to accept a Type I error, the less likely we will be to make a Type II error.


Page 31:

A Picture Worth 1/P(z > 3.09) Words (cont.)

This diagram shows the relationship between these concepts:


Page 32:

Reducing Both Type I and Type II Error

The previous figure seems to show that if we reduce Type I error, we must automatically increase Type II error.

But we can reduce both types of error by making both curves narrower.

How do we make the curves narrower? Increase the sample size.
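
A sketch of the same point in numbers: hold α fixed and watch β shrink (and power grow) as the sample size increases. The proportions here are hypothetical.

    import math
    from scipy.stats import norm

    p0, p_true, alpha = 0.30, 0.36, 0.05   # hypothetical null and true values
    for n in (100, 250, 500, 1000):
        crit = p0 + norm.ppf(1 - alpha) * math.sqrt(p0 * (1 - p0) / n)
        beta = norm.cdf((crit - p_true) / math.sqrt(p_true * (1 - p_true) / n))
        print(f"n = {n:4d}   beta = {beta:.3f}   power = {1 - beta:.3f}")

With a larger sample you could also afford to lower α and still keep β small, which is the sense in which more data reduces both kinds of error.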

Page 33:

Reducing Both Type I and Type II Error (cont.)

This figure has means that are just as far apart as in the previous figure, but the sample sizes are larger, the standard deviations are smaller, and the error rates are reduced:

Page 34:

Reducing Both Type I and Type II Error (cont.)

Original comparison of errors:

Comparison of errors with a larger sample size:

Page 35:

What Can Go Wrong?

Don’t interpret the P-value as the probability that H0 is true. The P-value is about the data, not the hypothesis. It’s the probability of observing data this unusual, given that H0 is true, not the other way around.

Don’t believe too strongly in arbitrary alpha levels. It’s better to report your P-value and a confidence interval so that the reader can make her/his own decision.

Page 36:

Example: More Avandia … The drug manufacturer pointed out in their rebuttal, “Data from the earlier clinical trial did show a small increase in reports of myocardial infarction among the Avandia-treated group…however, the number of events is too small to reach a reliable conclusion about the role the medicines may have played in this finding.” Why would this smaller study have been less likely to detect the difference in risk? What are the appropriate statistical concepts for comparing the studies?

Page 37:

What have we learned?

And we’ve learned about the two kinds of errors we might make, and seen why, in the end, we’re never sure we’ve made the right decision.

Type I error: reject a true null hypothesis; the probability of this is α.

Type II error: fail to reject a false null hypothesis; the probability of this is β.

Power is the probability that we reject a null hypothesis when it is false: 1 – β.

A larger sample size increases power and reduces the chances of both kinds of errors.

Page 38:

Example: A bank wondered if it could get more customers to make payments on delinquent balances by sending them a DVD urging them to set up a payment plan. The bank tested this strategy. A 90% confidence interval for the success rate is (0.29, 0.45). Their old send-a-letter method had worked 30% of the time. Can you reject the null hypothesis that the proportion is still 30% at α = 0.05?

Example: Given the confidence interval the bank found in their trial of DVDs, what would you recommend that they do? Should they scrap the DVD strategy?

Example: Explain what a Type I error is in this context and what the consequences would be to the bank.

Example: What is a Type II error in the experiment, and what would its consequences be?

Example: For the bank, which situation has the higher power: a strategy that works really well, actually getting 60% of people to pay off their balances, or a strategy that barely increases the payoff rate to 32%? Explain.

Page 39:

Homework: Pg. 499 1-15 odd

Day 2 – Pg. 499 17 - 33a,b