
Michael A. Kohn, MD, MPP

10/30/2008

Chapter 7 – Prognostic Tests

Chapter 8 – Combining Tests and Multivariable Decision Rules

Outline of Topics

• Prognostic Tests
  – Differences from diagnostic tests
  – Quantifying prediction: calibration and discrimination
  – Comparing predictions
  – Value of prognostic information

• Combining Tests/Diagnostic Models
  – Importance of test non-independence
  – Recursive Partitioning
  – Logistic Regression
  – Variable (Test) Selection
  – Importance of validation separate from derivation

Prognostic Tests

• Differences from diagnostic tests

• Validation/Quantifying Accuracy (calibration and discrimination)

• Comparing predictions by different people or different models

• Assessing the value of prognostic information

Differences from Diagnostic Tests

• Diagnostic tests are for prevalent disease; prognostic tests are for incident outcomes.

• Studies of prognostic tests have a longitudinal rather than cross-sectional time dimension.*(Fix a future time point and determine whether the dichotomous outcome has occurred at that point, e.g., death or recurrence at 5 years.)

• Prognostic test “result” is often a probability of having the outcome by the future time point (e.g. risk of death or recurrence by 5 years).

*But studies of diagnostic tests that use clinical follow-up as a gold standard also are longitudinal.

Problems with estimating risk of outcome by a fixed future time point

• Equates all outcomes prior to the time point and all outcomes after the time point. (Death at 1 month is the same as death at 4 years and 11 months; 5-year-1-month survival is the same as > 10-year survival).

• Cannot analyze subjects lost to follow-up prior to the time point.

Time-to-event analysis (proportional hazards) often important/necessary, but it’s covered elsewhere in your curriculum.

Predicting Continuous Outcomes

• Time to death/recurrence

• Birth weight

• Weight loss/gain

Predicting Continuous Outcomes

Glare, P., K. Virik, et al. (2003). "A systematic review of physicians' survival predictions in terminally ill cancer patients." BMJ 327(7408): 195-8.

Predicting Continuous Outcomes

Can calculate Outcome_actual - Outcome_predicted for each individual.*

Summarize with mean and SD of individual differences.

Plot individual differences vs. actual outcome. Looks like a Bland-Altman plot.

(And that’s all I’m going to say about predicting continuous outcomes.)

*This does not make sense for dichotomous outcomes.
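As a minimal sketch (not from the original slides; the numbers are invented for illustration), the summary statistics and Bland-Altman-style plot described above in Python:

import numpy as np
import matplotlib.pyplot as plt

# Hypothetical actual and predicted survival times (days) for 5 patients
actual = np.array([30.0, 90.0, 12.0, 200.0, 45.0])
predicted = np.array([60.0, 75.0, 40.0, 150.0, 90.0])

diff = actual - predicted                      # Outcome_actual - Outcome_predicted
print("Mean of differences:", diff.mean())    # systematic over/under-prediction
print("SD of differences:", diff.std(ddof=1)) # spread of individual errors

# Individual differences vs. actual outcome (looks like a Bland-Altman plot)
plt.scatter(actual, diff)
plt.axhline(diff.mean(), linestyle="--")
plt.xlabel("Actual outcome")
plt.ylabel("Actual - Predicted")
plt.show()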

Prognostic Tests and Multivariable Diagnostic Models

Commonly express results in terms of a probability

-- risk of the outcome by a fixed time point (prognostic test)

-- posterior probability of disease (diagnostic model)

Need to assess both calibration and discrimination.

Example*

Oncologists estimated the probability of “cure” (5-year disease-free survival) in each of 96 cancer patients.

After 5 years, 70 (of the 96) died or had recurrence, and 26 (27%) were “cured.”

*Mackillop, W. J. and C. F. Quirt (1997). "Measuring the accuracy of prognostic judgments in oncology." J Clin Epidemiol 50(1): 21-9.

Patient ID   Oncologist's Predicted Probability of Cure   5-Year Disease-Free Survival (1 = yes)
 1    1%   0
 2   25%   0
 3   50%   0
 4   90%   1
 5   60%   0
 6   40%   0
 7   45%   0
 8   35%   0
 9   35%   1
10   10%   0
11   75%   1
12   55%   0

How do you assess the validity of the predictions?

• How accurate are the predicted probabilities?
  – Break the population into groups
  – Compare actual and predicted probabilities for each group

Calibration*

*Related to Goodness-of-Fit and diagnostic model validation, which will be discussed shortly.

Calibration

Mackillop, W. J. and C. F. Quirt (1997). "Measuring the accuracy of prognostic judgments in oncology." J Clin Epidemiol 50(1): 21-9.
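As a sketch of how such a calibration check can be computed (using the 12 predictions from the table above; the three-group split is arbitrary, not from the study):

import numpy as np

pred = np.array([0.01, 0.25, 0.50, 0.90, 0.60, 0.40,
                 0.45, 0.35, 0.35, 0.10, 0.75, 0.55])
outcome = np.array([0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0])

# Three groups of 4 patients by ascending predicted probability;
# compare mean predicted probability to observed proportion cured.
order = np.argsort(pred)
for group in np.array_split(order, 3):
    print(f"predicted {pred[group].mean():.2f}  observed {outcome[group].mean():.2f}")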

Discrimination

• How well can the test separate subjects in the population from the mean probability to values closer to zero or 1?

• May be more generalizable

• Often measured with the C-statistic (AUROC)

Discrimination

Mackillop, W. J. and C. F. Quirt (1997). "Measuring the accuracy of prognostic judgments in oncology." J Clin Epidemiol 50(1): 21-9.

Discrimination

Mackillop, W. J. and C. F. Quirt (1997). "Measuring the accuracy of prognostic judgments in oncology." J Clin Epidemiol 50(1): 21-9.
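A minimal sketch of the C-statistic computed directly as a concordance probability: the chance that a randomly chosen D+ subject received a higher predicted probability than a randomly chosen D- subject, with ties counting one-half. It reuses the 12 patients tabulated earlier:

import numpy as np

pred = np.array([0.01, 0.25, 0.50, 0.90, 0.60, 0.40,
                 0.45, 0.35, 0.35, 0.10, 0.75, 0.55])
outcome = np.array([0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0])

pos = pred[outcome == 1]   # predictions for D+ subjects
neg = pred[outcome == 0]   # predictions for D- subjects
pairs = [(1.0 if p > n else 0.5 if p == n else 0.0) for p in pos for n in neg]
print("C-statistic:", sum(pairs) / len(pairs))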

Calibration vs. Discrimination

• Perfect calibration, no discrimination:
  – Oncologist assigned a 27% probability of cure to each of the 96 patients.

• Perfect discrimination, poor calibration:
  – Mean* of oncologist-assigned "cure" probabilities was 50%, but every patient who died or had a recurrence was assigned a cure probability ≤ 40% and every patient who survived was assigned a probability ≥ 60%.

* ∑pᵢnᵢ / N (It was actually 30% in the study.)

Calibration

[Calibration plot: oncologist-predicted probability of cure (x-axis, 0-100%) vs. observed proportion cured (y-axis, 0-100%)]

Discrimination

[Discrimination plot: ROC curve, 1 - specificity (x-axis, 0-1) vs. sensitivity (y-axis, 0-1)]

Comparing Predictions

• Compare ROC curves and AUROCs

• Reclassification tables*, Net Reclassification Improvement (NRI), Integrated Discrimination Improvement (IDI) (sketched below)

• See the January 2008 issue of Statistics in Medicine** (and EBD Edition 2?)

*Problem 8-1 has a reclassification table.

**Pencina et al. Stat Med. 2008 Jan 30;27(2):157-72.
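A hedged sketch of the NRI from Pencina et al.: among subjects with the event, upward reclassification by the new model counts in favor; among subjects without the event, downward reclassification counts in favor. The risk categories (0/1/2 for low/medium/high) and outcomes below are invented for illustration:

import numpy as np

def nri(old_cat, new_cat, outcome):
    old_cat, new_cat, outcome = map(np.asarray, (old_cat, new_cat, outcome))
    up, down = new_cat > old_cat, new_cat < old_cat
    ev, ne = outcome == 1, outcome == 0
    # P(up|event) - P(down|event) + P(down|nonevent) - P(up|nonevent)
    return ((up[ev].mean() - down[ev].mean()) +
            (down[ne].mean() - up[ne].mean()))

print(nri([0, 1, 1, 2, 0, 1], [1, 1, 0, 2, 0, 2], [1, 0, 0, 1, 0, 1]))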

Value of Prognostic Information

Why do you want to know prognosis?

-- ALS, slow vs rapid progression

-- GBM, expected survival

-- Na-MELD Score vs. Na-MELD + Ascites

Value of Prognostic Information

• To inform treatment or other clinical decisions

• To inform (prepare) patients and their families

• To stratify by disease severity in clinical trials

Value of Prognostic Information

Altman, D. G. and P. Royston (2000). "What do we mean by validating a prognostic model?" Stat Med 19(4): 453-73.

• Doctors and patients like prognostic information

• But it is hard to assess its value

• The most objective approach is decision-analytic. Consider:
  – What decision is to be made?
  – What are the costs of errors?
  – What is the cost of the test?

Common Problems with Studies of Prognostic Tests

See Chapter 7

Combining Tests/Diagnostic Models

– Importance of test non-independence
– Recursive Partitioning
– Logistic Regression
– Variable (Test) Selection
– Importance of validation separate from derivation (calibration and discrimination revisited)

Combining Tests: Example

Prenatal sonographic Nuchal Translucency (NT) and Nasal Bone Exam as dichotomous tests for Trisomy 21*

*Cicero, S., G. Rembouskos, et al. (2004). "Likelihood ratio for trisomy 21 in fetuses with absent nasal bone at the 11-14-week scan." Ultrasound Obstet Gynecol 23(3): 218-23.

If NT ≥ 3.5 mm Positive for Trisomy 21*

*What’s wrong with this definition?

[ROC curve for NT as a dichotomized test for Trisomy 21, with (1 - specificity, sensitivity) at each cutoff:
> 95th percentile: 37.9%, 88.6%
> 3.5 mm: 9.2%, 63.7%
> 4.5 mm: 3.5%, 43.5%
> 5.5 mm: 1.9%, 31.2%]

• In general, don’t make multi-level tests like NT into dichotomous tests by choosing a fixed cutoff

• I did it here to make the discussion of multiple tests easier

• I arbitrarily chose to call ≥ 3.5 mm positive

One Dichotomous Test

Nuchal Translucency   Trisomy 21 D+   D-     LR
≥ 3.5 mm              212             478    7.0
< 3.5 mm              121             4745   0.4
Total                 333             5223

Do you see that this is (212/333)/(478/5223)?

Review of Chapter 3: What are the sensitivity, specificity, PPV, and NPV of this test? (Be careful.)

Nuchal Translucency

• Sensitivity = 212/333 = 64%

• Specificity = 4745/5223 = 91%

• Prevalence = 333/(333+5223) = 6%

(Study population: pregnant women about to undergo CVS, so high prevalence of Trisomy 21)

• PPV = 212/(212 + 478) = 31%

• NPV = 4745/(121 + 4745) = 97.5%*

*Not that great; prior to the test, P(D-) = 94%.

Clinical Scenario – One Test

Pre-Test Probability of Down's = 6%

NT Positive

Pre-test prob: 0.06

Pre-test odds: 0.06/0.94 = 0.064

LR(+) = 7.0

Post-Test Odds = Pre-Test Odds x LR(+)

= 0.064 x 7.0 = 0.44

Post-Test prob = 0.44/(0.44 + 1) = 0.31
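The same arithmetic as a small Python helper (probability to odds, multiply by the LR, odds back to probability):

def post_test_prob(pre_test_prob, lr):
    pre_odds = pre_test_prob / (1 - pre_test_prob)
    post_odds = pre_odds * lr
    return post_odds / (1 + post_odds)

print(post_test_prob(0.06, 7.0))   # ~0.31, matching the scenario above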

NT Positive

• Pre-test Prob = 0.06

• P(Result|Trisomy 21) = 0.64

• P(Result|No Trisomy 21) = 0.09

• Post-Test Prob = ?

http://www.quesgen.com/Calculators/PostProdOfDisease/PostProdOfDisease.html

Slide Rule

Nasal Bone Seen (NBA = "No") → Negative for Trisomy 21

Nasal Bone Absent (NBA = "Yes") → Positive for Trisomy 21

Second Dichotomous Test

Nasal Bone Absent   Tri21+   Tri21-   LR
Yes                 229      129      27.8
No                  104      5094     0.32
Total               333      5223

Do you see that this is (229/333)/(129/5223)?

Clinical Scenario – Two Tests
Using Probabilities

Pre-Test Probability of Trisomy 21 = 6%
NT Positive for Trisomy 21 (≥ 3.5 mm)
Post-NT Probability of Trisomy 21 = 31%
NBA Positive (no bone seen)
Post-NBA Probability of Trisomy 21 = ?

Clinical Scenario – Two Tests
Using Odds

Pre-Test Odds of Tri21 = 0.064
NT Positive (LR = 7.0)
Post-Test Odds of Tri21 = 0.44
NBA Positive (LR = 27.8?)
Post-Test Odds of Tri21 = 0.44 × 27.8? = 12.4? (P = 12.4/(1 + 12.4) = 92.5%?)

Clinical Scenario – Two Tests

Pre-Test Probability of Trisomy 21 = 6%
NT ≥ 3.5 mm AND Nasal Bone Absent

Question

Can we use the post-test odds after a positive Nuchal Translucency as the pre-test odds for the positive Nasal Bone Examination? That is, can we combine the positive results by multiplying their LRs?

LR(NT+, NBE+) = LR(NT+) × LR(NBE+)? = 7.0 × 27.8? = 194?

Answer = No

NT    NBE   Trisomy 21+   %      Trisomy 21-   %       LR
Pos   Pos   158           47%    36            0.7%    69
Pos   Neg   54            16%    442           8.5%    1.9
Neg   Pos   71            21%    93            1.8%    12
Neg   Neg   50            15%    4652          89%     0.2
Total       333           100%   5223          100%

Not 194!

158/(158 + 36) = 81%, not 92.5%
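The correct joint LR can be computed directly from the counts in the table; a minimal sketch:

def lr(d_pos, d_neg, n_d_pos=333, n_d_neg=5223):
    # LR = P(result | D+) / P(result | D-)
    return (d_pos / n_d_pos) / (d_neg / n_d_neg)

lr_both_pos = lr(158, 36)
print("LR(NT+, NBE+):", round(lr_both_pos, 1))   # ~68.8, not 194

pre_odds = 0.06 / 0.94
post_odds = pre_odds * lr_both_pos
print("Post-test prob:", round(post_odds / (1 + post_odds), 2))   # 0.81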

Non-Independence

Absence of the nasal bone does not tell you as much if you already know that the nuchal translucency is ≥ 3.5 mm.

Clinical Scenario
Using Odds

Pre-Test Odds of Tri21 = 0.064
NT+/NBE+ (LR = 68.8)
Post-Test Odds = 0.064 × 68.8 = 4.40 (P = 4.40/(1 + 4.40) = 81%, not 92.5%)

Non-Independence

Non-Independence of NT and NBA

Apparently, even in chromosomally normal fetuses, enlarged NT and absence of the nasal bone are associated. A false positive on the NT makes a false positive on the NBE more likely. Of normal (D-) fetuses with NT < 3.5 mm only 2.0% had nasal bone absent. Of normal (D-) fetuses with NT ≥ 3.5 mm, 7.5% had nasal bone absent.

Some (but not all) of this may have to do with ethnicity. In this London study, chromosomally normal fetuses of “Afro-Caribbean” ethnicity had both larger NTs and more frequent absence of the nasal bone.

In Trisomy 21 (D+) fetuses, normal NT was associated with the presence of the nasal bone, so a false negative on the NT was associated with a false negative on the NBE.

Non-Independence

Instead of looking for the nasal bone, what if the second test were just a repeat measurement of the nuchal translucency?

A second positive NT would do little to increase your certainty of Trisomy 21. If it was false positive the first time around, it is likely to be false positive the second time.

Reasons for Non-Independence

Tests measure the same aspect of disease.

One aspect of Down’s syndrome is slower fetal development; the NT decreases more slowly and the nasal bone ossifies later. Chromosomally NORMAL fetuses that develop slowly will tend to have false positives on BOTH the NT Exam and the Nasal Bone Exam.

Reasons for Non-Independence

Tests measure the same aspect of disease.

Consider exercise ECG (EECG) and radionuclide scan as tests for coronary artery disease (CAD) with the gold standard being anatomic narrowing of the arteries on angiogram. Both EECG and nuclide scan measure functional narrowing. In a patient without anatomic narrowing (a D- patient), coronary artery spasm could cause false positives on both tests.

Reasons for Non-Independence

Spectrum of disease severity.

In the EECG/nuclide scan example, CAD is defined as ≥70% stenosis on angiogram. A D+ patient with 71% stenosis is much more likely to have a false negative on both the EECG and the nuclide scan than a D+ patient with 99% stenosis.

Reasons for Non-Independence

Spectrum of non-disease severity.

In this example, CAD is defined as ≥70% stenosis on angiogram. A D- patient with 69% stenosis is much more likely to have a false positive on both the EECG and the nuclide scan than a D- patient with 33% stenosis.

Unless tests are independent, we can't combine results by multiplying LRs.

Ways to Combine Multiple Tests

On a group of patients (derivation set), perform the multiple tests and (independently*) determine true disease status (apply the gold standard). Then:

• Measure LR for each possible combination of results

• Recursive Partitioning

• Logistic Regression

*Beware of incorporation bias

Determine LR for Each Result Combination

NT    NBA   Tri21+   %      Tri21-   %       LR    Post-Test Prob*
Pos   Pos   158      47%    36       0.7%    69    81%
Pos   Neg   54       16%    442      8.5%    1.9   11%
Neg   Pos   71       21%    93       1.8%    12    43%
Neg   Neg   50       15%    4652     89.1%   0.2   1%
Total       333      100%   5223     100%

*Assumes pre-test prob = 6%

Sort by LR (Descending)

NT    NBA   Tri21+   %      Tri21-   %        LR
Pos   Pos   158      47%    36       0.70%    69
Neg   Pos   71       21%    93       1.80%    12
Pos   Neg   54       16%    442      8.50%    1.9
Neg   Neg   50       15%    4652     89.10%   0.2

Apply Chapter 4 – Multilevel Tests

• Now you have a multilevel test (In this case, 4 levels.)

• Have LR for each test result

• Can create ROC curve and calculate AUROC

• Given pre-test probability and treatment threshold probability (C/(B+C)), can find optimal cutoff.

Create ROC Table

NT    NBE   Tri21+   Cum. Sens   Tri21-   Cum. 1 - Spec   LR    Cum. AUROC
–     –     –        0%          –        0%              –     0
Pos   Pos   158      47%         36       0.70%           69    0.002
Neg   Pos   71       68%         93       3%              12    0.012
Pos   Neg   54       84%         442      11%             1.9   0.077
Neg   Neg   50       100%        4652     100%            0.2   0.896

[ROC curve for the four NT/NBE result combinations: 1 - specificity (x-axis) vs. sensitivity (y-axis)]

AUROC = 0.896
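A sketch of the AUROC calculation by the trapezoidal rule over the cumulative ROC points, built from the exact counts (it gives roughly 0.90; the slide reports 0.896, the small difference presumably rounding):

import numpy as np

tri_pos = np.array([158, 71, 54, 50])    # D+ counts, sorted by descending LR
tri_neg = np.array([36, 93, 442, 4652])  # D- counts in the same order
tpr = np.concatenate([[0.0], np.cumsum(tri_pos) / tri_pos.sum()])
fpr = np.concatenate([[0.0], np.cumsum(tri_neg) / tri_neg.sum()])
print("AUROC:", np.trapz(tpr, fpr))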

Optimal Cutoff

NT    NBE   LR    Post-Test Prob
Pos   Pos   69    0.81
Neg   Pos   12    0.43
Pos   Neg   1.9   0.11
Neg   Neg   0.2   0.01

Assume

• Pre-test probability = 6%

• Threshold for CVS is 2%
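A sketch of the cutoff logic under these assumptions: recommend CVS for any result combination whose post-test probability exceeds the 2% treatment threshold.

def post_test_prob(pre, lr):
    odds = pre / (1 - pre) * lr
    return odds / (1 + odds)

combos = {("Pos", "Pos"): 69, ("Neg", "Pos"): 12,
          ("Pos", "Neg"): 1.9, ("Neg", "Neg"): 0.2}
for (nt, nbe), lr in combos.items():
    p = post_test_prob(0.06, lr)
    print(f"NT {nt}, NBE {nbe}: P = {p:.2f} ->", "CVS" if p > 0.02 else "no CVS")

Only the Neg/Neg combination (1%) falls below the threshold; every other combination leads to CVS.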

Determine LR for Each Result Combination

2 dichotomous tests: 4 combinations

3 dichotomous tests: 8 combinations

4 dichotomous tests: 16 combinations

Etc.

2 3-level tests: 9 combinations

3 3-level tests: 27 combinations

Etc.

Determine LR for Each Result Combination

How do you handle continuous tests?

Determining an LR for every result combination is not always practical for groups of tests.

Recursive Partitioning: Measure NT First

Suspected Trisomy 21 (P = 6%)
- NT < 3.5 mm → P = 2.5%
  - Nasal bone present → P = 1%
  - Nasal bone absent → P = 43%
- NT ≥ 3.5 mm → P = 31%
  - Nasal bone present → P = 11%
  - Nasal bone absent → P = 81%

Recursive Partitioning: Examine Nasal Bone First

Suspected Trisomy 21 (P = 6%)
- Nasal bone present → P = 2%
  - NT < 3.5 mm → P = 1%
  - NT ≥ 3.5 mm → P = 11%
- Nasal bone absent → P = 64%
  - NT < 3.5 mm → P = 43%
  - NT ≥ 3.5 mm → P = 81%

Do Nasal Bone Exam First

• Better separates Trisomy 21 from chromosomally normal fetuses

• If your threshold for CVS is between 11% and 43%, you can stop after the nasal bone exam

• If your threshold is between 1% and 11%, you should do the NT exam only if the NBE is normal.

Recursive Partitioning: Examine Nasal Bone First
CVS if P(Trisomy 21) > 5%

Suspected Trisomy 21 (P = 6%)
- Nasal bone present → P = 2%
  - NT < 3.5 mm → P = 1% → No CVS
  - NT ≥ 3.5 mm → P = 11% → CVS
- Nasal bone absent → P = 64% → No NT needed; CVS

Recursive Partitioning: Examine Nasal Bone First (Pruned Tree)
CVS if P(Trisomy 21) > 5%

Suspected Trisomy 21 (P = 6%)
- Nasal bone present → P = 2%
  - NT < 3.5 mm → P = 1% → No CVS
  - NT ≥ 3.5 mm → P = 11% → CVS
- Nasal bone absent → P = 64% → CVS

Recursive Partitioning

• Same as Classification and Regression Trees (CART)

• Don’t have to work out probabilities (or LRs) for all possible combinations of tests, because of “tree pruning”
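A sketch of recursive partitioning with an off-the-shelf CART implementation (assumes scikit-learn is available; the lecture does not prescribe any particular software). Patient-level data are reconstructed from the NT/NBA counts given earlier:

import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rows = [  # (NT >= 3.5 mm, nasal bone absent, Trisomy 21, count)
    (1, 1, 1, 158), (1, 0, 1, 54), (0, 1, 1, 71), (0, 0, 1, 50),
    (1, 1, 0, 36), (1, 0, 0, 442), (0, 1, 0, 93), (0, 0, 0, 4652),
]
X = np.repeat([[nt, nba] for nt, nba, _, _ in rows],
              [n for _, _, _, n in rows], axis=0)
y = np.repeat([d for _, _, d, _ in rows], [n for _, _, _, n in rows])

# min_samples_leaf acts as a simple pruning control
tree = DecisionTreeClassifier(min_samples_leaf=20).fit(X, y)
print(export_text(tree, feature_names=["NT>=3.5mm", "NBA"]))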

Tree Pruning: Goldman Rule*

8 "Tests" for Acute MI in ER Chest Pain Patients:
1. ST elevation on ECG
2. CP < 48 hours
3. ST-T changes on ECG
4. Hx of MI
5. Radiation of pain to neck/LUE
6. Longest pain > 1 hour
7. Age > 40 years
8. CP not reproduced by palpation

*Goldman L, Cook EF, Brand DA, et al. A computer protocol to predict myocardial infarction in emergency department patients with chest pain. N Engl J Med. 1988;318(13):797-803.

[Goldman decision tree (figure): branching on ST elevation, CP < 48 hrs, Hx of ACI, ST-T changes, and longest pain > 1 hr; terminal probabilities of acute MI range from 0% to 80%]

8 tests → 2⁸ = 256 possible result combinations

Recursive Partitioning

• Does not deal well with continuous test results*

*when there is a monotonic relationship between the test result and the probability of disease

Logistic Regression

Ln(Odds(D+)) = a + b_NT × NT + b_NBA × NBA + b_interact × (NT)(NBA)

where a positive result ("+") is coded as 1 and a negative result ("-") as 0.

More on this later in ATCR!
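A sketch of fitting this interaction model (assumes statsmodels is available; the lecture does not prescribe any particular software), again reconstructing patient-level data from the NT/NBA counts:

import pandas as pd
import statsmodels.formula.api as smf

rows = [(1, 1, 1, 158), (1, 0, 1, 54), (0, 1, 1, 71), (0, 0, 1, 50),
        (1, 1, 0, 36), (1, 0, 0, 442), (0, 1, 0, 93), (0, 0, 0, 4652)]
df = pd.DataFrame(
    [(nt, nba, d) for nt, nba, d, n in rows for _ in range(n)],
    columns=["NT", "NBA", "tri21"])

# tri21 ~ NT + NBA + NT:NBA corresponds to a + b_NT, b_NBA, b_interact
model = smf.logit("tri21 ~ NT + NBA + NT:NBA", data=df).fit()
print(model.params)   # coefficients on the log-odds scale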

Why does logistic regression model log-odds instead of probability?

Related to why the LR Slide Rule’s log-odds scale helps us visualize combining test results.

Probability of Trisomy 21 vs. Maternal Age

Ln(Odds) of Trisomy 21 vs. Maternal Age

Combining 2 Continuous Tests

> 1% Probability of Trisomy 21

< 1% Probability of Trisomy 21

Logistic Regression Approach to the “R/O ACI patient”

*Selker HP, Griffith JL, D'Agostino RB. A tool for judging coronary care unit admission appropriateness, valid for both real-time and retrospective use. A time-insensitive predictive instrument (TIPI) for acute cardiac ischemia: a multicenter study. Med Care. Jul 1991;29(7):610-627. For corrected coefficients, see http://medg.lcs.mit.edu/cardiac/cpain.htm

Variable                  Coefficient   MV Odds Ratio
Constant                  -3.93         –
Presence of chest pain    1.23          3.42
Pain major symptom        0.88          2.41
Male sex                  0.71          2.03
Age 40 or less            -1.44         0.24
Age > 50                  0.67          1.95
Male over 50 years**      -0.43         0.65
ST elevation              1.314         3.72
New Q waves               0.62          1.86
ST depression             0.99          2.69
T waves elevated          1.095         2.99
T waves inverted          1.13          3.10
T wave + ST changes**     -0.314        0.73

Clinical Scenario*

71 y/o man with 2.5 hours of CP, substernal, non-radiating, described as “bloating.” Cannot say if same as prior MI or worse than prior angina.

Hx of CAD, s/p CABG 10 yrs prior, stenting 3 years and 1 year ago. DM on Avandia.

ECG: RBBB, Qs inferiorly. No ischemic ST-T changes.

*Real patient seen by MAK 1 am 10/12/04

Variable                  Coefficient   Clinical Scenario   Contribution
Constant                  -3.93         –                   -3.93
Presence of chest pain    1.23          1                   1.23
Pain major symptom        0.88          1                   0.88
Male sex                  0.71          1                   0.71
Age 40 or less            -1.44         0                   0
Age > 50                  0.67          1                   0.67
Male over 50 years        -0.43         1                   -0.43
ST elevation              1.314         0                   0
New Q waves               0.62          0                   0
ST depression             0.99          0                   0
T waves elevated          1.095         0                   0
T waves inverted          1.13          0                   0
T wave + ST changes       -0.314        0                   0

Sum (log odds)            -0.87
Odds of ACI               0.419
Probability of ACI        30%
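The same arithmetic in a few lines of Python:

import math

logit = (-3.93      # constant
         + 1.23     # presence of chest pain
         + 0.88     # pain is major symptom
         + 0.71     # male sex
         + 0.67     # age > 50
         - 0.43)    # male over 50 years
odds = math.exp(logit)      # ~0.419
prob = odds / (1 + odds)    # ~0.30
print(f"log-odds = {logit:.2f}, odds = {odds:.3f}, P(ACI) = {prob:.0%}")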

Choosing Which Tests to Include in the Decision Rule

Have focused on how to combine results of two or more tests, not on which of several tests to include in a decision rule.

Variable Selection Options include:

• Recursive partitioning

• Automated stepwise logistic regression

Choice of variables in derivation data set requires confirmation in a separate validation data set.

Variable Selection

• Especially susceptible to overfitting

Need for Validation: Example*

Study of clinical predictors of bacterial diarrhea. Evaluated 34 historical items and 16 physical examination questions. Three questions (abrupt onset, > 4 stools/day, and absence of vomiting) best predicted a positive stool culture (sensitivity 86%; specificity 60% for all 3).

Would these 3 be the best predictors in a new dataset? Would they have the same sensitivity and specificity?

*DeWitt TG, Humphrey KF, McCarthy P. Clinical predictors of acute bacterial diarrhea in young children. Pediatrics. Oct 1985;76(4):551-556.

Need for Validation

Develop a prediction rule by choosing a few tests and findings from a large number of possibilities, and the rule will take advantage of chance variations* in the data. Its predictive ability will probably disappear when you try to validate it on a new dataset. This is often referred to as "overfitting."

*e.g., low serum calcium in 12 children with hemolytic uremic syndrome and bad outcomes

VALIDATION

No matter what technique (CART or logistic regression) is used, the tests included in a model and the way in which their results are combined must be tested on a data set different from the one used to derive the rule.

Beware of studies that use a “validation set” to tweak the model. This is really just a second derivation step.

Validation Dataset

Measure all the variables needed for the model.

Determine disease status (D+ or D-) on all subjects.

VALIDATION: Calibration

– Divide the dataset into probability groups (deciles, quintiles, …) based on the model (no tweaking allowed).

– In each group, compare the actual D+ proportion to the model-predicted probability.

VALIDATION: Discrimination

– The test result is the model-predicted probability of disease.

– Use the "Walking Man" to draw the ROC curve and calculate the AUROC.

Outline of Topics

• Prognostic Tests
  – Differences from diagnostic tests
  – Quantifying prediction: calibration and discrimination
  – Comparing predictions
  – Value of prognostic information

• Combining Tests/Diagnostic Models
  – Importance of test non-independence
  – Recursive Partitioning
  – Logistic Regression
  – Variable (Test) Selection
  – Importance of validation separate from derivation
