Outline: INTRODUCTION · DYNAMIC PREDICTION ACCURACY · LARGE SAMPLE RESULTS · APPLICATION · PERSPECTIVES · CONCLUSION

QUANTIFYING AND COMPARING DYNAMIC PREDICTIVE ACCURACY OF JOINT MODELS
for a longitudinal marker and time-to-event with competing risks

P. Blanche, C. Proust-Lima, L. Loubère, H. Jacqmin-Gadda
OBJECTIVE

- Question: how to evaluate and compare the dynamic predictive accuracy of joint models?
- Data: cohorts of elderly people, Paquid (training, n = 2970) and 3-City (validation, n = 3880)
- Dynamic prediction of dementia
  - using repeated measurements of cognitive tests
- Statistical goal: making inference with dynamic accuracy measures
  - estimating dynamic predictive accuracy curves
  - testing whether or not two curves of predictive accuracy differ
COMPETING RISKS: MOTIVATING EXAMPLE

[Diagram: Health → Dementia (η = 1); Health → Death dementia-free (η = 2)]

Notations:
- T: time-to-event
- η: type of event
COMPETING RISKS IN CANCER

[Diagram: Health → Death from cancer (η = 1); Health → Death from another cause (η = 2)]

Notations:
- T: time-to-event
- η: type of event
DYNAMIC PREDICTION

Landmark time "s" at which predictions are made varies; horizon "t" is fixed.

[Figure: an individual's cognitive score (MMSE, scale 20 to 30) over follow-up time. Predictions are made at the landmark time s = 4 years, for the horizon s + t = 9 years; here the predicted event-free probability is 1 − π_i(s, t) = 67%.]
NOTATIONS FOR POPULATION PARAMETERS

- Event time and event type: (T_i, η_i)
- Indicator of disease occurrence in (s, s + t]:

  D_i(s, t) = 1{s < T_i ≤ s + t, η_i = 1}

- Dynamic predictions:

  π_i(s, t) = P_ξ(D_i(s, t) = 1 | T_i > s, Y_i(s), X_i)
            = P_ξ(s < T_i ≤ s + t, η_i = 1 | T_i > s, Y_i(s), X_i)

- Y_i(s): set of marker measurements collected before time s
- X_i: baseline covariates
- ξ: estimated model parameters (from independent training data)
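The indicator D_i(s, t) is straightforward to compute when event times and event types are fully observed. A minimal sketch (purely illustrative, not part of the slides):

```python
# Illustrative only: the cause-1 event indicator D_i(s, t) from
# fully observed event times T and event types eta.
def event_indicator(T, eta, s, t):
    """D_i(s, t) = 1 if s < T_i <= s + t and eta_i == 1, else 0."""
    return [1 if (s < Ti <= s + t and ei == 1) else 0
            for Ti, ei in zip(T, eta)]

# Event times and types (1 = dementia, 2 = death dementia-free), s = 4, t = 5:
print(event_indicator([3.0, 6.5, 8.0, 12.0], [1, 1, 2, 1], s=4, t=5))
# -> [0, 1, 0, 0]: only the subject failing from cause 1 in (4, 9] has D = 1
```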
PREDICTIVE ACCURACY: DISCRIMINATION

D_i(s, t) = 1{s < T_i ≤ s + t, η_i = 1}

- Does a higher predicted risk really mean more likely to experience the event?
- How often do we have π_i(s, t) > π_j(s, t) when D_i(s, t) = 1 and D_j(s, t) = 0?

[Figure: timeline from landmark time s to time s + t contrasting a subject with η_i = 1 (case) and a subject with η_i ≠ 1 (control).]
DEFINITIONS OF ACCURACY: AUC(s, t)

D_i(s, t) = 1{s < T_i ≤ s + t, η_i = 1}

AUC (area under the ROC curve):

AUC(s, t) = P(π_i(s, t) > π_j(s, t) | D_i(s, t) = 1, D_j(s, t) = 0, T_i > s, T_j > s)

with i and j two independent subjects.

- the higher the better
- discrimination measure
- does NOT depend on the incidence in (s, s + t]
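With fully observed (uncensored) outcomes, AUC(s, t) can be estimated by the proportion of correctly ranked (case, control) pairs among subjects still at risk at s. A minimal sketch of that empirical version (the estimators studied in the talk additionally account for censoring):

```python
# Illustrative only: empirical AUC(s, t) ignoring censoring.
def dynamic_auc(pi, T, eta, s, t):
    """Proportion of (case, control) pairs correctly ranked by the
    predicted risks pi, among subjects at risk at s; ties count 1/2."""
    n = len(T)
    # cases: D_i(s, t) = 1; controls: at risk at s with D_j(s, t) = 0
    cases = [k for k in range(n) if s < T[k] <= s + t and eta[k] == 1]
    controls = [k for k in range(n) if T[k] > s
                and not (T[k] <= s + t and eta[k] == 1)]
    pairs = [(i, j) for i in cases for j in controls]
    if not pairs:
        return float("nan")
    concordant = sum(1.0 if pi[i] > pi[j] else 0.5 if pi[i] == pi[j] else 0.0
                     for i, j in pairs)
    return concordant / len(pairs)
```

For example, with one case (dementia at T = 5) ranked above three controls, `dynamic_auc([0.9, 0.2, 0.6, 0.1], [5, 12, 7, 10], [1, 1, 2, 1], 4, 5)` returns 1.0.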
PREDICTIVE ACCURACY: PREDICTION ERROR

D_i(s, t) = 1{s < T_i ≤ s + t, η_i = 1}

- How close are the predicted risks π_i(s, t) to the "true underlying" risk of event given the available information?
- Is it true that

  π_i(s, t) ≈ E[D_i(s, t) | T_i > s, Y_i(s), X_i]
            ≈ P(s < T_i ≤ s + t, η_i = 1 | T_i > s, Y_i(s), X_i) ?
DEFINITIONS OF ACCURACY: BS(s, t)

D_i(s, t) = 1{s < T_i ≤ s + t, η_i = 1}

Expected Brier score:

BS(s, t) = E[{D(s, t) − π(s, t)}² | T > s]

- the lower the better
- BS ≈ Bias² + Variance
- measures both calibration and discrimination
- depends on the incidence in (s, s + t]
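Again with fully observed outcomes, BS(s, t) can be estimated by the mean squared difference between the observed status and the predicted risk among subjects at risk at s. An illustrative sketch (censoring, handled by the talk's estimators, is ignored here):

```python
# Illustrative only: empirical Brier score BS(s, t) ignoring censoring.
def dynamic_brier(pi, T, eta, s, t):
    """Mean of (D_i(s, t) - pi_i(s, t))^2 over subjects with T_i > s."""
    at_risk = [k for k in range(len(T)) if T[k] > s]
    if not at_risk:
        return float("nan")
    sq_errors = [((1 if (T[k] <= s + t and eta[k] == 1) else 0) - pi[k]) ** 2
                 for k in at_risk]
    return sum(sq_errors) / len(sq_errors)
```

For instance, `dynamic_brier([0.9, 0.2], [5, 12], [1, 1], 4, 5)` averages (1 − 0.9)² and (0 − 0.2)², giving 0.025.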
RIGHT CENSORING ISSUE

[Figure: timeline from landmark time s to time s + t showing uncensored and censored subjects.]

For a subject i censored within [s, s + t) the status

D_i(s, t) = 1{s < T_i ≤ s + t, η_i = 1}

is unknown.
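A standard way around this problem is inverse probability of censoring weighting (IPCW): subjects censored in (s, s + t] get weight zero, and the others are reweighted by an estimate of the censoring survival function. The sketch below is only the idea, not the deck's exact estimator; the helper `G` (an estimate of P(censoring time > u), e.g. Kaplan-Meier) is an assumed input, and the normalization shown is one of several possible choices.

```python
# Sketch of an IPCW-type Brier score under right censoring.
# Ttilde = observed time (min of event and censoring time),
# delta = 1 if the event was observed, 0 if censored,
# G = assumed estimate of the censoring survival function P(C > u).
def ipcw_brier(pi, Ttilde, delta, eta, s, t, G):
    num, den = 0.0, 0.0
    for p, x, d, e in zip(pi, Ttilde, delta, eta):
        if x <= s:
            continue                 # not at risk at the landmark s
        if x <= s + t and d == 1:
            w = 1.0 / G(x)           # event (any cause) observed in (s, s+t]
            D = 1 if e == 1 else 0   # cause-1 event -> D = 1
        elif x > s + t:
            w = 1.0 / G(s + t)       # known event-free at s + t
            D = 0
        else:
            continue                 # censored in (s, s+t]: status unknown, weight 0
        num += w * (D - p) ** 2
        den += w
    return num / den if den > 0 else float("nan")
```

With no censoring (G ≡ 1 and all delta = 1) this reduces to the empirical Brier score among subjects at risk at s.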
DESCRIPTIVE STATISTICS & RIGHT CENSORING ISSUE
t = 5 years, s ∈ S = {0, 0.5, …, 4} years

[Figure: stacked bar chart of the number of subjects (0 to 4000) at each landmark time s, split into: censored in (s, s+t], event-free at s+t, death dementia-free in (s, s+t], and dementia in (s, s+t].]
DYNAMIC PREDICTION ACCURACY CURVES: AUC
t = 5 years, s ∈ S = {0, 0.5, …, 4} years

[Figure, left panel: AUC(s, t) (50% to 100%) versus landmark time s for the IST and MMSE cognitive tests. Right panel: difference in AUC(s, t) (0 to 15%) versus s, with 95% confidence intervals and a 95% confidence band.]
COMPARING PREDICTION ACCURACY CURVES: BS
t = 5 years, s ∈ S = {0, 0.5, …, 4} years

[Figure, left panel: BS(s, t) (0.02 to 0.10) versus landmark time s for IST and MMSE. Right panel: difference in BS(s, t) (−0.005 to 0.005) versus s, with 95% confidence intervals and a 95% confidence band.]
PERSPECTIVE: R²-LIKE CRITERIA

- Interpretation difficulties for s ↦ BS(s, t):
  - What does the scale mean?
  - The BS value depends on the cumulative incidence in (s, s + t]
  - Its increase/decrease as s varies is hard to explain
- "Explained variation" criterion:

  R²(s, t) = 1 − BS(s, t) / BS_NULL(s, t)

  where BS_NULL(s, t) is the BS of the null model predicting the same risk for all subjects (= the cumulative incidence in (s, s + t]).
- the higher the better & easier scaling
- free of the cumulative incidence
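Given predicted risks and fully observed outcomes, the R²-like criterion follows directly from the two Brier scores. An illustrative sketch ignoring censoring (the null prediction is the empirical cumulative incidence in (s, s + t]):

```python
# Illustrative only: R^2(s, t) = 1 - BS(s, t) / BS_NULL(s, t),
# where the null model predicts the empirical cumulative incidence
# in (s, s+t] for every at-risk subject. Censoring is ignored.
def r2_like(pi, T, eta, s, t):
    at_risk = [k for k in range(len(T)) if T[k] > s]
    D = [1 if (T[k] <= s + t and eta[k] == 1) else 0 for k in at_risk]
    p = [pi[k] for k in at_risk]
    incidence = sum(D) / len(D)                       # null prediction
    bs = sum((d - q) ** 2 for d, q in zip(D, p)) / len(D)
    bs_null = sum((d - incidence) ** 2 for d in D) / len(D)
    return 1.0 - bs / bs_null
```

Because both Brier scores share the same incidence-driven scale, their ratio removes that dependence, which is the "cumulative incidence free" property claimed above.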
PERSPECTIVE: INFERENCE FOR R²-LIKE CRITERIA
t = 5 years, s ∈ S = {0, 0.5, …, 4}

[Figure: R²(s, t) (0% to 10%) versus landmark time s (years) for the IST and MMSE cognitive tests.]

Computation of confidence regions (easy): ongoing work ...
CONCLUSION (1/2)

- New testing approach to simultaneously compare dynamic predictions over all times at which predictions are made
- The nonparametric methodology provides a model-free comparison.

"Essentially, all models are wrong, but some are useful." (G. Box)

⇒ We do not assume any correct model specification.
CONCLUSION (2/2)

- Asymptotic results established
- Good simulation results with finite sample sizes (not shown)
- Beyond the joint modeling framework?
  ≈ provides inference procedures for comparing any kind of dynamic prediction tool
  e.g.: joint modeling vs. landmarking?

"Statisticians, like artists, have the bad habit of falling in love with their models." (G. Box)

THANK YOU FOR YOUR ATTENTION!