Page 1: Model Identification & Model Selection


Model Identification & Model Selection

With focus on Mark/Recapture Studies

Page 2: Model Identification & Model Selection

Overview

• Basic inference from an evidentialist perspective
• Model selection tools for mark/recapture
– AICc & SIC/BIC
– Overdispersed data
– Model set size
– Multimodel inference

Page 3: Model Identification & Model Selection

DATA

/* 01 */ 1100000000000000 1 1 1.16 27.7 4.19;
/* 04 */ 1011000000000000 1 0 1.16 26.4 4.39;
/* 05 */ 1011000000000000 1 1 1.08 26.7 4.04;
/* 06 */ 1010000000000000 1 0 1.12 26.2 4.27;
/* 07 */ 1010000000000000 1 1 1.14 27.7 4.11;
/* 08 */ 1010110000000000 1 1 1.20 28.3 4.24;
/* 09 */ 1010000000000000 1 1 1.10 26.4 4.17;
/* 10 */ 1010110000000000 1 1 1.42 27.0 5.26;
/* 11 */ 1010000000000000 1 1 1.12 27.2 4.12;
/* 12 */ 1010101100000000 1 1 1.11 27.1 4.10;
/* 13 */ 1010101100000000 1 0 1.07 26.8 3.99;
/* 14 */ 1010101100000000 1 0 0.94 25.2 3.73;
/* 15 */ 1010101100000000 1 0 1.24 27.1 4.58;
/* 16 */ 1010101100000000 1 0 1.12 26.5 4.23;
/* 17 */ 1010101000000000 1 1 1.34 27.5 4.87;
/* 18 */ 1010101011000000 1 0 1.01 27.2 3.71;
/* 19 */ 1010101011000000 1 0 1.04 27.0 3.85;
/* 20 */ 1010101000000000 1 1 1.25 27.6 4.53;
/* 21 */ 1010101011000000 1 0 1.20 27.6 4.35;
/* 22 */ 1010101011000000 1 0 1.28 27.0 4.74;
/* 23 */ 1010101010110000 1 0 1.25 27.2 4.59;
/* 24 */ 1010101010110000 1 0 1.09 27.5 3.96;
/* 25 */ 1010101010110000 1 1 1.05 27.5 3.82;
/* 26 */ 1010101010101100 1 0 1.04 25.5 4.08;
/* 27 */ 1010101010101010 1 0 1.13 26.8 4.22;
/* 28 */ 1010101010101010 1 1 1.32 28.5 4.63;
/* 29 */ 1010101010101010 1 0 1.18 25.9 4.56;
/* 30 */ 1010101010101010 1 0 1.07 26.7 4.01;
/* 31 */ 1010101010101010 1 1 1.26 26.9 4.68;
/* 32 */ 1010101010101010 1 0 1.27 27.6 4.60;
/* 33 */ 1010101010101010 1 0 1.08 26.0 4.15;
/* 34 */ 1010101010101010 1 1 1.11 27.0 4.11;
/* 35 */ 1010101010101010 1 0 1.15 27.1 4.24;
/* 36 */ 1010101010101010 1 0 1.03 26.5 3.89;
/* 37 */ 1010101010101010 1 0 1.16 27.5 4.22;
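These records appear to be Program MARK-style input: a 16-occasion capture history followed by two integer fields and individual covariates. A minimal Python parsing sketch under that assumption; the field names "freq", "sex", and "covariates" are illustrative guesses, not labels from the source.

# Minimal parsing sketch for the records above, assuming MARK-style input:
# a 16-occasion capture history, two integer fields, then three covariates.

records = """
1100000000000000 1 1 1.16 27.7 4.19;
1011000000000000 1 0 1.16 26.4 4.39;
""".strip().splitlines()

parsed = []
for line in records:
    fields = line.rstrip(";").split()
    history = fields[0]                          # e.g. "1100000000000000"
    freq, sex = int(fields[1]), int(fields[2])   # assumed meanings
    covariates = [float(x) for x in fields[3:]]
    parsed.append((history, freq, sex, covariates))

print(parsed[0])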

Page 4: Model Identification & Model Selection


Models carry the meaning in science

• Model

– Organized thought

• Parameterized Model

– Organized thought connected to reality

Page 5: Model Identification & Model Selection

Science is a cyclic process of model reconstruction and model reevaluation

• Comparison of predictions with observations/data

• Relative comparisons are evidence

Page 6: Model Identification & Model Selection


All models are false, but some are useful.

George Box

Page 7: Model Identification & Model Selection


Statistical Inferences

• Quantitative measures of the validity and utility of models

• Social control on the behavior of scientists

Page 8: Model Identification & Model Selection

Scientific Model Selection Criteria

• Illuminating
• Communicable
• Defensible
• Transferable

Page 9: Model Identification & Model Selection


Common Information Criteria

Page 10: Model Identification & Model Selection


Statistical Methods are Tools

• All statistical methods exist in the mind only, but some are useful.

– Mark Taper

Page 11: Model Identification & Model Selection

Classes of Inference

• Frequentist statistics vs. Bayesian statistics
• Error statistics vs. evidential statistics vs. Bayesian statistics

Page 12: Model Identification & Model Selection

Two key frequencies in frequentist statistics

• Frequency definition of probability
• Frequency of error in a decision rule

Page 13: Model Identification & Model Selection

Null H tests with Fisherian P-values

• Single model only
• P-value = probability of a discrepancy at least as great as that observed arising by chance
• Not terribly useful for model selection

Page 14: Model Identification & Model Selection

Neyman-Pearson Tests

• 2 models
• Null model test along a maximally sensitive axis
• Binary response: accept null or reject null
• Size of test (α) describes the frequency of rejecting the null in error
– Not about the data; it is about the test
– You support your decision because you have made it with a reliable procedure
• N-P tests tell you very little about relative support for alternative models

Page 15: Model Identification & Model Selection

Decisions vs. Conclusions

• Decision-based inference is reasonable within a regulatory framework
– Not so appropriate for science
• John Tukey (1960) advocated seeking to reach conclusions, not making decisions
– Accumulate evidence until a conclusion is very strongly supported
– Treat it as true
– Revise if new evidence contradicts it

Page 16: Model Identification & Model Selection

In a conclusions framework, multiple statistical metrics are not "incompatible"

All are tools for aiding scientific thought

Page 17: Model Identification & Model Selection


Statistical Evidence

• Data based estimate of the relative distance between two models and “truth”

Page 18: Model Identification & Model Selection

Common Evidence Functions

• Likelihood ratios
• Differences in information criteria
• Others available
– e.g., log(jackknife prediction likelihood ratio)

Page 19: Model Identification & Model Selection

Model Adequacy

• Bruce Lindsay

• The discrepancy of a model from truth
• Truth represented by an empirical distribution function
• A model is "adequate" if the estimated discrepancy is less than some arbitrary but meaningful level

Page 20: Model Identification & Model Selection

Model Adequacy and Goodness of Fit

• Estimation framework rather than testing framework
• Confidence intervals rather than testing
• Rejection of "true model formalism"

Page 21: Model Identification & Model Selection

Model Adequacy, Goodness of Fit, and Evidence

• Adequacy does not explicitly compare models

• Implicit comparison

• Model adequacy interpretable as bound on strength of evidence for any better model

• Unifies Model Adequacy and Evidence in a common framework

Page 22: Model Identification & Model Selection

Model adequacy interpreted as a bound on evidence for a possibly better model

[Diagram: the empirical distribution ("truth"), Model 1, and a potentially better model, with the model adequacy measure and the evidence measure shown as discrepancies between them]

Page 23: Model Identification & Model Selection

Goodness of fit misnomer

• Badness-of-fit measures & goodness-of-fit tests
• Comparison of model to a nonparametric estimate of the true distribution
– G²-statistic
– Hellinger distance
– Pearson χ²
– Neyman χ²

Page 24: Model Identification & Model Selection

Points of interest

• Badness of fit is the scope for improvement
• Evidence for one model relative to another model is the difference of badness of fit

Page 25: Model Identification & Model Selection

ΔIC estimates differences of Kullback-Leibler Discrepancies

• ΔIC = log(likelihood ratio) when the # of parameters is equal
• Complexity penalty is a bias correction to adjust for the increase in apparent precision with an increase in # of parameters
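A small numeric illustration of the first point, using hypothetical log-likelihoods: with equal parameter counts the complexity penalties cancel, and on the usual -2 ln L scale of the ICs the difference equals twice the natural-log likelihood ratio.

import math

def aic(log_lik, k):
    # Akaike information criterion: -2 ln L + 2k
    return -2.0 * log_lik + 2.0 * k

# Hypothetical fits of two models with equal parameter counts (k = 3):
logL1, logL2 = -120.4, -123.9

delta_ic = aic(logL2, 3) - aic(logL1, 3)
log_lr = logL1 - logL2                 # ln likelihood ratio = 3.5

# Equal penalties cancel, leaving the log LR on the -2 ln scale:
assert math.isclose(delta_ic, 2.0 * log_lr)
print(delta_ic, log_lr)  # 7.0 and 3.5; ln LR of 3.5 sits in the "strong"
                         # band of the ln column on the next slide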

Page 26: Model Identification & Model Selection

Evidence Scales

                L/R        log2      ln        log10
Weak            < 8        < 3       < 2       < 1
Strong          8 - <32    3 - <5    2 - <7    1 - <2
Very strong     > 32       > 5       > 7       > 2

Note: cutoffs are arbitrary and vary with scale.

Page 27: Model Identification & Model Selection

Which Information Criterion?

• AIC? AICc? SIC/BIC?
• Don't use AIC
• 5.9 of one versus 6.1 of the other
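For reference, a sketch of the two recommended criteria under their standard definitions (AICc with the small-sample correction term, SIC/BIC with the ln(n) penalty); the numbers passed in are hypothetical.

import math

def aicc(log_lik, k, n):
    # AICc: AIC plus the small-sample bias correction
    return -2.0 * log_lik + 2.0 * k + 2.0 * k * (k + 1) / (n - k - 1)

def sic(log_lik, k, n):
    # SIC/BIC (Schwarz): -2 ln L + k ln(n)
    return -2.0 * log_lik + k * math.log(n)

# Hypothetical fit: log-likelihood -120.4, k = 3 parameters, n = 40
print(aicc(-120.4, 3, 40), sic(-120.4, 3, 40))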

Page 28: Model Identification & Model Selection


What is sample size for complexity penalty?

• Mark/Recapture based on multinomial likelihoods

• Observation is a capture history, not a session

Page 29: Model Identification & Model Selection

To Q or not to Q?

• IC-based model selection assumes a good model is in the set
• Over-dispersion is common in mark/recapture data
– Don't have a good model in the set
– Due to lack of independence of observations
– Parameter estimate bias generally not influenced
– But fit will appear too good!
– Model selection will choose more highly parameterized models than appropriate

Page 30: Model Identification & Model Selection

Quasi-likelihood approach

1) χ² goodness-of-fit test for the most general model
2) If H0 is rejected, estimate the variance inflation
3) ĉ = χ²/df
4) Correct the fit component of the IC & redo selection

Page 31: Model Identification & Model Selection

QICs

QAICc = -2 ln L(θ̂ | y) / ĉ + 2K + 2K(K+1) / (n - K - 1)

QBIC = -2 ln L(θ̂ | y) / ĉ + K ln(n)
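Putting the quasi-likelihood recipe and the QIC formulas together, a hedged Python sketch; the χ² GOF numbers are hypothetical, and conventions vary on whether estimating ĉ should count as an extra parameter.

import math

def c_hat(chi_sq, df):
    # Variance inflation factor from the general model's chi-square GOF test
    return chi_sq / df

def qaicc(log_lik, k, n, c):
    # QAICc: fit term deflated by c-hat, plus the small-sample correction
    return -2.0 * log_lik / c + 2.0 * k + 2.0 * k * (k + 1) / (n - k - 1)

def qbic(log_lik, k, n, c):
    # QBIC: fit term deflated by c-hat, with the BIC-style penalty
    return -2.0 * log_lik / c + k * math.log(n)

c = c_hat(chi_sq=54.2, df=30)   # hypothetical GOF result: c-hat ~ 1.81
print(qaicc(-120.4, 3, 40, c), qbic(-120.4, 3, 40, c))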

Page 32: Model Identification & Model Selection

Problems with quasi-likelihood correction

• ĉ is essentially a variance estimate
– Variance estimates are unstable without a lot of data
• ln(L)/ĉ is a ratio statistic
– Ratio statistics are highly unstable if the uncertainty in the denominator is not trivial
• Unlike AICc, the bias correction is estimated
– Estimating a bias correction inflates variance!

Page 33: Model Identification & Model Selection

Fixes

• Explicitly include random component in model
– Then redo model selection
• Bootstrapped median ĉ
• Model selection with jackknifed prediction likelihood

Page 34: Model Identification & Model Selection

Large or small model sets?

• Problem: model selection bias
– When the # of models is large relative to the data size, some models will have a good fit just by chance
• Small
– Burnham & Anderson strongly advocate small model sets representing well-thought-out science
– Large model sets = "data dredging"
• Large
– The science may not be mature
– Small model sets may risk missing important factors

Page 35: Model Identification & Model Selection

Model Selection from Many Candidates (Taper 2004)

SIC(x) = -2 ln(L) + (ln(n) + x)k
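A one-line sketch of the criterion: the extra x per parameter stiffens the SIC/BIC penalty to guard against selection bias in large candidate sets, and x = 0 recovers ordinary SIC/BIC (the inputs below are hypothetical).

import math

def sic_x(log_lik, k, n, x):
    # Taper's SIC(x): -2 ln L + (ln(n) + x) k; x = 0 recovers SIC/BIC
    return -2.0 * log_lik + (math.log(n) + x) * k

print(sic_x(-120.4, k=3, n=50, x=2.0))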

Page 36: Model Identification & Model Selection

Performance of SIC(x) with a small data set

N = 50, true covariates = 10, spurious covariates = 30, all models of order ≤ 20; 1.141 × 10^14 candidate models

Page 37: Model Identification & Model Selection

Chen & Chen 2009

• M = subset size, P = # of possible terms

Page 38: Model Identification & Model Selection

Explicit Tradeoff

• Small model sets
– Allow exploration of fine structure and small effects
– Risk missing unanticipated large effects
• Large model sets
– Will catch unknown large effects
– Will miss fine structure
• Large or small model sets is a principled choice that data analysts should make based on their background knowledge and needs

Page 39: Model Identification & Model Selection


Akaike Weights & Model Averaging

Beware, there be dragons here!

Page 40: Model Identification & Model Selection


Akaike Weights

• “Relative likelihood of model i given the data and model set”

• “Weight of evidence that model i most appropriate given data and model set”

w_i = exp(-Δ_i / 2) / Σ_{m=1..R} exp(-Δ_m / 2)
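The weight formula in code, computed from a hypothetical set of AICc values; only the differences Δ from the best model matter.

import math

def akaike_weights(ic_values):
    # w_i = exp(-delta_i / 2) / sum over m of exp(-delta_m / 2)
    best = min(ic_values)
    terms = [math.exp(-(ic - best) / 2.0) for ic in ic_values]
    total = sum(terms)
    return [t / total for t in terms]

print(akaike_weights([204.1, 205.6, 211.3]))  # hypothetical AICc values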

Page 41: Model Identification & Model Selection

Model Averaging

• "Conditional" variance
– Conditional on the selected model
• "Unconditional" variance
– Actually conditional on the entire model set

θ̄ = Σ_{i=1..R} w_i θ̂_i

Var(θ̄) = [ Σ_{i=1..R} w_i √( Var(θ̂_i | m_i) + (θ̂_i - θ̄)² ) ]²
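A sketch of the averaging formulas above, with hypothetical survival estimates from three models; the "unconditional" variance follows the Burnham & Anderson form, adding the model-to-model spread to each conditional variance before reweighting.

import math

def model_average(thetas, weights, cond_vars):
    # Model-averaged estimate and "unconditional" variance per the formulas above
    theta_bar = sum(w * t for w, t in zip(weights, thetas))
    se = sum(w * math.sqrt(v + (t - theta_bar) ** 2)
             for w, t, v in zip(weights, thetas, cond_vars))
    return theta_bar, se ** 2

# Hypothetical survival estimates, Akaike weights, and conditional variances:
print(model_average([0.62, 0.58, 0.65], [0.5, 0.3, 0.2],
                    [0.0004, 0.0006, 0.0009]))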

Page 42: Model Identification & Model Selection

Good Impulse with Huge Problems

• I do not recommend Akaike weights
• I do not recommend model averaging in this fashion
• Importance of good models is diminished by adding bad models
• Location of average influenced by adding redundant models

Page 43: Model Identification & Model Selection

Model Redundancy

• Model space is not filled uniformly
• Models tend to be developed in highly redundant clusters
• Some points in model space allow few models
• Some points allow many

Page 44: Model Identification & Model Selection

Redundant models do not add much information

[Figure: two panels plotting model adequacy against model dimension]

Page 45: Model Identification & Model Selection

A more reasonable approach

1) Bootstrap data
2) Fit model set & select best model
3) Estimate derived parameter θ from best model
4) Accumulate θ

Repeat within time constraints.

Report the mean or median θ with percentile confidence intervals.
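A structural sketch of this recipe in Python; fit_and_select and estimate_theta are hypothetical stand-ins for fitting the model set, picking the IC-best model, and extracting the derived parameter.

import random
import statistics

def bootstrap_model_selection(data, fit_and_select, estimate_theta,
                              n_boot=1000, seed=1):
    # 1) resample data, 2) fit model set & pick the IC-best model,
    # 3) estimate theta from it, 4) accumulate; then summarize.
    rng = random.Random(seed)
    thetas = []
    for _ in range(n_boot):
        sample = [rng.choice(data) for _ in data]       # resample w/ replacement
        best = fit_and_select(sample)                   # hypothetical callback
        thetas.append(estimate_theta(best, sample))     # hypothetical callback
    thetas.sort()
    ci = (thetas[int(0.025 * n_boot)], thetas[int(0.975 * n_boot)])
    return statistics.median(thetas), ci                # median + percentile CI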