
Lecture 10: Meta-analysis of intervention studies

Jan 04, 2016

Transcript
Page 1: Lecture 10: Meta-analysis of intervention studies


Lecture 10: Meta-analysis of intervention studies

• Introduction to meta-analysis

• Selection of studies

• Abstraction of information

• Quality scores

• Methods of analysis and presentation

• Sources of bias

Page 2: Lecture 10: Meta-analysis of intervention studies

Definitions

• Traditional (narrative) review:
  – selective, biased

• Systematic review (overview):
  – synthesis of studies of a research question
  – explicit methods for study selection, data abstraction, and analysis (repeatable)

• Meta-analysis:
  – quantitative pooling of study results

Page 3: Lecture 10: Meta-analysis of intervention studies

Source: l’Abbé et al, Ann Intern Med 1987, 107: 224-233

Page 4: Lecture 10: Meta-analysis of intervention studies

Protocol preparation

• Research question

• Study “population”:
  – search strategy
  – inclusion/exclusion criteria

Page 5: Lecture 10: Meta-analysis of intervention studies

Protocol preparation

• Search strategy:
  – computerized databases (Medline, CINAHL, PsycINFO, etc.):
    • test sensitivity and predictive value of the search strategy
  – hand-searches (reference lists, relevant journals, colleagues)
  – “grey” (unpublished) literature:
    • pro: mitigates publication bias
    • con: results less reliable

Page 6: Lecture 10: Meta-analysis of intervention studies

Identifying relevant studies for systematic reviews of RCTs in vision research (Dickersin, in Systematic Reviews, BMJ, 1995)

• “Sensitivity and precision” of Medline searching

• Gold standard:
  – registry of RCTs in vision research
    • extensive computer and hand searches
    • contacts with investigators to clarify design

• Sensitivity:
  – proportion of known RCTs identified by the search

• “Precision”:
  – proportion of publications identified by the search that were RCTs
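The sensitivity and precision defined above are simple to compute once a gold-standard registry exists. A minimal sketch in Python with made-up record IDs (the function name and data are illustrative, not from the Dickersin study):

```python
# Search performance against a gold-standard registry, as defined on the slide:
#   sensitivity = known RCTs retrieved by the search / all known RCTs
#   precision   = known RCTs among retrieved records / all retrieved records

def search_performance(retrieved: set, known_rcts: set) -> tuple:
    """Return (sensitivity, precision) of a search strategy."""
    hits = retrieved & known_rcts              # true positives: registry RCTs the search found
    sensitivity = len(hits) / len(known_rcts)
    precision = len(hits) / len(retrieved)
    return sensitivity, precision

# Hypothetical example: a registry of 8 known RCTs; the search returns
# 10 records, 6 of which are registry RCTs.
known = {f"rct{i}" for i in range(8)}
found = {f"rct{i}" for i in range(6)} | {f"other{i}" for i in range(4)}
sens, prec = search_performance(found, known)
print(f"sensitivity = {sens:.2f}, precision = {prec:.2f}")  # sensitivity = 0.75, precision = 0.60
```

A high-sensitivity strategy casts a wide net and retrieves many non-RCTs, so sensitivity and precision usually trade off against each other.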

Page 7: Lecture 10: Meta-analysis of intervention studies

Source: Chalmers & Altman, Systematic Reviews, BMJ Publishing Group, 1995

Page 8: Lecture 10: Meta-analysis of intervention studies

Source: Chalmers & Altman, Systematic Reviews, BMJ Publishing Group, 1995

Page 9: Lecture 10: Meta-analysis of intervention studies

Source: Chalmers & Altman, Systematic Reviews, BMJ Publishing Group, 1995

Page 10: Lecture 10: Meta-analysis of intervention studies

Protocol preparation

• Study “population”:
  – inclusion/exclusion criteria:
    • language
    • study design
    • outcome of interest
    • etc.

Source: Data abstraction form for meta-analysis project

Page 11: Lecture 10: Meta-analysis of intervention studies

Protocol preparation

• Data collection:
  – standardized abstraction form
  – number of abstractors
  – blinding of abstractors
  – rules for resolving discrepancies (consensus, other)
  – use of quality scores

Page 12: Lecture 10: Meta-analysis of intervention studies

Source: l’Abbé et al, Ann Intern Med 1987, 107: 224-233

Page 13: Lecture 10: Meta-analysis of intervention studies

Analysis

• Measure of effect:
  – odds ratio, risk/rate ratio
  – risk/rate difference
  – relative risk reduction

• Graphical methods:
  – conventional (individual studies)
  – cumulative
  – exploring heterogeneity
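All of the effect measures listed above derive from a trial's 2×2 table of treatment by outcome. A small illustration with hypothetical counts (`effect_measures` is an ad hoc helper, not a library function):

```python
# Effect measures from one trial's 2x2 table:
#                 event   no event
#   treated         a        b
#   control         c        d

def effect_measures(a, b, c, d):
    """Odds ratio, risk ratio, risk difference, and relative risk reduction."""
    risk_t = a / (a + b)          # event risk in the treated group
    risk_c = c / (c + d)          # event risk in the control group
    return {
        "odds_ratio": (a / b) / (c / d),
        "risk_ratio": risk_t / risk_c,
        "risk_difference": risk_t - risk_c,
        "relative_risk_reduction": 1 - risk_t / risk_c,
    }

# Hypothetical trial: 15/100 events on treatment vs 30/100 on control
m = effect_measures(a=15, b=85, c=30, d=70)
print({k: round(v, 3) for k, v in m.items()})
# → {'odds_ratio': 0.412, 'risk_ratio': 0.5, 'risk_difference': -0.15, 'relative_risk_reduction': 0.5}
```

Note that the odds ratio (0.41) is further from 1 than the risk ratio (0.50); the two only approximate each other when the outcome is rare.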

Page 14: Lecture 10: Meta-analysis of intervention studies

Source: Chalmers & Altman, Systematic Reviews, BMJ Publishing Group, 1995

Page 15: Lecture 10: Meta-analysis of intervention studies

Source: Chalmers & Altman, Systematic Reviews, BMJ Publishing Group, 1995

Page 16: Lecture 10: Meta-analysis of intervention studies

Analyses

• Pooling results:
  – is it appropriate?
  – equivalent to pooling results from multi-centre trials
  – fixed-effect methods (e.g., Mantel-Haenszel):
    • assume that all trials have the same underlying treatment effect
  – random-effects methods (e.g., DerSimonian & Laird):
    • allow for heterogeneity of treatment effects
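The fixed-versus-random distinction can be made concrete by pooling log odds ratios. What separates the two is the DerSimonian & Laird estimate of the between-study variance τ², which widens the random-effects weights. This sketch uses invented trial data and inverse-variance weights (a common fixed-effect approach, in place of the Mantel-Haenszel weighting named above); real software adds continuity corrections and other refinements:

```python
import math

def pool(log_ors, variances):
    """Inverse-variance fixed-effect and DerSimonian-Laird random-effects pooling.
    Returns (fixed_log_or, random_log_or, tau2)."""
    w = [1 / v for v in variances]                       # fixed-effect weights
    fixed = sum(wi * y for wi, y in zip(w, log_ors)) / sum(w)

    # DerSimonian-Laird between-study variance tau^2, from Cochran's Q
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, log_ors))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(log_ors) - 1)) / c)

    w_re = [1 / (v + tau2) for v in variances]           # random-effects weights
    random = sum(wi * y for wi, y in zip(w_re, log_ors)) / sum(w_re)
    return fixed, random, tau2

# Three hypothetical trials: log odds ratios and their within-study variances
fixed, random, tau2 = pool([-0.6, -0.1, -0.9], [0.04, 0.02, 0.10])
print(f"fixed OR = {math.exp(fixed):.2f}, random OR = {math.exp(random):.2f}, tau2 = {tau2:.3f}")
```

When τ² = 0 the two estimates coincide; when the trials disagree (as here), the random-effects weights are more nearly equal, so small discrepant trials pull the pooled estimate more.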

Page 17: Lecture 10: Meta-analysis of intervention studies

Source: Chalmers & Altman, Systematic Reviews, BMJ Publishing Group, 1995

Page 18: Lecture 10: Meta-analysis of intervention studies


Page 19: Lecture 10: Meta-analysis of intervention studies


Page 20: Lecture 10: Meta-analysis of intervention studies

Source: l’Abbé et al, Ann Intern Med 1987, 107: 224-233

Page 21: Lecture 10: Meta-analysis of intervention studies

Quality scores

• Rating scales and checklists to assess methodological quality of RCTs

• How should they be used?
  – qualitative assessment
  – exclusion of weaker studies
  – weighting of estimates

Page 22: Lecture 10: Meta-analysis of intervention studies

Does quality of trials affect estimates of intervention efficacy? (Moher et al, 1998)

• Random sample of 11 meta-analyses covering 127 RCTs
• Replicated the analyses
• Used quality scales/measures
• Results:
  – masked abstraction yielded higher quality scores than unmasked
  – low-quality trials found stronger effects than high-quality trials
  – quality-weighted analysis resulted in lower statistical heterogeneity
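One simple way to implement the quality weighting mentioned above is to scale each study's inverse-variance weight by its quality score, so weaker studies contribute less to the pooled estimate. The scheme and the data below are illustrative only, not the exact method of Moher et al:

```python
# Quality-weighted pooling: each study's inverse-variance weight is
# multiplied by a quality score in [0, 1] (hypothetical scheme and data).

def quality_weighted_mean(estimates, variances, quality):
    """Pooled estimate with weights q_i / v_i."""
    w = [q / v for q, v in zip(quality, variances)]
    return sum(wi * e for wi, e in zip(w, estimates)) / sum(w)

est = [-0.8, -0.2, -0.5]   # log risk ratios from three hypothetical trials
var = [0.09, 0.03, 0.05]   # their within-study variances
q   = [0.4, 1.0, 0.8]      # quality scores (0 = worst, 1 = best)

print(round(quality_weighted_mean(est, var, q), 3))
```

Down-weighting the low-quality first trial pulls the pooled estimate toward the higher-quality studies, which is the mechanism behind the lower heterogeneity Moher et al report.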

Page 23: Lecture 10: Meta-analysis of intervention studies

Source: Moher et al, Lancet 1998, 352: 609-13

Page 24: Lecture 10: Meta-analysis of intervention studies


Source: Moher et al, Lancet 1998, 352: 609-13

Page 25: Lecture 10: Meta-analysis of intervention studies

Source: Moher et al, Lancet 1998, 352: 609-13

Page 26: Lecture 10: Meta-analysis of intervention studies

Unresolved questions about meta-analysis

• Apples and oranges?
  – between-study differences in study populations, designs, outcome measures, etc.

• Inclusion of weak studies?

• Publication bias:
  – methods to evaluate its impact, particularly with small studies

• Is it better to do good original studies?

Page 27: Lecture 10: Meta-analysis of intervention studies

Large trials vs meta-analyses of smaller trials (Cappelleri et al, 1996)

• Selected meta-analyses from Medline and the Cochrane pregnancy and childbirth database with at least 1 “large” study and 2 smaller studies:
  – sample-size approach (n = 1000+): 79 meta-analyses
  – statistical-power approach (adequate size to detect the treatment effect from the pooled analysis): 61 meta-analyses

• Results:
  – agreement between larger trials and meta-analyses in 82–90% using random-effects models
  – greater disagreement using fixed-effects models

Page 28: Lecture 10: Meta-analysis of intervention studies

Large trials vs meta-analyses of smaller trials (Cappelleri et al, 1996)

• Results:
  – agreement between larger trials and meta-analyses in 82–90% using random-effects models
  – greater disagreement using fixed-effects models

• Conclusion:
  – large and small trial results generally agree
  – each type of trial has advantages and disadvantages:
    • large trials provide more stable estimates of effect
    • small trials may better reflect the heterogeneity of clinical populations

Page 29: Lecture 10: Meta-analysis of intervention studies

Risk ratios from large studies vs pooled smaller studies (Cappelleri et al, 1996)

(Left: sample-size approach; right: statistical-power approach)

Source: Cappelleri et al, JAMA 1996, 276: 1332-1338

Page 30: Lecture 10: Meta-analysis of intervention studies

Source: Cappelleri et al, JAMA 1996, 276: 1332-1338

Page 31: Lecture 10: Meta-analysis of intervention studies

Discrepancies between meta-analyses and subsequent large RCTs (LeLorier et al, 1997)

• Compared results of 12 large (n = 1000+) RCTs with results of 19 prior meta-analyses (M-A) on the same topics

• For a total of 40 primary and secondary outcomes, agreement between large trials and M-A was only fair (kappa = 0.35, 95% CI 0.06 to 0.64)

• Positive predictive value of M-A = 68%
• Negative predictive value of M-A = 67%
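The agreement statistics above all come from a 2×2 table of large-trial versus meta-analysis conclusions. The counts below are a reconstruction chosen to reproduce the slide's figures (the paper's actual table is not shown here):

```python
# Chance-corrected agreement (Cohen's kappa) plus predictive values,
# treating the large trial as the reference standard.

def kappa(a, b, c, d):
    """Cohen's kappa for a 2x2 agreement table:
    a = both positive, b = M-A positive / trial negative,
    c = M-A negative / trial positive, d = both negative."""
    n = a + b + c + d
    observed = (a + d) / n                                   # raw agreement
    expected = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2  # agreement by chance
    return (observed - expected) / (1 - expected)

# Reconstructed counts for the 40 outcomes (illustrative, not from the paper)
a, b, c, d = 13, 6, 7, 14
print(f"kappa = {kappa(a, b, c, d):.2f}")   # kappa = 0.35
print(f"PPV = {a / (a + b):.2f}")           # PPV = 0.68
print(f"NPV = {d / (c + d):.2f}")           # NPV = 0.67
```

A kappa of 0.35 is "only fair" because the raw agreement (27/40 = 68%) is not much better than the 50% expected by chance with these marginals.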

Page 32: Lecture 10: Meta-analysis of intervention studies

Source: LeLorier et al, NEJM 1997, 337: 536-42

Page 33: Lecture 10: Meta-analysis of intervention studies

Source: LeLorier et al, NEJM 1997, 337: 536-42