Introduction
• Statistics are increasingly prevalent in medical practice, and for those doing research, statistical issues are fundamental. It is extremely important, therefore, to understand basic statistical ideas relating to research design and data analysis, and to be familiar with the most commonly used methods of analysis.
• Although data analysis is certainly an important part of the statistical process, there is an equally vital role to be played in the design of the research project. Without a properly designed study, the subsequent analysis may be unsafe, and/or a complete waste of time and resources.
Outline
• Types of data
• Descriptive statistics
• Data distributions
• Comparative statistics
• Non-parametric tests
• Paired data
• Comparison of several means
• Comparing proportions
• Exploring the relationship between 2 variables
• Correlation
• Linear regression
• Survival analysis
[Figure: histogram of platelet counts — proportion of total against platelet count (0 to 1500)]
Types of Data
• Categorical
– binary or dichotomous, e.g. diabetic/non-diabetic, smoker/non-smoker
– nominal, e.g. A/B/AB/O, short-sighted/long-sighted/normal
– ordered categorical (ordinal), e.g. stage 1/2/3/4, mild/moderate/severe
• Discrete numerical, e.g. number of children - 0/1/2/3/4/5+
• Continuous, e.g. blood pressure, age
• Other types of data
– ranks, e.g. preference between treatments
– percentages, e.g. % oxygen uptake
– rates or ratios, e.g. numbers of infant deaths/1000
– scores, e.g. Apgar score for evaluating new-born babies
– visual analogue scales, e.g. perception of pain
– survival data – two components, outcome and time to outcome
Descriptive Statistics
• For continuous variables there are a number of useful descriptive statistics
– Mean - equal to the sum of the observations divided by the number of observations, also known as the arithmetic mean
– Median - the value that comes half-way when the data are ranked in order
– Mode - the most common value observed
– Standard deviation - a measure of the average deviation (or distance) of the observations from the mean
– Standard error of the mean - a measure of the uncertainty of a single sample mean as an estimate of the population mean
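These summaries can be computed with Python's standard library alone; a minimal sketch, using made-up observations purely for illustration:

```python
import statistics
from math import sqrt

# Hypothetical sample of observations (made-up values for illustration)
data = [1, 2, 3, 4, 10]

mean = statistics.mean(data)          # sum of observations / number of observations
median = statistics.median(data)      # middle value when the data are ranked
mode = statistics.mode([1, 2, 2, 3])  # most common value in a (separate) sample
sd = statistics.stdev(data)           # sample standard deviation
sem = sd / sqrt(len(data))            # standard error of the mean
```

Note how the standard error shrinks as the sample grows, reflecting reduced uncertainty in the sample mean as an estimate of the population mean.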
Data Distributions
• Frequency distribution
– If there are more than about 20 observations, a useful first step in summarizing quantitative data is to form a frequency distribution. This is a table showing the number of observations at different values or within certain ranges. If this is then plotted as a bar diagram, a frequency distribution is obtained.
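Forming such a table is straightforward in code; a sketch with invented platelet-style counts, grouped into ranges of width 100:

```python
from collections import Counter

# Hypothetical platelet counts (made-up values for illustration)
counts = [120, 180, 90, 210, 300, 150, 410, 95, 260, 330,
          140, 220, 170, 505, 280, 360, 110, 190, 240, 130]

# Group observations into ranges of width 100 (0-99, 100-199, ...)
bins = Counter((c // 100) * 100 for c in counts)

# Table of (lower bound of range, number of observations)
freq_table = sorted(bins.items())
```

Plotting `freq_table` as a bar diagram gives the frequency distribution described above.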
The Normal Distribution
• In practice it is found that a reasonable description of many variables is provided by the normal distribution (Gaussian distribution). The curve of the normal distribution is symmetrical about the mean and bell-shaped. The bell is tall and narrow for small standard deviations, and short and wide for large ones.
Comparative statistics
• When there are two or more sets of observations from a study, there are two types of design that must be distinguished: independent or paired. The design will determine the method of statistical analysis.
• If the observations are from different groups of individuals, e.g. ages of males and females, or spectacle use in diabetics/non-diabetics, then the data are independent. The sample size may vary from group to group.
• If each set of observations is made on the same group of individuals, e.g. WBC count pre- and post-treatment, then the data are said to be paired. This indicates that the observations are on the same individuals rather than from independent samples, and so we have the same number of observations in each set of data.
Independent data
• With independent continuous data, we are interested in the mean difference between the groups, but the variability between subjects becomes important. This is because the two sample t test (the most common test used) is based on the assumption that each set of observations is sampled from a population with a Normal distribution, and that the variances of the two populations are the same.
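As a sketch of the two sample t test (the data values are invented, and scipy is used here purely as an illustrative tool; the packages the deck itself lists would do the same job):

```python
from scipy import stats

# Hypothetical ages of two independent groups (made-up values)
males = [34, 41, 29, 38, 45, 33, 40, 36]
females = [31, 28, 35, 30, 27, 33, 29, 32]

# Two sample t test: assumes each group is sampled from a Normal
# distribution and that the two population variances are equal
t_stat, p_value = stats.ttest_ind(males, females)
```

The sample sizes need not be equal, since the groups are independent.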
Non-parametric test
• If the continuous data are not normally distributed, or the standard deviations are very different, a non-parametric alternative to the t test known as the Mann-Whitney test can be utilised (another derivation of the same test is due to Wilcoxon)
• If the data are normally distributed
– Mean and standard deviation
• If the data are skewed or non-normally distributed, or are from a small sample (N<20)
– Median and range
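A sketch of the Mann-Whitney alternative, again with invented, deliberately skewed data (one extreme value in each group):

```python
from scipy import stats

# Hypothetical skewed measurements from two independent groups (made-up values)
group_a = [3, 5, 4, 6, 120, 7, 5]
group_b = [9, 11, 14, 10, 250, 12, 13]

# Mann-Whitney test: compares ranks, so it does not require the data
# to be normally distributed and is robust to the extreme values
u_stat, p_value = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")
```

Because the test works on ranks, the two outliers (120 and 250) do not dominate the result as they would in a t test.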
Comparison of several means
• Data sets comprising more than two groups are common, and their analysis often involves the comparison of the means for the component subgroups. It is obviously possible to compare each pair of groups using t tests, but this is not a good approach. It is far better to use a single analysis that enables us to look at all the data in one go, and the method of choice is called analysis of variance
• If the data are not normally distributed or have different variances, a non-parametric equivalent to the analysis of variance can be used, and is known as the Kruskal-Wallis test
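Both approaches can be sketched with scipy (data values invented for illustration):

```python
from scipy import stats

# Hypothetical measurements from three independent groups (made-up values)
g1 = [5.1, 4.8, 5.5, 5.0, 4.9]
g2 = [6.2, 6.0, 5.8, 6.4, 6.1]
g3 = [5.4, 5.6, 5.2, 5.5, 5.3]

# One-way analysis of variance: a single test across all groups,
# rather than multiple pairwise t tests
f_stat, p_anova = stats.f_oneway(g1, g2, g3)

# Kruskal-Wallis test: rank-based alternative when the data are not
# normally distributed or the variances differ
h_stat, p_kw = stats.kruskal(g1, g2, g3)
```

A significant overall result would then usually be followed by post-hoc comparisons to locate which groups differ.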
Paired data
• When we have more than one group of observations it is vital to distinguish the case where the data are paired from that where the groups are independent. Paired data arise when the same individuals are studied more than once, usually in different circumstances. Also, when we have two different groups of subjects who have been individually matched, for example in a matched pair case-control study, then we should treat the data as paired.
• A one sample t test is used to examine the data. The value t is calculated from
– t = (sample mean - hypothesised mean) / standard error of sample mean
• In a paired analysis, where one set of observations is subtracted from the other set, the hypothesised mean is zero. Thus the calculation of the t statistic reduces to
– t = sample mean / standard error of sample mean
• The non-parametric equivalent to this test is the Wilcoxon matched pairs signed rank sum test
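A sketch of the paired analysis (WBC-style values invented for illustration), showing that the paired t test is the same as a one sample t test of the differences against zero:

```python
import math
import statistics
from scipy import stats

# Hypothetical WBC counts for the same patients pre- and post-treatment
pre = [8.2, 7.5, 9.1, 6.8, 7.9, 8.5, 7.2, 8.8]
post = [7.1, 6.9, 8.0, 6.2, 7.0, 7.6, 6.8, 7.9]

# Paired t test
t_stat, p_paired = stats.ttest_rel(pre, post)

# Non-parametric equivalent: Wilcoxon matched pairs signed rank test
w_stat, p_wilcoxon = stats.wilcoxon(pre, post)

# The same t statistic from the one sample formula above:
# t = mean of differences / standard error of the differences
diffs = [a - b for a, b in zip(pre, post)]
t_manual = statistics.mean(diffs) / (statistics.stdev(diffs) / math.sqrt(len(diffs)))
```

The hand-computed `t_manual` agrees with `t_stat` from `ttest_rel`, because subtracting one set of observations from the other reduces the paired test to a one sample test on the differences.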
• If non-paired and normally distributed with similar variances : T-test
• If non-paired non-normally distributed or with non-similar variances or very small numbers : Mann-Whitney test
• Paired data – paired t-test or Wilcoxon Signed Ranks Test
Comparing Proportions
• Qualitative or categorical data are best presented in the form of a table, such that one variable defines the rows, and the categories of the other variable define the columns. Thus in a European study of ASCT for HD, patient gender was compared between the UK and Europe.
• The data are arranged in a contingency table.
• Individuals are assigned to the appropriate cell of the contingency table according to their values for the two variables.
COUNTRYG * PSEX Crosstabulation (Count)

COUNTRYG          Female   Male   Total
europe      16       610    828    1454
uk                   100    160     260
Total       16       710    988    1714
COUNTRYG * PSEXG Crosstabulation

                              PSEXG 1.00   PSEXG 2.00    Total
europe   Count                       828          610     1438
         % within COUNTRYG         57.6%        42.4%   100.0%
         % within PSEXG            83.8%        85.9%    84.7%
uk       Count                       160          100      260
         % within COUNTRYG         61.5%        38.5%   100.0%
         % within PSEXG            16.2%        14.1%    15.3%
Total    Count                       988          710     1698
         % within COUNTRYG         58.2%        41.8%   100.0%
         % within PSEXG           100.0%       100.0%   100.0%
Chi-squared test (χ²)
• A chi-squared test (χ²) is used to test whether there is an association between the row variable and the column variable. When the table has only two rows or two columns this is equivalent to the comparison of proportions.
• The first step in interpreting contingency table data is to calculate appropriate proportions or percentages. The chi-squared test takes the observed numbers in each of the four categories and compares them with the numbers expected if there were no difference in the distribution of patient gender.
• The greater the differences between the observed and expected numbers, the larger the value of χ² and the less likely it is that the difference is due to chance.
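The crosstabulation above can be tested directly; a sketch using scipy as an illustrative tool, with the counts taken from the table (rows europe/uk, columns in PSEXG code order):

```python
from scipy.stats import chi2_contingency

# Observed counts from the crosstabulation above
# (rows: europe, uk; columns: PSEXG 1.00, 2.00)
observed = [[828, 610],
            [160, 100]]

# correction=False gives the standard Pearson chi-squared statistic
chi2, p, df, expected = chi2_contingency(observed, correction=False)
```

The result reproduces the Pearson line of the SPSS output below (χ² = 1.418, 1 d.f., P = .234), and `expected` contains the counts expected under no association, including the minimum expected count of 108.72 noted in the footnote.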
Chi-Square Tests

                              Value    df   Asymp. Sig.   Exact Sig.   Exact Sig.
                                            (2-sided)     (2-sided)    (1-sided)
Pearson Chi-Square            1.418b    1     .234
Continuity Correction(a)      1.260     1     .262
Likelihood Ratio              1.428     1     .232
Fisher's Exact Test                                         .246         .131
Linear-by-Linear Association  1.417     1     .234
N of Valid Cases              1698

a. Computed only for a 2x2 table
b. 0 cells (.0%) have expected count less than 5. The minimum expected count is 108.72.
Fisher’s Exact Test
• When the overall total of the table is less than 20, or if it is between 20 and 40 and the smallest of the four expected values is less than 5, then Fisher’s Exact Test should be used.
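A sketch of the exact test on the small table shown below (only 18 patients), with the counts read from that crosstabulation:

```python
from scipy.stats import fisher_exact

# Observed counts from the small crosstabulation below
# (rows: SURV .00, 1.00; columns: TRMV .00, 1.00)
observed = [[15, 0],
            [2, 1]]

# Fisher's exact test is appropriate here because the expected
# counts are far too small for the chi-squared approximation
odds_ratio, p_value = fisher_exact(observed)
```

The two-sided P value agrees with the Fisher's Exact Test line of the SPSS output below (P = .167), whereas the uncorrected Pearson χ² (P = .021) is unreliable with 75% of cells having expected counts below 5.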
SURV * TRMV Crosstabulation (DISG = 3.00)

                         TRMV .00   TRMV 1.00    Total
SURV .00   Count               15                   15
           % within SURV   100.0%               100.0%
           % within TRMV    88.2%                83.3%
SURV 1.00  Count                2           1        3
           % within SURV    66.7%       33.3%   100.0%
           % within TRMV    11.8%      100.0%    16.7%
Total      Count               17           1       18
           % within SURV    94.4%        5.6%   100.0%
           % within TRMV   100.0%      100.0%   100.0%
Chi-Square Tests (DISG = 3.00)

                              Value    df   Asymp. Sig.   Exact Sig.   Exact Sig.
                                            (2-sided)     (2-sided)    (1-sided)
Pearson Chi-Square            5.294b    1     .021
Continuity Correction(a)       .847     1     .357
Likelihood Ratio              3.905     1     .048
Fisher's Exact Test                                         .167         .167
Linear-by-Linear Association  5.000     1     .025
N of Valid Cases                18

a. Computed only for a 2x2 table
b. 3 cells (75.0%) have expected count less than 5. The minimum expected count is .17.
• The chi-squared test can also be applied to larger tables, generally called r x c tables, where r denotes the number of rows in the table, and c the number of columns.
• The standard chi-squared test for a 2 x c table is a general test to assess whether there are differences among the c proportions. When the categories in the columns have a natural order, however, a more sensitive test is to look for an increasing (or decreasing) trend in the proportions over the columns. This trend can be tested using the chi-squared test for trend.
• In the table below the relation between frequency of Cesarean section and maternal foot size is presented

Cesarean         Shoe size
section     <4    4   4.5    5   5.5    6   Total
Yes          5    7     6    7     8   10      43
No          17   28    36   41    46  140     308

• The standard chi-squared test of this 2 x 6 table gives a χ² value of 9.29, with 5 d.f., for which P=0.098. Analysis of the data for trend gives χ²trend = 8.02, with 1 d.f. (P=0.005). Thus there is strong evidence of a linear trend in the proportion of women giving birth by Cesarean section in relation to shoe size. This relation is not causal, but reflects that shoe size is a convenient indicator of small pelvic size
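Both tests on the table above can be sketched in code. The trend statistic is computed here with one common formulation of the chi-squared test for trend, using equally spaced scores 1..6 for the ordered shoe-size categories (the choice of scores is an assumption of this sketch):

```python
from scipy.stats import chi2, chi2_contingency

# Cesarean section by shoe size, from the table above
yes = [5, 7, 6, 7, 8, 10]       # Cesarean section
no = [17, 28, 36, 41, 46, 140]

# Standard chi-squared test of the 2 x 6 table (5 d.f.)
x2, p, df, _ = chi2_contingency([yes, no], correction=False)

# Chi-squared test for trend with equally spaced scores 1..6
scores = [1, 2, 3, 4, 5, 6]
n_col = [y + n for y, n in zip(yes, no)]   # column totals
A = sum(yes)                               # total Cesareans
N = sum(n_col)                             # total women
s_a = sum(s * y for s, y in zip(scores, yes))
s_n = sum(s * n for s, n in zip(scores, n_col))
s2_n = sum(s * s * n for s, n in zip(scores, n_col))

x2_trend = (N * (N * s_a - A * s_n) ** 2) / (A * (N - A) * (N * s2_n - s_n ** 2))
p_trend = chi2.sf(x2_trend, 1)             # 1 d.f.
```

The results reproduce the values quoted above: χ² ≈ 9.29 (5 d.f., P = 0.098) for the general test, and χ²trend ≈ 8.02 (1 d.f., P ≈ 0.005) for the more sensitive trend test.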
Categorical data – comparing proportions
• Studies where there are 2 groups and the total number of patients > 40 : chi-squared test
• Studies where there are 2 groups and the total number of patients < 40, or more than 40 but with an expected count of less than 5 in a single cell : Fisher's Exact Test
• Studies where there are more than 2 groups, not ordered : chi-squared test
• Studies where there are more than 2 groups which are ordered : chi-squared test for trend
Exploring the relationship between two variables
• Three possible purposes:
– a.) assess association, e.g. body weight and blood pressure
– b.) prediction, e.g. height and weight
– c.) assess agreement, e.g. blood pressure measurement
Correlation
• Method for investigating the linear association between two continuous variables
• The association is measured by the correlation coefficient
• A correlation between two variables shows that they are associated but does not necessarily imply a ‘cause and effect’ relationship
• A t test is used to test whether the correlation coefficient obtained is significantly different from zero, or in other words whether the observed correlation could simply be due to chance
• The significance level is a function of both the size of the correlation coefficient and the number of observations. A weak correlation may therefore be statistically significant if based on a large number of observations, while a strong correlation may fail to achieve significance if there are only a few observations
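The t test behind the correlation P value can be made explicit; a sketch with invented data, checking the hand calculation against scipy:

```python
import math
from scipy import stats

# Hypothetical paired observations (made-up values for illustration)
x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
y = [2.1, 1.8, 3.9, 3.2, 6.8, 5.1]

# Pearson correlation coefficient and its P value
r, p = stats.pearsonr(x, y)

# The P value is a t test of r against zero with n - 2 d.f.:
# t = r * sqrt(n - 2) / sqrt(1 - r^2)
n = len(x)
t = r * math.sqrt(n - 2) / math.sqrt(1 - r ** 2)
p_manual = 2 * stats.t.sf(abs(t), n - 2)
```

The manually computed P value matches the one returned by `pearsonr`, and the formula makes the dependence on sample size explicit: the same r becomes more significant as n grows.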
Correlations (Pearson, N = 121)

          BET2MG     OPG      CRP      NTX
BET2MG     1       -.393**   .620**   .395**
OPG       -.393**   1       -.220*   -.465**
CRP        .620**  -.220*    1        .152
NTX        .395**  -.465**   .152     1

** Correlation is significant at the 0.01 level (2-tailed).
* Correlation is significant at the 0.05 level (2-tailed).
(2-tailed Sig.: all P=.000 except OPG-CRP P=.015 and CRP-NTX P=.097)
[Scatter plots: CRP against OPG (P=0.015) and CRP against BET2MG (P<0.0001)]
Problems with correlation analyses
• Biological systems are multifactorial, so a simple two-way correlation may not be a true reflection of what is being observed
• Spurious correlations
[Scatter plot: reading ability against foot size — an example of a spurious correlation]
Assessing agreement
• Neither correlation nor linear regression is appropriate
• There may be a very high correlation, but one method may give a systematically higher/lower reading
• In linear regression, the data are not independent
• The only appropriate way is to subtract one observation from the other, and plot against an index variable

[Figure: correlation between PCR and TaqMan for measuring MRD]
Linear regression
• Linear regression gives the equation of the straight line that describes how the y variable increases (or decreases) with an increase in the x variable. y is commonly called the dependent variable, and x the independent, or explanatory, variable
• A t test is used to test whether the gradient b differs significantly from a specified value (usually zero)
• Assumptions
– for any value of x, y must be normally distributed
– the magnitude of the scatter of the points about the regression line is the same throughout the length of the line
– the relation between the two variables should be linear
[Scatter plots: telomere length (TLENGTH) against age (0-70), without and with the fitted regression line]
Coefficients (dependent variable: TLENGTH)

             Unstandardized Coefficients   Standardized
             B          Std. Error         Beta             t        Sig.
(Constant)   17.893     .317                                56.390   .000
AGE          -.049      .010               -.462            -4.809   .000
[Plot: unstandardized residuals against age]
Practical application
• y = mx + c
• Telomere length = age * -0.049 + 17.89
• Substituting in the above equation for ages of 30 and 60:
• 16.42 = 30 * -0.049 + 17.89
• 14.95 = 60 * -0.049 + 17.89
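The substitution above is easy to verify in code, using the slope and intercept from the coefficients table:

```python
# Fitted model from the coefficients table: B(AGE) = -0.049,
# constant 17.893 (rounded to 17.89 on the slide)
def predicted_telomere_length(age):
    # y = mx + c with m = -0.049 and c = 17.89
    return -0.049 * age + 17.89

pred_30 = predicted_telomere_length(30)   # 16.42
pred_60 = predicted_telomere_length(60)   # 14.95
```

Comparing the two predictions shows the practical meaning of the gradient: each extra year of age is associated with a 0.049 unit shorter predicted telomere length.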
[Scatter plot: telomere length against age, with the fitted line and the predicted values at ages 30 and 60]
Survival data
• Has 2 components: the event of interest and the time to the event
• Special statistical methods are required – it is not appropriate to use tests for categorical data
Life Table Analysis
• Survival data are usually summarised as survival or Kaplan-Meier curves
• Based on a series of conditional probabilities
• For example, the probability of a patient surviving 10 days after a transplant, is the probability of surviving nine days, multiplied by the probability of surviving the 10th day given that the patient survived the first nine days.
[Figure: follow-up times for fifteen patients (patient number against days post BMT), each marked alive or dead]
Table 1. Life table for fifteen patients who received an allogeneic stem cell transplant

Time (days)   Status   Number at risk   Probability of survival   Standard error
16*           0        15               1.00
26            1        14               0.93                      0.069
66            1        13               0.86                      0.094
69*           0        12
74            1        11               0.78                      0.113
82*           0        10
88            1        9                0.69                      0.129
89*           0        8
117*          0        7
133*          0        6
144*          0        5
172*          0        4
252*          0        3
291*          0        2
305*          0        1

(* censored observation)
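The life table above can be reproduced with a minimal product-limit (Kaplan-Meier) sketch in plain Python. It multiplies the conditional survival probabilities at each death time, exactly as described on the previous slide (this sketch assumes at most one event per time point, which holds for these data):

```python
# (time in days, status): status 1 = died, 0 = censored (the * rows above)
data = [(16, 0), (26, 1), (66, 1), (69, 0), (74, 1), (82, 0), (88, 1),
        (89, 0), (117, 0), (133, 0), (144, 0), (172, 0), (252, 0),
        (291, 0), (305, 0)]

at_risk = len(data)
survival = 1.0
estimates = []   # (time, probability of survival) at each death time

for time, status in sorted(data):
    if status == 1:
        # conditional probability of surviving this death time,
        # given survival up to it
        survival *= (at_risk - 1) / at_risk
        estimates.append((time, round(survival, 2)))
    at_risk -= 1   # censored patients leave the risk set too
```

The running product gives 0.93, 0.86, 0.78 and 0.69 at days 26, 66, 74 and 88, matching the probability of survival column in Table 1; censored patients reduce the number at risk without changing the estimate.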
[Kaplan-Meier curve: probability of survival (%) against days post BMT]
Outcomes suitable for Kaplan-Meier analyses
• Survival (event of interest is death, patients alive are censored)
• Disease-free survival (events of interest are either death or disease relapse, patients alive and in remission are censored)
• Primary graft failure
• Acute graft versus host disease
[Figure: overall survival (OS, 67%) and leukaemia-free survival (LFS, 45%) for 111 patients with CML in CP allografted with stem cells from HLA-identical sibling donors, probability (%) against years post BMT]
HH/ICSM May 2003
[Figure: probability of graft failure (%) against days post BMT, following BMT for 1st CP CML with a VUD]
Data collection
• Decide what data need collecting (for statistical purposes) and then, if appropriate, design a form (this is best done in a database, e.g. Microsoft ACCESS)
• Get the computer to do as much of the work as possible, i.e. calculation of ages, surface area etc
• Think ahead to what format the spreadsheet/stats package requires the data to be in
• For analysis purposes, it's much easier to work with numbers and codes, as opposed to descriptions, i.e. instead of male/female or m/f, use 1 or 2
• Use a ‘code’ to identify missing data, e.g. 999 or something ‘unlikely’
• Check the data before analysis, get ‘descriptive statistics’
• Statistical packages: STATA, Statgraphics, MINITAB, STATXACT, GENSTAT, SAS
Presentation of results
• Where possible give actual P values rather than ranges
– i.e. P=0.041 rather than P<0.05
• If a P value is not significant give the actual value and not just NS
– i.e. P=0.15 rather than P=NS
• When presenting data it may be more useful to present confidence intervals rather than a P value
– i.e. instead of ‘lens A was more durable than lens B by 2.4 days (P=0.03)’, it might be more informative to write ‘lens A was more durable than lens B by 2.4 days (95% CI 0.3-4.5 days)’
• It is not necessary to give test results
– i.e. t=33.5, 28 d.f., P=0.0001
• If a continuous variable is normally distributed, present, as a description of the data, the mean and standard deviation; if not normally distributed, a median and range
• Don’t quote more significant figures than necessary
– i.e. instead of mean patient age 34.2550 (std dev 11.4337), 34.3 (std dev 11.4) will suffice
[Bar chart: adhesion to FN (A490 absorbance units) for cell lines 32D, xl-1 to xl-6 and 32DP210; box plot of XL2 by GROUP (N = 36, 36)]
Writing the statistics section in a paper
• If power calculations were used to calculate the sample sizes, details should be given
– e.g. based on sample sizes of x in each arm, we should have been able to detect a difference of y given 80% power at a significance level of 0.05
• State which statistical tests were used (reference obscure ones)
– e.g. in order to investigate the differences between the groups, a t-test was used for continuous data, and a chi-squared test for categorical data
• If applicable, state whether standard deviations or standard errors are quoted
• State whether p-values are from one or two-tailed tests
– e.g. all quoted p-values are two-tailed
• It is not necessary to quote which stats package was used
Suggested Reading Material
• Essentials of Medical Statistics – Betty Kirkwood
• Practical Statistics for Medical Research – Doug Altman
• Statistical Methods in Medical Research – Armitage and Berry
Summary
• If at all possible - consult a statistician before starting your study
• Get a feel of your data by plotting results - don’t rely on descriptive statistics alone
• Use appropriate statistical tests, not those that give the ‘best’ results