Correspondence Analysis and Related Methods

Michael Greenacre
Universitat Pompeu Fabra, Barcelona
[email protected]
www.globalsong.net
www.econ.upf.es/~michael

First XLSTAT Users Conference, Paris, 7-8 June 2007
Correspondence Analysis and Related Methods
Correspondence Analysis and Related Methods – Part 1
1. What is correspondence analysis (CA)?
2. Why is CA so useful as a method of visualizing tabular data?
3. How is CA implemented in XLSTAT?
(by CA I mean “simple” CA, as opposed to “multiple” CA, which is discussed in the next talk)
Jean-Paul Benzécri... creator of Correspondence Analysis

Correspondence analysis: in which areas of research is it useful?

CA visualizes complex data, primarily data on categorical measurement scales, facilitating understanding and interpretation – a neglected aspect of statistical enquiry (cf. the usual modelling approach)

• linguistics, textual analysis: word frequencies
• sociology: cross-tabulations and large sets of categorical data from questionnaires; useful for qualitative research, visualization of case study data
• ecology: species abundance data at several locations, often with explanatory variables
• market research: perceptual mapping of brands/products, ...
• archeology: large sparse data matrices
• biology, geology, chemistry, psychology...
Simple Correspondence Analysis (CA)
• CA is a method of data visualization
• It applies in the first instance to a cross-tabulation (contingency table)
• The results of CA are in the form of a map of points
• The points represent the rows and columns of the table; it is not the absolute values which are represented (as in principal component analysis, for example) but their relative values
• The positions of the points in the map tell you something about similarities between the rows, similarities between the columns, and the association between rows and columns
A simple example
• 312 respondents, all readers of a certain newspaper, cross-tabulated according to their education group and level of reading of the newspaper

[Table: rows E1–E5 (education groups), columns C1–C3 (reading levels); counts not reproduced]

• E1: some primary; E2: primary completed; E3: some secondary; E4: secondary completed; E5: some tertiary
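Since the table itself did not reproduce, here is a sketch of the profile calculations that CA starts from, for a 5×3 education-by-readership layout. The counts below are illustrative values, not taken from the slide:

```python
import numpy as np

# Illustrative counts: rows = education groups E1..E5,
# columns = reading levels C1..C3 (assumed values).
N = np.array([
    [ 5,  7,  2],
    [18, 46, 20],
    [19, 29, 39],
    [12, 40, 49],
    [ 3,  7, 16],
])

n = N.sum()                                        # grand total
row_profiles = N / N.sum(axis=1, keepdims=True)    # each row divided by its row total
avg_profile  = N.sum(axis=0) / n                   # column margins: the average row profile
row_masses   = N.sum(axis=1) / n                   # row weights ("masses") used by CA

print(row_profiles.round(3))
print(avg_profile.round(3))
```

Each row profile sums to 1, so CA compares the relative (not absolute) values, as described above.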
So the answer is to divide each profile element by the square root of its average – here √0.183, √0.413, √0.404, the square roots of the average profile elements.

“Stretched” row profiles viewed in 3-d chi-squared space

[Figure: profiles and vertices in the stretched space – the “Pythagorean”, i.e. ordinary Euclidean, distances there are the chi-squared distances]
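In code, the “stretching” is just a division by the square roots of the average profile before taking an ordinary Euclidean distance. A minimal sketch – the average profile is the one shown on the slide, the two row profiles are hypothetical:

```python
import numpy as np

avg = np.array([0.183, 0.413, 0.404])   # average profile (from the slide)

def chi2_dist(p1, p2, avg):
    """Ordinary Euclidean distance between profiles whose elements
    have been divided by the square roots of the average profile."""
    return float(np.sqrt(np.sum((p1 - p2) ** 2 / avg)))

# two hypothetical row profiles (each sums to 1)
p1 = np.array([0.357, 0.500, 0.143])
p2 = np.array([0.115, 0.269, 0.616])
print(round(chi2_dist(p1, p2, avg), 3))
```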
What CA does…
• … centres the row and column profiles with respect to their average profiles, so that the origin represents the average.
• … re-defines the dimensions of the space in an ordered way: the first dimension “explains” the maximum amount of inertia possible in one dimension; the second adds the maximum amount to the first (hence the first two explain the maximum amount in two dimensions), and so on… until all the inertia is “explained”.
• … decomposes the total inertia along the principal axes into principal inertias, usually expressed as % of the total.
• … so if we want a low-dimensional version, we just take the first few (principal) dimensions.
The row and column problem solutions are closely related: one can be obtained from the other, and there are simple scaling factors along each dimension relating the two problems.
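The steps above correspond to a singular value decomposition of the standardized residuals of the table. A minimal sketch, using an assumed 5×3 table (not the slide’s data):

```python
import numpy as np

# Assumed contingency table; any table of nonnegative counts works.
N = np.array([[ 5,  7,  2],
              [18, 46, 20],
              [19, 29, 39],
              [12, 40, 49],
              [ 3,  7, 16]], dtype=float)

P = N / N.sum()                          # correspondence matrix
r, c = P.sum(axis=1), P.sum(axis=0)      # row and column masses

# centring with respect to the average profile + chi-squared metric in one step
S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))

U, sv, Vt = np.linalg.svd(S, full_matrices=False)
lam = sv ** 2                            # principal inertias, in decreasing order
total_inertia = lam.sum()                # = chi-squared statistic / n
print((100 * lam / total_inertia).round(1))   # % of inertia per dimension
```

Because of the centring, the rank is one less than the smaller table dimension, so the last “principal inertia” is numerically zero.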
Asymmetric Maps using XLSTAT
[Two asymmetric maps of the readership data: education points E1–E5 and reading points C1–C3, one set in principal and the other in standard coordinates; principal inertias 0.07037 (84.5%) and 0.01289 (15.5%)]
Symmetric Map using XLSTAT
[Symmetric map: education points (primary incomplete, primary complete, secondary incomplete, secondary complete, some tertiary) and reading points (glance, fairly thorough, very thorough); principal inertias 0.07037 (84.5%) and 0.01289 (15.5%)]
Asymmetric and symmetric maps
Asymmetric maps represent the rows and columns jointly in principal and standard coordinates; asymmetric maps are also biplots.

Because the principal coordinates can be much “smaller” than the standard coordinates, especially when λk is small, the generally accepted choice for the joint map is the symmetric map, where both rows and columns are in principal coordinates. Symmetric maps are – strictly speaking – not biplots, but they are almost so (see Gabriel, Biometrika, 2002).
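The two coordinate systems, and the scaling between them, can be sketched directly from the SVD: standard coordinates have weighted inertia 1 along each axis, and multiplying by the singular value √λk gives the principal coordinates. The table N here is an assumed example:

```python
import numpy as np

# Assumed contingency table for illustration.
N = np.array([[20, 10,  5],
              [ 5, 15, 10],
              [10,  5, 20]], dtype=float)
P = N / N.sum()
r, c = P.sum(axis=1), P.sum(axis=0)
S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
U, sv, Vt = np.linalg.svd(S, full_matrices=False)

row_std  = U / np.sqrt(r)[:, None]      # standard coordinates: weighted inertia 1 per axis
col_std  = Vt.T / np.sqrt(c)[:, None]
row_prin = row_std * sv                 # principal coordinates = standard * sqrt(lambda_k)
col_prin = col_std * sv

# asymmetric map: row_prin plotted with col_std (or the reverse);
# symmetric map:  row_prin plotted with col_prin
```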
Reduction of dimensionality

• data centred – the means are subtracted, so the centroid is at the origin
• points weighted (row masses)
  – in the case of frequency data, points are weighted by their row masses, that is, the relative frequencies of each row (i.e. proportional to the sample sizes n)
• metric weighted (column weights):

  dii'² = Σj wj ( yij – yi'j )²

  e.g. wj = 1/σj², the inverse of the variance, in PCA
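A sketch of this weighted metric in code – the σj values below are made up; in the PCA case the weights would be the inverse variances:

```python
# Weighted squared Euclidean distance: d²(i,i') = sum_j w_j * (y_ij - y_i'j)²
def weighted_sq_dist(y1, y2, w):
    return sum(wj * (a - b) ** 2 for wj, a, b in zip(w, y1, y2))

# PCA-style weights w_j = 1/sigma_j² (illustrative standard deviations)
sigmas = [2.0, 0.5, 1.0]
w = [1 / s ** 2 for s in sigmas]
print(weighted_sq_dist([1.0, 2.0, 3.0], [3.0, 2.5, 3.0], w))   # prints 2.0
```

Columns with small variance thus get large weights, exactly as standardization does in PCA.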
� Our company wishes to identify the perceptions of itself and its 9 major competitors (A, B, …, I).
� Data are gathered from representatives from 18 companies that represent their potential client base: each has to say which companies they associate with which of 8 attributes.
� The aim is to gain an idea about the relationships between the competitors and the attributes, and where our company is situated in the overall scheme.
Data set “product” (McFie et al.)
• First note that this is NOT a contingency table, so the chi-squared test is not applicable (a permutation test could test for significance, but then we would need the original respondent-level data).
• This is an interesting example because it can be analyzed “as is” or it can be recoded to bring out certain features.
• Analyzing it with no recoding means that the “size” effect (sometimes called the “halo” effect) is removed, since we analyze profiles, i.e. the counts relative to their totals. In other words, if a company gets relatively few associations, then it is the highest of these (lower) associations that are determinant. Hence, in the following extreme case, a pattern of [18 18 18 …] is identical to a pattern of [1 1 1 …]!
• The masses assigned to the companies will be proportional to the number of associations they get.
• If the “size” effect needs to be visualized as well, the data table should be doubled.
• Doubling involves also coding the counts of the numbers (out of 18) that DON’T associate the company with the attribute in each case.
• There are now two columns per attribute – each attribute is represented by the positive and negative ends of the 0-to-18 scale of counts.
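The doubling step can be sketched as follows – the counts are made-up values for two attributes, out of the 18 respondents:

```python
import numpy as np

# Doubling: each attribute column becomes a positive pole (count) and a
# negative pole (18 - count), so each doubled row sums to a constant and
# the "size" of a company's associations is retained in the analysis.
n_resp = 18
counts = np.array([[12,  3],      # assumed counts for two attributes
                   [ 1, 17],
                   [ 9,  9]])

doubled = np.empty((counts.shape[0], 2 * counts.shape[1]), dtype=int)
doubled[:, 0::2] = counts                 # positive poles (e.g. PQ)
doubled[:, 1::2] = n_resp - counts        # negative poles (e.g. PQ-)
print(doubled)
```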
Doubled table: [table not reproduced]

[Map: companies (ours, A–I) and attributes (GlobProd, PriceSens, ModImage, PriceLevel, Environm, ProdRange, Innovatn, ProdQual); principal inertias 0.0765 (53.1%) and 0.0478 (33.2%)]

• Row points are projections of row profiles – they have inertias along the axes equal to the principal inertias (hence: principal coordinates).
• Column points are projections of extreme “corner” profiles, or vertices (cf. the triangle…) – they have inertia along each axis equal to 1 (hence: standard coordinates).
• Profile points are generally close to the average.
Row asymmetric map
• Row points and column points are both displayed in principal coordinates – both have inertias along the axes equal to the principal inertias.
• Both sets of points occupy similar regions of the map: aesthetically a better graphic.
Symmetric map
[Symmetric map: attributes (GlobProd, PriceSens, ModImage, PriceLevel, Environm, ProdRange, Innovatn, ProdQual) and companies (ours, A–I), both in principal coordinates; principal inertias 0.0765 (53.1%) and 0.0478 (33.2%)]
• Attributes have positive and negative poles – the average association is at the origin of the map; e.g. In(novation) has a high average, P(roduct)Q(uality) has a low average.
• The configuration is fairly similar to the undoubled analysis: there is no strong halo effect.
Doubled data: symmetric map
[Symmetric map of the doubled data: each attribute has a positive pole (GP, PS, MI, PL, En, PR, In, PQ) and a negative pole (GP-, PS-, …), shown together with the companies (ours, A–I); principal inertias 0.1173 (54.5%) and 0.0682 (31.7%). Annotated regions: high product quality; high price sensitivity with low environment, product range and price level; high product range, modern image and global products]
Inertia contributions in CA
• Correspondence analysis (CA) is a method of data visualization which represents the true positions of profile points in a map that comes closest to all the points – closest in the sense of weighted least squares.
• The inertia explained in the map applies to all the points: if we say 83% of the inertia is explained in the map (71% on the first dimension and 12% on the second), this is a figure calculated for all row (or column) points together.
Inertia contributions in CA
• This type of “inertia-explained-by-axes” calculation can also be made for individual points.
• These more detailed results are numerical diagnostics that aid interpretation, called contributions.
• Especially when the map does not explain a high percentage of inertia, these contributions help us to identify points which are represented inaccurately.
• The inertias and their percentages tell us how much of the variance in the table is explained by the principal axes. The contributions do the same, but for each point individually, and help us to see:
  (a) which points are explained better than others;
  (b) which points contribute more to the solution than others.
Geometry of inertia contributions
[Figure: the i-th point ai, with mass mi, at distance di from the centroid c; fik is its projection onto the k-th principal axis]

Total inertia of the cloud of points = Σi mi di² = Σi mi Σk fik² = Σk λk
Inertia of the i-th point = mi di² = mi Σk fik²
Inertia contribution of the i-th point to the k-th axis = mi fik²
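These identities can be checked numerically for any centred, weighted cloud of points, with the principal axes obtained from the weighted covariance matrix – a sketch with random data:

```python
import numpy as np

rng = np.random.default_rng(0)
pts = rng.normal(size=(6, 3))               # 6 points in 3-d
m = np.full(6, 1 / 6)                       # masses m_i (any positive masses summing to 1)

centroid = m @ pts
Y = pts - centroid                          # centred cloud
d2 = (Y ** 2).sum(axis=1)                   # squared distances d_i² to the centroid

# principal axes = eigenvectors of the weighted covariance Y' D_m Y
lam, V = np.linalg.eigh(Y.T @ np.diag(m) @ Y)
F = Y @ V                                   # projections f_ik onto the axes

total = (m * d2).sum()                      # Σ_i m_i d_i²
print(np.isclose(total, lam.sum()))         # equals Σ_k λ_k
print(np.allclose(m @ F ** 2, lam))         # column sums m_i f_ik² give the λ_k
```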
Inertia contributions

[Figure: the i-th point ai, with mass mi, at distance di from the centroid c; fik is its projection onto the k-th principal axis]

Table of inertia contributions mi fik², with row sums mi di² (point inertias) and column sums λk (principal inertias):

       Axes:  1         2        ...   p
  1        m1 f11²   m1 f12²   ...  m1 f1p²  |  m1 d1²
  2        m2 f21²   m2 f22²   ...  m2 f2p²  |  m2 d2²
  3        m3 f31²   m3 f32²   ...  m3 f3p²  |  m3 d3²
  :           :         :              :     |    :
  n        mn fn1²   mn fn2²   ...  mn fnp²  |  mn dn²
              λ1        λ2     ...     λp
Inertia contributions
mi fik² / λk : amount of the inertia of axis k explained by point i (absolute contribution, CTR)
mi fik² / mi di² : amount of the inertia of point i explained by axis k (relative contribution, COR)
mi fik² / mi di² = fik² / di², i.e. the square of fik / di = cos(θik), where θik is the angle between the point and the axis
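A sketch of CTR and COR computed from principal coordinates – the masses and coordinates below are made-up values; in a real CA, F would come from the analysis:

```python
import numpy as np

m = np.array([0.2, 0.5, 0.3])                   # masses m_i (assumed)
F = np.array([[ 1.0,  0.2],                     # principal coordinates f_ik (assumed)
              [-0.6,  0.1],
              [ 0.3, -0.3]])

lam = (m[:, None] * F ** 2).sum(axis=0)         # principal inertias λ_k (column sums)
d2  = (F ** 2).sum(axis=1)                      # d_i² (row sums over all axes)

CTR = m[:, None] * F ** 2 / lam                 # inertia of axis k explained by point i
COR = F ** 2 / d2[:, None]                      # inertia of point i explained by axis k
                                                #   = cos²(θ_ik)
print(CTR.round(3))
print(COR.round(3))
```

By construction, each column of CTR sums to 1 (the whole axis is accounted for by the points) and each row of COR sums to 1 (the whole point is accounted for by the axes).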