Journal of Machine Learning Research VV (YYYY) PP-PP Submitted 4/09; Revised 12/09; Published MM/YY

Information Retrieval Perspective to Nonlinear Dimensionality Reduction for Data Visualization

Jarkko Venna jarkko.venna@tkk.fi
Jaakko Peltonen jaakko.peltonen@tkk.fi
Kristian Nybo kristian.nybo@tkk.fi
Helena Aidos helena.aidos@tkk.fi
Samuel Kaski samuel.kaski@tkk.fi

Aalto University School of Science and Technology

Department of Information and Computer Science

P.O. Box 15400, FI-00076 Aalto, Finland

Editor: Yoshua Bengio

Abstract

Nonlinear dimensionality reduction methods are often used to visualize high-dimensional data, although the existing methods have been designed for other related tasks such as manifold learning. It has been difficult to assess the quality of visualizations since the task has not been well-defined. We give a rigorous definition for a specific visualization task, resulting in quantifiable goodness measures and new visualization methods. The task is information retrieval given the visualization: to find similar data based on the similarities shown on the display. The fundamental tradeoff between precision and recall of information retrieval can then be quantified in visualizations as well. The user needs to give the relative cost of missing similar points vs. retrieving dissimilar points, after which the total cost can be measured. We then introduce a new method NeRV (neighbor retrieval visualizer) which produces an optimal visualization by minimizing the cost. We further derive a variant for supervised visualization; class information is taken rigorously into account when computing the similarity relationships. We show empirically that the unsupervised version outperforms existing unsupervised dimensionality reduction methods in the visualization task, and the supervised version outperforms existing supervised methods.

Keywords: information retrieval, manifold learning, multidimensional scaling, nonlinear dimensionality reduction, visualization

1. Introduction

Visualization of high-dimensional data sets is one of the traditional applications of nonlinear dimensionality reduction methods. In high-dimensional data, such as experimental data where each dimension corresponds to a different measured variable, dependencies between different dimensions often restrict the data points to a manifold whose dimensionality is much lower than the dimensionality of the data space. Many methods are designed for manifold learning, that is, to find and unfold the lower-dimensional manifold. There has been a research boom in manifold learning since 2000, and there now exist many methods that are known to unfold at least certain kinds of manifolds successfully. Some of the successful methods include isomap (Tenenbaum et al., 2000), locally linear embedding (LLE; Roweis and Saul, 2000), Laplacian eigenmap (LE; Belkin and Niyogi, 2002a), and maximum variance unfolding (MVU; Weinberger and Saul, 2006).


It has turned out that the manifold learning methods are not necessarily good for information visualization. Several methods had severe difficulties when the output dimensionality was fixed to two for visualization purposes (Venna and Kaski, 2007a). This is natural since they have been designed to find a manifold, not to compress it into a lower dimensionality.

In this paper we discuss the specific visualization task of projecting the data to points on a two-dimensional display. Note that this task is different from manifold learning, in case the inherent dimensionality of the manifold is higher than two and the manifold cannot be represented perfectly in two dimensions. As the representation is necessarily imperfect, defining and using a measure of goodness of the representation is crucial. However, in spite of the large amount of research into methods for extracting manifolds, there has been very little discussion on what a good two-dimensional representation should be like and how the goodness should be measured. In a recent survey of 69 papers on dimensionality reduction from years 2000–2006 (Venna, 2007) it was found that 28 (≈ 40%) of the papers only presented visualizations of toy or real data sets as a proof of quality. Most of the more quantitative approaches were based on one of two strategies. The first is to measure preservation of all pairwise distances or the order of all pairwise distances. Examples of this approach include the multidimensional scaling (MDS)-type cost functions like Sammon's cost and Stress, methods that relate the distances in the input space to the output space, and various correlation measures that assess the preservation of all pairwise distances. The other common quality assurance strategy is to classify the data in the low-dimensional space and report the classification performance.

The problem with using the above approaches to measure visualization performance is that their connection to visualization is unclear and indirect at best. Unless the purpose of the visualization is to help with a classification task, it is not obvious what the classification accuracy of a projection reveals about its goodness as a visualization. Preservation of pairwise distances, the other widely adopted principle, is a well-defined goal; it is a reasonable goal if the analyst wishes to use the visualization to assess distances between selected pairs of data points, but we argue that this is not the typical way in which an analyst would use a visualization, at least in the early stages of analysis when no hypothesis about the data has yet been formed. Most approaches, including ours, are based on pairwise distances at heart, but we take into account the context of each pairwise distance, yielding a more natural way of evaluating visualization performance; the resulting method has a natural and rigorous interpretation which we discuss below and in the following sections.

In this paper we make rigorous the specific information visualization task of projecting a high-dimensional data set onto a two-dimensional plane for visualizing similarity relationships. This task has a very natural mapping into an information retrieval task, as will be discussed in Section 2. The conceptualization as information retrieval explicitly reveals the necessary tradeoff between precision and recall, of making true similarities visible and avoiding false similarities. The tradeoff can be quantified exactly once costs have been assigned to each of the two error types, and once the total cost has been defined, it can be optimized as will be discussed in Section 3. We then show that the resulting method, called NeRV for neighbor retrieval visualizer, can be further extended to supervised visualization, and that both the unsupervised and supervised methods empirically outperform their alternatives. NeRV includes the previous method called stochastic neighbor embedding (SNE; Hinton and Roweis, 2002) as a special case where the tradeoff is set so that only recall is maximized; thus we give a new information retrieval interpretation to SNE.


This paper extends our earlier conference paper (Venna and Kaski, 2007b) which introduced the ideas in a preliminary form with preliminary experiments. The current paper gives the full justification and comprehensive experiments, and also introduces the supervised version of NeRV.

2. Visualization as Information Retrieval

In this section we define formally the specific visualization task; this is a novel formalization of visualization as an information retrieval task. We first give the definition for a simplified setup in Section 2.1, and then generalize it in Section 2.2.

2.1 Similarity Visualization with Binary Neighborhood Relationships

In the following we first define the specific visualization task and a cost function for it; we then show that the cost function is related to the traditional information retrieval measures precision and recall.

2.1.1 TASK DEFINITION: SIMILARITY VISUALIZATION

Let $\{x_i\}_{i=1}^N$ be a set of input data samples, and let each sample $i$ have an input neighborhood $P_i$, consisting of samples that are close to $i$. Typically, $P_i$ might consist of all input samples (other than $i$ itself) that fall within some radius of $i$, or alternatively $P_i$ might consist of a fixed number of input samples most similar to $i$. In either case, let $r_i$ be the size of the set $P_i$.

The goal of similarity visualization is to produce low-dimensional output coordinates $\{y_i\}_{i=1}^N$ for the input data, usable in visual information retrieval. Given any sample $i$ as a query, in visual information retrieval samples are retrieved based on the visualization; the retrieved result is a set $Q_i$ of samples that are close to $y_i$ in the visualization; we call $Q_i$ the output neighborhood. The $Q_i$ typically consists of all input samples $j$ (other than $i$ itself) whose visualization coordinates $y_j$ are within some radius of $y_i$ in the visualization, or alternatively $Q_i$ might consist of a fixed number of input samples whose output coordinates are nearest to $y_i$. In either case, let $k_i$ be the number of points in the set $Q_i$. The number of points in $Q_i$ may be different from the number of points in $P_i$; for example, if many points have been placed close to $y_i$ in the visualization, then retrieving all points within a certain radius of $y_i$ might yield too many retrieved points, compared to how many are neighbors in the input space. Figure 1 illustrates the setup.

The remaining question is what is a good visualization, that is, what is the cost function. Denote the number of samples that are in both $Q_i$ and $P_i$ by $N_{\mathrm{TP},i}$ (true positives), samples that are in $Q_i$ but not in $P_i$ by $N_{\mathrm{FP},i}$ (false positives), and samples that are in $P_i$ but not in $Q_i$ by $N_{\mathrm{MISS},i}$ (misses). Assume the user has assigned a cost $C_{\mathrm{FP}}$ for each false positive and $C_{\mathrm{MISS}}$ for each miss. The total cost $E_i$ for query $i$, summed over all data points, then is

$$E_i = N_{\mathrm{FP},i}\, C_{\mathrm{FP}} + N_{\mathrm{MISS},i}\, C_{\mathrm{MISS}} . \tag{1}$$
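As a concrete illustration of Equation (1), the following minimal Python sketch (our own, not code from the paper) computes the cost of a single query, assuming the neighborhoods $P_i$ and $Q_i$ are given as sets of sample indices:

```python
# Minimal sketch of the per-query cost of Eq. (1); P_i and Q_i are
# hypothetical Python sets of neighbor indices, C_fp and C_miss the
# user-assigned costs of false positives and misses.
def query_cost(P_i, Q_i, C_fp, C_miss):
    n_false_positives = len(Q_i - P_i)  # retrieved but not relevant
    n_misses = len(P_i - Q_i)           # relevant but not retrieved
    return n_false_positives * C_fp + n_misses * C_miss

# Example: 2 false positives and 1 miss.
print(query_cost({1, 2, 3}, {2, 3, 4, 5}, C_fp=1.0, C_miss=2.0))  # 4.0
```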

Figure 1: Diagram of the types of errors in visualization. (The omitted figure shows a query point $x_i$ with its input-space neighborhood $P_i$, and its projection $y_i$ with its output-space neighborhood $Q_i$; points of $Q_i$ outside $P_i$ are false positives, and points of $P_i$ outside $Q_i$ are misses.)

2.1.2 RELATIONSHIP TO PRECISION AND RECALL

The cost function of similarity visualization (1) bears a close relationship to the traditional measures of information retrieval, precision and recall. If we allow $C_{\mathrm{MISS}}$ to be a function of the total number of relevant points $r_i$, more specifically $C_{\mathrm{MISS}}(r_i) = C'_{\mathrm{MISS}}/r_i$, and take the cost per retrieved point by dividing by $k_i$, the total cost becomes

$$E(k_i, r_i) = \frac{1}{k_i} E(r_i) = \frac{1}{k_i}\left(N_{\mathrm{FP},i}\, C_{\mathrm{FP}} + N_{\mathrm{MISS},i}\, C_{\mathrm{MISS}}(r_i)\right) = C_{\mathrm{FP}} \frac{N_{\mathrm{FP},i}}{k_i} + \frac{C'_{\mathrm{MISS}}}{k_i} \frac{N_{\mathrm{MISS},i}}{r_i} = C_{\mathrm{FP}}\left(1 - \mathrm{precision}(i)\right) + \frac{C'_{\mathrm{MISS}}}{k_i}\left(1 - \mathrm{recall}(i)\right) .$$

The traditional definition of precision for a single query is

$$\mathrm{precision}(i) = \frac{N_{\mathrm{TP},i}}{k_i} = 1 - \frac{N_{\mathrm{FP},i}}{k_i} ,$$

and recall is

$$\mathrm{recall}(i) = \frac{N_{\mathrm{TP},i}}{r_i} = 1 - \frac{N_{\mathrm{MISS},i}}{r_i} .$$

Hence, fixing the costs $C_{\mathrm{FP}}$ and $C_{\mathrm{MISS}}$ and minimizing (1) corresponds to maximizing a specific weighted combination of precision and recall.

Finally, to assess the performance of the full visualization, the cost needs to be averaged over all samples (queries), which yields the mean precision and recall of the visualization.
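For completeness, the corresponding computation of standard precision and recall, and their means over queries, can be sketched as follows (our own illustration, with the same hypothetical set-valued inputs as above):

```python
# Sketch: precision and recall for one query, and their means over all
# queries; P and Q are hypothetical lists of neighbor sets, one per point.
def precision_recall(P_i, Q_i):
    n_tp = len(P_i & Q_i)                    # true positives
    return n_tp / len(Q_i), n_tp / len(P_i)  # precision, recall

def mean_precision_recall(P, Q):
    pairs = [precision_recall(P_i, Q_i) for P_i, Q_i in zip(P, Q)]
    n = len(pairs)
    return (sum(p for p, _ in pairs) / n,
            sum(r for _, r in pairs) / n)
```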

2.1.3 DISCUSSION

Given a high-dimensional data set, it is generally not possible to show all the similarity relationships within the data on a low-dimensional display; therefore, all linear or nonlinear dimensionality reduction methods need to make a tradeoff about which kinds of similarity relationships they aim to show on the display. Equation (1) fixes the tradeoff given the costs of the two kinds of errors. Figure 2 illustrates this tradeoff (computed with methods introduced in Section 3) with a toy example where a three-dimensional sphere surface is visualized in two dimensions. If we take some query point in the visualization and retrieve a set of points close by in the visualization, in display A such retrieval yields few false positives but many misses, whereas in display B the retrieval yields few misses but many false positives. The tradeoff can also be seen in the (mean) precision-recall curves for the two visualizations, where the number of retrieved points is varied to yield the curve. Visualization A reaches higher values of precision, but the precision drops well before high recall is reached. Visualization B has lower precision at the left end of the curve, but its precision does not drop as much even when high recall is reached.

Note that in order to quantify the tradeoff, both precision and recall need to be used. This requires a rich enough retrieval model, in the sense that the number of retrieved points can be different from the number of relevant points, so that precision and recall get different values. It is well known in information retrieval that if the numbers of relevant and retrieved items (here points) are equal, precision and recall become equal. The recent "local continuity" criterion (Equation 9 in Chen and Buja, 2009) is simply precision/recall under this constraint; we thus give a novel information retrieval interpretation of it as a side result. Such a criterion is useful but it gives only a limited view of the quality of visualizations, because it corresponds to a limited retrieval model and cannot fully quantify the precision-recall tradeoff. In this paper we will use fixed-radius neighborhoods (defined more precisely in Section 2.2) in the visualizations, which naturally yields differing numbers of retrieved and relevant points.

The simple visualization setup presented in this section is a novel formulation of visualization and useful as a clearly defined starting point. However, for practical use it has a shortcoming: the overly simple binary fixed-size neighborhoods do not take into account grades of relevance. The cost function does not penalize violating the original similarity ordering of neighbor samples, and it penalizes all neighborhood violations with the same cost. Next we will introduce a more practical visualization setup.

2.2 Similarity Visualization with Continuous Neighborhood Relationships

We generalize the simple binary neighborhood case by defining probabilistic neighborhoods both in (i) the input and (ii) the output spaces, and (iii) replacing the binary precision and recall measures with probabilistic ones. It will finally be shown that for binary neighborhoods, interpreted as a constant high probability of being a neighbor within the neighborhood set and a constant low probability elsewhere, the measures reduce to the standard precision and recall.

2.2.1 PROBABILISTIC MODEL OF RETRIEVAL

We start by defining the neighborhood in the output space, and do that by defining a probability distribution over the neighbor points. Such a distribution is interpretable as a model of how the user does the retrieval given the visualization display.

Given the location of the query point on the display, $y_i$, suppose that the user selects one point at a time for inspection. Denote by $q_{j|i}$ the probability that the user chooses $y_j$. If we can define such probabilities, they will define a probabilistic model of retrieval for the neighbors of $y_i$.

The form of $q_{j|i}$ can be defined by a few axiomatic choices and a few arbitrary ones. Since the $q_{j|i}$ are a probability distribution over $j$ for each $i$, they must be nonnegative and sum to one over $j$; therefore we can represent them as $q_{j|i} = \exp(-f_{i,j}) / \sum_{k \neq i} \exp(-f_{i,k})$ where $f_{i,j} \in \mathbb{R}$. The $f_{i,j}$ should be an increasing function of the distance (dissimilarity) between $y_i$ and $y_j$; we further assume that $f_{i,j}$ depends only on $y_i$ and $y_j$ and not on the other points $y_k$. It remains to choose the form of $f_{i,j}$. In general there should not be any reason to favor any particular neighbor point, and hence the form should not depend on $j$. It could depend on $i$, however; we assume it has the simple quadratic form $f_{i,j} = \|y_i - y_j\|^2 / \sigma_i^2$, where $\|y_i - y_j\|$ is the Euclidean distance and the positive multiplier $1/\sigma_i^2$ allows the function to grow at an individual rate for each $i$. This yields the definition

$$q_{j|i} = \frac{\exp\left(-\|y_i - y_j\|^2 / \sigma_i^2\right)}{\sum_{k \neq i} \exp\left(-\|y_i - y_k\|^2 / \sigma_i^2\right)} . \tag{2}$$

Figure 2: Demonstration of the tradeoff between false positives and misses. Top left: a three-dimensional data set sampled from the surface of a sphere; only the front hemisphere is shown for clarity. The glyph shapes (size, elongation, and angle) show the three-dimensional coordinates of each point; the colors in the online version show the same information. Bottom: two embeddings of the data set. In embedding A, the sphere has been cut open and folded out. This embedding eliminates false positives, but there are some misses because points on different sides of the tear end up far away from each other. In contrast, embedding B minimizes the number of misses by simply squashing the sphere flat; this results in a large number of false positives because points on opposite sides of the sphere are mapped close to each other. Top right: mean precision-mean recall curves with input neighborhood size $r = 75$, as a function of the output neighborhood size $k$, for the two projections. Embedding A has better precision (yielding higher values at the left end of the curve), whereas embedding B has better recall (yielding higher values at the right end of the curve).

2.2.2 PROBABILISTIC MODEL OF RELEVANCE

We extend the simple binary neighborhoods of input data samples to probabilistic neighborhoods as follows. Suppose that if the user was choosing the neighbors of a query point $i$ in the original data space, she would choose point $j$ with probability $p_{j|i}$. The $p_{j|i}$ define a probabilistic model of relevance for the original data, and are equivalent to a neighborhood around $i$: the higher the chance of choosing this neighbor, the larger its relevance to $i$.

We define the probability $p_{j|i}$ analogously to $q_{j|i}$, as

$$p_{j|i} = \frac{\exp\left(-d(x_i, x_j)^2 / \sigma_i^2\right)}{\sum_{k \neq i} \exp\left(-d(x_i, x_k)^2 / \sigma_i^2\right)} , \tag{3}$$

where $d(\cdot, \cdot)$ is a suitable difference measure in the original data, and $x_i$ refers to the point in the original data that is represented by $y_i$ in the visualization. Some data sets may provide the values of $d(\cdot, \cdot)$ directly; otherwise the analyst can choose a difference measure suitable for the data feature vectors. Later in this paper we will use both the simple Euclidean distance and a more complicated distance measure that incorporates additional information about the data.

Given known values of $d(\cdot, \cdot)$, the above definition of the neighborhood $p_{j|i}$ can be motivated by the same arguments as $q_{j|i}$. That is, the given form of $p_{j|i}$ is a good choice if no other information about the original neighborhoods is available. Other choices are possible too; in particular, if the data directly includes neighbor probabilities, they can simply be used as the $p_{j|i}$. Likewise, if more accurate models of user behavior are available, they can be plugged in place of $q_{j|i}$. The forms of $p_{j|i}$ and $q_{j|i}$ need not be the same.

For each point $i$, the scaling parameter $\sigma_i$ controls how quickly the probabilities $p_{j|i}$ fall off with distance. These parameters could be fixed by prior knowledge, but without such knowledge it is reasonable to set the $\sigma_i$ by specifying how much flexibility there should be about the choice of neighbors. That is, we set $\sigma_i$ to a value that makes the entropy of the $p_{\cdot|i}$ distribution equal to $\log k$, where $k$ is a rough upper limit for the number of relevant neighbors, set by the user. We use the same relative scale $\sigma_i$ both in the input and output spaces (Equations 2 and 3).
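The following sketch (our own illustration; the function and variable names are not from the paper) computes one row of the neighborhood distributions of Equations (2) and (3) and selects $\sigma_i^2$ by bisection so that the entropy of $p_{\cdot|i}$ equals $\log k$, as described above:

```python
import numpy as np

def neighbor_probs(sq_dists_i, sigma2_i):
    """One row of Eq. (2)/(3): softmax of -d^2/sigma_i^2 over the other
    points; sq_dists_i holds squared distances from point i to all j != i."""
    logits = -sq_dists_i / sigma2_i
    logits -= logits.max()                    # numerical stability
    p = np.exp(logits)
    return p / p.sum()

def sigma2_for_entropy(sq_dists_i, k, n_iter=50):
    """Bisection on sigma_i^2 so that the entropy of p_{.|i} equals log k,
    with k the effective number of neighbors set by the user (Section 2.2)."""
    target = np.log(k)
    lo, hi = 1e-10, 1e10
    for _ in range(n_iter):
        mid = np.sqrt(lo * hi)                # bisect on a log scale
        p = neighbor_probs(sq_dists_i, mid)
        entropy = -np.sum(p * np.log(p + 1e-12))
        if entropy > target:
            hi = mid                          # too flat: decrease sigma^2
        else:
            lo = mid                          # too peaked: increase sigma^2
    return np.sqrt(lo * hi)
```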

2.2.3 COST FUNCTIONS

The remaining task is to measure how well the retrieval done in the output space, given the visualization, matches the true relevances defined in the input space. Both were defined above in terms of distributions, and a natural candidate for the measure is the Kullback-Leibler divergence, defined as

$$D(p_i, q_i) = \sum_{j \neq i} p_{j|i} \log \frac{p_{j|i}}{q_{j|i}} ,$$

where $p_i$ and $q_i$ are the neighbor distributions for a particular point $i$, in the input space and in the visualization respectively. For the particular probability distributions defined above, the Kullback-Leibler divergence turns out to be intimately related to precision and recall. Specifically, for any query $i$, the Kullback-Leibler divergence $D(p_i, q_i)$ is a generalization of recall, and $D(q_i, p_i)$ is a generalization of precision; for simple "binary" neighborhood definitions, the Kullback-Leibler divergences and the precision-recall measures become equivalent. The proof is in Appendix A.

We call $D(q_i, p_i)$ smoothed precision and $D(p_i, q_i)$ smoothed recall. To evaluate a complete visualization rather than a single query, we define aggregate measures in the standard fashion: mean smoothed precision is defined as $E_i[D(q_i, p_i)]$ and mean smoothed recall as $E_i[D(p_i, q_i)]$, where $E$ denotes expectation and the means are taken over queries (data points $i$).
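Given the neighbor distributions as matrices, these aggregate measures are simple averages of row-wise Kullback-Leibler divergences; a minimal sketch (our own, assuming $(n, n)$ arrays P and Q whose rows are the distributions of Equations (2) and (3) with zero diagonal):

```python
import numpy as np

def kl_rows(A, B, eps=1e-12):
    """Row-wise KL divergences D(a_i, b_i) for (n, n) arrays whose rows
    are neighbor distributions with zero diagonal."""
    terms = np.where(A > 0,
                     A * np.log(np.maximum(A, eps) / np.maximum(B, eps)),
                     0.0)
    return terms.sum(axis=1)

def mean_smoothed_recall(P, Q):
    return kl_rows(P, Q).mean()     # E_i[D(p_i, q_i)]; a loss, lower is better

def mean_smoothed_precision(P, Q):
    return kl_rows(Q, P).mean()     # E_i[D(q_i, p_i)]; a loss, lower is better
```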

Mean smoothed precision and recall are analogous to mean precision and recall in that we cannot in general reach the optimum of both simultaneously. We return to Figure 2, which illustrates the tradeoff for nonlinear projections of a three-dimensional sphere surface. Subfigure A was created by maximizing mean smoothed precision; the sphere has been cut open and folded out, which minimizes the number of false positives but also incurs some misses, because some points located on opposite edges of the point cloud were originally close to each other on the sphere. Subfigure B was created by maximizing mean smoothed recall; the sphere is squashed flat, which minimizes the number of misses, as all the points that were close to each other in the original data are close to each other in the visualization. However, there are then a large number of false positives, because opposite sides of the sphere have been mapped on top of each other, so that many points that appear close to each other in the visualization are actually originally far away from each other.

2.2.4 EASIER-TO-INTERPRET ALTERNATIVE GOODNESS MEASURES

Mean smoothed precision and recall are rigorous and well-motivated measures of visualization performance, but they have one practical shortcoming for human analysts: the errors have no upper bound, and the scale will tend to depend on the data set. The measures are very useful for comparing several visualizations of the same data, and will turn out to be useful as optimization criteria, but we would additionally like to have measures where the plain numbers are easily interpretable. We address this by introducing mean rank-based smoothed precision and recall: we simply replace the distances in the definitions of $p_{j|i}$ and $q_{j|i}$ with ranks, so that the probability for the nearest neighbor uses a distance of 1, the probability for the second nearest neighbor a distance of 2, and so on. This imposes an upper bound on the error because the worst-case scenario is that the ranks in the data set are reversed in the visualization. Dividing the errors by their upper bounds gives us measures that lie in the interval [0,1] regardless of the data and are thus much easier to interpret. The downside is that substituting ranks for distances makes the measures disregard much of the neighborhood structure in the data, so we suggest using mean rank-based smoothed precision and recall as easier-to-interpret, but less discriminating, complements to, rather than replacements of, mean smoothed precision and recall.

3. Neighborhood Retrieval Visualizer (NeRV)

In Section 2 we defined similarity visualization as an information retrieval task. The quality of a visualization can be measured by the two loss functions, mean smoothed precision and recall. These measures generalize the straightforward precision and recall measures to non-binary neighborhoods. They have the further advantage of being continuous and differentiable functions of the output visualization coordinates. It is then easy to use the measures as optimization criteria for a visualization method. We now introduce a visualization algorithm that optimizes visual information retrieval performance. We call the algorithm the neighborhood retrieval visualizer (NeRV).

As demonstrated in Figure 2, precision and recall cannot in general be optimized simultaneously, and the user has to choose which loss function (mean smoothed precision or recall) is more important, by assigning a cost for misses and a cost for false positives. Once these costs have been assigned, the visualization task is simply to minimize the total cost. In practice the relative cost of false positives to misses is given as a parameter λ. The NeRV cost function then becomes

$$E_{\mathrm{NeRV}} = \lambda\, E_i[D(p_i, q_i)] + (1-\lambda)\, E_i[D(q_i, p_i)] \;\propto\; \lambda \sum_i \sum_{j \neq i} p_{j|i} \log \frac{p_{j|i}}{q_{j|i}} + (1-\lambda) \sum_i \sum_{j \neq i} q_{j|i} \log \frac{q_{j|i}}{p_{j|i}} , \tag{4}$$

where, for example, setting λ to 0.1 indicates that the user considers an error in precision $(1-0.1)/0.1 = 9$ times as expensive as a similar error in recall.
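A direct transcription of the right-hand side of Equation (4), up to the proportionality constant, is straightforward; the sketch below is our own and uses the matrix convention of the earlier snippets:

```python
import numpy as np

def nerv_cost(P, Q, lam, eps=1e-12):
    """NeRV cost of Eq. (4): lam * sum_i D(p_i, q_i)
    + (1 - lam) * sum_i D(q_i, p_i), for (n, n) row-stochastic
    matrices P and Q with zero diagonal."""
    recall_term = np.sum(P * np.log(np.maximum(P, eps) / np.maximum(Q, eps)))
    precision_term = np.sum(Q * np.log(np.maximum(Q, eps) / np.maximum(P, eps)))
    return lam * recall_term + (1.0 - lam) * precision_term
```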

To optimize the cost function (4) with respect to the output coordinates $y_i$ of each data point, we use a standard conjugate gradient algorithm. The computational complexity of each iteration is $O(dn^2)$, where $n$ is the number of data points and $d$ the dimension of the projection. (In our earlier conference paper a coarse approximate algorithm was required for speed; this turned out to be unnecessary, and the $O(dn^2)$ complexity does not require any approximation.) Note that if a pairwise distance matrix in the input space is not directly provided as data, it can as usual be computed from input features; this is a one-time computation done at the start of the algorithm and takes $O(Dn^2)$ time, where $D$ is the input dimensionality.

In general, NeRV optimizes a user-defined cost which forms a tradeoff between mean smoothed precision and mean smoothed recall. If we set λ = 1 in Equation (4), we obtain the cost function of stochastic neighbor embedding (SNE; see Hinton and Roweis, 2002). Hence we get as a side result a new interpretation of SNE as a method that maximizes mean smoothed recall.

3.0.5 PRACTICAL ADVICE ON OPTIMIZATION

After computing the distance matrix from the input data, we scale the input distances so that the average distance is equal to 1. We use a random projection onto the unit square as a starting point for the algorithm. Even this simple choice has turned out to give better results than alternatives; a more intelligent initialization, such as projecting the data using principal component analysis, can of course also be used.

To speed up convergence and avoid local minima, we apply a further initialization step: we run ten rounds of conjugate gradient (two conjugate gradient steps per round), and after each round decrease the neighborhood scaling parameters $\sigma_i$ used in Equations (2) and (3). Initially, we set the $\sigma_i$ to half the diameter of the input data. We decrease them linearly so that the final value makes the entropy of the $p_{j|i}$ distribution equal to an effective number of neighbors $k$, which is the choice recommended in Section 2.2. This initialization step has the same complexity $O(dn^2)$ per iteration as the rest of the algorithm. After this initialization phase we perform twenty standard conjugate gradient steps.
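To make the schedule concrete, here is a schematic NeRV optimization loop in Python. It is a simplified sketch under our own assumptions, not the authors' implementation: gradients are approximated numerically by scipy, and the final neighborhood scale is approximated by the distance to the $k$-th nearest neighbor rather than the exact entropy criterion.

```python
import numpy as np
from scipy.optimize import minimize

def softmax_rows(logits):
    """Rows of Eq. (2)/(3) from -d^2/sigma_i^2 logits; the diagonal
    (j = i) is excluded before normalization."""
    np.fill_diagonal(logits, -np.inf)
    logits = logits - logits.max(axis=1, keepdims=True)
    E = np.exp(logits)
    return E / E.sum(axis=1, keepdims=True)

def nerv_embed(D, lam=0.9, k=20, dim=2, seed=0, eps=1e-12):
    """Schematic NeRV loop following the practical advice above;
    D is an (n, n) input distance matrix with n > k."""
    rng = np.random.default_rng(seed)
    n = D.shape[0]
    D = D / D[~np.eye(n, dtype=bool)].mean()   # scale mean distance to 1
    y = rng.random(n * dim)                    # random init on unit square

    def cost(y_flat, sigma2):
        Y = y_flat.reshape(n, dim)
        P = softmax_rows(-(D ** 2) / sigma2[:, None])
        Dy = np.linalg.norm(Y[:, None] - Y[None, :], axis=-1)
        Q = softmax_rows(-(Dy ** 2) / sigma2[:, None])
        kl = lambda A, B: np.sum(
            A * np.log(np.maximum(A, eps) / np.maximum(B, eps)))
        return lam * kl(P, Q) + (1 - lam) * kl(Q, P)

    # Anneal sigma linearly from half the data diameter towards a final
    # per-point scale (here: distance to the k-th nearest neighbor).
    sigma_start = D.max() / 2.0
    sigma_final = np.sort(D, axis=1)[:, k]
    for t in np.linspace(0.0, 1.0, 10):        # ten rounds ...
        sigma2 = ((1 - t) * sigma_start + t * sigma_final) ** 2
        y = minimize(cost, y, args=(sigma2,), method="CG",
                     options={"maxiter": 2}).x  # ... of two CG steps each
    y = minimize(cost, y, args=(sigma2,), method="CG",
                 options={"maxiter": 20}).x     # final CG steps
    return y.reshape(n, dim)
```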


4. Using NeRV for Unsupervised Visualization

It is easy to apply NeRV to unsupervised dimensionality reduction. As in any unsupervised analysis, the analyst first chooses a suitable unsupervised similarity or distance measure for the input data; for vector-valued input data this can be the standard Euclidean distance (which we will use here), or it can be some other measure suggested by domain knowledge. Once the analyst has specified the relative importance of precision and recall by choosing a value for λ, the NeRV algorithm computes the embedding based on the distances it is given.

In this section we report extensive experiments comparing the performance of NeRV with other dimensionality reduction methods on unsupervised visualization of several data sets, including both benchmark data sets and real-life bioinformatics data sets. In the following subsections we describe the comparison methods and data sets, briefly discuss the experimental methodology, and present the results.

4.1 Comparison Methods for Unsupervised Visualization

For the task of unsupervised visualization we compare the performance of NeRV with the following unsupervised dimensionality reduction methods: principal component analysis (PCA; Hotelling, 1933), metric multidimensional scaling (MDS; see Borg and Groenen, 1997), locally linear embedding (LLE; Roweis and Saul, 2000), Laplacian eigenmap (LE; Belkin and Niyogi, 2002a), Hessian-based locally linear embedding (HLLE; Donoho and Grimes, 2003), isomap (Tenenbaum et al., 2000), curvilinear component analysis (CCA; Demartines and Herault, 1997), curvilinear distance analysis (CDA; Lee et al., 2004), maximum variance unfolding (MVU; Weinberger and Saul, 2006), landmark maximum variance unfolding (LMVU; Weinberger et al., 2005), and our previous method local MDS (LMDS; Venna and Kaski, 2006).

Principal component analysis (PCA; Hotelling, 1933) finds linear projections that maximally preserve the variance in the data. More technically, the projection directions can be found by solving for the eigenvalues and eigenvectors of the covariance matrix $C_x$ of the input data points. The eigenvectors corresponding to the two or three largest eigenvalues are collected into a matrix $A$, and the data points $x_i$ can then be visualized by projecting them with $y_i = A x_i$, where $y_i$ is the obtained low-dimensional representation of $x_i$. PCA is very closely related to linear multidimensional scaling (linear MDS, also called classical scaling; Torgerson, 1952; Gower, 1966), which tries to find low-dimensional coordinates preserving squared distances. It can be shown (Gower, 1966) that when the dimensionality of the sought solutions is the same and the distance measure is Euclidean, the projection of the original data onto the PCA subspace equals the configuration of points found by linear MDS. This implies that PCA tries to preserve the squared distances between data points, and that linear MDS finds a solution that is a linear projection of the original data.
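A minimal PCA sketch along these lines (our own illustration):

```python
import numpy as np

def pca_project(X, dim=2):
    """Project rows of X onto the top principal components: the
    eigenvectors of the covariance matrix with the largest eigenvalues."""
    Xc = X - X.mean(axis=0)                 # center the data
    C = np.cov(Xc, rowvar=False)            # covariance matrix C_x
    eigvals, eigvecs = np.linalg.eigh(C)    # ascending eigenvalues
    A = eigvecs[:, ::-1][:, :dim]           # top-`dim` eigenvectors as columns
    return Xc @ A                           # low-dimensional coordinates y_i
```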

Traditional multidimensional scaling (MDS; see Borg and Groenen, 1997) exists in several different variants, but they all have a common goal: to find a configuration of output coordinates that preserves the pairwise distance matrix of the input data. For the comparison experiments we chose metric MDS, which is the simplest nonlinear MDS method; its cost function (Kruskal, 1964), called the raw stress, is

$$E = \sum_{i,j} \left(d(x_i, x_j) - d(y_i, y_j)\right)^2 , \tag{5}$$

where $d(x_i, x_j)$ is the distance of points $x_i$ and $x_j$ in the input space and $d(y_i, y_j)$ is the distance of their corresponding representations (locations) $y_i$ and $y_j$ in the output space. This cost function is minimized with respect to the representations $y_i$.
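The raw stress of Equation (5) is easy to evaluate for a candidate embedding; a sketch (our own, assuming a precomputed input distance matrix):

```python
import numpy as np
from scipy.spatial.distance import pdist

def raw_stress(Y, D_input):
    """Raw stress of Eq. (5) for an embedding Y (n x dim) against the
    input distance matrix D_input, summed over unordered pairs."""
    d_out = pdist(Y)                                         # output distances
    d_in = D_input[np.triu_indices(D_input.shape[0], k=1)]   # matching order
    return np.sum((d_in - d_out) ** 2)
```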

Isomap (Tenenbaum et al., 2000) is an interesting variant of MDS, which again finds a configuration of output coordinates matching a given distance matrix. The difference is that isomap does not compute pairwise input-space distances as simple Euclidean distances but as geodesic distances along the manifold of the data (technically, along a graph formed by connecting all k-nearest neighbors). Given these geodesic distances, the output coordinates are found by standard linear MDS. When output coordinates are found for such input distances, the manifold structure in the original data becomes unfolded; it has been shown (Bernstein et al., 2000) that this algorithm is asymptotically able to recover certain types of manifolds. We used the isomap implementation available at http://isomap.stanford.edu in the experiments.
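The isomap pipeline can be sketched as follows (our own illustration, assuming a connected neighborhood graph; the helpers come from scipy and scikit-learn, not from the isomap implementation used in the experiments):

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path
from sklearn.neighbors import kneighbors_graph

def isomap_embed(X, k=10, dim=2):
    """Sketch of isomap: k-nearest-neighbor graph, geodesic (shortest-path)
    distances along it, then classical (linear) MDS on those distances."""
    G = kneighbors_graph(X, n_neighbors=k, mode="distance")
    D = shortest_path(G, directed=False)          # geodesic distance matrix
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n           # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                   # double-centered Gram matrix
    eigvals, eigvecs = np.linalg.eigh(B)
    idx = np.argsort(eigvals)[::-1][:dim]         # top eigenpairs
    return eigvecs[:, idx] * np.sqrt(np.maximum(eigvals[idx], 0.0))
```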

Curvilinear component analysis (CCA; Demartines and Herault, 1997) is a variant of MDS that tries to preserve only distances between points that are near each other in the visualization. This is achieved by weighting each term in the MDS cost function (5) by a coefficient that depends on the corresponding pairwise distance in the visualization. In the implementation we use, the coefficient is simply a step function that equals 1 if the distance is below a predetermined threshold and 0 if it is larger.
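The step-weighted stress described above can be sketched as follows (our own illustration, not the CCA implementation used in the experiments; note that the weights depend on the output distances, so they change as the embedding changes):

```python
import numpy as np
from scipy.spatial.distance import pdist

def cca_stress(Y, D_input, threshold):
    """MDS stress terms of Eq. (5) kept only where the output-space
    distance is below a threshold (the step-function coefficient)."""
    d_out = pdist(Y)
    d_in = D_input[np.triu_indices(D_input.shape[0], k=1)]
    w = (d_out < threshold).astype(float)     # step-function weights
    return np.sum(w * (d_in - d_out) ** 2)
```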

Curvilinear distance analysis (CDA; Lee et al., 2000, 2004) is an extension of CCA. The idea is to replace the Euclidean distances in the original space with geodesic distances, in the same manner as in the isomap algorithm. Otherwise the algorithm stays the same.

Local MDS (LMDS; Venna and Kaski, 2006) is our earlier method, an extension of CCA that focuses on local proximities with a tunable cost function tradeoff. It can be seen as a first step in the development of the ideas of NeRV.

The locally linear embedding (LLE; Roweis and Saul, 2000) algorithm is based on the assumption that the data manifold is smooth enough and sampled densely enough that each data point lies close to a locally linear subspace on the manifold. LLE makes a locally linear approximation of the whole data manifold: LLE first estimates a local coordinate system for each data point, by calculating linear coefficients that reconstruct the data point as well as possible from its $k$ nearest neighbors. To unfold the manifold, LLE finds low-dimensional coordinates that preserve the previously estimated local coordinate systems as well as possible. Technically, LLE first minimizes the reconstruction error $E(W) = \sum_i \|x_i - \sum_j W_{i,j} x_j\|^2$ with respect to the coefficients $W_{i,j}$, under the constraints that $W_{i,j} = 0$ if $i$ and $j$ are not neighbors, and $\sum_j W_{i,j} = 1$. Given the weights, the low-dimensional configuration of points is next found by minimizing $E(Y) = \sum_i \|y_i - \sum_j W_{i,j} y_j\|^2$ with respect to the low-dimensional representation $y_i$ of each data point.
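The first LLE step, solving for the reconstruction weights, can be sketched as follows (our own illustration; the neighbor lists are assumed given, and a small regularizer keeps the local Gram matrices invertible):

```python
import numpy as np

def lle_weights(X, neighbors, reg=1e-3):
    """Reconstruction weights W_{i,j}: each point expressed from its
    nearest neighbors, with weights constrained to sum to one.
    `neighbors[i]` lists the neighbor indices of point i."""
    n = X.shape[0]
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = neighbors[i]
        Z = X[nbrs] - X[i]                       # shift neighbors to origin
        G = Z @ Z.T                              # local Gram matrix
        G += reg * np.trace(G) * np.eye(len(nbrs))  # regularization
        w = np.linalg.solve(G, np.ones(len(nbrs)))
        W[i, nbrs] = w / w.sum()                 # enforce sum-to-one
    return W
```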

The Laplacian eigenmap (LE; see Belkin and Niyogi, 2002a) uses a graph embedding approach. An undirected k-nearest-neighbor graph is formed, where each data point is a vertex. Points $i$ and $j$ are connected by an edge with weight $W_{i,j} = 1$ if $j$ is among the $k$ nearest neighbors of $i$; otherwise the edge weight is set to zero. This simple weighting method has been found to work well in practice (Belkin and Niyogi, 2002b). To find a low-dimensional embedding of the graph, the algorithm tries to put points that are connected in the graph as close to each other as possible, and does not care what happens to the other points. Technically, it minimizes $\frac{1}{2} \sum_{i,j} \|y_i - y_j\|^2 W_{i,j} = y^T L y$ with respect to the low-dimensional point locations $y_i$, where $L = D - W$ is the graph Laplacian and $D$ is a diagonal matrix with elements $D_{ii} = \sum_j W_{i,j}$. However, this cost function has an undesirable trivial solution: putting all points in the same position would minimize the cost. This can be avoided by adding suitable constraints. In practice the low-dimensional configuration is found by solving the generalized eigenvalue problem $L y = \lambda D y$ (Belkin and Niyogi, 2002a). The smallest eigenvalue corresponds to the trivial solution, but the eigenvectors corresponding to the next smallest eigenvalues give the Laplacian eigenmap solution.
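A minimal sketch of the resulting computation (our own illustration, using dense matrices and helpers from scipy and scikit-learn):

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.neighbors import kneighbors_graph

def laplacian_eigenmap(X, k=10, dim=2):
    """Laplacian eigenmap sketch: symmetrized 0/1 k-NN graph, then the
    generalized eigenproblem L y = lambda D y, skipping the trivial
    constant eigenvector."""
    A = kneighbors_graph(X, n_neighbors=k, mode="connectivity").toarray()
    W = np.maximum(A, A.T)                    # make the graph undirected
    D = np.diag(W.sum(axis=1))
    L = D - W                                 # graph Laplacian
    eigvals, eigvecs = eigh(L, D)             # generalized, ascending order
    return eigvecs[:, 1:dim + 1]              # drop the trivial solution
```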

The Laplacian eigenmap algorithm reduces to solving a generalized eigenvalue problem because the cost function that is minimized is a quadratic form involving the Laplacian matrix $L$. The Hessian-based locally linear embedding (HLLE; Donoho and Grimes, 2003) algorithm is similar, but the Laplacian $L$ is replaced by the Hessian $H$.

The maximum variance unfolding algorithm (MVU; Weinberger and Saul, 2006) expresses dimensionality reduction as a semidefinite programming problem. One way of unfolding a folded flag is to pull its four corners apart, but not so hard as to tear the flag. MVU applies this idea to projecting a manifold: the projection maximizes variance (pulling apart) while preserving distances between neighbors (no tears). The constraint of local distance preservation can be expressed in terms of the Gram matrix $K$ of the mapping. Maximizing the variance of the mapping is equivalent to maximizing the trace of $K$ under a set of constraints, which, it turns out, can be done using semidefinite programming.

A notable disadvantage of MVU is the time required to solve a semidefinite program for $n \times n$ matrices when the number of data points $n$ is large. Landmark MVU (LMVU; Weinberger et al., 2005) addresses this issue by significantly reducing the size of the semidefinite programming problem. Like LLE, LMVU assumes that the data manifold is sufficiently smooth and densely sampled that it is locally approximately linear. Instead of embedding all the data points directly as MVU does, LMVU randomly chooses $m \ll n$ inputs as so-called landmarks. Because of the local linearity assumption, the other data points can be approximately reconstructed from the landmarks using a linear transformation. It follows that the Gram matrix $K$ can be approximated using the $m \times m$ submatrix of inner products between landmarks. Hence we only need to optimize over $m \times m$ matrices, a much smaller semidefinite program. Other recent approaches for speeding up MVU include matrix factorization based on a graph Laplacian (Weinberger et al., 2007).

In addition to the above comparison methods, other recent work on dimensionality reduction includes minimum volume embedding (MVE; Shaw and Jebara, 2007), which is similar to MVU; but whereas MVU maximizes the whole trace of the Gram matrix (the sum of all eigenvalues), MVE maximizes the sum of the first few eigenvalues and minimizes the sum of the rest, in order to preserve the largest amount of eigenspectrum energy in the few dimensions that remain after dimensionality reduction. In practice, a variational upper bound of the resulting criterion is optimized.

Very recently, a number of unsupervised methods have been compared by van der Maaten et al. (2009) in terms of classification accuracy and our earlier criteria, trustworthiness and continuity.

4.2 Data Sets for Unsupervised Visualization

We used two synthetic benchmark data sets and four real-life data sets for our experiments.

The plain s-curve data set is an artificial set sampled from an S-shaped two-dimensional surface embedded in three-dimensional space. An almost perfect two-dimensional representation should be possible for a nonlinear dimensionality reduction method, so this data set works as a sanity check.

The noisy s-curve data set is otherwise identical to the plain s-curve data set, but significant spherical normally distributed noise has been added to each data point. The result is a cloud of points where the original S-shape is difficult to discern by visual inspection.

The faces data set consists of ten different face images of 40 different people, for a total of 400 images. For a given subject, the images vary in terms of lighting and facial expressions. The size of each image is 64×64 pixels, with 256 grey levels per pixel. The data set is available for download at http://www.cs.toronto.edu/~roweis/data.html.

The mouse gene expression data set is a collection of gene expression profiles from different mouse tissues (Su et al., 2002). Expression of over 13,000 mouse genes had been measured in 45 tissues. We used an extremely simple filtering method, similar to that originally used by Su et al. (2002), to select the genes for visualization. Of the mouse genes clearly expressed (average difference in Affymetrix chips, AD > 200) in at least one of the 45 tissues (dimensions), a random sample of 1600 genes (points) was selected. After this the variance in each tissue was normalized to unity.

The gene expression compendium data set is a large collection of human gene expression arrays (http://dags.stanford.edu/cancer; Segal et al., 2004). Since the current implementations of the methods do not tolerate missing data, we removed samples with missing values altogether. First we removed genes that were missing from more than 300 arrays. Then we removed the arrays for which values were still missing. This resulted in a data set containing 1278 points and 1339 dimensions.

The sea-water temperature time series data set (Liitiäinen and Lendasse, 2007) is a time series of weekly temperature measurements of sea water over several years. Each data point is a time window of 52 weeks, which is shifted one week forward for the next data point. Altogether there are 823 data points and 52 dimensions.

4.3 Methodology for the Unsupervised Experiments

The performance of NeRV was compared with the 11 unsupervised dimensionality reduction methods described in Section 4.1, namely principal component analysis (PCA), metric multidimensional scaling (here simply denoted MDS), locally linear embedding (LLE), Laplacian eigenmap (LE), Hessian-based locally linear embedding (HLLE), isomap, curvilinear component analysis (CCA), curvilinear distance analysis (CDA), maximum variance unfolding (MVU), landmark maximum variance unfolding (LMVU), and local MDS (LMDS). LLE, LE, HLLE, MVU, LMVU and isomap were computed with code from their developers; for MDS, CCA and CDA we used our own code.

4.3.1 GOODNESS MEASURES

We used four pairs of performance measures to compare the methods. The first pair is mean smoothed precision-mean smoothed recall, that is, our new measures of visualization quality. The scale of input neighborhoods was fixed to 20 relevant neighbors (see Section 2.2).

Although we feel, as explained in Section 2, that smoothed precision and smoothed recall are more sophisticated measures of visualization performance than precision and recall, we have also plotted standard mean precision-mean recall curves. The curves were plotted by fixing the 20 nearest neighbors of a point in the original data as the set of relevant items, and then varying the number of neighbors retrieved from the visualization between 1 and 100, plotting mean precision and recall for each number.

Our third pair of measures are the rank-based variants of our new measures, mean rank-based smoothed precision-mean rank-based smoothed recall. Recall that we introduced the rank-based variants as easier-to-interpret, but less discriminating, alternatives to mean smoothed precision and mean smoothed recall. The scale of input neighborhoods was again fixed to 20 relevant neighbors.

Our fourth pair of measures is trustworthiness-continuity (Kaski et al., 2003). The intuitive motivation behind these measures was the same tradeoff between precision and recall as in this paper, but the measures were defined in a more ad hoc way. At the time we did not have the clear connection to information retrieval which makes NeRV particularly attractive, and we did not optimize the measures. Trustworthiness and continuity can, however, now be used as partly independent measures of visualization quality. To compute the trustworthiness and continuity, we used neighborhoods of each point containing the 20 nearest neighbors.

As a fifth measure, when data classes are available, we use the classification error given the display, with a standard k-nearest-neighbor classifier where we set k = 5.

4.3.2 CHOICE OF PARAMETERS

Whenever we needed to choose a parameter for any method, we used the same criterion, namely the F-measure computed from the new rank-based measures. That is, we chose the parameter yielding the largest value of $2PR/(P+R)$, where $P$ and $R$ are the mean rank-based smoothed precision and recall.

Many of the methods have a parameter $k$ denoting the number of nearest neighbors for constructing a neighborhood graph; for each method and each data set we tested values of $k$ ranging from 4 to 20, and chose the value that produced the best F-measure. (For MVU and LMVU we used a smaller parameter range to save computational time. For MVU, $k$ ranged from 4 to 6; for LMVU, $k$ ranged from 3 to 9.) The exceptions are local MDS (LMDS), one of our own earlier methods, and NeRV, for which we simply set $k$ to 20 without optimizing it.

Methods that may have local optima were run five times with different random initializations, and the best run (again, in terms of the F-measure) was selected.

4.4 Results of Unsupervised Visualization

We will next show visualizations for a few data sets, and quantitatively measure the results of several. We begin by showing an example of a NeRV visualization for the plain S-curve data set in Figure 3. Later in this section we will show a NeRV visualization of a synthetic face data set (Figure 8), and in Section 4.6 a NeRV visualization of the faces data set of real face images (Figure 11). The quantitative results are spread across four figures (Figures 4–7), each of which contains results for one pair of measures and all six data sets.

We first show the curves of mean smoothed precision-mean smoothed recall, that is, the loss functions associated with our formalization of visualization as information retrieval. The results are shown in Figure 4. NeRV and local MDS (LMDS) form curves parameterized by λ, which ranges from 0 to 1.0 for NeRV and from 0 to 0.9 for LMDS. NeRV was clearly the best-performing method on all six data sets, which is of course to be expected since NeRV directly optimizes a linear combination of these measures. LMDS has a relatively good mean smoothed precision, but does not perform as well in terms of mean smoothed recall. Simple metric MDS also stands out as a consistently reasonably good method.

Figure 3: Left: the plain S-curve data set. The glyph shapes (size, elongation, and angle) show the three-dimensional coordinates of each point; the colors in the online version show the same information. Right: NeRV visualization (here λ = 0.8).

Because we formulated visualization as an information retrieval task, it is natural to also try existing measures of information retrieval performance, that is, mean precision and mean recall, even though they do not take into account grades of relevance as discussed in Section 2.1. Standard mean precision-mean recall curves are shown in Figure 5; for NeRV and LMDS, we show the curve for a single λ value picked by the F-measure as described in Section 4.3. Even with these coarse measures, NeRV shows excellent performance: NeRV is best on four data sets in terms of the area under the curve, while CDA and CCA are each best on one data set.

Next, we plot our easier-to-interpret but less discriminating alternative measures of visualization performance. The curves of mean rank-based smoothed precision-mean rank-based smoothed recall are shown in Figure 6. These measures lie between 0 and 1, and may hence be easier to compare between data sets. With these measures, NeRV again performs best on all data sets; LMDS also performs well, especially on the seawater temperature data.

Finally, we plot the curves of trustworthiness-continuity, shown in Figure 7. The results are fairly similar to those for the new rank-based measures: once again NeRV performs best on all data sets, and LMDS also performs well, especially on the seawater temperature data.

4.4.1 EXPERIMENT WITH A KNOWN UNDERLYING MANIFOLD

To further test how well the methods are able to recover the neighborhood structure inherent in the data, we studied a synthetic face data set where a known underlying manifold defines the relevant items (neighbors) of each point. The SculptFaces data contains 698 synthetic images of a face (sized 64×64 pixels each). The pose and direction of lighting have been changed in a systematic way to create a manifold in the image space (http://web.mit.edu/cocosci/isomap/datasets.html; Tenenbaum et al., 2000). We used the raw pixel data as input features.

The pose and lighting parameters used to generate the images are available. These parameters

define the manifold of the faces embedded in the very high-dimensional image space. For any face

15

Page 16: Information Retrieval Perspective to Nonlinear ...research.cs.aalto.fi/pml/papers/jmlr10_preprint.pdf · The cost function of similarity visualization (1) bears a close relationship

VENNA, PELTONEN, NYBO, AIDOS AND KASKI

−12000 −10000 −8000 −6000 −4000 −2000

−5000

−4000

−3000

−2000

−1000

0

1000

2000

3000

λ=0 NeRVλ=0LMDS

PCA

MDS

LLE

LE

Isomap CDALMVU

MVU

−3000 −2500 −2000 −1500 −1000 −500

−1500

−1000

−500

0

500NeRV

LMDS

PCA

MDS

LLE

LE

Isomap

CDA

CCA

LMVUMVU

HLLE

−10000 −8000 −6000 −4000 −2000 0

−10000

−8000

−6000

−4000

−2000

λ=1

NeRV

λ=0

LMDS

PCA

MDSLLE

HLLE

LE

Isomap

CDACCA

LMVUMVU

−15 −10 −5

x 104

−8

−6

−4

−2

0

2

4

x 104

NeRVλ=0λ=0.9

LMDS

PCA

MDS

LLE

LEIsomap

CDACCA

LMVU

HLLE

−4 −3 −2 −1 0

x 104

−3.5

−3

−2.5

−2

−1.5

−1

−0.5

0

x 104

λ=0

λ=1

NeRV

LMDSPCAMDS

LLE

HLLE

LE

Isomap

CDACCA

LMVU

MVU

−6000 −5000 −4000 −3000 −2000 −1000 0

−5000

−4000

−3000

−2000

−1000λ=0 NeRV

λ=0.9LMDS

PCA

MDS

LLE

LE

Isomap

CDA

CCA

LMVUMVU

Plain s−curve

Noisy s−curve

Faces

Seawater temperature time series

Mouse gene expression

Gene expression compendium

Mean smoothed precision (vertical axes) − Mean smoothed recall (horizontal axes)

Figure 4: Mean smoothed precision-mean smoothed recall plotted for all six data sets. For clarity,

only a few of the best-performing methods are shown for each data set. We have actually

plotted −1·(mean smoothed precision) and −1·(mean smoothed recall) to maintain visual

consistency with the plots for other measures: in each plot, the best performing methods

appear in the top right corner.

For any face image, the relevant other faces are the ones that are neighbors with respect to the pose and lighting

parameters; we defined the ground truth neighborhoods using Euclidean distances in the pose and

lighting space, and we fixed the scale of the ground truth neighborhoods to 20 relevant neighbors

(see Section 2.2).
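
As a concrete sketch of this construction (ours, not the original experiment code; `params` stands for the array of pose and lighting parameters, one row per face, here filled with random placeholders), the ground-truth neighborhoods can be computed as follows:

import numpy as np

def ground_truth_neighbors(params, n_neighbors=20):
    """Indices of the n_neighbors nearest points for each row of
    `params`, using Euclidean distance in the pose/lighting space."""
    sq = (params ** 2).sum(axis=1)
    # Pairwise squared Euclidean distances in the parameter space.
    d2 = sq[:, None] + sq[None, :] - 2.0 * params @ params.T
    np.fill_diagonal(d2, np.inf)    # a point is not its own neighbor
    return np.argsort(d2, axis=1)[:, :n_neighbors]

# SculptFaces setup: 698 faces; random values stand in here for the
# real pose and lighting parameters.
params = np.random.rand(698, 3)
relevant = ground_truth_neighbors(params, n_neighbors=20)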


[Figure 5 plots: six panels (Plain s-curve, Noisy s-curve, Faces, Seawater temperature time series, Mouse gene expression, Gene expression compendium), with mean precision on the vertical axes and mean recall on the horizontal axes; curves for NeRV and LMDS at the λ values named in each panel and for the best comparison methods.]

Figure 5: Mean precision-mean recall curves plotted for all six data sets. For clarity, only the best

methods (with largest area under curve) are shown for each data set. In each plot, the best

performance is in the top right corner. For NeRV and LMDS, a single λ value picked with

the F-measure is shown.

We ran all methods for this data as in all experiments in this section, and then calculated four

performance curves (mean smoothed precision-mean smoothed recall, mean precision-mean re-

call, mean rank-based smoothed precision-mean rank-based smoothed recall, and trustworthiness-

continuity) using the neighborhoods in pose and lighting space as the ground truth. The results are

shown in Figure 8.


[Figure 6 plots: six panels (Plain s-curve, Noisy s-curve, Faces, Seawater temperature time series, Mouse gene expression, Gene expression compendium), with mean rank-based smoothed precision on the vertical axes and mean rank-based smoothed recall on the horizontal axes.]

Figure 6: Mean rank-based smoothed precision-mean rank-based smoothed recall plotted for all six

data sets. For clarity, only a few of the best performing methods are shown for each data

set. We have actually plotted 1−(mean rank-based smoothed precision) and 1−(mean

rank-based smoothed recall) to maintain visual consistency with the plots for other mea-

sures: in each plot, the best performance is in the top right corner.

In spite of the very high dimensionality of the input space and the reduction of the manifold dimension from three to two, NeRV was able to recover the structure well. NeRV is

the best according to both of our proposed measures of visualization performance, mean smoothed

precision and recall; MDS and local MDS also perform well. In terms of the simple mean precision
and mean recall, NeRV is the second best, with CDA slightly better. Of the rank-based
measures, NeRV attains the best precision; LE and MDS attain the best mean rank-based smoothed recall; local MDS and CDA also perform well.


[Figure 7 plots: six panels (S-curve, Noisy S-curve, Faces, Seawater temperature, Mouse gene expression, Gene expression compendium), with trustworthiness on the vertical axes and continuity on the horizontal axes.]

Figure 7: Trustworthiness-continuity plotted for all six data sets. For clarity, only a few of the best

performing methods are shown for each data set. In each plot, the best performance is in

the top right corner.

When performance was measured

with trustworthiness and continuity, NeRV was the best in terms of trustworthiness while MVU and

Isomap attained the highest continuity.

Overall, NeRV was the best in these unsupervised visualization tasks, although it was not the

best in all, and in some tasks it had tough competition.


[Figure 8 plots. Top: sample projections of the SculptFaces data set by NeRV (λ = 0.1) and MDS. Bottom: four panels with mean smoothed precision vs. mean smoothed recall, mean precision vs. mean recall, mean rank-based smoothed precision vs. mean rank-based smoothed recall, and trustworthiness vs. continuity.]

Figure 8: Top: Sample projections of the SculptFaces data set (NeRV vs. the best alternative).

Bottom: How well were the ground truth neighbors in pose-lighting space retrieved from

the image data, evaluated by four pairs of measures. The measures were computed the

same way as before, as described in Section 4.3, but here taking the known pose and

lighting information as the input data. Only the best performing methods are shown for

clarity.

4.5 Comparison by Unsupervised Classification

For the data sets where a classification of samples is available, we additionally follow a traditional

way to evaluate visualizations: we measure how well samples can be classified based on the visual-

ization.


Data set   Dimensions   Classes
Letter         16          26
Phoneme        20          13
Landsat        36           6
TIMIT          12          41

Table 1: The data sets used in the unsupervised classification experiments.

Here all methods are unsupervised, that is, class labels of samples are not used in computing the

visualization. The parameters of methods are again chosen as described in Section 4.3.1. Methods

are evaluated by k-nearest neighbor classification accuracy (with k = 5), that is, each sample in the

visualization is classified by majority vote of its k nearest neighbors in the visualization, and the

classification is compared to the ground truth label.
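
A minimal sketch of this evaluation (ours, not code from the paper; `Y` is the 2-D embedding, `y` the integer class labels, and ties in the vote are broken by the first class encountered):

import numpy as np
from collections import Counter

def knn_error_rate(Y, y, k=5):
    """Classify each embedded sample by majority vote of its k nearest
    neighbors in the visualization (itself excluded) and return the
    error rate against the ground-truth labels."""
    d2 = ((Y[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    np.fill_diagonal(d2, np.inf)          # exclude the sample itself
    nn = np.argsort(d2, axis=1)[:, :k]    # k nearest neighbors per sample
    errors = sum(Counter(y[j] for j in idx).most_common(1)[0][0] != y[i]
                 for i, idx in enumerate(nn))
    return errors / len(y)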

We use four benchmark data sets, all of which include class labels, to compare the performances

of the methods. The data sets are summarized in Table 1. For all data sets we used a randomly

chosen subset of 1500 samples in the experiments, to save computation time.

The letter recognition data set (denoted Letter) is from the UCI Machine Learning Repository

(Blake and Merz, 1998); it is a 16-dimensional data set with 26 classes, which are 4×4 images of

the 26 capital letters of the alphabet. These letters are based on 20 different fonts which have been

distorted to produce the final images.

The phoneme data set (denoted Phoneme) is taken from LVQ-PAK (Kohonen et al., 1996) and

consists of phoneme samples represented by a 20-dimensional vector of features plus a class label

indicating which phoneme is actually represented. There are a total of 13 classes.

The landsat satellite data set (denoted Landsat) is from UCI Machine Learning Repository

(Blake and Merz, 1998). Each data point is a 36-dimensional vector, corresponding to a 3 × 3

satellite image measured in four spectral bands; the class label of the point indicates the terrain type

in the image (6 possibilities, for example red soil).

The TIMIT data set is taken from the DARPA TIMIT speech database (TIMIT). It is similar

to the phoneme data from LVQ-PAK but the feature vectors are 12-dimensional and there are 41

classes in total.

The resulting classification error rates are shown in Table 2. NeRV is best on two out of four

data sets and second best on a third set (there our old method LocalMDS is best). CDA is best on

one.

4.6 NeRV, Joint Probabilities, and t-Distributions

Very recently, based on stochastic neighbor embedding (SNE), van der Maaten and Hinton (2008)

have proposed a modified method called t-SNE, which has performed well in unsupervised exper-

iments. The t-SNE makes two changes compared to SNE; in this section we describe the changes

and show that the same changes can be made to NeRV, yielding a variant that we call t-NeRV. We

then provide a new information retrieval interpretation for t-NeRV and t-SNE.

We start by analyzing the differences between t-SNE and the original stochastic neighbor em-

bedding. The original SNE minimizes the sum of Kullback-Leibler divergences

$$\sum_i \sum_{j \neq i} p_{j|i} \log \frac{p_{j|i}}{q_{j|i}}$$


where $p_{j|i}$ and $q_{j|i}$ are defined by Equations (3) and (2). We showed in Section 2.2 that this cost
function has an information retrieval interpretation: it corresponds to mean smoothed recall of re-
trieving neighbors of query points. The t-SNE method makes two changes which we discuss below.

Method      Letter   Phoneme   Landsat   TIMIT
Eigenmap     0.914    0.121     0.168    0.674
LLE           n/a     0.118     0.212    0.722
Isomap       0.847    0.134     0.156    0.721
MVU          0.763    0.155     0.153    0.699
LMVU         0.819    0.208     0.151    0.787
MDS          0.823    0.189     0.151    0.705
CDA          0.336    0.118     0.141    0.643
CCA          0.422    0.098     0.143    0.633
NeRV         0.532    0.079     0.139    0.626
LocalMDS     0.499    0.118     0.128    0.637

Table 2: Error rates of k-nearest neighbor classification based on the visualization, for unsupervised
visualization methods. The best results for each data set are in bold; n/a denotes that LLE
did not yield a result for the Letter data. NeRV attains the lowest error rate for two data
sets and second lowest error rate for one data set.

4.6.1 COST FUNCTION BASED ON JOINT PROBABILITIES

The first change in t-SNE is to the cost function: t-SNE minimizes a “symmetric version” of the

cost function, defined as

$$\sum_i \sum_{j \neq i} p_{i,j} \log \frac{p_{i,j}}{q_{i,j}}$$

where the $p_{i,j}$ and $q_{i,j}$ are now joint probabilities over both $i$ and $j$, so that $\sum_{i,j} p_{i,j} = 1$ and similarly
for the $q_{i,j}$. The term "symmetric" comes from the fact that the joint probabilities are defined in a
specific way for which $p_{i,j} = p_{j,i}$ and $q_{i,j} = q_{j,i}$; note that this need not be the case for all definitions
of the joint probabilities.

4.6.2 DEFINITIONS OF THE JOINT PROBABILITIES

The second change in t-SNE is that the joint probabilities are defined in a manner which does not
yield quite the same conditional probabilities as in Equations (3) and (2). The joint probabilities are
defined as

$$p_{i,j} = \frac{1}{2n}\left(p_{i|j} + p_{j|i}\right) \qquad (6)$$

where $p_{i|j}$ and $p_{j|i}$ are computed by Equation (3) and $n$ is the total number of data points in the data
set, and

$$q_{i,j} = \frac{(1 + \|y_i - y_j\|^2)^{-1}}{\sum_{k \neq l}(1 + \|y_k - y_l\|^2)^{-1}}. \qquad (7)$$

The former equation is intended to ensure that, in the input space, even outlier points will have some

other points as neighbors. The latter equation means that, in the visualization, the joint probability

22

Page 23: Information Retrieval Perspective to Nonlinear ...research.cs.aalto.fi/pml/papers/jmlr10_preprint.pdf · The cost function of similarity visualization (1) bears a close relationship

DIMENSIONALITY REDUCTION FOR VISUALIZATION

falls off according to a (normalized) t-distribution with one degree of freedom, which is intended

to help with a crowding problem: because the volume of a small-dimensional neighborhood grows

slower than the volume of a high-dimensional one, the neighborhood ends up stretched in the visu-

alization so that moderately distant point pairs are placed too far apart. This tends to cause clumping

of the data in the center of the visualization. Since the t-distribution has heavier tails than a Gaus-

sian, using such a distribution for the $q_{i,j}$ makes the visualization less affected by the placement of

the moderately distant point pairs, and hence better able to focus on other features of the data.
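
To make the two definitions concrete, here is a small sketch (ours; it assumes the conditional probabilities of Equation (3) are already available as a matrix `P_cond` with `P_cond[i, j]` holding $p_{j|i}$):

import numpy as np

def joint_p(P_cond):
    """Equation (6): symmetrized input-space joint probabilities
    p_ij = (p_{i|j} + p_{j|i}) / (2n)."""
    n = P_cond.shape[0]
    return (P_cond + P_cond.T) / (2.0 * n)

def joint_q(Y):
    """Equation (7): output-space joint probabilities falling off as a
    normalized t-distribution (one degree of freedom) of distances."""
    d2 = ((Y[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    w = 1.0 / (1.0 + d2)
    np.fill_diagonal(w, 0.0)   # the normalization in (7) runs over k != l
    return w / w.sum()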

4.6.3 NEW METHOD: T-NERV

We can easily apply the above-described changes to the cost function in NeRV; we call the resulting

variant t-NeRV. We define the cost function as

$$E_{\text{t-NeRV}} = \lambda \sum_i \sum_{j \neq i} p_{i,j} \log \frac{p_{i,j}}{q_{i,j}} + (1-\lambda) \sum_i \sum_{j \neq i} q_{i,j} \log \frac{q_{i,j}}{p_{i,j}} = \lambda D(p,q) + (1-\lambda) D(q,p) \qquad (8)$$

where $p$ and $q$ are the joint distributions over $i$ and $j$ defined by the $p_{i,j}$ and the $q_{i,j}$, and the individual
joint probabilities are given by Equations (6) and (7).

It can be shown that this changed cost function again has a natural information retrieval interpre-

tation: it corresponds to the tradeoff between smoothed precision and smoothed recall of a two-step

information retrieval task, where an analyst looks at a visualization and (step 1) picks a query point

and then (step 2) picks a neighbor for the query point. The probability of picking a query point i

depends on how many other points are close to it (that is, it depends on ∑ j qi, j), and the probability

of picking a neighbor depends as usual on the relative closenesses of the neighbors to the query.

Both choices are done based on the visualization, and the choices are compared by smoothed pre-

cision and smoothed recall to the relevant pairs of queries and neighbors that are defined based on

the input space. The parameter λ again controls the tradeoff between precision and recall.

The connection between $D(p,q)$ and the recall of the two-step retrieval task can be shown
by a proof similar to that in Appendix A, the main difference being that the conditional distributions $p_{j|i}$
and $q_{j|i}$ are replaced by the joint distributions $p_{i,j}$ and $q_{i,j}$, and the sums then go over both $i$ and $j$. The
connection between $D(q,p)$ and precision can be shown analogously.

As a special case, setting λ = 1 in the above cost function, that is, optimizing only smoothed

recall of the two-step retrieval task, yields the cost function of t-SNE. We therefore provide a novel

information retrieval interpretation of t-SNE as a method that maximizes recall of query points and

their neighbors.

The main conceptual difference between NeRV and t-NeRV is that in t-NeRV the probability

of picking a query point in the visualization and in the input space depends on the densities in the

visualization and input space respectively; in NeRV all potential query points are treated equally.

Which treatment of query points should be used depends on the task of the analyst. Additionally,

NeRV and t-NeRV also differ in the technical forms of the probabilities, that is, in whether
t-distributions or Gaussians are used, and so on.

The t-NeRV method can be optimized with respect to the visualization coordinates $y_i$ of the points by
conjugate gradient optimization as in NeRV; the computational complexity is also the same.
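
As a sketch, the cost function (8) is then straightforward to evaluate once the joint distributions are in hand (ours, reusing the `joint_p`/`joint_q` helpers sketched above; the conjugate gradient optimization over the coordinates is omitted):

import numpy as np

def t_nerv_cost(P, Q, lam, eps=1e-12):
    """Equation (8): lambda * D(p,q) + (1 - lambda) * D(q,p) for joint
    probability matrices P and Q with zero diagonals."""
    mask = ~np.eye(P.shape[0], dtype=bool)   # sums run over i != j
    p, q = P[mask] + eps, Q[mask] + eps      # eps guards against log(0)
    D_pq = np.sum(p * np.log(p / q))         # recall-type term
    D_qp = np.sum(q * np.log(q / p))         # precision-type term
    return lam * D_pq + (1.0 - lam) * D_qp

# With lam = 1 this reduces to the t-SNE cost, that is, to pure
# (smoothed) recall of the two-step retrieval task.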

4.6.4 COMPARISON

We briefly compare t-NeRV and NeRV on the Faces data set. The setup is the same as in the previous

comparison experiments (Figures 4–7). For t-NeRV we use the effective number of neighbors k = 40


to compute the joint probabilities $p_{i,j}$; this corresponds to the perplexity value used by the authors

of t-SNE (van der Maaten and Hinton, 2008).

Figure 9 shows the results for the four unsupervised evaluation criteria. According to the mean

smoothed precision and mean smoothed recall measures, t-NeRV does worse in terms of recall.

The rank-based measures indicate a similar result; however, there t-NeRV does fairly well in terms

of mean rank-based smoothed precision. The trustworthiness-continuity curves are similar to the

rank-based measures. The curves of mean precision versus mean recall show that t-NeRV does

achieve better precision for small values of recall (i.e., for small retrieved neighborhoods), while

NeRV does slightly better for larger retrieved neighborhoods. These measures correspond to the

information retrieval interpretation of NeRV which is slightly different from that of t-NeRV, as

discussed above. Figure 9E shows mean smoothed precision/recall in the t-NeRV sense, where

t-NeRV naturally performs relatively better.

Lastly, we computed k-nearest neighbor classification error rate (using k = 5) with respect to

the identity of the persons in the images. NeRV (with λ = 0.3) attained an error rate of 0.394 and

t-NeRV (with λ = 0.8) an error rate of 0.226. Here t-NeRV is better; this may be because it avoids

the problem of crowding samples near the center of the visualization.

Figures 10-12 show example visualizations of the faces data set. First we show a well-performing

comparison method (CDA; Figure 10); it has arranged the faces well in terms of keeping images of

the same person in a single area; however, the areas of each person are diffuse and close to other

persons, hence there is no strong separation between persons on the display. NeRV, here optimized

to maximize precision, makes clearly tighter clusters of each person (Figure 11), which yields better

retrieval of neighbor face images. However, NeRV has here placed several persons close to each

other in the center of the visualization. The t-NeRV visualization, again optimized to maximize

precision (Figure 12) has lessened this behavior, placing the clusters of faces more evenly.

Overall, t-NeRV is a useful alternative formulation of NeRV, especially for data sets
where crowding near the center of the visualization is an issue.

5. Using NeRV for Supervised Visualization

In this section we show how to use NeRV for supervised visualization. The key idea is simple: NeRV

can be computed based on any input-space distances $d(x_i, x_j)$, not only the standard Euclidean

distances. All that is required for supervised visualization is to compute the input-space distances in

a supervised manner. The distances are then plugged into the NeRV algorithm and the visualization

proceeds as usual. Note that doing the visualization modularly in two steps is an advantage, since it
later makes it easy to change the algorithm used in either step if desired.

Conveniently, rigorous methods exist for learning a supervised metric from labeled data sam-

ples. Learning of supervised metrics has recently been extensively studied for classification pur-

poses and for some semi-supervised tasks, with both simple linear approaches and complicated

nonlinear ones; see, for instance, works by Xing et al. (2003), Chang and Yeung (2004), Globerson

and Roweis (2006) and Weinberger et al. (2006). Any such metric can in principle be used to com-

pute distances for NeRV. Here we use an early one, which is flexible and can be plugged directly
into NeRV, namely the learning metric (Kaski et al., 2001; Kaski and Sinkkonen, 2004; Peltonen

et al., 2004) which was originally proposed for data exploration tasks.

We will call NeRV computed with the supervised distances “supervised NeRV” (SNeRV). The

information retrieval interpretation of NeRV carries over to SNeRV.


[Figure 9 plots, panels A-E, with curves for NeRV and t-NeRV as λ ranges from 0 to 1. Panel axes: mean smoothed precision vs. mean smoothed recall; mean rank-based smoothed precision vs. mean rank-based smoothed recall; mean precision vs. mean recall; trustworthiness vs. continuity; and mean smoothed precision vs. mean smoothed recall in the t-NeRV sense.]

Figure 9: Comparison of NeRV and t-NeRV on the Faces data set according to the four goodness

measures described in Section 4.3 (A-D), and for mean smoothed precision/recall corre-

sponding to the information retrieval interpretation of t-NeRV (E; first and second terms

of Eqn. 8).

Under certain parameter settings SNeRV can be seen as a new, supervised version of stochastic neighbor embedding, but more

generally it manages a flexible tradeoff between precision and recall of the information retrieval just

like the unsupervised NeRV does.

SNeRV has the useful property that it can directly compute embeddings for unlabeled training

points as well as labeled ones. By contrast, some supervised nonlinear dimensionality reduction

methods (Geng et al., 2005; Liu et al., 2005; Song et al., 2008) only give embeddings for labeled

points; for unlabeled points, the mapping is approximated for instance by interpolation or by training
a neural network. Such approximation is not needed for SNeRV. (On the other hand, a trained neural
network can embed not only unlabeled training points, but also previously unseen new points; if
such generalization is desired, the same kinds of approximate mappings can be learned for SNeRV.)

Figure 10: Example visualization of the Faces data set with CDA.

In the next subsections we present the details of the distance computation, and then describe

experimental comparisons showing that SNeRV outperforms several existing supervised methods.

5.1 Supervised Distances for NeRV

The input-space distances for SNeRV are computed using learning metrics (Kaski et al., 2001;

Kaski and Sinkkonen, 2004; Peltonen et al., 2004). It is a formalism particularly suited for so-

called “supervised unsupervised learning” where the final goal is still to make discoveries as in

unsupervised learning, but the metric helps to focus the analysis by emphasizing useful features

and, moreover, does that locally, differently for different samples. Learning metrics have previously

been applied to clustering and visualization.

In brief, the learning metric is a Riemannian topology-preserving metric that measures dis-

tances in terms of changes in the class distribution. The class distribution is estimated through
conditional density estimation from labeled samples. Topology preservation helps in generalizing
to new points, since class information cannot override the input space topology. In this metric, we
can compute input-space distances between any two data points, and hence visualize the points with
NeRV, whether they have known labels or not.

Figure 11: Example visualization of the Faces data set with NeRV, here maximizing precision
(tradeoff parameter λ = 0).

5.1.1 DEFINITION

The learning metric is a so-called Riemannian metric. Such a metric is defined in a local manner;

between two (infinitesimally) close-by points it has a simple form, and this simple form is extended

through path integrals to global distances.

In the learning metric, the squared distance between two close-by points $x_1$ and $x_2$ is given by
the quadratic form

$$d_L(x_1, x_2)^2 = (x_1 - x_2)^T J(x_1)(x_1 - x_2). \qquad (9)$$



Here $J(x)$ is the Fisher information matrix which describes the local dependency of the conditional
class distribution on the input features, that is,

$$J(x) = \sum_c p(c|x) \left( \frac{\partial}{\partial x} \log p(c|x) \right) \left( \frac{\partial}{\partial x} \log p(c|x) \right)^T.$$

Here the $c$ are the classes and the $p(c|x)$ are the conditional class probabilities at point $x$. The
idea is that the local distances grow the most along directions where the conditional class distribution
$p(c|x)$ changes the most. It can be shown that the quadratic form (9) is, for close-by points,
equivalent to the Kullback-Leibler divergence $D(p(c|x_1), p(c|x_2))$.

Figure 12: Example visualization of the Faces data set with t-NeRV, here maximizing precision
(tradeoff parameter λ = 0).

The general distance $d_L(x_1, x_2)$ between two far-away points $x_1$ and $x_2$ is defined in the standard
fashion of Riemannian metrics: the distance is the minimal path integral over local distances, where
the minimum is taken over all possible paths connecting $x_1$ and $x_2$. Notice that in a Riemannian
metric, the straight path may not yield the minimum distance.


Learning metrics defined in the above manner satisfy the three criteria required of any metric:

the distances $d_L$ are nonnegative, symmetric, and satisfy the triangle inequality. Because the learning

metric distances are defined as minimal path integrals they preserve the topology of the input space;

roughly speaking, if the distance between two points is small, then there must be a path between

them where distances are small along the entire path.

5.1.2 PRACTICAL COMPUTATION

In order to compute local distances using the Fisher information matrices J(x), we need an esti-

mate for the conditional probability distributions p(c|x). We learn the distributions by optimizing

a discriminative mixture of labeled Gaussian densities for the data (Peltonen et al., 2004). The

conditional density estimate is of the form

$$p(c|x) = \frac{\sum_{k=1}^{K} \beta_{ck} \exp(-\|x - m_k\|^2 / 2\sigma^2)}{\sum_{k=1}^{K} \exp(-\|x - m_k\|^2 / 2\sigma^2)} \qquad (10)$$

where the number of Gaussians $K$, the centroids $m_k$, the class probabilities $\beta_{ck}$ and the Gaussian
width $\sigma$ (standard deviation) are parameters of the estimate; we require that the $\beta_{ck}$ are nonnegative
and that $\sum_c \beta_{ck} = 1$ for all $k$. The $m_k$ and $\beta_{ck}$ are optimized by a conjugate gradient algorithm to
maximize the conditional class likelihood, and $K$ and $\sigma$ are chosen by internal cross-validation (see
Section 5.3).
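
A small sketch of these two ingredients (ours; the gradient of log p(c|x) is taken here by central differences for brevity, rather than in the closed form used in the cited papers):

import numpy as np

def cond_class_probs(x, M, B, sigma):
    """Equation (10): p(c|x) from a discriminative Gaussian mixture.
    M: (K, d) centroids m_k; B: (C, K) class probabilities beta_ck
    (each column summing to 1 over c); sigma: common Gaussian width."""
    w = np.exp(-((M - x) ** 2).sum(axis=1) / (2.0 * sigma ** 2))   # (K,)
    return (B @ w) / w.sum()                                       # (C,)

def fisher_matrix(x, M, B, sigma, h=1e-4):
    """J(x) = sum_c p(c|x) grad log p(c|x) grad log p(c|x)^T,
    with the gradient approximated by central differences."""
    d = len(x)
    p = cond_class_probs(x, M, B, sigma)
    grads = np.zeros((len(p), d))
    for j in range(d):
        e = np.zeros(d)
        e[j] = h
        plus = cond_class_probs(x + e, M, B, sigma)
        minus = cond_class_probs(x - e, M, B, sigma)
        grads[:, j] = (np.log(plus) - np.log(minus)) / (2.0 * h)
    # Weighted sum of outer products grad_c grad_c^T over the classes.
    return (p[:, None, None] * grads[:, :, None] * grads[:, None, :]).sum(axis=0)

def local_sq_dist(x1, x2, M, B, sigma):
    """Equation (9): squared learning-metric distance between close-by points."""
    diff = x1 - x2
    return diff @ fisher_matrix(x1, M, B, sigma) @ diff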

Given the Fisher matrices, we next need to compute the global distances between all point pairs.

In most cases the minimal path integrals in the global distance definition cannot be computed ana-

lytically, and we use a graph-based approximation. We first form a fully connected graph between

all known data points, where the path between each pair of points is approximated by a straight

line. For these straight paths, the path integral can be computed by piecewise approximation (see

Peltonen et al., 2004, for details; we use T = 10 pieces in all experiments). We could then use graph

search (Floyd’s algorithm) to find the shortest paths in the graph and use the shortest path distances

as the learning metric distances. This graph approximation would take $O(n^3)$ time where $n$ is the
number of data points; note that this would not be excessive since a similar graph computation is
needed in methods like isomap. However, in our experiments the straight line paths yielded about
equally good results, so we simply use them, which takes only $O(n^2)$ time. Therefore SNeRV as a
whole took only $O(n^2)$ time, just like NeRV.
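
A sketch of this straight-line approximation (ours; it reuses `fisher_matrix` from the previous sketch and T = 10 pieces as in the experiments):

import numpy as np

def straight_line_distance(x1, x2, M, B, sigma, T=10):
    """Approximate d_L(x1, x2) along the straight line from x1 to x2:
    the line is split into T pieces and each piece is measured with the
    local quadratic form of Equation (9) at its starting point."""
    step = (x2 - x1) / T
    total = 0.0
    for t in range(T):
        a = x1 + t * step                    # start of piece t
        J = fisher_matrix(a, M, B, sigma)    # local metric at this piece
        total += np.sqrt(step @ J @ step)    # learning-metric length of the piece
    return total

# Evaluating this for all point pairs gives the O(n^2) distance matrix
# that is fed to NeRV.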

5.2 Comparison Methods for Supervised Visualization

For each data set to be visualized, the choice of supervised vs. unsupervised visualization is up to

the analyst; in general, supervised embedding will preserve differences between classes better but at

the expense of within-class details. In the experiments of this section we concentrate on comparing

performances of supervised methods; we will compare SNeRV to several recent supervised
embedding methods.

Multiple relational embedding (MRE; Memisevic and Hinton, 2005) was proposed as an ex-

tension of stochastic neighbor embedding (Hinton and Roweis, 2002). MRE minimizes a sum of

mismatches, measured by Kullback-Leibler divergence, between neighborhoods in the embedding

and several different input neighborhoods: typically one of the input neighborhoods is derived from

the input-space coordinates and the others are derived from auxiliary information such as labels.

29

Page 30: Information Retrieval Perspective to Nonlinear ...research.cs.aalto.fi/pml/papers/jmlr10_preprint.pdf · The cost function of similarity visualization (1) bears a close relationship

VENNA, PELTONEN, NYBO, AIDOS AND KASKI

MRE is able to use unlabeled data; for unlabeled points, divergences that involve neighborhoods

based on labels are simply left out of the cost function.

Colored maximum variance unfolding (Song et al., 2008) is an extension of the unsupervised

maximum variance unfolding. It maximizes the dependency between the embedding coordinates

and the labels according to the Hilbert-Schmidt independence criterion, which is based on a cross-

covariance operator. This leads to constrained optimization of the output kernel. Because of these

details the method is also called maximum unfolding via Hilbert-Schmidt independence criterion

(MUHSIC); we use this abbreviation.

Supervised isomap (S-Isomap; Geng et al., 2005) is an extension of the unsupervised isomap.

The only difference to unsupervised isomap is a new definition of the input-space distances: roughly

speaking, distances between points in different classes will grow faster than distances between

same-class points. The actual embedding is done in the same way as in unsupervised isomap (de-

scribed in Section 4.1). Other supervised extensions of isomap have been introduced by Li and Guo

(2006) and Gu and Xu (2007).

Parametric embedding (PE; Iwata et al., 2007) represents the embedded data with a Gaussian

mixture model with all Gaussians having the same covariances in the embedding space, and attempts

to preserve the topology of the original data by minimizing a sum of Kullback-Leibler divergences.

Neighbourhood component analysis (NCA; Goldberger et al., 2005; see also Kaski and Pel-

tonen, 2003, Peltonen and Kaski, 2005) is a linear and non-parametric dimensionality reduction

method which learns a Mahalanobis distance measure such that, in the transformed space, k-nearest

neighbor classification achieves the maximum accuracy.

5.3 Methodology for the Supervised Experiments

We used the four benchmark data sets having class information (Letter, Phoneme, Landsat, and

TIMIT described in Section 4.5) to compare supervised NeRV and the five supervised visualization

methods described in Section 5.2, namely multiple relational embedding (MRE), colored maximum

variance unfolding (MUHSIC), supervised isomap (S-Isomap), parametric embedding (PE), and

neighbourhood component analysis (NCA). We used a standard 10-fold cross-validation setup: in

each fold we reserve one of the subsets for testing and use the rest of the data for training. For each

data set, we use SNeRV and the comparison methods to find 2-dimensional visualizations.

In principle we could evaluate the results as in Section 4.3 for the unsupervised experiments,

that is, by mean smoothed precision and recall; the only difference would be to use the supervised

learning metric for the evaluation. However, unlike SNeRV, the other methods have not been for-

mulated using the same supervised metrics. To make an unbiased comparison of the methods, we

resort to a simple indirect evaluation: we evaluate the performance of the methods by class pre-

diction accuracy of the resulting visualizations. Although it is an indirect measure, the accuracy is a

reasonable choice for unbiased comparison and has been used in several supervised dimensionality

reduction papers. In more detail, we provide test point locations during training but not their labels;

after the methods have computed their visualization results, we classify the test points by running

a k-nearest neighbor classifier (k = 5) on the embedded data, and evaluate the classification error

rates of the methods.

We use a standard internal 10-fold validation strategy to choose all parameters which are not

optimized by their respective algorithms: each training set is subdivided into 10 folds where 9/10 of

data is used for learning and 1/10 for validation; we learn visualizations with the different parameter

30

Page 31: Information Retrieval Perspective to Nonlinear ...research.cs.aalto.fi/pml/papers/jmlr10_preprint.pdf · The cost function of similarity visualization (1) bears a close relationship

DIMENSIONALITY REDUCTION FOR VISUALIZATION

values; the values that yielded the best classification accuracy for the embedded validation points

are then chosen and used to compute the final visualization for the whole training data.

We ran two versions of SNeRV using λ = 0.1 and λ = 0.3. The scaling parameters $\sigma_i$ were set

by fixing the entropy of the input neighborhoods as described in Section 2.2. Here we specified the

rough upper limit for the number of relevant neighbors as 0.5 · n/K where n is the number of data

points and K is the number of mixture components used to estimate the metric; this choice roughly

means that for well-separated mixture components, each data point will on average consider half

of the data points from the same mixture component as relevant neighbors. A simplified validation

sufficed for the number K and width σ of Gaussians: we did not need to run the embedding step but

picked the values that gave the best conditional class likelihood for validation points in the input space.

For S-Isomap we chose its parameter α and its number of nearest neighbors using the validation sets,

and trained a generalized radial basis function network to project new points, as suggested by Geng

et al. (2005). For MUHSIC, the parameters are the regularization parameter ν, number of nearest

neighbors, and number of eigenvectors in the graph Laplacian, and we used linear interpolation to

project new points as suggested by the MUHSIC authors. For MRE the only free parameter is its

neighborhood smoothness parameter $\sigma_{\mathrm{MRE}}$. For PE one needs to provide a conditional density

estimate: we used the same one that SNeRV uses (see Equation 10) to obtain a comparison as

unbiased as possible. Neighbourhood component analysis is a non-parametric method; therefore

we did not need to choose any parameters for it.

5.4 Results of Supervised Visualization

Figure 13 shows the average error rate over the 10 folds as well as the standard deviation. The best

two methods are SNeRV and PE, which give good results in all data sets. On two of the data sets

(Letter and TIMIT) SNeRV is clearly the best; on the other two data sets (Phoneme and Landsat)

SNeRV is about as good as the best of the remaining methods (S-Isomap and parametric embedding,

respectively). MRE is clearly worse than the other methods, whereas MUHSIC and NCA results

depend on the data set: on Letter they are the second and third worst methods after MRE, while in

the other data sets they are not far from the best methods.

The value of the tradeoff parameter λ did not affect the performance of SNeRV much; both

λ = 0.1 and λ = 0.3 produced good projections.

To evaluate whether the best method on each data set is statistically significantly better than the

next best one, we performed a paired t-test of the performances across the 10 cross-validation folds

(Table 3). The two best methods compared are always SNeRV and parametric embedding, except

for the Phoneme data set for which the two best methods are SNeRV and S-Isomap. For the Letter

and the TIMIT data sets SNeRV is significantly better than the next best method, while for the other

two data sets the difference is not significant. In summary, all the significant differences are in favor

of SNeRV.

Figure 14 presents sample visualizations of the letter recognition data set; projection results are

shown for one of the 10 cross-validation folds, including both training and test points. Although

there is some overlap, in general SNeRV shows distinct clusters of classes—for example, the letter

“M” is a well separated cluster in the top of the figure.

Parametric embedding also manages to separate some letters, such as “A” and “I”, but there

is a severe overlap of classes in the center of the figure. In S-Isomap we see that there are a few

very well separated clusters of classes, like the letters "W" and "N".


[Figure 13 bar chart: classification error rate (vertical axis, 0 to 1) on the Letter, Phoneme, Landsat and TIMIT data sets for SNeRV λ = 0.1, SNeRV λ = 0.3, PE, S-Isomap, MUHSIC, MRE and NCA.]

Figure 13: Performance of the supervised nonlinear embedding methods in each benchmark data

set. The results are average classification error rates over 10 cross-validation folds

(smaller is better), and the standard deviations are shown with error bars.

Data set   Best method        Second best        p-value
Letter     SNeRV (λ = 0.1)    PE                 1.8 · 10⁻⁶
Phoneme    S-Isomap           SNeRV (λ = 0.3)    0.54
Landsat    SNeRV (λ = 0.3)    PE                 0.28
TIMIT      SNeRV (λ = 0.1)    PE                 3.4 · 10⁻³

Table 3: Statistical significance of the difference between the two best methods. The p-values are

from a paired t-test of the 10-fold cross-validation results; statistically significant winners

have been boldfaced.

However, there is a large area with overlapping classes near the center of the right edge of the figure. This overlap is worse than in

SNeRV but still roughly comparable; by contrast, MUHSIC, MRE and NCA performed poorly on

this data set, leaving most classes severely overlapped.

6. Conclusions and Discussion

By formulating the task of nonlinear projection for information visualization as an information re-

trieval task, we have derived a rigorously motivated pair of measures for visualization performance,

mean smoothed precision and mean smoothed recall. We showed that these new measures are exten-

sions of two traditional information retrieval measures: mean smoothed precision can be interpreted

as a more sophisticated extension of mean precision, the proportion of false positives in the neigh-


[Figure 14 plots: visualizations of the Letter data set, one panel per method, with each point drawn as the capital letter of its class (A-Z).]
V

V

V VV

V

V

V

V

V

V V

V

V

VV

V

VV

V

V

V

V

V

V

V

VVV

V

V

V

V

V

V

VVV

V

VV

V

V

V

VV

VWW

W

W

WW

W

W

W

WWW W

W

W

W

W

W

W

W

WW

WW

W

W

W

W

W

W

W

W

W

W

WW

W

W

X

X

XX XXX

X

X

XX

X

X

X

X

X

X

XX

X

XX

XX

XX

X

X

XX

X

X

X

X

X

X

XX

XXX X

XX

X

X

XX

X

X

XY

Y

Y

Y

Y

Y

YY

YY

YY

Y

Y

YY

Y

YY Y

Y

Y

Y

Y

Y

Y

Y

YY

Y Y

Y Y

Y

Y

Y

Y

YY

YY

Y

Y

Y

Y

Y

Y

Y

Y

Z

Z

Z

ZZ

ZZ

Z

Z

Z

Z

Z

Z

Z

Z

Z

Z

Z

Z

Z

Z

Z

Z

Z

Z

Z

Z

ZZ Z

ZZ

Z

ZZ

Z

Z

ZZ

ZZ

Z

Z

Z

ZZ

A

A A

A

A

A

A

A

B

B

B

B

B

B

B

B

B

C

C

C

C

CC

C

C

C

C D

DD

E

E

E

E

E

E

E

EF

F

FF

FF

F F G G

G

G

HHI

I

I II

IJ

J

JJ J

J

JJ

K

K

K

K

KK

L

L

L

LL

M

MM

M

M

M

M

MN N

N

O

OO

O

O

O

O

P

P

P

P

P

PP P

P

Q

Q

Q

Q

R

R

R

R

R

S

S

S

STT

TT

T

T

U

U

U

U

V

V

V V

VW

WW

WX

X

X

X

X

X

X

XY

Y

Y

Z

Z

Z

SNeRV(λ = 0.1) PE S−Isomap

SNeRV(λ = 0.1) PE S−Isomap

Tra

inin

g p

oin

tsT

est

poin

ts

NCA

NCA

MRE

MRE

MUHSIC

MUHSIC

Tes

t poin

tsT

rain

ing p

oin

ts

Figure 14: Visualizations of the letter recognition data set by all supervised methods.

borhood retrieved from the visualization. Analogously, mean smoothed recall is an extension of

mean recall, the proportion of misses incurred by the retrieved neighborhood.

We introduced an algorithm called neighbor retrieval visualizer (NeRV) that optimizes the total

cost, interpretable as a tradeoff between mean smoothed precision and mean smoothed recall. The

tradeoff is governed by a parameter λ set by the user according to the desired cost of a miss relative


to the desired cost of a false positive. The earlier method stochastic neighbor embedding is obtained

as a special case when λ = 1, optimizing mean smoothed recall.
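For concreteness, the total cost can be written down in a few lines. The following is a minimal sketch under simplifying assumptions, not the authors' released implementation: it uses Gaussian neighborhood distributions with a single global scale `sigma`, whereas NeRV sets a per-point scale, and it only evaluates the cost rather than optimizing the embedding.

```python
import numpy as np

def neighborhood_probs(sq_dists, sigma=1.0):
    """Row-stochastic relevance distribution p_{j|i} proportional to
    exp(-d_ij^2 / sigma^2), with the self-pair j = i excluded."""
    W = np.exp(-sq_dists / sigma**2)
    np.fill_diagonal(W, 0.0)
    return W / W.sum(axis=1, keepdims=True)

def nerv_cost(X, Y, lam=0.5, sigma=1.0, eps=1e-12):
    """NeRV total cost: lam * sum_i KL(p_i || q_i) + (1 - lam) * sum_i KL(q_i || p_i),
    with p computed from the input data X and q from the embedding Y."""
    sq_dx = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    sq_dy = ((Y[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    P = neighborhood_probs(sq_dx, sigma)
    Q = neighborhood_probs(sq_dy, sigma)
    off_diag = ~np.eye(X.shape[0], dtype=bool)  # sum over j != i only
    kl_pq = (P * np.log((P + eps) / (Q + eps)))[off_diag].sum()  # penalizes misses (recall side)
    kl_qp = (Q * np.log((Q + eps) / (P + eps)))[off_diag].sum()  # penalizes false positives (precision side)
    return lam * kl_pq + (1.0 - lam) * kl_qp
```

With `lam = 1.0` only the first divergence remains and the cost coincides with that of stochastic neighbor embedding; an embedding would be obtained by minimizing the cost over `Y` with a gradient-based optimizer.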

We showed that NeRV can be used for both unsupervised and supervised visualization. For

unsupervised visualization, we simply use fixed input distances; for supervised visualization we

learn a supervised distance metric for the input space and plug the resulting input distances into the

NeRV algorithm. In the latter case the key idea is to use supervision (labeled data) in a way that

does not override the input feature space; we use a topology-preserving class-discriminative metric

called the learning metric for the input space.
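Because the construction only changes the distance matrix, it can be stated generically. In the sketch below, `toy_metric_distances` is a deliberately crude stand-in for the learning metric (it merely stretches between-class distances by a constant factor), and `embed_from_distances` stands for any distance-based embedding routine such as NeRV:

```python
import numpy as np

def toy_metric_distances(X, labels, penalty=2.0):
    """Crude stand-in for a supervised metric (NOT the learning metric of
    this paper): Euclidean distances, stretched by a constant factor
    between points whose class labels differ."""
    D = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    different_class = labels[:, None] != labels[None, :]
    return np.where(different_class, penalty * D, D)

def supervised_embedding(X, labels, embed_from_distances,
                         metric_distances=toy_metric_distances):
    """Generic plug-in supervision: compute supervised input-space
    distances, then hand them unchanged to any distance-based embedding
    (NeRV, Sammon's mapping, an MDS variant, ...)."""
    return embed_from_distances(metric_distances(X, labels))
```

The point of the construction is precisely this separation: supervision enters only through the distances, so the embedding step needs no modification.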

In unsupervised visualization, NeRV outperformed alternatives for most of the six data sets we

tried, for four different pairs of measures, and was overall the best method. NeRV also performed

well in a comparison by unsupervised classification. Many of the best manifold extraction methods perform surprisingly poorly, most likely because they have not been designed to reduce the

dimensionality below the intrinsic dimensionality of the data manifold. In visualization, however,

we generally have no choice but to reduce the dimensionality of the data to two or three, even if

its intrinsic dimensionality is higher. NeRV is designed to find a mapping that is, in a well-defined

sense, optimal for a certain type of visualization regardless of the intrinsic dimensionality of the

data.

In supervised visualization, the supervised version of NeRV performed as well as or better than the best alternative method, Parametric embedding; this shows that the plug-in learning metrics work

well in incorporating supervision.

6.1 Discussion

NeRV models relevance using probability distributions, which makes sense if the total “amount”

of relevance for any query is normalized to a fixed sum. Such normalization is desirable for any

relevance measure, because for any query (point of interest) the relevance of a retrieved neighbor

point should depend on its proximity relative to the proximities of other points, rather than on its

absolute distance from the query point. (Our previous method, local MDS, can be thought of as an

attempt to approximate NeRV without the normalization.)
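This dependence on relative rather than absolute proximity is easy to illustrate: if all distances from a query are multiplied by a constant and the kernel width is scaled along with them (as a per-point width chosen from the local scale would be), the normalized relevance distribution is unchanged. A small check, reusing the Gaussian neighborhood form assumed above:

```python
import numpy as np

d = np.array([0.5, 1.0, 2.0, 4.0])          # distances from one query point

def relevance(d, sigma):
    w = np.exp(-(d / sigma) ** 2)           # Gaussian kernel weight per point
    return w / w.sum()                      # normalize into a distribution

p_small = relevance(d, sigma=1.0)
p_large = relevance(10.0 * d, sigma=10.0)   # same geometry, ten times the scale
assert np.allclose(p_small, p_large)        # relevance depends only on ratios
```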

The Kullback-Leibler divergences in NeRV are natural choices for measuring the difference

between two probability distributions, but in principle other divergence measures could be used as

well. The notions of neighbor retrieval and a probabilistic relevance model are the crucial parts of

NeRV, not the specific divergence measure.

Our notion of plug-in supervised metrics could in principle be used with other methods too;

other unsupervised embedding algorithms that work based on a distance matrix can also be turned

into supervised versions, by plugging learning metric distances into the distance matrix. We

performed an initial experiment with Sammon’s mapping (Peltonen et al., 2004); a similar idea

for isomap appeared later (Weng et al., 2005). However, we believe that NeRV is an especially

attractive choice for the embedding step since it has the information retrieval interpretation and it

performed well empirically.

An implementation of the NeRV and local MDS algorithms, as well as the mean smoothed precision and mean smoothed recall measures, is available at http://www.cis.hut.fi/projects/mi/software/dredviz.

Acknowledgments


The authors belong to the Adaptive Informatics Research Centre, a national CoE of the Academy

of Finland, and to Helsinki Institute for Information Technology HIIT. Jarkko Venna is currently at

Numos Ltd. JP was supported by the Academy of Finland, decision number 123983, and HA by the

Portuguese Foundation for Science and Technology, scholarship number SFRH/BD/39642/2007.

This work was also supported in part by the PASCAL2 Network of Excellence, ICT 216886.

Appendix A. Proof of the Connection between the Probabilistic Cost Functions and

Precision and Recall

In Section 2.2 we introduced Kullback-Leibler divergences as cost functions for visual neighbor

retrieval, based on probability distributions $q_i$ and $p_i$ which generalize the relevance model implicit

in precision and recall. We will next show that in the simple case of “binary neighborhoods” the

cost functions reduce to precision and recall. By “binary neighborhoods” we mean that, in both the

input space and the visualization, (i) the point of interest has some number of relevant neighbors

and all the other points are completely irrelevant, and (ii) the points that are relevant are all equally

relevant.

In the probabilistic model the binary neighborhoods can be interpreted as follows. Let $i$ be the point of interest, and let $P_i$ be the set of relevant neighbors for point $i$ in the input space. $P_i$ can be the set of all points (other than $i$ itself) falling inside some fixed radius from point $i$ in the input space, or it can be the set containing some fixed number of points nearest to $i$ in the input space. In either case, let $r_i$ be the size of $P_i$.

We define that the relevant neighbors of the point of interest $i$ have an equal non-zero probability

of being chosen, and all the other points have a near-zero probability of being chosen. In other

words, we define

$$
p^*_{j|i} =
\begin{cases}
a_i \equiv \dfrac{1-\delta}{r_i}, & \text{if point } j \text{ is in } P_i,\\[4pt]
b_i \equiv \dfrac{\delta}{N-r_i-1}, & \text{otherwise.}
\end{cases}
$$

Here $N$ is the total number of data points, and $0 < \delta \ll 0.5$ gives the irrelevant points a very small probability.
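As a quick sanity check with hypothetical values (not from the paper's experiments), the definition indeed yields a probability distribution over the $N-1$ candidate points: the $r_i$ relevant neighbors carry total mass $1-\delta$ and the remaining points share $\delta$.

```python
N, r_i, delta = 100, 5, 1e-3               # hypothetical sizes
a_i = (1.0 - delta) / r_i                  # probability of each relevant neighbor
b_i = delta / (N - r_i - 1)                # probability of each irrelevant point
assert abs(r_i * a_i + (N - r_i - 1) * b_i - 1.0) < 1e-12
```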

Similarly, let $Q_i$ be the set of neighbors for point $i$ in the visualization. Again, $Q_i$ can be the set of all points (other than $i$ itself) falling inside some fixed radius from point $i$ in the visualization, or it can be the set containing some fixed number of points nearest to $i$ in the visualization. In either case, let $k_i$ be the size of $Q_i$. Note that the sizes of $Q_i$ and $P_i$ can be different, that is, $k_i$ can be different from $r_i$.

We define the probability of choosing a neighbor from the visualization as

$$
q^*_{j|i} =
\begin{cases}
c_i \equiv \dfrac{1-\delta}{k_i}, & \text{if point } j \text{ is in } Q_i,\\[4pt]
d_i \equiv \dfrac{\delta}{N-k_i-1}, & \text{otherwise.}
\end{cases}
$$

Consider the Kullback-Leibler divergence $D(p^*_i, q^*_i)$ for any fixed $i$. We now show that minimizing this divergence is equivalent to maximizing recall where point $i$ is the query. The divergence is a sum over elements $p^*_{j|i} \log\big(p^*_{j|i}/q^*_{j|i}\big)$, so the sum can be divided into four parts depending on which


value $p^*_{j|i}$ takes (two possibilities) and which value $q^*_{j|i}$ takes (two possibilities). We get

$$
\begin{aligned}
D(p^*_i, q^*_i) ={}& \sum_{j \neq i,\; p^*_{j|i}=a_i,\; q^*_{j|i}=c_i} a_i \log\frac{a_i}{c_i}
 + \sum_{j \neq i,\; p^*_{j|i}=a_i,\; q^*_{j|i}=d_i} a_i \log\frac{a_i}{d_i} \\
 &+ \sum_{j \neq i,\; p^*_{j|i}=b_i,\; q^*_{j|i}=c_i} b_i \log\frac{b_i}{c_i}
 + \sum_{j \neq i,\; p^*_{j|i}=b_i,\; q^*_{j|i}=d_i} b_i \log\frac{b_i}{d_i} \\
 ={}& \left(a_i \log\frac{a_i}{c_i}\right) N_{\mathrm{TP},i}
 + \left(a_i \log\frac{a_i}{d_i}\right) N_{\mathrm{MISS},i}
 + \left(b_i \log\frac{b_i}{c_i}\right) N_{\mathrm{FP},i}
 + \left(b_i \log\frac{b_i}{d_i}\right) N_{\mathrm{TN},i}
\end{aligned}
$$

where on the last line the terms inside parentheses are simply constant coefficients. Here $N_{\mathrm{TP},i}$ is the number of true positives for this query, that is, points for which the probability is high in both the data and the visualization. The number of misses, that is, the number of points that have a low probability in the visualization although the probability in the data is high, is $N_{\mathrm{MISS},i}$. The number of false positives (high probability in the visualization, low in the data) is $N_{\mathrm{FP},i}$. Finally, the number of true negatives (low probability in both the visualization and the data) is $N_{\mathrm{TN},i}$.

It is straightforward to check that if $\delta$ is very small, then the coefficients for the misses and false

positives dominate the divergence. This yields

$$
\begin{aligned}
D(p^*_i, q^*_i) \approx{}& N_{\mathrm{MISS},i}\left(a_i \log\frac{a_i}{d_i}\right) + N_{\mathrm{FP},i}\left(b_i \log\frac{b_i}{c_i}\right) \\
={}& N_{\mathrm{MISS},i}\,\frac{1-\delta}{r_i}\left(\log\frac{N-k_i-1}{\delta} + \log\frac{1-\delta}{r_i}\right)
 + N_{\mathrm{FP},i}\,\frac{\delta}{N-r_i-1}\left(\log\frac{\delta}{N-r_i-1} - \log\frac{1-\delta}{k_i}\right) \\
={}& N_{\mathrm{MISS},i}\,\frac{1-\delta}{r_i}\left(\log\frac{N-k_i-1}{r_i} + \log\frac{1-\delta}{\delta}\right)
 + N_{\mathrm{FP},i}\,\frac{\delta}{N-r_i-1}\left(\log\frac{k_i}{N-r_i-1} - \log\frac{1-\delta}{\delta}\right). \qquad (11)
\end{aligned}
$$

Because the terms $\log[(1-\delta)/\delta]$ dominate the other logarithmic terms, (11) further simplifies to

$$
D(p^*_i, q^*_i) \approx \left(N_{\mathrm{MISS},i}\,\frac{1-\delta}{r_i} - N_{\mathrm{FP},i}\,\frac{\delta}{N-r_i-1}\right)\log\frac{1-\delta}{\delta}
\approx N_{\mathrm{MISS},i}\,\frac{1-\delta}{r_i}\,\log\frac{1-\delta}{\delta}
= \frac{N_{\mathrm{MISS},i}}{r_i}\,C
$$

where $C$ is a constant that depends only on $\delta$ and not on $i$. Hence if we minimized this cost function, we would be maximizing the recall of the query, which is defined as

$$
\mathrm{recall}(i) = \frac{N_{\mathrm{TP},i}}{r_i} = 1 - \frac{N_{\mathrm{MISS},i}}{r_i}.
$$

We can analogously show that for any fixed $i$, minimizing $D(q^*_i, p^*_i)$ is equivalent to maximizing the precision of the corresponding query.
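The quality of this approximation can be checked numerically. The sketch below uses hypothetical neighborhood sets, computes the exact divergence between the binary distributions, and compares it to $N_{\mathrm{MISS},i}\,C/r_i$ with $C = (1-\delta)\log[(1-\delta)/\delta]$; the ratio tends to 1 as $\delta \to 0$, though slowly, because the neglected terms shrink only relative to $\log[(1-\delta)/\delta]$.

```python
import numpy as np

N, r, k = 1000, 20, 15                     # hypothetical sizes
P_i = set(range(20))                       # relevant input-space neighbors
Q_i = set(range(10, 25))                   # neighbors retrieved from the display
n_tp, n_miss, n_fp = len(P_i & Q_i), len(P_i - Q_i), len(Q_i - P_i)
n_tn = (N - 1) - n_tp - n_miss - n_fp

for delta in (1e-4, 1e-8, 1e-16, 1e-32):
    a, b = (1 - delta) / r, delta / (N - r - 1)
    c, d = (1 - delta) / k, delta / (N - k - 1)
    exact = (n_tp * a * np.log(a / c) + n_miss * a * np.log(a / d)
             + n_fp * b * np.log(b / c) + n_tn * b * np.log(b / d))
    approx = (n_miss / r) * (1 - delta) * np.log((1 - delta) / delta)
    print(f"delta={delta:.0e}  exact/approx={exact / approx:.3f}")
```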

Because $D(q^*_i, p^*_i)$ and $D(p^*_i, q^*_i)$ are equivalent to precision and recall, and $p_i$ and $q_i$ can be seen as more sophisticated generalizations of $p^*_i$ and $q^*_i$, we interpret $D(q_i, p_i)$ and $D(p_i, q_i)$ as more sophisticated generalizations of precision and recall.


References

Mikhail Belkin and Partha Niyogi. Laplacian eigenmaps and spectral techniques for embedding

and clustering. In T. G. Dietterich, S. Becker, and Z. Ghahramani, editors, Advances in Neural

Information Processing Systems 14, pages 585–591. MIT Press, Cambridge, MA, 2002a.

Mikhail Belkin and Partha Niyogi. Laplacian eigenmaps for dimensionality reduction and data

representation. Technical Report TR-2002-01, Department of Computer Science, The University

of Chicago, 2002b.

Mira Bernstein, Vin de Silva, John C. Langford, and Joshua B. Tenenbaum. Graph approximations

to geodesics on embedded manifolds. Technical report, Department of Psychology, Stanford

University, 2000.

Catherine L. Blake and C. J. Merz. UCI repository of machine learning databases.

http://www.ics.uci.edu/~mlearn/MLRepository.html, 1998.

Ingwer Borg and Patrick Groenen. Modern Multidimensional Scaling. Springer, New York, 1997.

Hong Chang and Dit-Yan Yeung. Locally linear metric adaptation for semi-supervised clustering.

In Proceedings of the Twenty-first International Conference on Machine Learning 2004, pages

153–160. Omnipress, Madison, WI, 2004.

Lisha Chen and Andreas Buja. Local multidimensional scaling for nonlinear dimension reduction,

graph drawing and proximity analysis. Journal of the American Statistical Association, 104(485):

209–219, 2009.

Pierre Demartines and Jeanny Hérault. Curvilinear component analysis: A self-organizing neural network for nonlinear mapping of data sets. IEEE Transactions on Neural Networks, 8(1):148–154, 1997.

David L. Donoho and Carrie Grimes. Hessian eigenmaps: Locally linear embedding techniques for

high-dimensional data. Proceedings of the National Academy of Sciences, 100:5591–5596, 2003.

Xin Geng, De-Chuan Zhan, and Zhi-Hua Zhou. Supervised nonlinear dimensionality reduction for

visualization and classification. IEEE Transactions on Systems, Man, and Cybernetics–Part B:

Cybernetics, 35:1098–1107, 2005.

Amir Globerson and Sam Roweis. Metric learning by collapsing classes. In Y. Weiss, B. Schölkopf, and J. Platt, editors, Advances in Neural Information Processing Systems 18, pages 451–458. MIT Press, Cambridge, MA, 2006.

Jacob Goldberger, Sam Roweis, Geoffrey Hinton, and Ruslan Salakhutdinov. Neighbourhood components analysis. In L. K. Saul, Y. Weiss, and L. Bottou, editors, Advances in Neural Information Processing Systems 17, pages 513–520. MIT Press, Cambridge, MA, 2005.

John C. Gower. Some distance properties of latent root and vector methods used in multivariate

analysis. Biometrika, 53:325–338, 1966.


Rui-jun Gu and Wen-bo Xu. Weighted kernel Isomap for data visualization and pattern classification. In Y. Wang, Y.-m. Cheung, and H. Liu, editors, Computational Intelligence and Security (CIS 2006), pages 1050–1057. Springer-Verlag, Berlin Heidelberg, 2007.

Geoffrey Hinton and Sam T. Roweis. Stochastic neighbor embedding. In T. G. Dietterich, S. Becker, and Z. Ghahramani, editors, Advances in Neural Information Processing Systems 14, pages 833–840. MIT Press, Cambridge, MA, 2002.

Harold Hotelling. Analysis of a complex of statistical variables into principal components. Journal

of Educational Psychology, 24:417–441, 498–520, 1933.

Tomoharu Iwata, Kazumi Saito, Naonori Ueda, Sean Stromsten, Thomas L. Griffiths, and Joshua B.

Tenenbaum. Parametric embedding for class visualization. Neural Computation, 19:2536–2556,

2007.

Samuel Kaski and Jaakko Peltonen. Informative discriminant analysis. In Proceedings of the Twentieth International Conference on Machine Learning (ICML-2003), pages 329–336. AAAI Press, Menlo Park, CA, 2003.

Samuel Kaski and Janne Sinkkonen. Principle of learning metrics for exploratory data analysis.

Journal of VLSI Signal Processing, special issue on Machine Learning for Signal Processing, 37:

177–188, 2004.

Samuel Kaski, Janne Sinkkonen, and Jaakko Peltonen. Bankruptcy analysis with self-organizing

maps in learning metrics. IEEE Transactions on Neural Networks, 12:936–947, 2001.

Samuel Kaski, Janne Nikkilä, Merja Oja, Jarkko Venna, Petri Törönen, and Eero Castrén. Trustworthiness and metrics in visualizing similarity of gene expression. BMC Bioinformatics, 4:48, 2003.

Teuvo Kohonen, Jussi Hynninen, Jari Kangas, Jorma Laaksonen, and Kari Torkkola. LVQ PAK:

The learning vector quantization program package. Technical Report A30, Helsinki University

of Technology, Laboratory of Computer and Information Science, FIN-02150 Espoo, Finland,

1996.

Joseph B. Kruskal. Multidimensional scaling by optimizing goodness of fit to a nonmetric hypothesis. Psychometrika, 29(1):1–27, 1964.

John Aldo Lee, Amaury Lendasse, Nicolas Donckers, and Michel Verleysen. A robust nonlinear projection method. In M. Verleysen, editor, ESANN'2000, Eighth European Symposium on Artificial Neural Networks, pages 13–20. D-Facto Publications, Bruges, Belgium, 2000.

John Aldo Lee, Amaury Lendasse, and Michel Verleysen. Nonlinear projection with curvilinear

distances: Isomap versus curvilinear distance analysis. Neurocomputing, 57:49–76, 2004.

Chun-Guang Li and Jun Guo. Supervised Isomap with explicit mapping. In J.-S. Pan, P. Shi, and

Y. Zhao, editors, Proceedings of the First International Conference on Innovative Computing,

Information and Control (ICICIC’06), volume 3, pages 345–348. IEEE, 2006.


E. Liitiäinen and A. Lendasse. Variable scaling for time series prediction: Application to the

ESTSP’07 and the NN3 forecasting competitions. In IJCNN 2007, International Joint Conference

on Neural Networks, pages 2812–2816. IEEE, Piscataway, NJ, 2007.

Ning Liu, Fengshan Bai, Jun Yan, Benyu Zhang, Zheng Chen, and Wei-Ying Ma. Supervised semi-definite embedding for email data cleaning and visualization. In Y. Zhang, K. Tanaka, J. Xu Yu, S. Wang, and M. Li, editors, Web Technologies Research and Development – APWeb 2005, pages 972–982. Springer-Verlag, Berlin Heidelberg, 2005.

Roland Memisevic and Geoffrey Hinton. Multiple relational embedding. In L. K. Saul, Y. Weiss,

and L. Bottou, editors, Advances in Neural Information Processing Systems 17, pages 913–920.

MIT Press, Cambridge, MA, 2005.

Jaakko Peltonen and Samuel Kaski. Discriminative components of data. IEEE Transactions on

Neural Networks, 16(1):68–83, 2005.

Jaakko Peltonen, Arto Klami, and Samuel Kaski. Improved learning of Riemannian metrics for

exploratory analysis. Neural Networks, 17:1087–1100, 2004.

Sam T. Roweis and Lawrence K. Saul. Nonlinear dimensionality reduction by locally linear embedding. Science, 290:2323–2326, 2000.

Eran Segal, Nir Friedman, Daphne Koller, and Aviv Regev. A module map showing conditional

activity of expression modules in cancer. Nature Genetics, 36:1090–1098, 2004.

Blake Shaw and Tony Jebara. Minimum volume embedding. In M. Meila and X. Shen, editors,

Proceedings of AISTATS*07, the 11th International Conference on Artificial Intelligence and

Statistics (JMLR Workshop and Conference Proceedings Volume 2), pages 460–467, 2007.

Le Song, Alex Smola, Kersten Borgwardt, and Arthur Gretton. Colored maximum variance unfolding. In J. C. Platt, D. Koller, Y. Singer, and S. Roweis, editors, Advances in Neural Information Processing Systems 20, pages 1385–1392. MIT Press, Cambridge, MA, 2008.

Andrew I. Su, Michael P. Cooke, Keith A. Ching, Yaron Hakak, John R. Walker, Tim Wiltshire,

Anthony P. Orth, Raquel G. Vega, Lisa M. Sapinoso, Aziz Moqrich, Ardem Patapoutian, Garret M. Hampton, Peter G. Schultz, and John B. Hogenesch. Large-scale analysis of the human

and mouse transcriptomes. Proceedings of the National Academy of Sciences, 99:4465–4470,

2002.

Joshua B. Tenenbaum, Vin de Silva, and John C. Langford. A global geometric framework for

nonlinear dimensionality reduction. Science, 290:2319–2323, December 2000.

TIMIT. CD-ROM prototype version of the DARPA TIMIT acoustic-phonetic speech database,

1998.

Warren S. Torgerson. Multidimensional scaling: I. Theory and method. Psychometrika, 17:401–419,

1952.

Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-SNE. Journal of Machine

Learning Research, 9:2579–2605, 2008.


Laurens van der Maaten, Eric Postma, and Jaap van den Herik. Dimensionality reduction: A comparative review. Technical Report 2009-005, Tilburg centre for Creative Computing, Tilburg University, Tilburg, The Netherlands, 2009.

Jarkko Venna. Dimensionality Reduction for Visual Exploration of Similarity Structures. PhD thesis,

Helsinki University of Technology, Espoo, Finland, 2007.

Jarkko Venna and Samuel Kaski. Local multidimensional scaling. Neural Networks, 19:889–899,

2006.

Jarkko Venna and Samuel Kaski. Comparison of visualization methods for an atlas of gene expression data sets. Information Visualization, 6:139–154, 2007a.

Jarkko Venna and Samuel Kaski. Nonlinear dimensionality reduction as information retrieval. In

M. Meila and X. Shen, editors, Proceedings of AISTATS*07, the 11th International Conference

on Artificial Intelligence and Statistics (JMLR Workshop and Conference Proceedings Volume 2),

pages 572–579, 2007b.

Kilian Weinberger, Benjamin Packer, and Lawrence Saul. Nonlinear dimensionality reduction by

semidefinite programming and kernel matrix factorization. In R. G. Cowell and Z. Ghahramani,

editors, Proceedings of the Tenth International Workshop on Artificial Intelligence and Statistics

(AISTATS 2005), pages 381–388. Society for Artificial Intelligence and Statistics, 2005. (Available electronically at http://www.gatsby.ucl.ac.uk/aistats/).

Kilian Weinberger, John Blitzer, and Lawrence Saul. Distance metric learning for large margin

nearest neighbor classification. In Y. Weiss, B. Schölkopf, and J. Platt, editors, Advances in

Neural Information Processing Systems 18, pages 1473–1480. MIT Press, Cambridge, MA, 2006.

Kilian Q. Weinberger and Lawrence K. Saul. Unsupervised learning of image manifolds by semidefinite programming. International Journal of Computer Vision, 70:77–90, 2006.

Kilian Q. Weinberger, Fei Sha, Qihui Zhu, and Lawrence K. Saul. Graph Laplacian regularization for large-scale semidefinite programming. In B. Schölkopf, J. Platt, and T. Hoffman, editors, Advances in Neural Information Processing Systems 19, pages 1489–1496. MIT Press, Cambridge, MA, 2007.

Shifeng Weng, Changshui Zhang, and Zhonglin Lin. Exploring the structure of supervised data by

Discriminant Isometric Mapping. Pattern Recognition, 38:599–601, 2005.

Eric P. Xing, Andrew Y. Ng, Michael I. Jordan, and Stuart Russell. Distance metric learning, with

application to clustering with side information. In S. Becker, S. Thrun, and K. Obermayer, editors,

Advances in Neural Information Processing Systems 15. MIT Press, Cambridge, MA, 2003.
