
Eigenvector

Eigenvectors are a special set of vectors associated with a linear system of equations (i.e., a matrix equation) that are sometimes also known as characteristic vectors, proper vectors, or latent vectors (Marcus and Minc 1988, p. 144).

The determination of the eigenvectors and eigenvalues of a system is extremely important in physics and engineering, where it is equivalent to matrix diagonalization and arises in such common applications as stability analysis, the physics of rotating bodies, and small oscillations of vibrating systems, to name only a few. Each eigenvector is paired with a corresponding so-called eigenvalue. Mathematically, two different kinds of eigenvectors need to be distinguished: left eigenvectors and right eigenvectors. However, for many problems in physics and engineering, it is sufficient to consider only right eigenvectors. The term "eigenvector" used without qualification in such applications can therefore be understood to refer to a right eigenvector.

The decomposition of a square matrix A into eigenvalues and eigenvectors is known in this work as eigen decomposition, and the fact that this decomposition is always possible as long as the matrix consisting of the eigenvectors of A is square is known as the eigen decomposition theorem.
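As a quick numerical check of this statement, the following minimal sketch (Python with NumPy; the 2x2 matrix is an arbitrary choice, not taken from the text) rebuilds a matrix from its eigenvalues and eigenvectors:

    import numpy as np

    # An arbitrary diagonalizable matrix, chosen only for illustration.
    A = np.array([[2.0, 1.0],
                  [1.0, 3.0]])

    # Columns of V are (right) eigenvectors; w holds the eigenvalues.
    w, V = np.linalg.eig(A)
    D = np.diag(w)

    # Eigen decomposition: A = V D V^(-1) whenever V is invertible.
    A_reconstructed = V @ D @ np.linalg.inv(V)
    print(np.allclose(A, A_reconstructed))   # True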

Define a right eigenvector as a column vector X_R satisfying

    A X_R = λ_R X_R,                              (1)

where A is a matrix, so

    (A - λ_R I) X_R = 0,                          (2)

which means the matrix A - λ_R I must have zero determinant, i.e.,

    det(A - λ_R I) = 0.                           (3)

Similarly, define a left eigenvector as a row vector X_L satisfying

    X_L A = λ_L X_L.                              (4)

Taking the transpose of each side gives

    (X_L A)^T = (λ_L X_L)^T,                      (5)

which can be rewritten as

    A^T X_L^T = λ_L X_L^T.                        (6)

Rearrange again to obtain

    A^T X_L^T - λ_L X_L^T = 0,                    (7)

which means

    (A^T - λ_L I) X_L^T = 0.                      (8)

Rewriting gives

    0 = det(A^T - λ_L I)                          (9)
      = det((A - λ_L I)^T)                        (10)
      = det(A - λ_L I),                           (11)

where the last step follows from the identity

    det(B^T) = det(B).                            (12)

Equating equations (3) and (11), which both state that the same characteristic determinant vanishes, therefore requires that λ_R = λ_L ≡ λ, i.e., left and right eigenvalues are equivalent, a statement that is not true for eigenvectors.
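This equivalence is easy to check numerically. The sketch below (Python/NumPy, using an arbitrarily chosen matrix rather than one from the text) compares the eigenvalues of A with those of A^T, whose right eigenvectors are the left eigenvectors of A:

    import numpy as np

    A = np.array([[4.0, 1.0, 2.0],
                  [0.0, 3.0, -1.0],
                  [2.0, 5.0, 1.0]])   # arbitrary example matrix

    right_eigs = np.sort_complex(np.linalg.eigvals(A))
    left_eigs = np.sort_complex(np.linalg.eigvals(A.T))   # left eigenvectors of A are right eigenvectors of A^T

    print(np.allclose(right_eigs, left_eigs))   # True: left and right eigenvalues coincide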

Let X_R be a matrix formed by the columns of the right eigenvectors and X_L be a matrix formed by the rows of the left eigenvectors. Let

    D = diag(λ_1, λ_2, ..., λ_n).                 (13)

Then

    A X_R = X_R D                                 (14)
    X_L A = D X_L                                 (15)

and

    X_L A X_R = X_L X_R D                         (16)
    X_L A X_R = D X_L X_R,                        (17)

so

    X_L X_R D = D X_L X_R.                        (18)

But this equation is of the form

    C D = D C,                                    (19)

where D is a diagonal matrix, so it must be true that C = X_L X_R is also diagonal. In particular, if A is a symmetric matrix, then the left and right eigenvectors are simply each other's transpose, and if A is a self-adjoint matrix (i.e., it is Hermitian), then the left and right eigenvectors are adjoint matrices.

Eigenvectors may not be equal to the zero vector. A nonzero scalar multiple of an eigenvector is equivalent to the original eigenvector. Hence, without loss of generality, eigenvectors are often normalized to unit length.

While an n×n matrix always has n eigenvalues, some or all of which may be degenerate, such a matrix may have between 1 and n linearly independent eigenvectors. For example, the matrix [[1, 1], [0, 1]] has only the single eigenvector (1, 0), up to scalar multiples.

Eigenvectors may be computed in Mathematica using Eigenvectors[matrix]. This command always returns a list of length n, so any eigenvectors that are not linearly independent are returned as zero vectors. Eigenvectors and eigenvalues can be returned together using the command Eigensystem[matrix].
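For readers working outside Mathematica, a rough NumPy analogue is sketched below (note that np.linalg.eig does not pad with zero vectors the way Eigenvectors[matrix] does; for a defective matrix it simply returns linearly dependent columns). The matrix used here is the defective example above:

    import numpy as np

    A = np.array([[1.0, 1.0],
                  [0.0, 1.0]])   # the defective matrix mentioned above

    w, V = np.linalg.eig(A)      # rough analogue of Eigensystem[matrix]
    print(w)                     # [1. 1.]  (a repeated eigenvalue)
    print(V)                     # columns are unit eigenvectors; here they are
                                 # numerically (anti)parallel, reflecting the
                                 # single independent eigenvector (1, 0)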

Given a 3×3 matrix A with eigenvectors x_1, x_2, and x_3 and corresponding eigenvalues λ_1, λ_2, and λ_3, an arbitrary vector y can be written

    y = b_1 x_1 + b_2 x_2 + b_3 x_3.                              (20)

Applying the matrix A,

    A y = b_1 λ_1 x_1 + b_2 λ_2 x_2 + b_3 λ_3 x_3                 (21)
        = λ_1 [b_1 x_1 + b_2 (λ_2/λ_1) x_2 + b_3 (λ_3/λ_1) x_3],  (22)

so

    A^n y = λ_1^n [b_1 x_1 + b_2 (λ_2/λ_1)^n x_2 + b_3 (λ_3/λ_1)^n x_3].   (23)

If |λ_1| > |λ_2| and |λ_1| > |λ_3|, it therefore follows that

    A^n y ≈ λ_1^n b_1 x_1                                         (24)

for large n, so repeated application of the matrix to an arbitrary vector amazingly results in a vector proportional to the eigenvector with largest eigenvalue.
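Equation (24) is easy to see numerically. The sketch below (Python/NumPy; the 3×3 matrix and starting vector are arbitrary choices, not from the text) applies a matrix repeatedly to a random vector and checks that the resulting direction lines up with the dominant eigenvector:

    import numpy as np

    rng = np.random.default_rng(0)
    A = np.array([[5.0, 1.0, 0.0],
                  [1.0, 3.0, 1.0],
                  [0.0, 1.0, 2.0]])          # arbitrary matrix with a dominant eigenvalue
    y = rng.standard_normal(3)               # arbitrary starting vector

    for _ in range(50):                      # repeated application of A
        y = A @ y
        y /= np.linalg.norm(y)               # rescale only to avoid overflow

    w, V = np.linalg.eig(A)
    x1 = V[:, np.argmax(np.abs(w))]          # eigenvector of the largest |eigenvalue|
    print(abs(abs(y @ x1) - 1.0) < 1e-8)     # True: directions agree (up to sign)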

(http://mathworld.wolfram.com/Eigenvector.html)


Let A be a square matrix of order n and λ one of its eigenvalues. Let X be an eigenvector of A associated to λ. We must have

    A X = λ X, or equivalently (A - λ I_n) X = 0.

This is a linear system for which the matrix coefficient is A - λ I_n. Since the zero vector is a solution, the system is consistent. In fact, we will see on a different page that the structure of the solution set of this system is very rich. On this page, we will basically discuss how to find the solutions.

Remark. It is quite easy to notice that if X is a vector which satisfies A X = λ X, then the vector Y = c X (for any arbitrary number c) satisfies the same equation, i.e., A Y = λ Y. In other words, if we know that X is an eigenvector, then cX is also an eigenvector associated to the same eigenvalue.

Let us start with an example. 

Example. Consider the matrix 

First we look for the eigenvalues of A. These are given by the characteristic equation det(A - λ I) = 0. If we develop this determinant using the third column, we obtain a cubic polynomial in λ.


Using easy algebraic manipulations, we can factor this polynomial, which implies that the eigenvalues of A are 0, -4, and 3. Next we look for the eigenvectors.

1. Case λ = 0: The associated eigenvectors are given by the linear system A X = 0, which may be rewritten as a system of three equations in the unknowns x, y, and z. Many ways may be used to solve this system. The third equation is identical to the first. Since, from the second equation, we have y = 6x, the first equation reduces to 13x + z = 0. So this system is equivalent to

    y = 6x,  z = -13x.

So the unknown vector X is given by

    X = (x, y, z) = (x, 6x, -13x) = x (1, 6, -13).

Therefore, any eigenvector X of A associated to the eigenvalue 0 is given by

    X = c (1, 6, -13),

where c is an arbitrary number.

2. Case λ = -4: The associated eigenvectors are given by the linear system A X = -4 X, which may be rewritten as (A + 4I) X = 0.

In this case, we will use elementary operations to solve it. First we consider the augmented matrix [A + 4I | 0]. Then we use elementary row operations to reduce it to an upper-triangular form. First we interchange two rows so that a convenient pivot sits in the first row. Next, we use the first row to eliminate the 5 and the 6 in the first column. We obtain a new augmented matrix. If we cancel the 8 and 9 from the second and third rows, we obtain


Finally, we subtract the second row from the third to obtain an upper-triangular system. Next, we set z = c. From the second row, we get y = 2z = 2c. The first row then implies x = -2y + 3z = -c. Hence

    X = (x, y, z) = (-c, 2c, c) = c (-1, 2, 1).

Therefore, any eigenvector X of A associated to the eigenvalue -4 is given by X = c (-1, 2, 1), where c is an arbitrary number.

3. Case λ = 3: The details for this case are left to the reader. Using ideas similar to those described above, one may easily show that any eigenvector X of A associated to the eigenvalue 3 is again of the form X = c X_3 for a single fixed vector X_3, where c is an arbitrary number.

Remark. In general, the eigenvalues of a matrix are not all distinct from each other (see the page on the eigenvalues for more details). In the next two examples, we discuss this problem. 

Example. Consider the matrix 


The characteristic equation of A is a cubic whose roots are 8 and -1, the latter a double root. Hence the eigenvalues of A are -1 and 8. For the eigenvalue 8, it is easy to show that any eigenvector X is c times a fixed vector, where c is an arbitrary number.

Let us focus on the eigenvalue -1. The associated eigenvectors are given by the linear system A X = -X, which may be rewritten as (A + I) X = 0. Clearly, the third equation of this system is identical to the first one, which is also a multiple of the second equation. In other words, the system is equivalent to the single equation

    2x + y + 2z = 0.

To solve it, we fix two of the unknowns and deduce the third one from the equation 2x + y + 2z = 0; each choice of the two free values produces an eigenvector. Therefore, any eigenvector X of A associated to the eigenvalue -1 is determined by two arbitrary constants. In other words, any eigenvector X of A associated to the eigenvalue -1 is a linear combination of two fixed eigenvectors.

Example. Consider the matrix 

The characteristic equation of A has a single repeated root, so the matrix A has one eigenvalue, -3. Let us find the associated eigenvectors. These are given by the linear system A X = -3 X, which may be rewritten as (A + 3I) X = 0. This system is equivalent to the one-equation system

    x - y = 0.


So if we set x = c, then y = c as well, and any eigenvector X of A associated to the eigenvalue -3 is given by X = c (1, 1), where c is an arbitrary number.

Let us summarize what we did in the above examples.

Summary: Let A be a square matrix. Assume λ is an eigenvalue of A. In order to find the associated eigenvectors, we do the following steps:

1. Write down the associated linear system (A - λ I) X = 0.
2. Solve the system.
3. Rewrite the unknown vector X as a linear combination of known vectors.
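In code, these three steps amount to computing a basis of the null space of A - λI. The sketch below uses Python's SymPy; the matrix is an assumed reconstruction consistent with the second example above (its eigenvalue -1 reduces to 2x + y + 2z = 0), though it is not necessarily the author's original:

    from sympy import Matrix, eye

    A = Matrix([[3, 2, 4],
                [2, 0, 2],
                [4, 2, 3]])        # assumed example matrix
    lam = -1                       # one of its eigenvalues

    # Step 1: write down the linear system (A - lam*I) X = 0.
    M = A - lam * eye(3)

    # Step 2: solve the system (a basis of the null space).
    basis = M.nullspace()

    # Step 3: any eigenvector is a linear combination of the basis vectors.
    for v in basis:
        print(v.T)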

The above examples assume that the eigenvalue λ is a real number. So one may wonder whether the eigenvalues of a matrix are always real. In general, this is not the case except for symmetric matrices; the general proof is fairly involved. For square matrices of order 2, however, the proof is quite easy. Let us give it here for the sake of completeness. Consider the symmetric square matrix

    A = [[a, b], [b, c]].

Its characteristic equation is given by

    λ² - (a + c) λ + (ac - b²) = 0.

This is a quadratic equation. The nature of its roots (which are the eigenvalues of A) depends on the sign of the discriminant

    Δ = (a + c)² - 4(ac - b²).

Using algebraic manipulations, we get

    Δ = (a - c)² + 4b².

Therefore, Δ is nonnegative, which implies that the eigenvalues of A are real numbers.

Remark. Note that the matrix A will have one eigenvalue, i.e. one double root, if and only if Δ = 0. But this is possible only if a = c and b = 0. In other words, we have

    A = a I_2.

In the next page, we will discuss the case of complex eigenvalues.

http://www.sosmath.com/matrix/eigen2/eigen2.html


Matrix Tutorial 3: Eigenvalues and Eigenvectors

A tutorial on eigenvalues, eigenvectors and their properties. Includes step by step how-to calculations. An introduction to vector iteration, the Power Method and the Deflation Method is provided.

Dr. E. Garcia, Mi Islita.com | Last Update: 07/17/06

Topics

Putting Everything Together
The Eigenvalue Problem
Calculating Eigenvalues
Eigenvectors
Properties of Eigenvalues and Eigenvectors
Computing Eigenvectors from Eigenvalues
Computing Eigenvalues from Eigenvectors
The Power Method (Vector Iteration)
The Deflation Method
Why should we care about all this?
Beware of Link Model Speculators
Tutorial Review
References

Putting Everything Together

In Part 1 of this three-part tutorial we defined different types of matrices. We covered digraphs, stochastic matrices, and Markov chains. We also mentioned how some search engine marketers have derived blogonomies out of these and similar concepts.

In Part 2 we covered matrix operations like addition, subtraction and multiplication of matrices. We also discussed multiplication and division of matrices by a scalar, and calculation of determinants from square matrices. We mentioned that if a determinant has a nonzero value, its matrix is described as regular, and that if a determinant has zero value, its matrix is described as singular.

It is now time to put everything together, to demystify eigenvalues, eigenvectors, and present some practical applications.

The Eigenvalue Problem

Consider a scalar matrix Z, obtained by multiplying an identity matrix by a scalar; i.e., Z = c*I. Subtracting this from a regular matrix A gives a new matrix A - c*I.


Equation 1: A - Z = A - c*I.

If its determinant is zero,

Equation 2: |A - c*I| = 0

and A has been transformed into a singular matrix. The problem of transforming a regular matrix into a singular matrix is referred to as the eigenvalue problem.

However, subtracting c*I from A is equivalent to subtracting the scalar c from each entry on the main diagonal of A. For the determinant of the new matrix to vanish, c must take specific values; as we will see, these values add up to the trace of A. For which values of c does the determinant vanish?

Calculating Eigenvalues

Figure 1 shows that the computation of eigenvalues is a straightforward process.

Figure 1. The eigenvalue problem.

In the figure we started with a matrix A of order n = 2 and subtracted from it the Z = c*I matrix. Applying the method of determinants for m = n = 2 matrices discussed in Part 2 gives


|A - c*I| = c^2 - 17*c + 42 = 0

Solving the quadratic equation,

c1 = 3 and c2 = 14.

Note that c1 + c2 = 17, confirming that these characteristic values must add up to the trace of the original matrix A (13 + 4 = 17).

The polynomial expression we just obtained is called the characteristic equation and the c values are termed the latent roots or eigenvalues of matrix A.

Thus, subtracting either c1 = 3 or c2 = 14 from the principal diagonal of A results in a matrix whose determinant vanishes (|A - c*I| = 0).

In terms of the trace of A we can write:

c1/trace = 3/17 = 0.176 or 17.6%
c2/trace = 14/17 = 0.824 or 82.4%

Thus, c2 = 14 is the largest eigenvalue, accounting for more than 82% of the trace. The largest eigenvalue of a matrix is also called the principal eigenvalue.
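The figure itself is not reproduced here, but a 2×2 matrix consistent with the quantities quoted in the text (diagonal entries 13 and 4, determinant 42, eigenvalues 3 and 14) is A = [[13, 5], [2, 4]]. Treating that matrix as an assumption, the sketch below (Python/NumPy) checks the characteristic values and the trace:

    import numpy as np

    A = np.array([[13.0, 5.0],
                  [2.0, 4.0]])        # assumed matrix matching trace 17 and determinant 42

    # Characteristic polynomial c^2 - 17c + 42 = 0 has roots 3 and 14.
    eigvals = np.linalg.eigvals(A)
    print(sorted(eigvals.real))        # [3.0, 14.0]
    print(eigvals.sum(), np.trace(A))  # both 17: the eigenvalues add up to the trace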

There are many scenarios, as in Principal Component Analysis (PCA) and Singular Value Decomposition (SVD), in which some eigenvalues are so small that they are ignored. Then the remaining eigenvalues are added together to compute an estimated fraction. This estimate is then used as a correlation criterion for the so-called Rank Two approximation.

SVD and PCA are techniques used in cluster analysis. In information retrieval, SVD is used in Latent Semantic Indexing (LSI) while PCA is used in Information Space (IS). These will be discussed in upcoming tutorials.

Now that the eigenvalues are known, these are used to compute the latent vectors of matrix A. These are the so-called eigenvectors.

Eigenvectors

Equation 1 can be rewritten for any eigenvalue i as

Equation 3: A - ci*I

Multiplying by a column vector Xi with the same number of rows as A and setting the result to zero leads to

Equation 4: (A - ci*I)*Xi = 0

Thus, for every eigenvalue ci this equation constitutes a system of n simultaneous homogeneous equations, and every such system has an infinite number of solutions. Corresponding to


every eigenvalue ci is a set of eigenvectors Xi, the number of eigenvectors in the set being infinite. Furthermore, eigenvectors that correspond to different eigenvalues are linearly independent from one another.

Properties of Eigenvalues and Eigenvectors

At this point it might be a good idea to highlight several properties of eigenvalues and eigenvectors. The following pertain only to the matrices we are discussing here.

- The absolute value of a determinant (|detA|) is the product of the absolute values of the eigenvalues of matrix A.
- c = 0 is an eigenvalue of A if A is a singular (noninvertible) matrix.
- If A is an nxn triangular matrix (upper triangular or lower triangular) or a diagonal matrix, the eigenvalues of A are the diagonal entries of A.
- A and its transpose matrix have the same eigenvalues.
- Eigenvalues of a symmetric matrix are all real.
- Eigenvectors of a symmetric matrix are orthogonal, but only for distinct eigenvalues.
- The dominant or principal eigenvector of a matrix is an eigenvector corresponding to the eigenvalue of largest magnitude (for real numbers, largest absolute value) of that matrix.
- For a transition matrix, the dominant eigenvalue is always 1.
- The smallest eigenvalue of matrix A is the same as the inverse (reciprocal) of the largest eigenvalue of A^-1, i.e., of the inverse of A.

If we know an eigenvalue, its eigenvector can be computed. The reverse process is also possible; i.e., given an eigenvector, its corresponding eigenvalue can be calculated.

Let's illustrate these two cases.

Computing Eigenvectors from Eigenvalues

Let's use the example of Figure 1 to compute an eigenvector for c1 = 3. From Equation 2 we write


Figure 2. Eigenvectors for eigenvalue c1 = 3.

Note that c1 = 3 gives a set with infinite number of eigenvectors. For the other eigenvalue, c2 = 14, we obtain


Figure 3. Eigenvectors for eigenvalue c2 = 14.

In addition, it is confirmed that |c1|*|c2| = |3|*|14| = |42| = |detA|.

As shown in Figure 4, plotting these vectors confirms that eigenvectors corresponding to different eigenvalues are linearly independent of one another. Note that each eigenvalue produces an infinite set of eigenvectors, all being multiples of a normalized vector. So, instead of plotting candidate eigenvectors for a given eigenvalue, one could simply represent an entire set by its normalized eigenvector. This is done by rescaling coordinates; in this case, by taking coordinate ratios. In our example, the coordinates of these normalized eigenvectors are:

1. (0.5, -1) for c1 = 3.
2. (1, 0.2) for c2 = 14.


Figure 4. Eigenvectors for different eigenvalues are linearly independent.

Mathematicians love to normalize eigenvectors in terms of their Euclidean length (L), so that all vectors are unit length. To illustrate, in the preceding example the coordinates of the two eigenvectors are (0.5, -1) and (1, 0.2). Their lengths are

for c1 = 3: L = [0.5^2 + (-1)^2]^(1/2) = 1.12
for c2 = 14: L = [1^2 + 0.2^2]^(1/2) = 1.02

Their new coordinates (ignoring rounding errors) are

for c1 = 3: (0.5/1.12, -1/1.12) = (0.4, -0.9)
for c2 = 14: (1/1.02, 0.20/1.02) = (1, 0.2)

You can do the same and normalize eigenvectors to your heart's content, but it is time consuming (and boring). Fortunately, if you use software packages these will return unit eigenvectors for you by default.

How about obtaining eigenvalues from eigenvectors?

Computing Eigenvalues from Eigenvectors


This is a lot easier to do. First we rearrange Equation 4. Since I*X = X we can write the general expression

Equation 5: A*X = c*X

Now to illustrate calculations let's use the example given by Professor C.J. (Keith) van Rijsbergen in chapter 4, page 58 of his great book The Geometry of Information Retrieval (3), which we have reviewed already.

Figure 5. Eigenvalue obtained from an eigenvector.

This result can be confirmed by simply computing the determinant of A and calculating the latent roots. This should give two latent roots or eigenvalues satisfying c^2 = 4, i.e., c = +/- 2. That is, one eigenvalue must be c1 = +2 and the other must be c2 = -2. This also confirms that c1 + c2 = trace of A, which in this case is zero.

An Alternate Method: Rayleigh Quotients

An alternate method for computing eigenvalues from eigenvectors consists of calculating the so-called Rayleigh Quotient,

Rayleigh Quotient = (X^T*A*X)/(X^T*X)

where X^T is the transpose of X.

For the example given in Figure 5, X^T*A*X = 36 and X^T*X = 18; hence, 36/18 = 2.

Rayleigh Quotients give you eigenvalues in a straightforward manner. You might want to use this method instead of inspection, or as a double-checking method. You can also use this in combination with other iterative methods like the Power Method.
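In code the Rayleigh Quotient is a one-liner. The sketch below (Python/NumPy) applies it to the assumed 2×2 matrix used earlier, with an eigenvector from the c2 = 14 set:

    import numpy as np

    def rayleigh_quotient(A, x):
        # Rayleigh Quotient (x^T A x) / (x^T x) for a candidate eigenvector x.
        return (x @ A @ x) / (x @ x)

    A = np.array([[13.0, 5.0],
                  [2.0, 4.0]])        # assumed example matrix from earlier
    x = np.array([1.0, 0.2])          # an eigenvector from the c2 = 14 set
    print(rayleigh_quotient(A, x))    # 14.0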


The Power Method (Vector Iteration)

Eigenvalues can be ordered in terms of their absolute values to find the dominant or largest eigenvalue of a matrix. Thus, if two distinct hypothetical matrices have the following sets of eigenvalues:

- 5, 8, -7: then |8| > |-7| > |5| and 8 is the dominant eigenvalue.
- 0.2, -1, 1: then |1| = |-1| > |0.2| and since |1| = |-1| there is no dominant eigenvalue.

One of the simplest methods for finding the largest eigenvalue and eigenvector of a matrix is the Power Method, also called the Vector Iteration Method. The method fails if there is no dominant eigenvalue.

In its basic form the Power Method is applied as follows:

1. Assign to the candidate matrix an arbitrary starting vector with at least one element being nonzero.
2. Compute a new vector by multiplying the matrix by this vector.
3. Normalize the new vector, where the normalization scalar is taken as an initial eigenvalue estimate.
4. Multiply the original matrix by the normalized vector to calculate a new vector.
5. Normalize this vector, where the normalization scalar is taken as a new eigenvalue estimate.
6. Repeat the entire process until the absolute relative error between successive eigenvalue estimates satisfies an arbitrary tolerance (threshold) value.
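A minimal implementation of these six steps might look as follows (Python/NumPy; the matrix is the assumed 2×2 example from earlier, and the seed vector and tolerance are arbitrary choices):

    import numpy as np

    def power_method(A, x0, tol=1e-10, max_iter=1000):
        # Vector iteration: returns the dominant eigenvalue (by magnitude) and eigenvector.
        x = x0 / np.max(np.abs(x0))          # step 1: arbitrary nonzero seed
        c_old = np.inf
        for _ in range(max_iter):
            y = A @ x                        # steps 2 and 4: multiply by the matrix
            c = np.max(np.abs(y))            # steps 3 and 5: normalization scalar = eigenvalue estimate
            x = y / c
            if abs(c - c_old) / c < tol:     # step 6: relative error vs. tolerance
                break
            c_old = c
        return c, x

    A = np.array([[13.0, 5.0],
                  [2.0, 4.0]])               # assumed matrix from the earlier figures
    c, x = power_method(A, np.array([1.0, 0.0]))
    print(c, x)                              # ~14 and a vector proportional to (1, 0.2)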

It cannot get any easier than this. Let's take a look at a simple example.


Figure 6. Power Method for finding an eigenvector with the largest eigenvalue.

What we have done here is repeatedly apply a matrix to an arbitrarily chosen starting vector. The result converges nicely to the largest eigenvalue of the matrix and its eigenvector; i.e.

Equation 6: A^k*Xi = ci^k*Xi

Figure 7 provides a visual representation of the iteration process obtained through the Power Method for the matrix given in Figure 3. As expected, for its largest eigenvalue the iterated vector converges to an eigenvector of relative coordinates (1, 0.20).


Figure 7. Visual representation of vector iteration.

It can be demonstrated that choosing an initial vector whose first element is 1 and all others are zero produces, in the next iteration step, a vector whose elements are the first column of the matrix. Thus, one could simply choose the first column of a matrix as an initial seed.

Whether or not you want to try a matrix column as an initial seed, keep in mind that the rate of convergence of the power method depends on the nature of the eigenvalues. For closely spaced eigenvalues, the rate of convergence can be slow. Several methods for improving the rate of convergence have been proposed (Shifted Iteration, Shifted Inverse Iteration or transformation methods). I will not discuss these at this time.

How about calculating the second largest eigenvalue of a matrix?

The Deflation Method

There are different methods for finding subsequent eigenvalues of a matrix. I will discuss only one of these: The Deflation Method. Deflation is a straightforward approach. Essentially, this is what we do:

1. First, we use the Power Method to find the largest eigenvalue and eigenvector of matrix A.
2. Multiply the largest eigenvector (normalized to unit length) by its transpose and then by the largest eigenvalue. This produces the matrix Z* = c*X*X^T.
3. Compute a new matrix A* = A - Z* = A - c*X*X^T.
4. Apply the Power Method to A* to compute its largest eigenvalue. This in turn should be the second largest eigenvalue of the initial matrix A.
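In code, deflation is a thin layer on top of the power method. The sketch below (Python/NumPy) repeats a compact power_method helper so it is self-contained and again uses the assumed 2×2 example matrix; note that the eigenvector is rescaled to unit length before forming c*X*X^T:

    import numpy as np

    def power_method(A, x, tol=1e-10, iters=1000):
        c_old = np.inf
        for _ in range(iters):
            y = A @ x
            c = np.max(np.abs(y))            # normalization scalar = eigenvalue estimate
            x = y / c
            if abs(c - c_old) / c < tol:
                break
            c_old = c
        return c, x

    A = np.array([[13.0, 5.0],
                  [2.0, 4.0]])               # assumed matrix from the earlier figures

    # Steps 1-2: dominant pair, with the eigenvector rescaled to unit length.
    c1, x1 = power_method(A, np.array([1.0, 0.0]))
    x1 = x1 / np.linalg.norm(x1)

    # Steps 2-3: Z* = c*X*X^T and the deflated matrix A* = A - Z*.
    A_star = A - c1 * np.outer(x1, x1)

    # Step 4: the Power Method applied to A* yields the second largest eigenvalue of A.
    c2, _ = power_method(A_star, np.array([1.0, 0.0]))
    print(c1, c2)                            # ~14 and ~3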

Figure 8 shows deflation in action for the example given in Figures 1 and 2. After a few iterations the method converges smoothly to the second largest eigenvalue of the matrix. Neat!


Figure 8. Finding the second largest eigenvalue with the Deflation Method.

Note. We want to thank Mr. William Cotton for pointing out an error in the original version of this figure, which was then compounded in the calculations. These have been corrected since then. After the corrections, deflation still reaches the correct second eigenvalue of c = 3. Results can be double-checked using Rayleigh Quotients.

We can use deflation to find subsequent eigenvector-eigenvalue pairs, but there is a point wherein rounding error reduces the accuracy below acceptable limits. For this reason other methods, like Jacobi's Method, are preferred when one needs to compute many or all eigenvalues of a matrix.

Why should we care about all this?

Armed with this knowledge, you should be able to better understand articles that discuss link models like PageRank, their advantages and limitations, and when these succeed or fail and why. The assumption behind these models is that surfing the web by jumping from link to link is like a random walk describing a Markov chain process over a set of linked web pages.

The matrix is considered the transition probability matrix of the Markov chain, with elements strictly between zero and one. For such matrices the Perron-Frobenius Theorem tells us that the largest eigenvalue of the matrix is equal to one (c = 1) and that the corresponding eigenvector, which satisfies the equation

Equation 7: A*X = X

does exist and is the principal eigenvector (state vector) of the Markov chain, with the elements of X being the PageRanks. Thus, according to theory, iteration should enable one to compute the largest eigenvalue and this principal eigenvector, whose elements are the PageRank scores of the individual pages.
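As a toy illustration of Equation 7 (Python/NumPy; the tiny three-page transition matrix below is invented for illustration and is not taken from any real link data), vector iteration on a column-stochastic matrix converges to a vector satisfying A*X = X:

    import numpy as np

    # Hypothetical 3-page web; entry A[i, j] is the probability of jumping
    # from page j to page i, so every column sums to 1 (column-stochastic).
    A = np.array([[0.0, 0.5, 0.3],
                  [0.4, 0.0, 0.7],
                  [0.6, 0.5, 0.0]])

    x = np.full(3, 1.0 / 3.0)          # start from a uniform distribution
    for _ in range(100):               # vector iteration (power method)
        x = A @ x
        x = x / x.sum()                # guard against round-off drift

    print(np.allclose(A @ x, x))       # True: dominant eigenvalue is 1, A*X = X
    print(x)                           # principal eigenvector: PageRank-style scores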

Beware of Link Model Speculators

If you are interested in reading how PageRank is computed, stay away from speculators, especially search engine marketers. It is hard to find accurate explanations in SEO or SEM forums or from those that sell link-based services. I suggest instead that you read university research articles from those who have conducted serious research work on link graphs and PageRank-based models. Great explanations are all over the place. However, some of these are derivative work and might not reflect how Google actually implements PageRank these days (only those at Google know or should know this, or whether PageRank has been phased out for something better). Still, these research papers are based on experimentation and their results are verifiable.

There is a scientific paper I would like readers to at least consider: Link Analysis, Eigenvectors and Stability, from Ng, Zheng and Jordan of the University of California, Berkeley (5). In this paper the authors use many of the topics described herein to explain the HITS and PageRank models. Regarding the latter they write:


Figure 9. PageRank explanation, according to Ng, Zheng and Jordan from University of California, Berkeley

Note that the last equation in Figure 9 is of the form A*X = X as in Equation 7; that is, p is the principal eigenvector (p = X) and can be obtained through iterations.

After completing this 3-part tutorial you should be able to grasp the gist of this paper. The group even made an interesting connection between HITS and LSI (latent semantic indexing).

If you are a student and are looking for a good term paper on Perron-Frobenius Theory and PageRank computations, I recommend the term paper by Jacob Miles Prystowsky and Levi Gill, Calculating Web Page Authority Using the PageRank Algorithm (6). This paper discusses PageRank and some how-to calculations involving the Power Method we have described.

How many iterations are required to compute PageRank values? Only Google knows. According to this Perron-Frobenius review from Professor Stephen Boyd of Stanford (7), the original paper on Google claims that for 24 million pages 50 iterations were required. A lot of things have changed since then, including methods for improving PageRank and new flaws discovered in this and similar link models. These flaws have been the result of the commercial nature of the Web. Not surprisingly, models that work well under controlled conditions and free from noise often fail miserably when transferred to a noisy environment. These topics will be discussed in detail in upcoming articles.

Meanwhile, if you are still thinking that the entire numerical apparatus validates the notion that on the Web links can be equated to votes of citation importance, or that the treatment validates the link citation-literature citation analogy a la Eugene Garfield's Impact Factors, think again. This has been one of the biggest fallacies around, promoted by many link spammers, a few IRs, and several search engine marketers with vested interests.

Literature citation and Impact Factors are driven by editorial policies and peer review. On the Web anyone can add/remove/exchange links at any time for any reason whatsoever. Anyone can buy/sell/trade links for any sort of vested interest or overwrite links at will. In such a noisy environment, far from the controlled conditions observed in a computer lab, peer review and citation policies are almost absent or at best contaminated by commercialization. Evidently, under such circumstances the link citation-literature citation analogy, or the notion that a link is a vote of citation importance for the content of a document, cannot be sustained.


Tutorial Review

1. Prove that a scalar matrix Z can be obtained by multiplying an identity matrix I by a scalar c; i.e., Z = c*I.

2. Prove that subtracting c*I from a regular matrix A is equivalent to subtracting the scalar c from the main diagonal entries of A.

3. Given the following matrix,

Prove that these are indeed the three eigenvalues of the matrix. Calculate the corresponding eigenvectors.

4. Use the Power Method to calculate the largest eigenvalue of the matrix given in Exercise 3.
5. Use the Deflation Method to calculate the second largest eigenvalue of the matrix given in Exercise 3.

http://www.miislita.com/information-retrieval-tutorial/matrix-tutorial-3-eigenvalues-eigenvectors.html