SC - Lesson 8 - Eigenvalues, Eigenvectors, Spectral Decomposition, QR Algorithm and Power Method


Checklist

Domande, Keyword e Vocabulary

  • Eigenvalue
  • Eigenvector
  • Eigenvalues, Eigenvectors and complex numbers
  • Physical meaning of an eigenvector
  • Complex eigenvalues and eigenvector
  • Characteristic Polynomial
  • Null Space
  • geometric and algebraic multiplicity
  • Singular matrix
  • Spectral decomposition
  • Diagonalization of a matrix
  • Geometrical Meaning of Spectral Decomposition for symmetric matrices
  • What can we say about the eigenvalues of an orthogonal matrix?
  • What can we say about the eigenvectors of an orthogonal matrix Q?
  • Spectrum of a matrix
  • QR Algorithm
  • Power Method

SC Notes - Lesson 8

Eigenvalues and eigenvectors (autovalori e autovettori)

Suppose that I have a square matrix A. An eigenvalue λ and an eigenvector x ≠ 0 satisfy the equation:

A x = λ x

If you multiply both sides by any number α on the left, the equality is preserved: A (αx) = λ (αx).

So if x is an eigenvector of A, then αx (for any α ≠ 0) is also an eigenvector of A corresponding to the same eigenvalue λ.

An eigenvector should be understood as a specific direction in space rather than merely a vector.

Physical interpretation of an eigenvector and eigenvalue: When a matrix is multiplied by a vector, the result is another vector. However, if the vector lies along the direction of an eigenvector, the multiplication will not alter the vector’s direction; it will only scale it by a factor, denoted by the eigenvalue λ.

Eigenvectors indicate particular directions along which the action of the matrix leaves the direction unchanged, modifying only the magnitude by a factor determined by the corresponding eigenvalue.
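This direction-preserving behaviour can be checked numerically. A minimal NumPy sketch (the 2×2 matrix here is just an arbitrary example):

```python
import numpy as np

# An example matrix (chosen arbitrarily for illustration).
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# eig returns the eigenvalues and the eigenvectors (as unit-norm columns).
eigvals, eigvecs = np.linalg.eig(A)

# Take the first eigenpair: A @ v points along the same direction as v,
# only scaled by the eigenvalue lam.
lam, v = eigvals[0], eigvecs[:, 0]
print(np.allclose(A @ v, lam * v))
```

Multiplying any other (non-eigen) vector by A would change its direction as well as its length.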

Eigenvalues, Eigenvectors and complex numbers

It’s worth noting that an eigenvalue can be a complex number.

If A is real and λ is a complex eigenvalue, then its complex conjugate is also an eigenvalue (complex eigenvalues come in conjugate pairs). Similarly, eigenvectors can be complex, and their conjugate vectors are also eigenvectors.

Rewriting the equation

The equation A x = λ x can then be rewritten as (A − λI) x = 0. For a nonzero solution x to exist, the eigenvalues must make the matrix A − λI singular. But if a matrix is singular, its determinant is zero, and then the eigenvalues must satisfy:

det(A − λI) = 0

The determinant of this matrix is a polynomial of degree n in the variable λ, called the characteristic polynomial. A way to compute the eigenvalues is to write the characteristic polynomial associated with the matrix and find the zeros of the polynomial.

By the fundamental theorem of algebra, a polynomial of degree n has n roots (counting multiplicity). Therefore det(A − λI) = 0 has n solutions (eigenvalues = solutions).
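The equivalence between the roots of the characteristic polynomial and the eigenvalues can be verified in NumPy (the matrix is an arbitrary example):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# np.poly(A) gives the coefficients of det(A - lambda*I);
# for a 2x2 matrix: lambda^2 - trace(A)*lambda + det(A).
coeffs = np.poly(A)
roots = np.roots(coeffs)          # zeros of the characteristic polynomial

eigvals = np.linalg.eigvals(A)    # eigenvalues computed directly
print(np.allclose(np.sort_complex(roots), np.sort_complex(eigvals)))
```

In practice, eigenvalue solvers do not compute the characteristic polynomial (finding polynomial roots is numerically fragile); they use iterative methods such as the QR algorithm described later.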

Algebraic Multiplicity

An eigenvalue that is repeated k times is called an eigenvalue of algebraic multiplicity k.

Largest eigenvalue

The largest eigenvalue (in magnitude) and its associated eigenvector are important concepts that will be used later (see the Power Method).

Norm of an eigenvector in matlab

The eigenvectors computed by MATLAB are normalized so that they always have norm one.

Finding eigenvalues

To find the eigenvectors, once an eigenvalue λᵢ is known, I solve the homogeneous system:

(A − λᵢ I) x = 0

I want the null space of A − λᵢ I.

I have to solve n homogeneous linear systems, and the solutions of each one form a basis for the corresponding null space. If you consider λ₁, you get one matrix, one null space and one solution, and so on up to λₙ.
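The null-space computation can be sketched with the SVD (a standard way to get a numerical null-space basis; the diagonal matrix here is a deliberately simple example):

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [0.0, 3.0]])
lam = 2.0  # one eigenvalue of A

# Eigenvectors for lam span the null space of (A - lam*I).
# Right singular vectors with (near-)zero singular value span that null space.
M = A - lam * np.eye(2)
U, s, Vt = np.linalg.svd(M)
null_basis = Vt[s < 1e-10]          # rows spanning the null space
v = null_basis[0]
print(np.allclose(A @ v, lam * v))  # v is indeed an eigenvector
```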

In general, if you pick a random matrix, there is a high probability that some eigenvalues/eigenvectors are complex numbers.

Linearly independent (for two vectors) means that they do not have the same direction.

Geometric Multiplicity

Several linearly independent eigenvectors can correspond to an eigenvalue with algebraic multiplicity greater than 1. The number of such eigenvectors is called the geometric multiplicity.

Property: the geometric multiplicity is ≤ the algebraic multiplicity.

Suppose that you have an eigenvalue with algebraic multiplicity 2. The same eigenvalue may correspond to two linearly independent eigenvectors (two different directions), or only to one direction.

Opposite eigenvectors (x and −x) count as the same eigenvector, because they define the same direction.

Singular matrix and Eigenvalues

In a singular matrix, at least one eigenvalue must be zero. A matrix is singular when its determinant is zero. Another characteristic of these matrices is that the columns are linearly dependent.

Property: Eigenvalues can be used to compute the determinant. The determinant can be computed as the product of the eigenvalues, so if a singular matrix has determinant zero, at least one of its eigenvalues must be zero.

“Singular matrices are bad matrices” - cit prof

The rank of a singular (diagonalizable) matrix will be n − 1 if it has one eigenvalue equal to zero, n − 2 if it has two eigenvalues equal to zero, and so on. This will also be true for the singular value decomposition, so remember this part.

Does this mean that there is a relationship between the number of eigenvalues and the rank of a matrix? Does this only apply to singular matrices?

Answer: The answer I came to is: if by “number” we include repeated eigenvalues and those equal to zero, then the number is always the same and equals the number of columns. If we exclude the zeros, then yes, the number of eigenvalues different from zero is equal to the rank of the matrix (at least for diagonalizable matrices).

Regarding the second question: no, this also applies to non-singular matrices. For non-singular matrices, since the columns are linearly independent, the rank is maximum, the determinant is non-zero, and therefore there are no eigenvalues equal to zero. As a result, the number of non-zero eigenvalues is equal to the rank of the matrix.
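A quick numerical check of both properties, on a deliberately rank-deficient symmetric (hence diagonalizable) example matrix:

```python
import numpy as np

# A rank-1 singular 2x2 matrix: the second column is twice the first.
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])

eigvals = np.linalg.eigvals(A)
rank = np.linalg.matrix_rank(A)

# Number of eigenvalues different from zero (up to roundoff).
nonzero = np.sum(np.abs(eigvals) > 1e-10)
print(rank, nonzero)  # equal, since A is diagonalizable

# The determinant is the product of the eigenvalues.
print(np.isclose(np.prod(eigvals), np.linalg.det(A)))
```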

Trick to generate a symmetric matrix in MATLAB: first you generate a random matrix S, then you compute S = S'*S.

Spectral decomposition - eigendecomposition

Let A be an n×n matrix. To be eigendecomposed:

  • A must be a square matrix
  • A must have n linearly independent eigenvectors (i.e. the matrix of eigenvectors has maximum rank)

Why must A be square? The equation A x = λ x can be solved only for square matrices. For non-square matrices you have to consider “singular values” (see the next lesson, Singular Value Decomposition).

We can decompose A into A = X Λ X⁻¹, where

  • X has the eigenvectors for columns
  • Λ (Lambda) is a diagonal matrix with the eigenvalues on the diagonal

The defining relation is A X = X Λ. Since X is a non-singular matrix, it admits the inverse matrix X⁻¹, so you can multiply on the right by X⁻¹ to obtain:

A = X Λ X⁻¹

Equivalently, X⁻¹ A X = Λ: this is the concept called diagonalization of a matrix. This equation also shows why it’s a decomposition of the matrix A.
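The decomposition can be verified directly in NumPy (again with an arbitrary example matrix that has distinct eigenvalues):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

eigvals, X = np.linalg.eig(A)   # X: eigenvectors as columns
L = np.diag(eigvals)            # Lambda: eigenvalues on the diagonal

# A X = X Lambda  ->  A = X Lambda X^{-1}
A_rebuilt = X @ L @ np.linalg.inv(X)
print(np.allclose(A, A_rebuilt))
```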

Spectrum of a matrix

The spectrum of a matrix is the set of its eigenvalues.

Spectral decomposition for symmetric matrices

Moreover, for symmetric matrices we can write: A = Q Λ Qᵀ. We also have that Λ = Qᵀ A Q, because by multiplying both sides by Qᵀ and Q I get this equation.

Clarification: I take a symmetric matrix A, whose elements above the diagonal are equal to the elements below the diagonal (A = Aᵀ). This type of matrix has a special property: the eigenvectors (and thus the matrix formed by the eigenvectors) are orthogonal. So, I write Q as the matrix of eigenvectors of A and rewrite exactly the same equation as the one above for the spectral decomposition: A Q = Q Λ.

I apply the same property as before (diagonalization) by multiplying both sides on the right by Q⁻¹. I remember that for orthogonal matrices the transpose is equal to the inverse (Q⁻¹ = Qᵀ), so we get:

A = Q Λ Qᵀ

Simplifying Qᵀ Q = I and Q Qᵀ = I, we also obtain:

Λ = Qᵀ A Q

The geometrical meaning of this is that there is always a rotation that can transform the matrix into a diagonal matrix.
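A NumPy check of the symmetric case, using the S = S'*S trick mentioned earlier to build a symmetric matrix (the random seed is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((3, 3))
S = B.T @ B                     # S'*S trick: S is symmetric

eigvals, Q = np.linalg.eigh(S)  # eigh is the solver for symmetric matrices
L = np.diag(eigvals)

print(np.allclose(Q.T @ Q, np.eye(3)))  # eigenvectors are orthonormal
print(np.allclose(S, Q @ L @ Q.T))      # S = Q Lambda Q^T
```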

What can we say about the eigenvalues of an orthogonal matrix Q?

Consider any orthogonal matrix Q: what can we say about its eigenvalues? Remember that an orthogonal matrix is important when you apply it to a vector, because the vector that you obtain has the same length, so it’s a rotation.

When you multiply an orthogonal matrix by a vector, by the properties of the orthogonal matrix ‖Q x‖ = ‖x‖: what changes is the direction but not the “length” (magnitude) of the vector. Now, how does this influence the equation Q x = λ x? Taking norms on both sides gives ‖x‖ = |λ| ‖x‖, so |λ| must be one.

Since λ can be a complex number, you have to use the modulus: |λ| = 1, i.e. λ is a complex number of length one. We can verify it in MATLAB with the following code:

A=randi([-10,10],3,3);
[Q,R]=qr(A);    % Q is orthogonal
[X,L]=eig(Q);
abs(diag(L))    % must all be one

L is the diagonal matrix that has all the eigenvalues on its diagonal.

This answers the first question.

Property: The eigenvalues of an orthogonal matrix have modulus 1.

What can be said about the eigenvectors of an orthogonal matrix Q?

The eigenvectors form a basis of the space. If the matrix Q is also symmetric, then its eigenvectors are orthogonal.

We can verify it in MATLAB by computing X'*X (for orthonormal eigenvectors the result is the identity matrix).

Exercise: analyze the eigenvalues and eigenvectors of a 3D rotation matrix

Ask ChatGPT to write MATLAB code for computing the 3-D rotation matrix with respect to the vertical axis.

Then compute and analyze its eigenvalues and eigenvectors.

The vertical axis doesn’t change: this corresponds to an eigenvalue of value 1.

The eigenvalues are: one and two complex numbers (complex conjugates of each other).

theta = pi/2;   % 1.570796326794897

R = [cos(theta), 0, sin(theta);
     0, 1, 0;
     -sin(theta), 0, cos(theta)];
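The same check in NumPy, for the rotation about the vertical axis defined in the MATLAB snippet above: the spectrum is {1, e^{iθ}, e^{−iθ}}, and every eigenvalue has modulus one.

```python
import numpy as np

theta = np.pi / 2
# Rotation about the vertical (y) axis.
R = np.array([[ np.cos(theta), 0.0, np.sin(theta)],
              [ 0.0,           1.0, 0.0          ],
              [-np.sin(theta), 0.0, np.cos(theta)]])

eigvals, eigvecs = np.linalg.eig(R)

# One eigenvalue is 1 (the rotation axis direction); the other two are
# the complex conjugate pair exp(+i*theta) and exp(-i*theta).
print(np.any(np.isclose(eigvals, 1.0)))
print(np.allclose(np.abs(eigvals), 1.0))  # all have modulus one
```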

Particular Case: Powers of a matrix and eigenvalues/eigenvectors

Let p be a positive integer.
We can decompose A into A = X Λ X⁻¹ as shown before, so A^p = X Λ^p X⁻¹. Computing Λ^p is straightforward because Λ is a diagonal matrix: it’s enough to raise each element on the diagonal to the power p. A^p has the same eigenvectors as A, and the same holds for A² and A³. The eigenvalues of A^p are the same eigenvalues of A, but each raised to the power p.
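A numerical check of A^p = X Λ^p X⁻¹ (the matrix and the exponent are arbitrary examples):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
p = 3

eigvals, X = np.linalg.eig(A)
Lp = np.diag(eigvals ** p)      # raise each diagonal entry to the power p

# A^p = X Lambda^p X^{-1}: same eigenvectors, eigenvalues raised to p.
Ap = X @ Lp @ np.linalg.inv(X)
print(np.allclose(Ap, np.linalg.matrix_power(A, p)))
```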

The basic idea behind the QR-algorithm for computing eigenvalues

The QR algorithm is different from the QR factorization; it actually uses the QR factorization.

Suppose you start with A = A₀ = Q₀ R₀, where Q₀ R₀ is the result of the first QR factorization. The algorithm performs multiple iterations. If you reverse the order of the multiplication and compute A₁ = R₀ Q₀, you obtain a new matrix with the same eigenvalues as A₀ (the two matrices are similar). Then we apply the QR factorization again to A₁ = Q₁ R₁, then compute A₂ = R₁ Q₁.

And so on…

As you continue this iterative process, the matrices Aₖ become increasingly upper triangular. Upper triangular matrices have their eigenvalues on the diagonal. Eventually, at some iteration step k, the matrix Aₖ is nearly upper triangular, and its diagonal elements approximate the eigenvalues of the original matrix A.

These operations are very fast and numerically stable: orthogonal transformations do not amplify roundoff errors.
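The basic (unshifted) iteration can be sketched in a few lines of NumPy. The matrix is an arbitrary symmetric example, chosen so that the eigenvalues are real and the convergence is clean; production solvers add shifts and a Hessenberg reduction first.

```python
import numpy as np

A = np.array([[5.0, 2.0, 1.0],
              [2.0, 4.0, 0.0],
              [1.0, 0.0, 3.0]])

Ak = A.copy()
for _ in range(200):
    Q, R = np.linalg.qr(Ak)
    Ak = R @ Q                  # A_{k+1} = R_k Q_k, similar to A_k

approx = np.sort(np.diag(Ak))   # diagonal approximates the eigenvalues
exact = np.sort(np.linalg.eigvals(A))
print(np.allclose(approx, exact, atol=1e-6))
```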

Power Method

Let A be a square matrix, and let X and Λ be the two matrices from the spectral decomposition of A. The power method is defined by the iteration:

x_{k+1} = A x_k

A is fixed. Starting with a randomly chosen vector x₀, multiply it by A to get a new vector x₁. Repeating this process iteratively produces a series of vectors.

This sequence ultimately converges (in direction) to the dominant eigenvector of A (the eigenvector corresponding to the largest eigenvalue in magnitude).

To be strict, the basic power method can be easily extended to converge also to the dominant eigenvalue, by computing:

λ ≈ (x_kᵀ A x_k) / (x_kᵀ x_k)

In this way you can approximate the largest eigenvalue of A.
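A minimal power method sketch (arbitrary example matrix; the iterate is re-normalized at each step to avoid overflow, which does not change its direction):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

rng = np.random.default_rng(0)
x = rng.standard_normal(2)      # random starting vector

for _ in range(100):
    x = A @ x
    x = x / np.linalg.norm(x)   # normalize to keep the iterate bounded

# Rayleigh quotient: approximates the dominant eigenvalue.
lam = (x @ A @ x) / (x @ x)

dominant = max(np.linalg.eigvals(A), key=abs)
print(np.isclose(lam, dominant))
```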

Quadratic Form

The previous formula x_kᵀ A x_k represents a quadratic form in x defined by the matrix A. This is commonly encountered in optimization problems and in defining quadratic functions.

Dividing by x_kᵀ x_k is like dividing by ‖x_k‖², so it’s a normalization factor.

Application: Smoothing of random polygon and Stochastic Matrix

We generate a polygon with n random vertices in the plane and center the polygon at the origin. Then we normalize the vectors of abscissas and ordinates. Then we compute the average of each pair of consecutive abscissas and ordinates; this generates a new set of vertices that form a smoother polygon.

At each iteration the polygon becomes smoother and smoother, and its area shrinks around the origin. The limit is the zero vector, that is, the origin.

The previous actions can be described as applying the power method to the matrix M of the forward averages.

The matrix M is non-negative and stochastic with respect to the columns (column-stochastic).

A matrix is called stochastic if the sum of the entries in each column is equal to 1, and all the entries are non-negative.

We note that M has one dominant eigenvalue equal to 1 (with algebraic multiplicity 1) and that the associated eigenvector has constant components. This eigenvector can be seen as the limit of the sequence of vectors generated by the power method.

The smoothing iterations correspond to applying the power method twice: the first application acts on the vector of the abscissas and the second on the vector of the ordinates.
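A sketch of the averaging matrix and its spectral properties. Here M is built so that new vertex i is the mean of vertices i and i+1 (cyclically), which is one plausible reading of the forward averages; the polygon size n is arbitrary.

```python
import numpy as np

n = 8
# Averaging matrix: new vertex i = (vertex i + vertex i+1) / 2 (cyclic).
M = np.zeros((n, n))
for i in range(n):
    M[i, i] = 0.5
    M[i, (i + 1) % n] = 0.5

# M is non-negative and column-stochastic: each column sums to 1.
print(np.allclose(M.sum(axis=0), 1.0))

# Its dominant eigenvalue is 1, with a constant-component eigenvector.
eigvals, eigvecs = np.linalg.eig(M)
k = np.argmax(np.abs(eigvals))
v = eigvecs[:, k].real
print(np.isclose(eigvals[k], 1.0))
print(np.allclose(v, v[0]))     # all components equal

# One smoothing step on centered random vertices:
rng = np.random.default_rng(1)
x, y = rng.standard_normal(n), rng.standard_normal(n)
x, y = x - x.mean(), y - y.mean()
x_new, y_new = M @ x, M @ y     # the smoothed polygon, shrinking to the origin
```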

The concept of stochastic matrix is also used in the PageRank algorithm.