SC - Lezione 11


Checklist

Domande, Keyword e Vocabulary

  • Applications of SVD
  • SVD and Least Squares (full rank)
  • Overdetermined system
  • Pseudoinverse of a matrix with SVD
  • Solving a least square with explicit solution (full rank)
  • Solving a least square with pseudoinverse solution (full rank)
  • Solving a least square with orthogonal projection solution (full rank)
  • Solving the least square (rank deficient)
  • Why ?
  • Sparse solution of least square rank deficient
  • SVD Application in Latent Semantic Analysis

Appunti SC - Lezione 11

Applications of the SVD - Solving an overdetermined linear system WITH FULL RANK

We want to solve Ax = b, with A in R^(m x n) and m > n, but the system is overdetermined: there are more equations than unknowns.

You have already worked through these steps on OneNote; if you don't remember them, you can easily re-derive them by logical reasoning.

We consider the SVD factorization of A such that A = U Sigma V^T, with U in R^(m x m), Sigma in R^(m x n), V in R^(n x n).

Since U is orthogonal (its inverse is U^T), we have ||Ax - b||_2 = ||U^T (Ax - b)||_2 = ||Sigma V^T x - U^T b||_2. The vector U^T b can be partitioned in two smaller vectors such that

U^T b = [c; d], with c in R^n and d in R^(m-n)

and we write:

||Ax - b||_2^2 = ||Sigma V^T x - [c; d]||_2^2

Sigma is rectangular with a diagonal top block, so I can decompose it into

Sigma = [Sigma_1; 0], with Sigma_1 in R^(n x n) diagonal

(see also SVD factorization - economy form).

We also pose y = V^T x and we obtain that:

||Ax - b||_2^2 = ||Sigma_1 y - c||_2^2 + ||d||_2^2

Since Sigma_1 is a diagonal square matrix of rank n, the same as the matrix A, Sigma_1 y = c is a square system that can be solved exactly, i.e. finding a value y such that the first term is zero (the second term ||d||_2 is the residual and cannot be reduced).

Considering the first block only, we have to solve the following square system:

Sigma_1 y = c

If you have a diagonal matrix, what is its inverse? It's the same matrix but with its diagonal elements inverted (i.e., if an element is sigma_i, in the inverse it is 1/sigma_i).

So we consider the inverse diagonal matrix Sigma_1^(-1), composed of the inverses of the singular values of A. We can solve the problem by doing:

y = Sigma_1^(-1) c

Important thing to recall: YOU CANNOT DIVIDE BY A MATRIX

Given y = V^T x -> x = V y

These passages lead us to the solution:

x_ls = V Sigma_1^(-1) c = V Sigma_1^(-1) U_1^T b (where U_1 collects the first n columns of U)

  • x_ls is the least squares solution (since we reduced an overdetermined system to a square system)

Exercise

Compute x_ls in Matlab using the formulation above.
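
A possible sketch of this exercise, written in NumPy instead of Matlab (the matrix A and vector b below are random test data, not from the lecture):

```python
import numpy as np

# Hypothetical overdetermined system: m = 5 equations, n = 3 unknowns.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))   # full column rank with probability 1
b = rng.standard_normal(5)

# Economy-size SVD: A = U1 @ diag(s) @ Vt, U1 (5x3), s (3,), Vt (3x3).
U1, s, Vt = np.linalg.svd(A, full_matrices=False)

# c = U1^T b, then y = Sigma_1^(-1) c (inverse of a diagonal = 1/s),
# finally x = V y.
c = U1.T @ b
y = c / s
x_ls = Vt.T @ y

# Check against the reference least-squares solver.
x_ref, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.allclose(x_ls, x_ref))
```

The division `c / s` is the point of the derivation: inverting a diagonal matrix costs only n scalar divisions, never a matrix inversion.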

SVD factorization and Pseudoinverse of a matrix

Another way to solve this overdetermined system is using the pseudoinverse A^+ = V Sigma_1^(-1) U_1^T; by this we obtain that:

x_ls = A^+ b
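
A minimal NumPy sketch of the pseudoinverse route, on random full-rank test data (variable names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 3))   # overdetermined, full column rank
b = rng.standard_normal(6)

# Economy-size SVD and the pseudoinverse A^+ = V Sigma_1^(-1) U_1^T.
U1, s, Vt = np.linalg.svd(A, full_matrices=False)
A_pinv = Vt.T @ np.diag(1.0 / s) @ U1.T

x_ls = A_pinv @ b

# The hand-built pseudoinverse matches NumPy's pinv.
print(np.allclose(A_pinv, np.linalg.pinv(A)))
```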

Solving the overdetermined system with the orthogonal projection

Another way is to use the orthogonal projection, since we know that the solution x is such that the vector Ax is the orthogonal projection of b on the range of A.

Recall the definition of orthogonal projectors in the context of SVD: the matrices U_1 U_1^T (projector on the range of A) and V V^T (projector on the row space of A).

We want to solve the system using the following definition (x is a solution such that the vector Ax is the orthogonal projection of b on the range of A):

A x = U_1 U_1^T b

Then we apply the SVD (economy form): U_1 Sigma_1 V^T x = U_1 U_1^T b. Then we can delete those U_1 by multiplying both sides by U_1^T:

Sigma_1 V^T x = U_1^T b

Then move Sigma_1 and V^T to the other side:

x = V Sigma_1^(-1) U_1^T b

And notice how this solution is equivalent to the pseudoinverse solution computed before.
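
A quick NumPy check, on random full-rank data, that the projection derivation holds, i.e. that A x really equals U_1 U_1^T b:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((6, 3))   # overdetermined, full column rank
b = rng.standard_normal(6)

U1, s, Vt = np.linalg.svd(A, full_matrices=False)

# x from the projection derivation: Sigma_1 V^T x = U1^T b
# -> x = V Sigma_1^(-1) U1^T b
x = Vt.T @ ((U1.T @ b) / s)

# A @ x is the orthogonal projection of b onto range(A): U1 U1^T b.
proj_b = U1 @ (U1.T @ b)
print(np.allclose(A @ x, proj_b))
```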

SVD and Least Squares (rank deficient)

If the linear system is rank deficient, there are infinitely many solutions. So the question is: which solutions are we interested in? And how do we find one?

The simplest choice is the vector with minimum 2-norm, which is often the one considered in real-world applications like engineering.

In data science, or in problems where the scale is much larger, it's better to take the sparsest solution: a vector with a lot of zeros.

There are infinitely many solutions, but the solution vector with minimal length (2-norm) is unique.

In Matlab it can be computed using the pseudoinverse:

xlsmatlabpinv = pinv(A)*b   

We use the usual reduced SVD, A = U_r Sigma_r V_r^T with r = rank(A), and express the solution through orthogonal projection on the range of A such that:

A x = U_r U_r^T b

(substitute the SVD into A and eliminate U_r), which gives the underdetermined linear system:

Sigma_r V_r^T x = U_r^T b

Since the matrix Sigma_r V_r^T is in R^(r x n) with r < n, this system has infinite solutions.

Notice that we are no longer in the earlier situation, where the linear system was overdetermined. In this case we have an underdetermined system.

I want the solution with minimum norm. So what I do is multiply both sides of the above system by V_r Sigma_r^(-1); we in fact obtain the solution projected on the row space of A:

V_r V_r^T x = V_r Sigma_r^(-1) U_r^T b

I know that if I take the solution vector lying in the row space of A, that is the one with minimum norm. For such a vector, V_r V_r^T x is just x (but which one among the infinite solutions? the one with minimum norm), so:

x_ls = V_r Sigma_r^(-1) U_r^T b

This is the minimum-norm solution because it resides entirely within the span of the right singular vectors corresponding to the non-zero singular values, without any components in the null space of A.
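
A NumPy sketch of the minimum-norm solution for a rank-deficient system (the rank-2 matrix below is built artificially for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
# Rank-deficient A: 6x4 with rank 2 (product of 6x2 and 2x4 factors).
A = rng.standard_normal((6, 2)) @ rng.standard_normal((2, 4))
b = rng.standard_normal(6)

U, s, Vt = np.linalg.svd(A, full_matrices=False)
r = int(np.sum(s > 1e-10 * s[0]))          # numerical rank
Ur, sr, Vrt = U[:, :r], s[:r], Vt[:r, :]   # reduced SVD factors

# Minimum 2-norm least-squares solution: x = V_r Sigma_r^(-1) U_r^T b
x_min = Vrt.T @ ((Ur.T @ b) / sr)

# Same as the pseudoinverse solution pinv(A) @ b.
print(np.allclose(x_min, np.linalg.pinv(A) @ b))
```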

What happens if you use the rank-deficient LS solution provided by the backslash operator in Matlab? We notice that the solution is the sparsest among the infinite number of solutions. We get the same solution if we use the backslash for solving the reduced system Sigma_r V_r^T x = U_r^T b, like this:

xls1 = (Sr*Vr')\(Ur'*b)

The sparse solution is usually the best when you have a very large amount of data.
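
Matlab's backslash returns a basic solution, with at most r nonzero components. A rough NumPy/SciPy emulation (not the lecture's code) uses a column-pivoted QR of the reduced system to choose which r components stay nonzero:

```python
import numpy as np
from scipy.linalg import qr

rng = np.random.default_rng(4)
A = rng.standard_normal((6, 2)) @ rng.standard_normal((2, 4))  # rank 2
b = rng.standard_normal(6)

# Reduced SVD, keeping the r dominant singular triplets.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
r = int(np.sum(s > 1e-10 * s[0]))
Ur, Sr, Vr = U[:, :r], np.diag(s[:r]), Vt[:r, :].T

# Underdetermined system (Sr Vr^T) x = Ur^T b: r equations, n unknowns.
M = Sr @ Vr.T
rhs = Ur.T @ b

# Basic solution: pick r pivot columns of M via column-pivoted QR,
# solve on those columns, set every other component to zero.
Q, R, piv = qr(M, pivoting=True)
x_basic = np.zeros(M.shape[1])
x_basic[piv[:r]] = np.linalg.solve(R[:r, :r], (Q.T @ rhs)[:r])

print(np.allclose(M @ x_basic, rhs))      # solves the reduced system
print(np.count_nonzero(x_basic) <= r)     # sparse: at most r nonzeros
```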

To recap here’s the solution we computed in Matlab:

  • xls1 = (Sr*Vr')\(Ur'*b) -> sparse solution, with the largest number of null components
  • xlsmatlab = A\b
  • xlsmatlabpinv = pinv(A)*b -> solution with minimum 2-norm, with pinv(A) = Vr*inv(Sr)*Ur'
  • xlsm = PseudoinvA*b where PseudoinvA = Vr*inv(Sr)*Ur' -> solution with minimum 2-norm (same as above)

In Matlab we verified that:

  • xlsmatlabpinv and xlsm are the same in Matlab
  • But they differ from the backslash solutions xls1 and xlsmatlab
    • The minimum-norm solution isn't sparse

Note that: if I multiply x_ls by A I get A x_ls = U_r U_r^T b, the orthogonal projection of b on the range of A.