Since $\Sigma_r$ is a diagonal square matrix and $V_r^T$ has rank $n$, the same as the matrix $A$, we obtain a square system that can be solved by finding the value $x$ that makes the residual zero.
Consider the reduced SVD $A = U_r \Sigma_r V_r^T$, in which the factor $\Sigma_r V_r^T$ is square.
We have to solve the following square system:

$$\Sigma_r V_r^T x = U_r^T b$$
If you have a diagonal matrix, what is its inverse? It is the same matrix but with its diagonal elements reciprocated (i.e., if an element is $\sigma_i$, the inverse has $1/\sigma_i$).
So we consider the inverse diagonal matrix $\Sigma_r^{-1}$, composed of the reciprocals of the singular values of $A$.
We can solve the problem by multiplying both sides by $\Sigma_r^{-1}$:

$$V_r^T x = \Sigma_r^{-1} U_r^T b$$
An important thing to recall: YOU CANNOT DIVIDE BY A MATRIX; you multiply by its inverse instead.
Given $Ax = b$, we write $x = A^{-1}b$, never $x = b/A$.
These passages lead us to the solution:

$$x_{LS} = V_r \Sigma_r^{-1} U_r^T b$$
$x_{LS}$ is the least-squares solution (since we reduced an overdetermined system to a square system).
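As a quick numerical sketch of this square-system approach (in NumPy rather than the MATLAB used in these notes; the random $A$ and $b$ are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 3))   # overdetermined, full column rank
b = rng.standard_normal(8)

# Reduced SVD: A = Ur * Sr * Vr', with Ur (8x3), s the singular values, Vt = Vr'
Ur, s, Vt = np.linalg.svd(A, full_matrices=False)

# Square system: Sr * Vr' * x = Ur' * b
x_ls = np.linalg.solve(np.diag(s) @ Vt, Ur.T @ b)

# Same as the reference least-squares solution
x_ref, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.allclose(x_ls, x_ref))  # True
```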
Another way to solve this overdetermined system is through the pseudoinverse:

$$A^+ = V_r \Sigma_r^{-1} U_r^T$$

by which we obtain:

$$x_{LS} = A^+ b$$
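The same construction can be sketched in NumPy (illustrative data; in the notes' MATLAB this is `pinv(A)`):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((10, 4))
b = rng.standard_normal(10)

# Build A+ = Vr * inv(Sr) * Ur' from the reduced SVD
Ur, s, Vt = np.linalg.svd(A, full_matrices=False)
A_pinv = Vt.T @ np.diag(1.0 / s) @ Ur.T

x_ls = A_pinv @ b

# Matches NumPy's built-in pseudoinverse
print(np.allclose(A_pinv, np.linalg.pinv(A)))  # True
```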
Solving the overdetermined system with the orthogonal projection §
Another way is to use the orthogonal projection, since we know that the solution $x$ is such that the vector $Ax$ is the orthogonal projection of $b$ onto the range of $A$.
If the linear system is rank deficient, there are infinitely many solutions.
So the question is: which solutions are we interested in, and how do we find one?
The simplest choice is the vector with minimum 2-norm, which is often used in real-world applications such as engineering.
In data science, or in problems at a much larger scale, it is often better to take the sparsest solution, i.e., a vector with many zero components.
There are infinitely many solutions, but the solution vector with minimal length (2-norm) is unique.
In MATLAB it can be computed using the pseudoinverse:
xlsmatlabpinv = pinv(A)*b
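A small NumPy sketch of the rank-deficient case (the rank-2 matrix here is made up): the pseudoinverse picks out the minimum 2-norm solution, and adding any null-space component yields another solution of strictly larger norm.

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 2)) @ rng.standard_normal((2, 5))  # 4x5, rank 2
b = A @ rng.standard_normal(5)      # consistent: infinitely many solutions

x_min = np.linalg.pinv(A) @ b       # minimum 2-norm solution

# Adding a null-space direction gives another solution with larger norm
U, s, Vt = np.linalg.svd(A)
x_other = x_min + Vt[-1]            # Vt[-1] spans part of the null space of A

print(np.allclose(A @ x_min, b))    # True
print(np.allclose(A @ x_other, b))  # True
print(np.linalg.norm(x_min) < np.linalg.norm(x_other))  # True
```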
We use the usual solution (reduced SVD): $A = U_r \Sigma_r V_r^T$,
and express the solution through the orthogonal projection onto the range of $A$ (substitute the SVD into $Ax = b$ and eliminate $U_r$ by multiplying both sides by $U_r^T$), which gives the underdetermined linear system:

$$\Sigma_r V_r^T x = U_r^T b$$

Since the matrix $\Sigma_r V_r^T$ is in $\mathbb{R}^{k \times n}$ with $k < n$, this system has infinitely many solutions.
Notice that we are no longer in the earlier situation where the linear system was overdetermined: in this case we have an underdetermined system.
I want the solution with minimum norm. So what I do is multiply both sides of the above system by $V_r \Sigma_r^{-1}$; since $V_r V_r^T$ is the orthogonal projector onto the row space of $A$, we in fact obtain the solution projected onto the row space of $A$:

$$V_r V_r^T x = V_r \Sigma_r^{-1} U_r^T b$$

Taking the vector that lies in the row space of $A$ gives exactly the minimum value: among the infinitely many solutions, $x = V_r \Sigma_r^{-1} U_r^T b$ is the one with minimum 2-norm,
because it resides entirely within the span of the right singular vectors corresponding to the non-zero singular values, without any components in the null space of A.
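This row-space property can be verified numerically; here is a NumPy check with an illustrative rank-2 matrix:

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((3, 2)) @ rng.standard_normal((2, 6))  # 3x6, rank 2
b = A @ rng.standard_normal(6)

x_min = np.linalg.pinv(A) @ b
_, _, Vt = np.linalg.svd(A)
V_null = Vt[2:]                     # rows spanning the null space of A

# x_min has no component in the null space of A
print(np.allclose(V_null @ x_min, 0))  # True
```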
What happens if you use the rank-deficient LS solution provided by the backslash operator in MATLAB?
We notice that this solution is the sparsest among the infinitely many solutions.
We get the same solution if we use the backslash for solving the reduced system, like this:
xls1 = (Sr*Vr')\(Ur'*b)
The sparse solution is usually the best choice when you have a very large amount of data.
To recap, here are the solutions we computed in MATLAB:

- `xls1 = (Sr*Vr')\(Ur'*b)` → the sparse solution, with the largest number of null components
- `xlsmatlab = A\b` → also a basic (sparse) solution
- `xlsmatlabpinv = pinv(A)*b` → the minimum 2-norm solution, with $A^+ = V_r \Sigma_r^{-1} U_r^T$
- `xlsm = PseudoinvA*b`, where `PseudoinvA = Vr*inv(Sr)*Ur'` → the minimum 2-norm solution (same as above)
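To mirror this recap outside MATLAB, here is a hedged NumPy comparison of a basic (sparse) solution against the minimum 2-norm one. The column choice `[0, 1]` is an assumption that those two columns are independent, which holds for this made-up example:

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((6, 2)) @ rng.standard_normal((2, 4))  # 6x4, rank 2
b = A @ rng.standard_normal(4)

# Minimum 2-norm solution (what pinv(A)*b gives in MATLAB)
x_min = np.linalg.pinv(A) @ b

# A basic (sparse) solution: solve on 2 independent columns, zeros elsewhere
cols = [0, 1]                       # assumed independent for this example
x_basic = np.zeros(4)
x_basic[cols], *_ = np.linalg.lstsq(A[:, cols], b, rcond=None)

print(np.allclose(A @ x_basic, b))  # True: both are solutions
print(np.linalg.norm(x_min) <= np.linalg.norm(x_basic))  # True
```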