What happens after calculating the singular value decomposition (SVD) of a matrix? In this topic, you'll explore the main applications of this decomposition. You'll finally get a geometric interpretation of the transpose and easily compute orthonormal bases of spaces closely related to the matrix.
You'll also develop an alternative form of the SVD that allows you to progressively rebuild any matrix and accurately approximate it.
In the following topic, you'll be working with an $m \times n$ matrix $A$ of rank $r$ with SVD given by:

$$A = U \Sigma V^T$$

And, as before, the columns of its pieces are:

$$U = \begin{bmatrix} u_1 & u_2 & \cdots & u_m \end{bmatrix}, \qquad V = \begin{bmatrix} v_1 & v_2 & \cdots & v_n \end{bmatrix}$$

Also, the singular values are ordered non-increasingly: $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_r > 0$.
The geometry of the inverse and the transpose
Here comes the first application of singular values. When all of them are different from zero and $A$ is square, $\Sigma^{-1}$ is the diagonal matrix whose entries are the multiplicative inverses of the singular values:

$$\Sigma^{-1} = \begin{bmatrix} 1/\sigma_1 & & \\ & \ddots & \\ & & 1/\sigma_n \end{bmatrix}$$

As a result, $A$ is invertible and its inverse is simply:

$$A^{-1} = (U \Sigma V^T)^{-1} = V \Sigma^{-1} U^T$$
You already know all of the pieces of this decomposition. Now the problem of finding $A^{-1}$ is reduced to the simpler tasks of getting $\Sigma^{-1}$ and then performing only two matrix products!
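If you want to check this numerically, here's a minimal sketch using numpy (the matrix below is an arbitrary invertible example, not one from this topic):

```python
import numpy as np

# An arbitrary invertible matrix, just for illustration.
A = np.array([[4.0, 1.0, 2.0],
              [1.0, 3.0, 0.0],
              [2.0, 0.0, 5.0]])

# numpy returns the SVD as A = U @ np.diag(s) @ Vt.
U, s, Vt = np.linalg.svd(A)

# Invert the diagonal part entry by entry, then two matrix products.
A_inv = Vt.T @ np.diag(1.0 / s) @ U.T

print(np.allclose(A_inv, np.linalg.inv(A)))  # True
```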
Have you ever noticed that so far, you don't have a geometric interpretation of the transpose? It's time to fill this embarrassing gap. By leveraging the SVD, it immediately becomes clear that:

$$A^T = (U \Sigma V^T)^T = V \Sigma^T U^T$$
Think about the decompositions of both $A^{-1} = V \Sigma^{-1} U^T$ and $A^T = V \Sigma^T U^T$. They look quite similar to each other. Geometrically, they undo the transformations that $A = U \Sigma V^T$ performs, but in the opposite order:

- First, they apply $U^T$ in order to neutralize the effect of $U$.
- Then, they stretch the resulting space.
- Finally, as in the first step, they counteract $V^T$ by applying $V$.

But the second step is the actual difference between $A^{-1}$ and $A^T$. While the former undoes the stretch of $\Sigma$ by applying $\Sigma^{-1}$, the latter simply stretches by the same amount through $\Sigma^T$. Thus, you can roughly think of $A^T$ as rotating in the opposite direction as $A$ but stretching in the same way.
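The same kind of numerical check works for the transpose; this sketch assumes nothing beyond numpy's SVD:

```python
import numpy as np

A = np.random.rand(4, 3)  # any matrix will do
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# A = U @ diag(s) @ Vt, so A^T = V @ diag(s) @ U^T:
# the rotations swap roles while the stretch stays the same.
At = Vt.T @ np.diag(s) @ U.T
print(np.allclose(At, A.T))  # True
```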
The four fundamental spaces
The relationship between $A$, its transpose and the SVD is even deeper than you've just seen. The four fundamental spaces of $A$ are $C(A)$, $N(A)$, $C(A^T)$ and $N(A^T)$. They're connected to each other by several remarkable relations. The most important one says that once you've computed the SVD, you have enough information to reconstruct every space!
- $\{u_1, \dots, u_r\}$ is an orthonormal basis of the column space $C(A)$.
- $\{u_{r+1}, \dots, u_m\}$ is an orthonormal basis of the left null space $N(A^T)$.
- $\{v_1, \dots, v_r\}$ is an orthonormal basis of the row space $C(A^T)$.
- $\{v_{r+1}, \dots, v_n\}$ is an orthonormal basis of the null space $N(A)$.
An example would be a useful illustration. Consider a matrix $A$ whose SVD $A = U \Sigma V^T$ has exactly three positive singular values, so its rank is $r = 3$. This means that:

- $\{u_1, u_2, u_3\}$ is an orthonormal basis for the column space $C(A)$.
- $\{u_4, \dots, u_m\}$ is an orthonormal basis for the left null space $N(A^T)$.
- $\{v_1, v_2, v_3\}$ is an orthonormal basis for the row space $C(A^T)$.
- $\{v_4, \dots, v_n\}$ is an orthonormal basis for the null space $N(A)$.
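As a sketch of how you might extract all four bases in practice, here's a numpy version (the rank-2 matrix below is a made-up illustration, not the example above):

```python
import numpy as np

# A 4x3 matrix of rank 2: the third column is the sum of the first two.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 2.0],
              [2.0, 0.0, 2.0]])

U, s, Vt = np.linalg.svd(A)            # full SVD: U is 4x4, Vt is 3x3
r = int(np.sum(s > 1e-10))             # numerical rank (here r = 2)

col_space  = U[:, :r]                  # orthonormal basis of C(A)
left_null  = U[:, r:]                  # orthonormal basis of N(A^T)
row_space  = Vt[:r, :].T               # orthonormal basis of C(A^T)
null_space = Vt[r:, :].T               # orthonormal basis of N(A)

print(np.allclose(A @ null_space, 0))    # True: A kills N(A)
print(np.allclose(A.T @ left_null, 0))   # True: A^T kills N(A^T)
```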
An alternative form of the SVD
Let's express the SVD in a simpler way, through a sum of simpler matrices that involves the singular vectors:

$$A = \sum_{i=1}^{r} \sigma_i u_i v_i^T = \sigma_1 u_1 v_1^T + \sigma_2 u_2 v_2^T + \cdots + \sigma_r u_r v_r^T$$
Each term in the summation, specifically $\sigma_i u_i v_i^T$, is considered a latent component of the original matrix. This is because each component contributes an incremental piece of the structure hidden within the matrix. Notice that the columns of each latent component are all multiples of $u_i$, hence linearly dependent, so they're rank-1 matrices.
Proof
You can assume that $A = B V^T$ with $B = U \Sigma$ (otherwise, first compute the product between $U$ and $\Sigma$). Then:

$$B = U \Sigma = \begin{bmatrix} \sigma_1 u_1 & \sigma_2 u_2 & \cdots \end{bmatrix}$$

Since $\sigma_i = 0$ for every $i > r$, this implies that:

$$B = \begin{bmatrix} \sigma_1 u_1 & \cdots & \sigma_r u_r & 0 & \cdots & 0 \end{bmatrix}$$

In order to get this alternative form, you'll study the product of $B$ against $V^T$ column by row: the $i$-th column $b_i$ of $B$ only ever multiplies the $i$-th row of $V^T$, which is $v_i^T$. Thus:

$$B V^T = \sum_i b_i v_i^T$$

Now, putting it all together:

$$A = B V^T = \sum_{i=1}^{r} \sigma_i u_i v_i^T$$
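A quick numerical confirmation of the alternative form, summing the rank-1 latent components of a random matrix:

```python
import numpy as np

A = np.random.rand(5, 4)
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Rebuild A term by term as sigma_i * u_i * v_i^T.
A_rebuilt = sum(s[i] * np.outer(U[:, i], Vt[i, :]) for i in range(len(s)))
print(np.allclose(A, A_rebuilt))  # True
```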
Extra: SVD for the linear transformation associated with A
Note that after computing the SVD, you don't have to calculate the value of $A$ at any vector explicitly anymore. The alternative form gives, for every vector $x$:

$$A x = \sum_{i=1}^{r} \sigma_i \langle v_i, x \rangle \, u_i$$
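In code, this means evaluating $A x$ only needs $r$ inner products once the SVD is available; a small numpy sketch:

```python
import numpy as np

A = np.random.rand(5, 4)
x = np.random.rand(4)

U, s, Vt = np.linalg.svd(A, full_matrices=False)
r = int(np.sum(s > 1e-10))

# A x = sum_i sigma_i * <v_i, x> * u_i, skipping the zero terms.
Ax = sum(s[i] * (Vt[i, :] @ x) * U[:, i] for i in range(r))
print(np.allclose(Ax, A @ x))  # True
```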
Truncated SVD
The alternative form of the SVD is the most important source of the applications of this decomposition. The more latent components you add, the closer you get to the matrix. Each of these partial sums is known as a truncated singular value decomposition. For this reason, for every $k \le r$ we define:

$$A_k = \sum_{i=1}^{k} \sigma_i u_i v_i^T$$
The important thing about all of this is that, for all $k \le r$, among all the matrices of rank $k$, $A_k$ is the one that most resembles $A$ (a result known as the Eckart–Young theorem). This is the main reason why SVD is used in real applications. You can interpret it as the SVD arranging $A$ into its “most important” and “least important” pieces. For this reason, the largest singular values describe the broad strokes of $A$, whilst the smallest singular values take care of the finer details.
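Here's a minimal helper for computing $A_k$; the printed Frobenius-norm error shrinks as $k$ grows, reaching (numerically) zero at $k = r$:

```python
import numpy as np

def truncated_svd(A, k):
    """Best rank-k approximation of A: its first k latent components."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

A = np.random.rand(6, 5)
for k in range(1, 6):
    A_k = truncated_svd(A, k)
    print(k, np.linalg.norm(A - A_k))  # monotonically decreasing error
```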
Let's compute the truncated SVD for the matrix from the previous section. Its latent components are $\sigma_1 u_1 v_1^T$, $\sigma_2 u_2 v_2^T$ and $\sigma_3 u_3 v_3^T$. Then, the best approximations (of rank $1$, $2$ and $3$ respectively) for $A$ are $A_1$, $A_2$ and $A_3 = A$.
Image compression
Truncated singular value decomposition often retains a stunningly large level of accuracy even when $k$ is much smaller than $r$. This is because, in real-world matrices, only a minuscule proportion of singular values are large. As a result, $A_k$ serves as an accurate approximation of $A$.
This is particularly useful for image compression. A black and white image can be represented as a matrix with values from 0 to 255, where 0 is full black and 255 is white. As the numbers increase, lighter and lighter shades are obtained. Let's see truncated SVD in action with this cute panda:
This image corresponds to an $m \times n$ matrix $A$. Since every column is nearly unique, the rank of $A$ is $\min(m, n)$, the biggest possible. This implies that there are $\min(m, n)$ latent components. The first singular value is much larger than the rest, and the first latent component is the best rank-$1$ approximation to the image:

Perhaps it's not a good approximation, but note that as it's rank $1$, every row is a multiple of any other one, and the same occurs with the columns. Now look at the approximation with a few more latent components:

It's getting better with only a handful of singular values. But when $k$ grows a little more, the results are amazing:

A modest number of singular values is excellent, and note that this is much less than $r$. As the next singular values are negligible, increasing $k$ further makes the approximation so good that the difference is not even distinguishable anymore:
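If you want to reproduce this experiment, here's how it might look with numpy and Pillow (the file name and the choices of $k$ are placeholders; any grayscale image works):

```python
import numpy as np
from PIL import Image  # Pillow: one convenient way to load an image

# Placeholder path: substitute any image of your own.
img = np.asarray(Image.open("panda.png").convert("L"), dtype=float)

U, s, Vt = np.linalg.svd(img, full_matrices=False)

for k in (1, 10, 50):  # illustrative truncation levels
    img_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
    out = np.clip(img_k, 0, 255).astype(np.uint8)
    Image.fromarray(out).save(f"panda_rank_{k}.png")

# A rank-k copy stores k * (m + n + 1) numbers instead of m * n.
m, n = img.shape
print(f"storage at k = 50: {50 * (m + n + 1) / (m * n):.1%} of the original")
```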
Conclusion
- When every singular value of a square matrix $A$ is positive, the matrix is invertible and $A^{-1} = V \Sigma^{-1} U^T$.
- The geometry of $A^{-1} = V \Sigma^{-1} U^T$ and $A^T = V \Sigma^T U^T$ is closely related to that of $A$.
- The four fundamental spaces of $A$ are $C(A)$, $N(A)$, $C(A^T)$ and $N(A^T)$. The SVD of $A$ gives you an orthonormal basis for every such space.
- The alternative form of the SVD of $A$ is the sum of its latent components: $A = \sum_{i=1}^{r} \sigma_i u_i v_i^T$.
- The best way to approximate an $m \times n$ rank-$r$ matrix $A$ by a rank-$k$ one ($k \le r$) is its truncated SVD $A_k = \sum_{i=1}^{k} \sigma_i u_i v_i^T$.
- The singular values are ordered non-increasingly: $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_r > 0$.