You will finally discover the intimate connection between matrices and linear transformations, which is fundamental to understanding both. With it, you'll be able to easily analyze a system of equations completely with just a couple of calculations!
Now you will begin to reap the fruits of your labor. You'll put to work practically everything you've learned so far. This topic will even be much easier than the previous ones: you have already done all the hard work, it's time to enjoy!
The transformation generated by a matrix
In previous topics you saw that any matrix $A \in M_{m \times n}$ induces a linear transformation, thanks to the properties of the matrix product:

$$T_A : \mathbb{R}^n \to \mathbb{R}^m, \qquad T_A(x) = Ax$$
Furthermore, the null space and the range of $T_A$ tell us several properties of the system $Ax = b$. But let's stop for a minute to improve our notation. Although it's important to refer to the individual entries of the matrix, from now on the main focus will be its columns. Let's denote them as $A_1, A_2, \dots, A_n$ and write the matrix in terms of them as follows:

$$A = \begin{pmatrix} A_1 & A_2 & \cdots & A_n \end{pmatrix}$$
Clearly, the columns are vectors in $\mathbb{R}^m$. Here comes a little surprise. Do you remember that any linear transformation is completely determined by its values on a basis? Let's see what this looks like for $T_A$. Let's only transform the first vector of the canonical basis:

$$T_A(e_1) = Ae_1 = A_1$$
Proof
By the definition of matrix product:

$$(Ae_1)_i = \sum_{j=1}^{n} a_{ij}\,(e_1)_j = a_{i1}, \qquad \text{so} \qquad Ae_1 = \begin{pmatrix} a_{11} \\ a_{21} \\ \vdots \\ a_{m1} \end{pmatrix} = A_1$$
By transforming $e_1$, you recover the first column of $A$! In fact, the same happens with the other vectors of the basis, and this is a key point:

The values of $T_A$ in the canonical basis are the columns of $A$: $T_A(e_i) = A_i$.
As a direct consequence, you can reduce the product $Ax$ to a linear combination of the columns:

$$Ax = A(x_1 e_1 + \cdots + x_n e_n) = x_1 A_1 + x_2 A_2 + \cdots + x_n A_n$$
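If you like to experiment, here's a quick NumPy sketch (the matrix and vectors are made up for illustration) verifying both facts: transforming a canonical basis vector recovers a column, and $Ax$ is the corresponding linear combination of the columns.

```python
import numpy as np

# A made-up 3x2 matrix; its columns are A_1 and A_2.
A = np.array([[1.0, 4.0],
              [2.0, 5.0],
              [3.0, 6.0]])

e1 = np.array([1.0, 0.0])            # first canonical basis vector of R^2
print(np.allclose(A @ e1, A[:, 0]))  # True: A e_1 is the first column

x = np.array([2.0, -1.0])
# A x equals x_1 * A_1 + x_2 * A_2
print(np.allclose(A @ x, x[0] * A[:, 0] + x[1] * A[:, 1]))  # True
```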
The matrix of a transformation
There's something suspicious here. You already know that all the information of an arbitrary linear transformation $T : \mathbb{R}^n \to \mathbb{R}^m$ is contained in its values in the basis:

$$T(x) = T(x_1 e_1 + \cdots + x_n e_n) = x_1 T(e_1) + \cdots + x_n T(e_n)$$
So if you saved those values somewhere, you could rebuild the whole transformation without any problem. But look closely at the last equation... doesn't it look very similar to the last equation of the previous section, where you reduced the product $Ax$ to a linear combination of the columns?
It seems that you can no longer avoid the fact that the place where you must store the values of the transformation is a matrix!
Define

$$A_T = \begin{pmatrix} T(e_1) & T(e_2) & \cdots & T(e_n) \end{pmatrix}$$

$A_T$ is called the matrix associated with $T$. This means that:

$$T(x) = A_T\,x \quad \text{for all } x \in \mathbb{R}^n$$
Thus, to fully know all the possible values of $T$, it is enough to calculate its values in the canonical basis and store them in a matrix!
In summary, you can think of every $m \times n$ matrix as if it were a linear transformation that deforms $\mathbb{R}^n$ into $\mathbb{R}^m$ and, conversely, every linear transformation between these spaces is completely determined by an $m \times n$ matrix.
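To make the recipe concrete, here is a minimal NumPy sketch (the helper name `matrix_of` and the sample transformation are my own, not from the text) that builds the matrix of a linear map by stacking its values on the canonical basis as columns:

```python
import numpy as np

def matrix_of(T, n):
    """Matrix of a linear map T: R^n -> R^m, built column by column
    from the values T(e_1), ..., T(e_n)."""
    return np.column_stack([T(np.eye(n)[:, i]) for i in range(n)])

# A sample linear transformation T(x, y) = (2x, x - y, 3y).
T = lambda v: np.array([2 * v[0], v[0] - v[1], 3 * v[1]])

A_T = matrix_of(T, 2)
print(A_T)                          # columns are T(e_1) and T(e_2)

x = np.array([3.0, -2.0])
print(np.allclose(A_T @ x, T(x)))   # True: T(x) = A_T x
```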
Calculating the matrix of a transformation
Before you take a closer look at the relationship we just discovered, let's get our hands dirty with some examples.
Let's start with the following transformation:

$$T : \mathbb{R}^2 \to \mathbb{R}^3, \qquad T(x, y) = (x,\; y,\; x + y)$$
The first thing you have to do is calculate its values in the canonical basis:

$$T(e_1) = T(1, 0) = (1, 0, 1), \qquad T(e_2) = T(0, 1) = (0, 1, 1)$$
Finally, use these values as the columns of the matrix of $T$:

$$A_T = \begin{pmatrix} 1 & 0 \\ 0 & 1 \\ 1 & 1 \end{pmatrix}$$
Suppose now that you need to solve the system of linear equations $A_T x = b$. The columns of $A_T$ are linearly independent (check it!), so the range of $T$ is a plane in $\mathbb{R}^3$. By the dimension theorem, the null space of $T$ only contains $0$ and this, in turn, means that $T$ is injective. In consequence:
- $A_T x = b$ doesn't always have a solution. For instance, if $b = (0, 0, 1)$ there aren't any solutions.
- When $A_T x = b$ has a solution, it's unique. For example, if $b = (1, 2, 3)$ the unique solution is $x = (1, 2)$.
You've just derived a couple of properties of $T$ and proved that a particular system of equations doesn't always have a solution, just by building the matrix of $T$. Great, don't you think?
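If you want to double-check these conclusions computationally, here is a small NumPy sketch based on the example above; the rank computations confirm the independence of the columns and the (non)existence of solutions.

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])            # matrix of T from the example

print(np.linalg.matrix_rank(A))       # 2: the columns are linearly independent

# b = (0, 0, 1) is not in the range: the augmented matrix has rank 3.
b_bad = np.array([0.0, 0.0, 1.0])
print(np.linalg.matrix_rank(np.column_stack([A, b_bad])))  # 3 -> no solution

# b = (1, 2, 3) lies in the range; least squares recovers the unique solution.
b_good = np.array([1.0, 2.0, 3.0])
x, *_ = np.linalg.lstsq(A, b_good, rcond=None)
print(x, np.allclose(A @ x, b_good))  # [1. 2.] True
```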
Now let's take the following operator:

$$T : \mathbb{R}^2 \to \mathbb{R}^2, \qquad T(x, y) = (x + y,\; x - y)$$
Transforming the basis:

$$T(e_1) = T(1, 0) = (1, 1), \qquad T(e_2) = T(0, 1) = (1, -1)$$
Thus the matrix of $T$ is:

$$A_T = \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}$$
Now let's analyze the operator. It is clear from the columns, which are linearly independent, that its range is all of $\mathbb{R}^2$. This means that the system $A_T x = b$ has a solution for any $b$. By the dimension theorem, the null space of $T$ is just $\{0\}$; in consequence, $T$ is injective. But as it's an operator, it turns out that it's also bijective! Thus, the system $A_T x = b$ always has a unique solution.
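Again, a quick NumPy check of the example (the right-hand side $b$ is an arbitrary choice of mine): a nonzero determinant confirms the operator is bijective, so `np.linalg.solve` finds the unique solution.

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, -1.0]])      # matrix of the operator from the example

print(np.linalg.det(A))          # -2.0: nonzero, so A is invertible (bijective)

b = np.array([5.0, 1.0])         # any right-hand side works
x = np.linalg.solve(A, b)        # the unique solution of A x = b
print(x, np.allclose(A @ x, b))  # [3. 2.] True
```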
Two sides of the same coin
Hopefully, the above examples have given you an idea of the power of the relationship between linear transformations and matrices. Consider the set of all linear transformations from $\mathbb{R}^n$ into $\mathbb{R}^m$, denoted by $\mathcal{L}(\mathbb{R}^n, \mathbb{R}^m)$, and the set of $m \times n$ matrices, $M_{m \times n}$.
To begin with, you know that they are both vector spaces. But the interesting thing is that now you can connect them with a function! It is natural that this function should be the one that assigns to each linear transformation its matrix:

$$\Phi : \mathcal{L}(\mathbb{R}^n, \mathbb{R}^m) \to M_{m \times n}, \qquad \Phi(T) = A_T$$

It is quite easy to show that $\Phi$ is itself linear:

$$A_{T + S} = A_T + A_S, \qquad A_{cT} = c\,A_T$$
Proof

The $i$-th column of $A_{T+S}$ is $(T + S)(e_i) = T(e_i) + S(e_i)$, which is the sum of the $i$-th columns of $A_T$ and $A_S$. Likewise, the $i$-th column of $A_{cT}$ is $(cT)(e_i) = c\,T(e_i)$, so $A_{cT} = c\,A_T$.
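If you'd like to see this linearity in practice, here is a small NumPy check (the two matrices and the helper `matrix_of` are made up for illustration) that the matrix of a sum of transformations is the sum of their matrices:

```python
import numpy as np

A_T = np.array([[1.0, 2.0],
                [3.0, 4.0],
                [5.0, 6.0]])
A_S = np.array([[0.0, 1.0],
                [1.0, 0.0],
                [2.0, 2.0]])

def matrix_of(T, n):
    # Columns are the images of the canonical basis vectors.
    return np.column_stack([T(np.eye(n)[:, i]) for i in range(n)])

T_plus_S = lambda x: A_T @ x + A_S @ x              # the sum transformation
print(np.allclose(matrix_of(T_plus_S, 2), A_T + A_S))  # True
```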
You have just built a linear transformation between $\mathcal{L}(\mathbb{R}^n, \mathbb{R}^m)$ and $M_{m \times n}$! Fortunately, this function is injective and surjective, which implies that it is an isomorphism! In particular, the dimension theorem tells you that both spaces have the same dimension, $mn$.
Proof
- Injective: The zero transformation sends the entire basis to the vector $0$, so its matrix is the zero matrix. But if another transformation had that matrix, it would also send the entire basis to $0$; it would coincide with the zero transformation on the basis, and therefore they would be equal.
- Surjective: Given any matrix, we can construct a linear transformation by simply declaring that it sends the first vector of the basis to the first column of the matrix, the second vector of the basis to the second column, and so on. The matrix associated with that transformation is then the original matrix.
This means that these sets are essentially the same: you can think of them as two ways of looking at the same object. Since they are almost the same, you can study the properties of one through those of the other. Sometimes a result is easier to prove for matrices, and then it is also true for transformations, and vice versa.
Have you ever wondered why the matrix product has such a strange definition? Well, it turns out that it is made to mimic the behavior of the composition of linear transformations, but with matrices. If, after all, both sets are almost the same, it is not crazy to think that operations defined in one have an analog in the other. This very useful connection is reflected in the following result:
If $T : \mathbb{R}^n \to \mathbb{R}^m$ and $S : \mathbb{R}^m \to \mathbb{R}^p$ are linear, then:

$$A_{S \circ T} = A_S\,A_T$$
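Here's a quick numeric sanity check of this result (the two matrices and the helper `matrix_of` are invented for illustration): build the composition as a function, extract its matrix from the canonical basis, and compare with the product of the matrices.

```python
import numpy as np

# Made-up linear maps: T: R^2 -> R^3 and S: R^3 -> R^2.
A_T = np.array([[1.0, 0.0],
                [0.0, 1.0],
                [1.0, 1.0]])
A_S = np.array([[1.0, 2.0, 0.0],
                [0.0, 1.0, 3.0]])

def matrix_of(T, n):
    # Stack the images of the canonical basis as columns.
    return np.column_stack([T(np.eye(n)[:, i]) for i in range(n)])

S_of_T = lambda x: A_S @ (A_T @ x)                   # the composition S ∘ T
print(np.allclose(matrix_of(S_of_T, 2), A_S @ A_T))  # True
```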
Conclusion
Let $\mathcal{L}(\mathbb{R}^n, \mathbb{R}^m)$ be the set of all linear transformations from $\mathbb{R}^n$ into $\mathbb{R}^m$, and $M_{m \times n}$ the set of $m \times n$ matrices.

Let $T \in \mathcal{L}(\mathbb{R}^n, \mathbb{R}^m)$ and $A \in M_{m \times n}$.

- The values of $T_A$ in the canonical basis are the columns of $A$: $T_A(e_i) = A_i$.
- $Ax = x_1 A_1 + x_2 A_2 + \cdots + x_n A_n$.
- $A_T = \begin{pmatrix} T(e_1) & T(e_2) & \cdots & T(e_n) \end{pmatrix}$.
- $T(x) = A_T\,x$ for all $x \in \mathbb{R}^n$.
- There is an isomorphism between $\mathcal{L}(\mathbb{R}^n, \mathbb{R}^m)$ and $M_{m \times n}$. You can think of linear transformations and matrices as the same set, just seen in different ways.
- If $T$ and $S$ are linear, then $A_{S \circ T} = A_S\,A_T$.