Subspaces are usually sets that we know pretty well, like lines and planes, or those that are easy to handle: think of the polynomials of degree less than or equal to $n$ within the set of all functions from $\mathbb{R}$ into $\mathbb{R}$. Most of the time, subspaces arise as solutions to specific problems, such as solutions to systems of linear equations, and for this reason it is common to have to combine them with each other. Ideally, the result should be a subspace, so that we can exploit its structure.
In this topic, we will explore the most useful ways to combine subspaces, their geometric interpretations, and the main results that will later help us simplify calculations and understand more advanced concepts.
The dimension of a subspace
Until now, our subspaces have been rather small pieces of the space, like lines inside the plane. The dimension of a space is our notion of size, and thanks to it we can make this intuition precise: if $U$ is a subspace of $V$, then $\dim U \leq \dim V$.
In fact, the dimension is so important that the previous result is actually slightly stronger: when a subspace has the same dimension as the whole space, they coincide:
$U = V$ if and only if $\dim U = \dim V$.
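To make this concrete, here is how the subspaces of $\mathbb{R}^3$ line up by dimension:

$$\dim \{0\} = 0, \qquad \dim(\text{line through } 0) = 1, \qquad \dim(\text{plane through } 0) = 2, \qquad \dim \mathbb{R}^3 = 3,$$

and by the result above, the only subspace of dimension $3$ is $\mathbb{R}^3$ itself.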
Intersection
Lines that pass through the origin are our prototypical vector subspaces. What can happen if we intersect two of them? Since both contain the origin, we are sure that their intersection contains it as well. But is it the only point lying on both lines simultaneously? Let's look at some examples of intersecting lines!
Well, unless the lines are identical, their only common point is the origin. But look closely: the set $\{0\}$ that contains only the origin is itself a subspace (it is closed under sums and scalar multiples).
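For instance, take the lines $y = x$ and $y = -x$ in the plane. A point lying on both must satisfy $x = -x$, that is, $2x = 0$, so the only solution is the origin:

$$\{(x, x) : x \in \mathbb{R}\} \cap \{(x, -x) : x \in \mathbb{R}\} = \{(0, 0)\}.$$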
Remember that planes through the origin in $\mathbb{R}^3$ are also subspaces? Again, they both pass through the origin, so their intersection does as well. But this time there are infinitely many more points in common.
Surprisingly, the intersection is a whole line passing through the origin, and even more interesting is the fact that it is also a subspace! Try seeing how a plane and a line intersect: the result is also a subspace.
Actually, this is not a coincidence, since the intersection $U \cap W$ of any two subspaces $U$ and $W$ is itself a subspace.
Proof
This is easy to verify since both contain $0$, so the only thing we have to check is that the intersection is closed under sums and scalar multiples. Let's see it quickly. If the vectors $u$ and $v$ are in $U \cap W$, then they are both in $U$, and so is their sum $u + v$; the same is true for $W$. Therefore the sum is also in the intersection. Furthermore, if we multiply a vector of the intersection by a scalar, then by the same reasoning the result is still in the intersection.

The intersection is really robust: if we take any collection of subspaces (finite or infinite), the intersection of all of them is still a subspace! You know, in the worst case the intersection is at least $\{0\}$.
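As a quick check of the result, intersect the two coordinate planes $U = \{(x, y, 0)\}$ and $W = \{(0, y, z)\}$ in $\mathbb{R}^3$. A vector in both must have $x = 0$ and $z = 0$, so

$$U \cap W = \{(0, y, 0) : y \in \mathbb{R}\} = \operatorname{span}\{(0, 1, 0)\},$$

which is exactly a line through the origin, as promised.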
Union
After our notable discovery regarding intersections, it is quite natural to think that the union of subspaces should also be a subspace. But not so fast. As always, let's start with a picture.
Let's take the simplest possible example: the two coordinate axes of the plane. Let's choose a vector from each axis, say $(1, 0)$ and $(0, 1)$. Clearly, the union of the two lines contains both vectors, but for it to be a subspace it must also contain their sum $(1, 0) + (0, 1) = (1, 1)$, which lies on neither axis.
It seems that the union is not stable under addition at all, and it is actually very difficult for it to be a subspace. The truth is that the union of two subspaces is a subspace if and only if one of them contains the other. But in that case the union is simply the larger of the two spaces, which is not very encouraging.
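Here is a sketch of the interesting direction of that claim, assuming neither subspace contains the other: pick $u \in U \setminus W$ and $w \in W \setminus U$. If $u + w$ were in $U$, then $w = (u + w) - u$ would also be in $U$, a contradiction; symmetrically, $u + w \notin W$. Hence

$$u + w \notin U \cup W,$$

so the union fails to be closed under addition.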
Sum of subspaces
Even if the union disappointed us like this, it is still natural to think that we can combine two subspaces in such a way that we generate a larger space containing them both. Since the union is not stable under addition, we should build a set that is, and this is precisely the reason for its name: the sum.
The sum of two subspaces $U$ and $W$ is the set of all possible sums of elements of both spaces:

$$U + W = \{u + w : u \in U,\ w \in W\}.$$
The sum of two subspaces is a subspace, just as we wanted.
Proof
Consider $v_1$ and $v_2$ in $U + W$. There must exist $u_1$ and $u_2$ in $U$ and $w_1$ and $w_2$ in $W$ such that $v_1 = u_1 + w_1$ and $v_2 = u_2 + w_2$. Then $v_1 + v_2 = (u_1 + u_2) + (w_1 + w_2)$; it is immediate that $v_1 + v_2 \in U + W$. The same happens if we multiply by a scalar. Therefore the sum is a subspace, and in fact, it is the smallest subspace that contains both $U$ and $W$.

In addition, the sum of spaces goes perfectly with the dimension:

$$\dim(U + W) = \dim U + \dim W - \dim(U \cap W).$$
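For example, take two distinct planes $U$ and $W$ through the origin in $\mathbb{R}^3$. They meet in a line, so the formula gives

$$\dim(U + W) = \dim U + \dim W - \dim(U \cap W) = 2 + 2 - 1 = 3,$$

which means that $U + W = \mathbb{R}^3$: together, the two planes fill the whole space.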
Addition can also help us break a subspace down into smaller ones: if a subspace has a basis, then it can be decomposed as the sum of the spaces spanned by each basis vector. Let's say that $U$ has a basis $\{u_1, \dots, u_n\}$. If $u \in U$, then $u = a_1 u_1 + \cdots + a_n u_n$, but since each $a_i u_i$ is in $\operatorname{span}\{u_i\}$, it follows immediately that

$$U = \operatorname{span}\{u_1\} + \operatorname{span}\{u_2\} + \cdots + \operatorname{span}\{u_n\}.$$
Direct sums
Remember that linearly dependent vectors are a bit redundant? This happens because one of them can be built from the others, and that is why we always look for linearly independent vectors. The same occurs with the sum of spaces, since sometimes their information overlaps; for example, two planes in $\mathbb{R}^3$:
Any vector can be seen as the sum of one element in the red plane and one in the blue plane. But if we replace the blue plane with a suitable line inside it, the same phenomenon still holds, so part of the blue plane is redundant.
This motivates the concept of the direct sum: the sum of spaces that do not "overlap", that is, that share the least possible number of vectors. Formally, the sum of two subspaces $U$ and $W$ is direct if $U \cap W = \{0\}$, and in such a case we denote it by $U \oplus W$. Analogously to linearly independent vectors, any vector in a direct sum can be written uniquely as the sum of an element of $U$ and an element of $W$.
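The uniqueness is a one-line argument: if $u + w = u' + w'$ with $u, u' \in U$ and $w, w' \in W$, then

$$u - u' = w' - w \in U \cap W = \{0\},$$

so $u = u'$ and $w = w'$.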
Here, the result of the previous section simplifies to

$$\dim(U \oplus W) = \dim U + \dim W.$$

Its main use is that if we manage to decompose our space as the direct sum of two subspaces that we know well, then we can compute the dimensions of the subspaces and simply add them to obtain that of the entire space, which comes in handy.
Just as the sum of subspaces is similar to the union, you can think of the direct sum as an analogue of the union of disjoint sets.
The plane $\mathbb{R}^2$ has its canonical basis $\{e_1, e_2\}$, which means that it can be written as the sum of the subspaces spanned by $e_1$ and by $e_2$. But these spaces only intersect at the origin, so the sum is direct:

$$\mathbb{R}^2 = \operatorname{span}\{e_1\} \oplus \operatorname{span}\{e_2\}.$$

Similarly, $\mathbb{R}^3$ can be separated as the direct sum of the horizontal plane and the vertical axis, which means that:

$$\mathbb{R}^3 = \{(x, y, 0) : x, y \in \mathbb{R}\} \oplus \{(0, 0, z) : z \in \mathbb{R}\}.$$
As a last example, consider the set $P_n$ of polynomial functions of degree less than or equal to $n$. We know that a function is even if $f(-x) = f(x)$, whereas it is odd if $f(-x) = -f(x)$. If $U$ is the set of even polynomials in $P_n$ and $W$ is the set of odd polynomials, then a basis for $U$ is $\{1, x^2, x^4, \dots\}$ and one for $W$ is $\{x, x^3, x^5, \dots\}$ (keeping only the powers up to $n$). Since the union of these bases is a basis of $P_n$, it is immediate that:

$$P_n = U \oplus W.$$
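In fact, the decomposition can be written down explicitly: every polynomial $p$ splits as

$$p(x) = \underbrace{\frac{p(x) + p(-x)}{2}}_{\text{even part}} + \underbrace{\frac{p(x) - p(-x)}{2}}_{\text{odd part}},$$

and the two parts land in $U$ and $W$ respectively.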
Any polynomial can be separated as the sum of two different polynomials, one with even powers and the other with odd powers. Here is an illuminating example:

$$1 + 2x + 3x^2 + 4x^3 = (1 + 3x^2) + (2x + 4x^3).$$
Conclusion
Let's say that $V$ is a vector space and $U$ and $W$ are two subspaces of it. Then we arrived at the following statements:
- $U \cap W$ is a subspace.
- $U \cup W$ is a subspace if and only if $U \subseteq W$ or $W \subseteq U$.
- The sum of $U$ and $W$ is defined as $U + W = \{u + w : u \in U,\ w \in W\}$ and is a subspace.
- The sum of $U$ and $W$ is direct when $U \cap W = \{0\}$, and is denoted by $U \oplus W$.
- $\dim(U + W) = \dim U + \dim W - \dim(U \cap W)$; in particular, $\dim(U \oplus W) = \dim U + \dim W$.