
Intersection and sums of subspaces


Subspaces are usually sets that we know pretty well, like lines and planes, or ones that are easy to handle: think of polynomials of degree less than or equal to $2$ within the set of all functions from $\mathbb{R}$ into $\mathbb{R}$. Most of the time subspaces arise as solutions to specific problems, such as solutions to systems of linear equations, and for this reason it is common to have to combine them with each other. Ideally, the result should be a subspace in order to exploit its structure.

In this topic, we will explore the most useful ways to combine subspaces, their geometric interpretations, and the main results that will soon help us simplify calculations and understand more advanced concepts.

The dimension of a subspace

Until now, the subspaces we have seen are really small subsets of the space, like lines inside the plane. The dimension of a space is our notion of size, and thanks to it we can make this intuition concrete: if $U$ is a subspace of $V$, then $\dim(U) \leq \dim(V)$.

In fact, the dimension is so important that the previous result can be sharpened: when a subspace has the full size of the space, the two are actually the same:

$\dim(U) = \dim(V)$ if and only if $U = V$
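If you want to experiment with this on a computer, here is a minimal sketch (not part of the original text) that represents a subspace by a matrix whose columns span it; the dimension is then the rank of that matrix. The specific vectors are just example data.

```python
import numpy as np

# Represent U = span{(1, 0, 1), (0, 1, 1)}, a plane inside V = R^3,
# by a matrix whose columns are the spanning vectors (example data).
U = np.array([[1, 0],
              [0, 1],
              [1, 1]])

dim_U = np.linalg.matrix_rank(U)  # dimension = rank of the column matrix
dim_V = U.shape[0]                # dim(R^3) = 3

print(dim_U, dim_V)  # 2 3, consistent with dim(U) <= dim(V)
```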

Intersection

Lines that pass through the origin are our prototypical vector subspaces. What can happen if we intersect them? Since both contain the origin, we are sure that their intersection also contains it. But is it the only point lying on both lines simultaneously? Let's look at some examples of intersecting lines!

Several lines passing through the origin

Well, unless the lines are identical, their only common point is the origin. But look closely: the set that contains only $0$ is itself a subspace (it is closed under sums and products by a scalar).

Remember that planes are also subspaces? Again, both planes pass through the origin, so their intersection does as well. But this time there are infinitely many more points in common.

Intersection of planes

Surprisingly, the intersection is a whole line that passes through the origin, but even more interesting is the fact that it is also a subspace! Try seeing how a plane and a line intersect; the result is again a subspace.

Actually, this is not a coincidence, since the intersection of any two subspaces $U$ and $W$ is itself a subspace.

Proof. This is easy to verify: both subspaces contain $0$, so the only thing we have to check is that the intersection is closed under sums and products by a scalar. Let's see it quickly. If the vectors $u$ and $w$ are in $U \cap W$, then they are both in $U$, and so is their sum; the same is true for $W$. Therefore the sum $u + w$ is also in the intersection. Furthermore, if we multiply a vector of the intersection by a scalar, then by the same reasoning the result is still in the intersection.

The intersection is really robust. If we take any collection of subspaces (finite or infinite), the intersection of all of them is still a subspace! You know, in the worst case the intersection is at least $\{0\}$.
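To see the intersection concretely, here is a small computational sketch (our own example, assuming the spanning columns of each matrix are linearly independent): a vector lies in both column spaces exactly when it can be written as $Ax = By$, which turns the intersection into a null-space computation.

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[1, 0],
              [0, 1],
              [0, 0]], dtype=float)  # U = the xy-plane in R^3
B = np.array([[1, 0],
              [0, 0],
              [0, 1]], dtype=float)  # W = the xz-plane in R^3

# v is in U and W iff v = A x = B y, i.e. [A | -B] [x; y] = 0.
# Each null-space vector (x, y) yields one intersection vector v = A x.
N = null_space(np.hstack([A, -B]))
intersection = A @ N[:A.shape[1], :]  # columns span U ∩ W

print(np.linalg.matrix_rank(intersection))  # 1: the planes meet in a line
```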

Union

After our notable discovery regarding intersections, it is quite natural to think that the union of subspaces should also be a subspace. But not so fast. As always, let's start with a picture.

Let's take the simplest possible example: the axes of the plane $\mathbb{R}^2$. Let's choose a vector from each axis, say $e_1 = (1, 0)^T$ and $e_2 = (0, 1)^T$. Clearly, the union of the two lines contains both vectors, but for it to be a subspace it must also contain their sum $e_1 + e_2 = (1, 1)^T$.

Vectors in the plane that are not in the union of lines

It seems that the union is not stable under addition at all. It is actually very difficult for it to become a subspace. The truth is that the union of two subspaces is a subspace if and only if one of them contains the other. But in that case, the union is simply the larger of the two subspaces, which is not very encouraging.
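The counterexample above is easy to check numerically; this tiny sketch (ours, not from the text) confirms that $e_1 + e_2$ lands on neither axis.

```python
import numpy as np

e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])

def on_x_axis(v):
    return np.isclose(v[1], 0.0)  # second coordinate vanishes

def on_y_axis(v):
    return np.isclose(v[0], 0.0)  # first coordinate vanishes

v = e1 + e2
print(on_x_axis(v) or on_y_axis(v))  # False: (1, 1) escapes the union
```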

Sum of subspaces

Even if the union disappointed us, it is still natural to think that we can combine two subspaces in such a way that we generate a larger space containing them both. Since the union is not stable under addition, we should build a set that is, and this is precisely where the name of the next construction comes from.

The sum of two subspaces UU and WW is the set of all possible sums of elements of both spaces:

$$U + W = \{ u + w : u \in U, w \in W \}$$

The sum of two subspaces is a subspace, just as we wanted.

Proof. Consider $v_1$ and $v_2$ in $U + W$. There must exist $u_1, u_2 \in U$ and $w_1, w_2 \in W$ such that $v_1 = u_1 + w_1$ and $v_2 = u_2 + w_2$. Then $v_1 + v_2 = (u_1 + u_2) + (w_1 + w_2)$, so it is immediate that $v_1 + v_2 \in U + W$. The same happens if we multiply $v_1$ by a scalar. Therefore the sum is a subspace, and in fact it is the smallest subspace that contains both $U$ and $W$.

In addition, the sum of spaces interacts perfectly with the dimension:

$$\dim(U + W) = \dim(U) + \dim(W) - \dim(U \cap W)$$
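We can verify this formula numerically with the same matrix representation as before (again a sketch with example data; it assumes the columns of each matrix are linearly independent, so the nullity of $[A \mid -B]$ equals $\dim(U \cap W)$):

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[1, 0], [0, 1], [0, 0]], dtype=float)  # U = xy-plane
B = np.array([[1, 0], [0, 0], [0, 1]], dtype=float)  # W = xz-plane

dim_U = np.linalg.matrix_rank(A)
dim_W = np.linalg.matrix_rank(B)
dim_sum = np.linalg.matrix_rank(np.hstack([A, B]))  # dim(U + W)
dim_int = null_space(np.hstack([A, -B])).shape[1]   # dim(U ∩ W)

print(dim_sum == dim_U + dim_W - dim_int)  # True: 3 == 2 + 2 - 1
```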

Addition can help us break down a subspace into smaller ones. Since every subspace has a basis, it can be decomposed as the sum of the spaces spanned by each basis vector. Let's say that $U$ has a basis $\{u_1, u_2, \dots, u_k\}$. If $u \in U$, then $u = \sum_{j=1}^{k} \lambda_j u_j$, but since each $\lambda_j u_j$ is in $\mathrm{span}(u_j)$, it follows immediately that $U = \mathrm{span}(u_1) + \mathrm{span}(u_2) + \dots + \mathrm{span}(u_k)$.

Direct sums

Remember that linearly dependent vectors are a bit redundant? This happens because one of them can be built from the others, and that is why we always look for linearly independent vectors. The same occurs with the addition of spaces, since sometimes their information overlaps; for example, two planes in $\mathbb{R}^3$:

Two redundant planes in the space

Any vector can be seen as the sum of one element in the red plane and one in the blue plane. But if we replace the blue plane with just a line inside it, the same phenomenon holds, so the blue plane carries redundant information.

This motivates the concept of the direct sum: a sum of spaces that do not "overlap", that is, they share the smallest possible number of vectors. Formally, the sum of two subspaces $U$ and $W$ is direct if $U \cap W = \{0\}$, and in that case we denote it by $U \oplus W$. Analogously to linearly independent vectors, any vector in a direct sum can be uniquely written as the sum of an element of $U$ and an element of $W$.

Here, the dimension formula of the previous section simplifies:

$$\dim(U \oplus W) = \dim(U) + \dim(W)$$

Its main use is that if we manage to decompose our space as the direct sum of two subspaces that we know well, then we calculate the dimensions of the subspaces and simply add them to obtain that of the entire space, which comes in handy.
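In the matrix picture this gives a quick directness test (a sketch under the same conventions as before): the sum is direct exactly when the ranks add up.

```python
import numpy as np

A = np.array([[1], [0], [0]], dtype=float)           # U = the x-axis
B = np.array([[0, 0], [1, 0], [0, 1]], dtype=float)  # W = the yz-plane

# U + W is direct iff dim(U + W) = dim(U) + dim(W),
# i.e. rank([A | B]) == rank(A) + rank(B).
direct = (np.linalg.matrix_rank(np.hstack([A, B]))
          == np.linalg.matrix_rank(A) + np.linalg.matrix_rank(B))
print(direct)  # True: R^3 = U ⊕ W
```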

Just as the sum of subspaces is similar to the union, you can think of the direct sum as analogous to the union of disjoint sets.

The plane $\mathbb{R}^2$ has its canonical basis $\{e_1, e_2\}$, which means that it can be written as the sum of the subspaces spanned by $e_1$ and $e_2$. But these spaces intersect only at the origin, so the sum is direct:

$$\mathbb{R}^2 = \mathrm{span}(e_1) \oplus \mathrm{span}(e_2)$$

Similarly, $\mathbb{R}^3$ can be separated as the direct sum of $\mathbb{R}^2$ and the vertical axis, which means that:

$$\mathbb{R}^3 = \mathrm{span}(e_1, e_2) \oplus \mathrm{span}(e_3) = \mathrm{span}(e_1) \oplus \mathrm{span}(e_2) \oplus \mathrm{span}(e_3)$$

As a last example, consider the set $P$ of polynomial functions of degree less than or equal to $n$. We know that a function $f$ is even if $f(-x) = f(x)$, whereas it is odd if $f(-x) = -f(x)$. If $P^E$ is the set of even polynomials and $P^O$ is the set of odd polynomials, then a basis for $P^E$ is $\{1, x^2, x^4, \dots\}$ and one for $P^O$ is $\{x, x^3, x^5, \dots\}$. Since the union of these bases is a basis of $P$, it is immediate that:

$$P = P^E \oplus P^O$$

A polynomial represented by the sum of other two

Any polynomial can be separated as the sum of two polynomials, one with even powers and the other with odd powers. Here is an illuminating example:

$$x^3 + 4x^2 + 5x + 1 = (4x^2 + 1) + (x^3 + 5x)$$
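The decomposition is easy to carry out on coefficient arrays; this sketch (ours) splits the example above by zeroing out the complementary coefficients, which agrees with the standard identities $f_{\text{even}}(x) = \frac{f(x) + f(-x)}{2}$ and $f_{\text{odd}}(x) = \frac{f(x) - f(-x)}{2}$.

```python
import numpy as np

f = np.array([1, 5, 4, 1])  # x^3 + 4x^2 + 5x + 1, lowest degree first

f_even = f.copy()
f_even[1::2] = 0            # keep even powers: 4x^2 + 1

f_odd = f.copy()
f_odd[0::2] = 0             # keep odd powers: x^3 + 5x

print(np.array_equal(f_even + f_odd, f))  # True: the parts sum back to f
```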

Conclusion

Let's say that $V$ is a vector space and $U$ and $W$ are two subspaces of it. Then we arrived at the following statements:

  • $\dim(U) \leq \dim(V)$
  • $U \cap W$ is a subspace
  • $U \cup W$ is a subspace if and only if $U \subset W$ or $W \subset U$
  • The sum of $U$ and $W$ is defined as $U + W = \{ u + w : u \in U, w \in W \}$ and is a subspace
  • $\dim(U + W) = \dim(U) + \dim(W) - \dim(U \cap W)$
  • The sum of $U$ and $W$ is direct when $U \cap W = \{0\}$, and is denoted by $U \oplus W$
  • $\dim(U \oplus W) = \dim(U) + \dim(W)$
