
Linear subspaces


Welcome to the world of vector subspaces, where simplicity meets abstraction. By exploring these remarkable constructs, we describe the geometry of space in an elegant and efficient way, develop our algebraic intuition, and build fundamental concepts such as linear combinations.

Getting familiar with subspaces will help you understand more advanced concepts, gain practice with vector operations, and represent huge sets using just a few elements.

Generating Lines: A Single Vector's Magic

Consider a line in the plane and take any point $v$ on it.

A vector in a line

The point is a vector: starting at the origin, you can draw an arrow to $v$. By moving further in its direction, you create an infinite set of other points on the line, and by moving in the opposite direction, you get the rest of them. In other words, you can recreate the whole line using only scaled versions of $v$.

One vector generating a whole line

Despite the fact that the line is inside the plane, none of its vectors leave it, as if they didn't communicate with the outside. Let's analyze how they interact with each other. If you take two points on the line, they have the form $\lambda v$ and $\mu v$, since both are scaled versions of $v$. Their sum is $\lambda v + \mu v = (\lambda + \mu)v$, which means it also lies on the line. The same happens if you multiply either of them by any scalar.
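To make this closure computation concrete, here is a minimal Python sketch. The specific vector $v = (2, 1)$ and the helper names are illustrative choices, not part of the original text:

```python
# Hypothetical example: every point on the line through the origin and
# v = (2, 1) is a scalar multiple of v, and sums of such points stay on it.
v = (2.0, 1.0)

def scale(c, u):
    """Return c * u for a 2D vector u."""
    return (c * u[0], c * u[1])

def add(u, w):
    """Return the componentwise sum of two 2D vectors."""
    return (u[0] + w[0], u[1] + w[1])

lam, mu = 3.0, -1.5
p = add(scale(lam, v), scale(mu, v))   # lambda*v + mu*v
q = scale(lam + mu, v)                 # (lambda + mu)*v
print(p == q)  # True: the sum is again a scaled version of v
```

Changing `lam` and `mu` to any other values gives the same result, which is exactly the algebraic identity above.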

We've just discovered something: the line is stable under addition and scalar multiplication, as if it were a vector space itself within the plane.

This phenomenon captures the essence of a vector subspace.

A simple definition

The math is simple and clean: a subset $U$ of a vector space $V$ is a subspace if it is itself a vector space, with the same operations as $V$.
This means that the vectors of $U$ stay in $U$ when you add them or multiply them by a scalar.

Do you remember that vector spaces have an additive neutral element, a "$0$"? Since $U$ is a vector space, it has a $0$, and since $U$ is also a subset of $V$, that $0$ must lie in $V$ as well. As the $0$ of $V$ is unique, both spaces share the same $0$. In other words, for $U$ to be a vector space, it must contain $0$.


If a subset $U$ does not contain $0$, it cannot be a vector space.

Does this give you a clue as to why only the lines that pass through the origin are subspaces?

Lines that aren't subspaces

But how can we know whether a set $U$ is a vector subspace of $V$ without proving all the properties? This is where our earlier observations about lines come into play. The addition and scalar multiplication of $V$ already satisfy the vector space axioms, so restricting them to $U$ keeps those axioms valid; the only thing that can fail is that the sum or a scalar multiple of vectors of $U$ leaves $U$. Putting together everything we have developed, we get the following result:
A subset $U$ of a vector space $V$ is a vector subspace if for every $v, w \in U$ and $\lambda \in \mathbb{R}$:

  • $v + w \in U$
  • $\lambda v \in U$
  • $0 \in U$

We think of a subspace as a subset that is stable under addition and scalar multiplication, and that contains $0$.
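As a rough numerical illustration (a spot-check on sample vectors, not a proof), the three conditions can be coded directly. The helper `looks_like_subspace` and both membership tests below are made up for this sketch:

```python
# Numerical sketch: spot-check the three subspace conditions on sample
# vectors. `contains` is a hypothetical membership test for a subset U of R^2.
def looks_like_subspace(contains, samples, scalars):
    if not contains((0.0, 0.0)):                     # 0 must be in U
        return False
    for v in samples:
        for w in samples:
            s = (v[0] + w[0], v[1] + w[1])
            if not contains(s):                      # closed under addition
                return False
        for c in scalars:
            if not contains((c * v[0], c * v[1])):   # closed under scaling
                return False
    return True

# The line y = 2x passes through the origin: it behaves like a subspace.
line_through_origin = lambda p: abs(p[1] - 2 * p[0]) < 1e-9
# The line y = 2x + 1 misses the origin: it fails immediately.
shifted_line = lambda p: abs(p[1] - (2 * p[0] + 1)) < 1e-9

pts_on = [(1.0, 2.0), (-0.5, -1.0)]
pts_off = [(0.0, 1.0), (1.0, 3.0)]
print(looks_like_subspace(line_through_origin, pts_on, [2.0, -3.0]))  # True
print(looks_like_subspace(shifted_line, pts_off, [2.0, -3.0]))        # False
```

Note how the shifted line is rejected by the very first condition, matching the remark about $0$ above.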

Building planes

The line is the prototype of a subspace, but is it the only one? Let's move to three-dimensional space to take a look. There, a single vector still generates a line.

A line in three dimensions

Now let's focus on two distinct vectors $v$ and $w$. How can we combine them? We can add scaled versions of them, say $\lambda v + \mu w$. This is the most natural way to combine them, and for this reason, we call such expressions linear combinations. If $w$ were on the line spanned by $v$, their linear combinations would still be vectors on that line. But what happens if $w$ is not on the line of $v$? What do their linear combinations look like?
Now we control not only the scalar $\lambda$ of $v$, but also the scalar $\mu$ of $w$. A clever way to deal with this is to set $\lambda = 1$ and play with $\mu$, i.e. $v + \mu w$. Considering all possible values of $\mu$, by the parallelogram rule, we visualize the line generated by $w$, displaced so that it passes through $v$ instead of crossing the origin:

A line generated by two vectors

If now $\lambda = 2$, then $2v + \mu w$ is again the line generated by $w$, this time through $2v$. This sounds simple, right? Now think about the cases where $\lambda$ equals $3$, then $-1$, $-2$, $-3$, etc. Each choice of $\lambda$ simply gives us the line generated by $w$, shifted a bit more. So the set of all linear combinations of $v$ and $w$ is the union of all these lines... wait! It looks like we just generated a whole plane!

A plane generated by lines


This plane was built from the linear combinations of $v$ and $w$, so, as with lines, we call it the plane generated by $v$ and $w$. Everything seems to indicate that if we combine vectors of the plane, we will not leave it, and therefore it is a subspace! To be sure, we must check that the set of linear combinations is closed under addition and scalar multiplication; try it as a good exercise!

Generalizing


With one vector, we generate a line, while with two (that do not lie on the same line), we obtain a plane. The next step seems quite natural, doesn't it? In $\mathbb{R}^n$, we can take $k$ mutually different nonzero vectors $v_1, v_2, \dots, v_k$ and consider the set of all their linear combinations:

$$\{\lambda_1 v_1 + \lambda_2 v_2 + \dots + \lambda_k v_k : \lambda_1, \lambda_2, \dots, \lambda_k \in \mathbb{R}\}$$

This set is called the space spanned by $v_1, v_2, \dots, v_k$ and is denoted by $\text{span}(v_1, v_2, \dots, v_k)$. When $k = 1$ this is simply a line, and when $k = 2$ it is a plane. Following the same reasoning as in the previous section, you can directly verify that it is a vector subspace: when you add two linear combinations, you get another one (the new coefficients are the sums of the original ones), and the same happens when you multiply any of them by a scalar (the new coefficients are the old ones multiplied by the scalar).
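For two non-collinear vectors in $\mathbb{R}^3$, membership in the span can be tested with a determinant: $u \in \text{span}(v, w)$ exactly when $v$, $w$, $u$ are linearly dependent. The function names and example vectors below are choices made for this sketch:

```python
# Sketch: in R^3, a vector u lies in span(v, w) exactly when v, w, u are
# linearly dependent, i.e. the determinant of the matrix with rows
# v, w, u is zero (assuming v and w are not collinear).
def det3(r1, r2, r3):
    """Determinant of the 3x3 matrix with rows r1, r2, r3."""
    return (r1[0] * (r2[1]*r3[2] - r2[2]*r3[1])
          - r1[1] * (r2[0]*r3[2] - r2[2]*r3[0])
          + r1[2] * (r2[0]*r3[1] - r2[1]*r3[0]))

def in_span(v, w, u, eps=1e-9):
    """True when u lies in the plane spanned by v and w."""
    return abs(det3(v, w, u)) < eps

v = (1.0, 0.0, 1.0)
w = (0.0, 1.0, 1.0)
print(in_span(v, w, (2.0, 3.0, 5.0)))   # True: this is 2v + 3w
print(in_span(v, w, (0.0, 0.0, 1.0)))   # False: not in the plane
```

For more vectors in higher dimensions, the same idea generalizes to checking the rank of the matrix whose rows are the spanning vectors and $u$.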

Now, when we take $3$ vectors that are not on the same plane, we obtain a three-dimensional space. Even if we can no longer draw it, the same thing happens with $4$, $5$, and more vectors. But whenever we take a new vector, we have to make sure that it is not in the space generated by the others.

In future topics, you will discover that this is defined as linear independence, a simple but indispensable concept in linear algebra. Isn't it amazing that with just a few vectors, you can completely describe immensely large sets like lines and planes?

Importance

It's okay if you are not familiar with the following terms about functions; you will come across them in the Calculus track. For now, we will only mention them, but they will not appear in the tasks.

Now that we know what a subspace is, it is natural to ask why we should care about them. With everything we've done so far, we've discovered a key point: subspaces can be fully described with just a few elements. This is really useful, but it's not the only valuable thing about subspaces.

$\mathbb{R}^n$ is a fundamental space, but in linear algebra we can deal with more complicated places. One example is the set of functions from $\mathbb{R}$ to $\mathbb{R}$. It is a huge set, but we can find quite simple subspaces inside it: polynomial, integrable, continuous, and differentiable functions are some of the most notable; if you know a little calculus, you can easily verify this. Knowing some subspaces can help us better understand the space itself.
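The polynomial case is easy to see concretely. Representing a polynomial as its list of coefficients (index = degree, a convention chosen for this sketch), sums and scalar multiples of polynomials are again polynomials:

```python
# Sketch: polynomials, stored as coefficient lists (index = degree),
# form a subspace of functions: sums and scalar multiples of
# polynomials are again polynomials.
def poly_add(p, q):
    """Add two polynomials given as coefficient lists."""
    n = max(len(p), len(q))
    p = p + [0.0] * (n - len(p))   # pad to a common degree
    q = q + [0.0] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def poly_scale(c, p):
    """Multiply a polynomial by the scalar c."""
    return [c * a for a in p]

p = [1.0, 0.0, 2.0]        # 1 + 2x^2
q = [0.0, 3.0]             # 3x
print(poly_add(p, q))      # [1.0, 3.0, 2.0]  ->  1 + 3x + 2x^2
print(poly_scale(2.0, q))  # [0.0, 6.0]       ->  6x
```

Both results are again coefficient lists, i.e. polynomials, which is exactly the closure property a subspace requires (and the zero polynomial `[0.0]` plays the role of $0$).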

In linear algebra, systems of linear equations are one of the most important applications, practically the reason for its existence! Among them are the homogeneous systems, and a crucial result is that their solutions form a subspace. Think about it for a moment: if this subspace were a plane, we could describe all the solutions with only two vectors. Although systems that are not homogeneous are much more common and realistic, we can easily reduce them to the homogeneous case and apply our previous results.
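Here is a small numerical illustration with a hypothetical homogeneous system of a single equation, $x + y - 2z = 0$, whose solution set is a plane through the origin:

```python
# Sketch: solutions of the homogeneous equation x + y - 2z = 0 form a
# plane through the origin, closed under addition and scaling.
def is_solution(p, eps=1e-9):
    """True when p = (x, y, z) satisfies x + y - 2z = 0."""
    x, y, z = p
    return abs(x + y - 2 * z) < eps

s1 = (2.0, 0.0, 1.0)   # a solution
s2 = (0.0, 2.0, 1.0)   # another solution
print(is_solution(s1), is_solution(s2))  # True True

sum_ = tuple(a + b for a, b in zip(s1, s2))
scaled = tuple(5 * a for a in s1)
print(is_solution(sum_), is_solution(scaled))  # True True
```

Note that the constant term being zero is what makes this work: with $x + y - 2z = 1$ instead, the origin would not be a solution and closure would fail.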

Sometimes a space is so complicated that we cannot even think of manipulating its elements as easily as points on the plane. However, sometimes we know one of its subspaces well and can use its elements to approximate those of the space as closely as possible. This concept, one of the deepest and most used in linear algebra, is called orthogonal projection. Think of an arbitrary function that is very difficult to handle. We know polynomial functions well, and one famous technique is to find the polynomial closest to our function of interest: although the function is very complicated, the polynomial is very similar but much easier to use and compute with.

The closest polynomial to a function
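A discrete stand-in for this idea is least-squares fitting: projecting sampled values of a function onto the subspace of degree-one polynomials. The function $e^x$, the interval, and the tolerance below are illustrative choices for this sketch, not taken from the text:

```python
# Sketch: approximate e^x on [0, 1] by the closest line a*x + b in the
# least-squares sense on sample points -- a discrete analogue of
# orthogonal projection onto the subspace of linear polynomials.
import math

xs = [i / 10 for i in range(11)]      # sample points in [0, 1]
ys = [math.exp(x) for x in xs]        # the "difficult" function e^x

n = len(xs)
sx, sy = sum(xs), sum(ys)
sxx = sum(x * x for x in xs)
sxy = sum(x * y for x, y in zip(xs, ys))

# Normal equations of least squares for y ~ a*x + b.
a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
b = (sy - a * sx) / n

# The projection a*x + b stays close to e^x on the samples.
max_err = max(abs(a * x + b - y) for x, y in zip(xs, ys))
print(max_err < 0.2)  # True: the line tracks e^x within 0.2 on [0, 1]
```

The true orthogonal projection of a function uses an inner product given by an integral, but the normal equations solved here are the same mechanism in miniature.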

Conclusion

  • If $V$ is a vector space, then a subset $U$ is a vector subspace if it contains $0$ and is closed under addition and scalar multiplication.
  • Lines are the simplest subspaces and are described by a single vector.
  • Planes are also subspaces and are described by two vectors.
  • The space spanned by the vectors $v_1, v_2, \dots, v_k$ is the set of all their linear combinations and is denoted by $\text{span}(v_1, v_2, \dots, v_k)$.
  • Recognizing subspaces has many advantages that facilitate the study of spaces.