You are already familiar with vectors and the key role they play in linear algebra and other fields. A vector is something that has both magnitude and direction, which makes it useful for describing quantities that often come up in programming, like the velocity of a vehicle in a video game.
However, vectors are rarely studied individually. We usually want to know how they are related and how they affect each other: for example, we may want to model how the velocity of the wind affects that of a car. This is where the concept of linear dependence comes in handy. It allows us to understand and describe the relationships between vectors, and it is a fundamental concept in linear algebra and other areas of math and science.
What is linear dependence?
Let's try to understand the concept of linear dependence through an analogy. Suppose that you love painting and are considering which color paints to buy. You don't have a lot of money, so you want to be strategic when picking the colors.
Does it make sense to buy red, yellow, and orange? Well, we can make orange by mixing red and yellow paints, so buying orange paint seems redundant. What if you buy red, yellow, and blue? In this case, none of the colors can be obtained by combining the others, so we can say these colors are independent of each other.
A set of colored paints is dependent if some of them can be mixed to produce another. Similarly, a vector depends on other vectors if it can be expressed as a linear combination of them. Such a vector is redundant: it can be obtained by combining the other vectors, so it adds no new information. Let's go deeper into the concepts of linear combinations and linear dependence in the next section.
Formal definition
Now we are ready to define our new concepts formally. Consider some vectors $v_1, v_2, \dots, v_n$ and scalars $a_1, a_2, \dots, a_n$. The vector

$$a_1 v_1 + a_2 v_2 + \dots + a_n v_n$$

is called a linear combination of $v_1, v_2, \dots, v_n$.
If all scalars are zero, the linear combination is called trivial. If at least one scalar is not zero, it is called non-trivial. For example, the following combination is trivial:

$$0 \cdot v_1 + 0 \cdot v_2$$

And the following combination is non-trivial:

$$2 v_1 - 3 v_2$$
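To make these definitions concrete, here is a minimal sketch in Python with NumPy; the vector values and the coefficients $2$ and $-3$ are just illustrative choices:

```python
import numpy as np

# Two illustrative vectors (arbitrary example values).
v1 = np.array([1.0, 2.0])
v2 = np.array([3.0, -1.0])

# A non-trivial linear combination: at least one scalar is non-zero.
a1, a2 = 2.0, -3.0
print(a1 * v1 + a2 * v2)  # [-7.  7.]

# The trivial combination uses all-zero scalars and always gives the zero vector.
print(0.0 * v1 + 0.0 * v2)  # [0. 0.]
```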
Now let's use our understanding of linear combinations and triviality to define linear dependence. A set of vectors is linearly dependent if one of the vectors in the set can be represented by some non-trivial linear combination of the other vectors in the set. More formally, we can say that vectors $v_1, v_2, \dots, v_n$ are linearly dependent if there exist scalars $a_1, a_2, \dots, a_n$, at least one of which is not zero, such that:

$$a_1 v_1 + a_2 v_2 + \dots + a_n v_n = 0$$

For example, the vectors $v_1 = (1, 2)$ and $v_2 = (2, 4)$ are linearly dependent. Indeed, we can find the coefficients $a_1 = 2$ and $a_2 = -1$ such that their linear combination is zero:

$$2 \cdot (1, 2) + (-1) \cdot (2, 4) = (0, 0)$$
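In code, one common way to detect dependence is a rank check: stack the vectors into a matrix and compare its rank with the number of vectors. A minimal NumPy sketch (the helper name `linearly_dependent` is ours, not a library function):

```python
import numpy as np

def linearly_dependent(vectors):
    """Return True if the given vectors are linearly dependent.

    If the rank of the matrix whose rows are the vectors is smaller
    than the number of vectors, some non-trivial combination is zero.
    """
    matrix = np.array(vectors, dtype=float)
    return np.linalg.matrix_rank(matrix) < len(vectors)

print(linearly_dependent([[1, 2], [2, 4]]))  # True: 2*(1, 2) - (2, 4) = 0
print(linearly_dependent([[1, 0], [0, 1]]))  # False
```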
We know about linear dependence, but how can we define linear independence? Pretty simple: if a set of vectors is not linearly dependent, then it is linearly independent. Not good enough? Well, formally we can say that vectors $v_1, v_2, \dots, v_n$ are linearly independent if the only scalars satisfying

$$a_1 v_1 + a_2 v_2 + \dots + a_n v_n = 0$$

are $a_1 = a_2 = \dots = a_n = 0$.

That is, a set of vectors is linearly independent if the trivial linear combination, and no other, equals the zero vector.
For example, the following vectors are linearly independent, because only the trivial combination of them gives the zero vector (try it yourself!):

$$v_1 = (1, 2), \quad v_2 = (3, 4)$$
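For a square set of vectors (as many vectors as dimensions), you can also check independence via the determinant: it is non-zero exactly when the vectors are independent. A quick sketch with the example values above:

```python
import numpy as np

# Rows are the example vectors v1 = (1, 2) and v2 = (3, 4).
matrix = np.array([[1.0, 2.0],
                   [3.0, 4.0]])

# det = 1*4 - 2*3 = -2 != 0, so only the trivial combination gives zero.
print(np.linalg.det(matrix))                       # -2.0 (up to rounding)
print(not np.isclose(np.linalg.det(matrix), 0.0))  # True: independent
```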
Geometric intuition
There is a pretty intuitive geometric interpretation of linear dependence. Geometrically, two non-zero vectors are linearly dependent if and only if they are collinear, that is, they lie on a single straight line. Algebraically, two vectors $u$ and $v$ are collinear if $u = k v$ for some scalar $k$.
Here is an example with three vectors: $a = (2, 2)$, $b = (1, 1)$, and $c = (-1, 2)$.

What do we know about these vectors? One thing is that $a$ and $b$ lie on the same line, so they are collinear: $a$ can be expressed as $2b$, and $b$ as $\frac{1}{2}a$. On the other hand, the vector $c$ is not collinear with the other two and is not linearly dependent on them. So, no matter which number we pick, we won't be able to multiply $a$ or $b$ by it to get $c$.
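This collinearity test is easy to automate: for 2D vectors $u$ and $v$, the quantity $u_1 v_2 - u_2 v_1$ is zero exactly when one is a scalar multiple of the other. A minimal sketch with the example vectors (the helper `collinear` is ours):

```python
import numpy as np

def collinear(u, v):
    """Two 2D vectors are collinear iff u[0]*v[1] - u[1]*v[0] == 0."""
    return np.isclose(u[0] * v[1] - u[1] * v[0], 0.0)

a = np.array([2.0, 2.0])
b = np.array([1.0, 1.0])
c = np.array([-1.0, 2.0])

print(collinear(a, b))  # True:  a = 2 * b
print(collinear(a, c))  # False: no scalar k gives c = k * a
```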
Now let's get back to linear dependence. Any three vectors in 2D space are linearly dependent. Let there be three 2D vectors:

$$u = (u_1, u_2), \quad v = (v_1, v_2), \quad w = (w_1, w_2)$$

Then it is always possible to find scalars $a_1, a_2, a_3$, not all zero, that solve the following system of equations:

$$\begin{cases} a_1 u_1 + a_2 v_1 + a_3 w_1 = 0 \\ a_1 u_2 + a_2 v_2 + a_3 w_2 = 0 \end{cases}$$

The system has only two equations but three unknowns, so a non-trivial solution always exists. Listing the solutions in all possible cases (for example, when one of the vectors is the zero vector) would be tedious, but you can try and see it for yourself.

For example, in the case of $u = (1, 0)$, $v = (0, 1)$, and $w = (1, 1)$:

$$1 \cdot (1, 0) + 1 \cdot (0, 1) - 1 \cdot (1, 1) = (0, 0)$$
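You can also let the computer find such coefficients: any non-zero vector in the null space of the matrix whose columns are $u$, $v$, $w$ supplies the scalars $a_1, a_2, a_3$. One way to obtain it is via the singular value decomposition, sketched below with the example vectors:

```python
import numpy as np

# Columns are the example vectors u = (1, 0), v = (0, 1), w = (1, 1).
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])

# The last right-singular vector spans the null space of A (the matrix
# has rank 2, so the null space is one-dimensional).
_, _, vt = np.linalg.svd(A)
coeffs = vt[-1]

print(coeffs)      # proportional to (1, 1, -1), up to sign and scaling
print(A @ coeffs)  # approximately [0. 0.]
```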
Geometrically, out of three vectors on a plane (i.e. directions of movement), one can always be expressed as a linear combination of the other two. Hence any plane is a two-dimensional space.
We looked at 2D space, but what about 3D space? Considering everything just said, it is easy to interpret linear dependence geometrically in 3D space: three vectors are linearly dependent if and only if they are coplanar, i.e. they lie on the same plane.
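Coplanarity, in turn, can be checked with the determinant of the 3×3 matrix formed by the three vectors (their scalar triple product): it is zero exactly when the vectors lie in one plane. A minimal sketch with illustrative values (the helper `coplanar` is ours):

```python
import numpy as np

def coplanar(u, v, w):
    """Three 3D vectors are coplanar iff det([u, v, w]) == 0."""
    return np.isclose(np.linalg.det(np.array([u, v, w], dtype=float)), 0.0)

print(coplanar([1, 0, 0], [0, 1, 0], [1, 1, 0]))  # True:  all in the xy-plane
print(coplanar([1, 0, 0], [0, 1, 0], [0, 0, 1]))  # False: they span 3D space
```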
Conclusion
Now let's quickly sum up the main points of what you have seen and learned about linear combinations and linear dependence:
- Vectors are linearly dependent if you can express one of them as a sum of the other vectors multiplied by some scalars; equivalently, if some linear combination of the vectors with at least one scalar different from zero equals the zero vector.
- A linear combination follows the same pattern: we take a sum of vectors multiplied by some scalars. If all scalars are equal to zero, the linear combination is trivial. Otherwise, if one or more of the scalars are different from zero, the linear combination is non-trivial.
- If you interpret linear dependence geometrically, you will see that two vectors are linearly dependent if they are collinear in 2D, three vectors are linearly dependent if they are coplanar in 3D, and any $n + 1$ vectors are linearly dependent in $n$-dimensional space.
- You can also see that if the components of two vectors are proportional, then they are linearly dependent.