
You are already familiar with vectors and the key role they play in linear algebra and other fields. A vector is something that has magnitude and direction, which makes it useful for describing many quantities that come up in programming, such as the velocity of a vehicle in a video game.

However, vectors are rarely studied individually. We usually want to know how they are related and how they affect each other. For example, we may want to model whether the velocity of the wind affects that of a car. This is where the concept of linear dependence comes in handy: it allows us to understand and describe relationships between vectors, and it is a fundamental concept in linear algebra and other areas of math and science.

What is linear dependence?

Let's try to understand the concept of linear dependence through an analogy. Suppose that you love painting and are considering which color paints to buy. You don't have a lot of money, so you want to be strategic when picking the colors.

Does it make sense to buy red, yellow, and orange? Well, we can make orange by mixing red and yellow paints, so buying orange paint seems redundant. What if you buy red, yellow, and blue? In this case, no two of these colors can be mixed to produce the third, so we can say these colors are independent of each other.

Red, yellow, and orange form a dependent set of colors while red, yellow, and blue are an independent set

A set of colored paints is dependent if some of them can be mixed to produce another. Similarly, a vector depends on other vectors if it can be expressed as a linear combination of them. Such a vector is redundant: it can be obtained by combining the other vectors, so it adds no new information. Let's go deeper into the concepts of linear combinations and linear dependence in the next section.

Formal definition

Now we are ready to define our new concepts formally. Consider some vectors $v_1, v_2, \ldots, v_n$ and scalars $c_1, c_2, \ldots, c_n$. The vector

$$w = c_1 v_1 + c_2 v_2 + \ldots + c_n v_n$$

is called a linear combination of $v_1, v_2, \ldots, v_n$.
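To make this concrete, here is a minimal Python sketch that computes a linear combination component by component (plain lists, no libraries; the helper name `linear_combination` is our own, not a standard API):

```python
def linear_combination(scalars, vectors):
    """Return c_1*v_1 + ... + c_n*v_n, computed component by component."""
    dim = len(vectors[0])
    return [sum(c * v[i] for c, v in zip(scalars, vectors))
            for i in range(dim)]

# w = 2*v1 + 1*v2 for two 2D vectors
v1, v2 = [2, 3], [1, 1.5]
print(linear_combination([2, 1], [v1, v2]))  # [5, 7.5]
```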

If all scalars are zero, the linear combination is called trivial. If at least one scalar is not zero, it is called non-trivial. For example, the following combination is trivial.

Breakdown of example linear combination into parts

And the following combination is non-trivial.

$$\textcolor{red}{1}\cdot \begin{pmatrix} 2 \\ 3 \end{pmatrix}-\textcolor{red}{3}\cdot \begin{pmatrix} 1 \\ 1.5 \end{pmatrix} = \textcolor{blue}{\begin{pmatrix} -1 \\ -1.5 \end{pmatrix}}$$

Now let's use our understanding of linear combinations and triviality to define linear dependence. A set of vectors is linearly dependent if one of the vectors in the set can be represented by some non-trivial linear combination of the other vectors in the set. More formally, we can say that $n$ vectors $x_1, x_2, \ldots, x_n$ are linearly dependent if there exist scalars $c_1, c_2, \ldots, c_n$, at least one of which is not zero, such that

$$c_1 x_1 + c_2 x_2 + \ldots + c_n x_n = 0.$$

For example, the vectors $x_1 = \begin{pmatrix} 2 \\ 3 \end{pmatrix}, \; x_2 = \begin{pmatrix} 1 \\ 1.5 \end{pmatrix}, \; x_3 = \begin{pmatrix} 3 \\ 2 \end{pmatrix}$ are linearly dependent. Indeed, we can find coefficients $c_1 = 1, \; c_2 = -2, \; c_3 = 0$ such that their linear combination is zero:

$$1\begin{pmatrix} 2 \\ 3 \end{pmatrix}-2 \begin{pmatrix} 1 \\ 1.5 \end{pmatrix} + 0\begin{pmatrix} 3 \\ 2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$$
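We can verify this with a couple of lines of Python (a plain-list sketch of the computation above, nothing library-specific):

```python
x1, x2, x3 = [2, 3], [1, 1.5], [3, 2]
c1, c2, c3 = 1, -2, 0  # at least one scalar is non-zero

# Compute c1*x1 + c2*x2 + c3*x3 component by component
combo = [c1 * x1[i] + c2 * x2[i] + c3 * x3[i] for i in range(2)]
print(all(v == 0 for v in combo))  # True: a non-trivial combination gives
                                   # zero, so the set is linearly dependent
```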

We know about linear dependence, but how can we define linear independence? Pretty simple: if a set of vectors is not linearly dependent, then it is linearly independent. Not good enough? Well, formally we can say that $n$ vectors $x_1, x_2, \ldots, x_n$ are linearly independent if the only scalars $c_1, c_2, \ldots, c_n$ satisfying $c_1 x_1 + c_2 x_2 + \ldots + c_n x_n = 0$ are $c_1 = c_2 = \ldots = c_n = 0$.

That is, a set of vectors is linearly independent if the trivial linear combination is the only linear combination of the vectors that equals the zero vector.

For example, the following vectors are linearly independent, because only the trivial combination of the vectors gives the zero vector (Try it yourself!):

$$0 \begin{pmatrix} 2 \\ 3 \end{pmatrix}+0 \begin{pmatrix} 3 \\ 10 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$$
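For pairs of 2D vectors there is a handy shortcut we can code up: the pair is linearly independent exactly when the determinant of the $2 \times 2$ matrix formed from them is non-zero. The helper below is our own sketch, not a standard library function:

```python
def independent_2d(u, v):
    """Two 2D vectors are linearly independent iff the determinant
    u[0]*v[1] - u[1]*v[0] is non-zero."""
    return u[0] * v[1] - u[1] * v[0] != 0

print(independent_2d([2, 3], [3, 10]))   # True:  2*10 - 3*3 = 11 != 0
print(independent_2d([2, 3], [1, 1.5]))  # False: the determinant is 0
```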

Geometric intuition

Linear dependence also has a pretty intuitive geometric interpretation. Geometrically, two non-zero vectors are linearly dependent if and only if they are collinear, that is, they lie on a single straight line. Algebraically, two vectors $\overrightarrow{p}$ and $\overrightarrow{q}$ are collinear if for some scalar $n$,

$$\overrightarrow{p} = n \cdot \overrightarrow{q}$$

Here is an example:

Example of linearly dependent and linearly independent vectors

As shown in the graph above, $\overrightarrow{AB} = \begin{pmatrix} 1 \\ 2 \end{pmatrix}$, $\overrightarrow{AC} = \begin{pmatrix} 2 \\ 4 \end{pmatrix}$, and $\overrightarrow{AD} = \begin{pmatrix} 2 \\ 2 \end{pmatrix}$.


What do we know about these vectors? One thing is that $\overrightarrow{AB}$ and $\overrightarrow{AC}$ lie on the same line, so they are collinear. $\overrightarrow{AB}$ can be expressed as $\frac{1}{2} \overrightarrow{AC}$, and $\overrightarrow{AC}$ as $2 \overrightarrow{AB}$. On the other hand, the vector $\overrightarrow{AD}$ is not collinear with the other two and is not linearly dependent on them. So, no matter which number we pick, we won't be able to multiply $\overrightarrow{AB}$ or $\overrightarrow{AC}$ by it to get $\overrightarrow{AD}$.
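In code, the collinearity test for 2D vectors boils down to a cross-product check. This sketch (our own helper, using the vectors from the graph) confirms the observations above:

```python
def collinear(p, q):
    """Non-zero 2D vectors are collinear iff their cross product is zero."""
    return p[0] * q[1] - p[1] * q[0] == 0

AB, AC, AD = [1, 2], [2, 4], [2, 2]
print(collinear(AB, AC))  # True:  AC = 2 * AB
print(collinear(AB, AD))  # False: no scalar turns AB into AD
```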

Now let's get back to linear dependence. Any three vectors in 2D space are linearly dependent. Let there be three 2D vectors:

$$x = \begin{pmatrix} x_1 \\ x_2 \end{pmatrix}, \; y = \begin{pmatrix} y_1 \\ y_2 \end{pmatrix}, \; z = \begin{pmatrix} z_1 \\ z_2 \end{pmatrix}.$$

Then it is always possible to find scalars $c_1, c_2, c_3$, not all zero, that satisfy the following system of equations:

$$\begin{cases} c_1 x_1 + c_2 y_1 +c_3 z_1 = 0 \\ c_1 x_2 + c_2 y_2 +c_3 z_2 = 0 \end{cases}$$

Listing the solutions in all possible cases ($z_1 = 0$, $z_1 \neq 0$, and so on) would be tedious, but you can try and see it for yourself.

For example, in the case of $\overrightarrow{AB}$, $\overrightarrow{AC}$, and $\overrightarrow{AD}$:

$$\begin{pmatrix} 1 \\ 2 \end{pmatrix} + 0\begin{pmatrix} 2 \\ 2 \end{pmatrix} - 0.5\begin{pmatrix} 2 \\ 4 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$$

Geometrically, out of three vectors on a plane (i.e. directions of movement), one can always be expressed as a combination of the other two. Hence any plane is a two-dimensional space.
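This claim is easy to reproduce in code: fix $c_3 = -1$ and solve the remaining $2 \times 2$ system with Cramer's rule. The sketch below (our own helper) assumes the first two vectors are independent, i.e. have a non-zero determinant; the coefficients it returns are just one of infinitely many valid choices:

```python
def dependence_coeffs(x, y, z):
    """Return (c1, c2, c3), not all zero, with c1*x + c2*y + c3*z = [0, 0].
    Fixes c3 = -1 and solves c1*x + c2*y = z; requires det(x, y) != 0."""
    det = x[0] * y[1] - x[1] * y[0]
    c1 = (z[0] * y[1] - z[1] * y[0]) / det
    c2 = (x[0] * z[1] - x[1] * z[0]) / det
    return c1, c2, -1

AB, AD, AC = [1, 2], [2, 2], [2, 4]
c1, c2, c3 = dependence_coeffs(AB, AD, AC)
combo = [c1 * AB[i] + c2 * AD[i] + c3 * AC[i] for i in range(2)]
print(all(v == 0 for v in combo))  # True: the three vectors are dependent
```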

Geometrically, out of three vectors on a plane, one can always be expressed as a combination of the other two

We looked at 2D space, but what about 3D space? Considering everything just said, it is easy to interpret linear dependence in 3D space geometrically. Three vectors are linearly dependent if and only if they are coplanar, i.e. they lie on the same plane.

So, in general, $n$ vectors are linearly dependent if and only if they lie in a common $(n-1)$-dimensional subspace. The geometric interpretation of this fact is that the number of dimensions of a vector space equals the maximum number of independent directions of movement.

Conclusion

Now let's quickly sum up the main points of what you have seen and learned about linear combinations and linear dependence:

  • Vectors are linearly dependent if you can express one of them as a sum of the other vectors multiplied by some scalars; equivalently, some combination of the vectors with at least one scalar different from $0$ equals the zero vector.
  • A linear combination is a sum of vectors multiplied by scalars. If a combination of the vectors can equal the zero vector only when all scalars are equal to $0$, the vectors admit only the trivial combination; if one or more of the scalars can be different from $0$, that combination is non-trivial.
  • If you interpret linear dependence geometrically, you will see that linearly dependent vectors are collinear in 2D and coplanar in 3D; in general, $n$ vectors are linearly dependent if they lie in a common $(n-1)$-dimensional subspace.
  • You can also see that if the components of two vectors are proportional, then they are linearly dependent.