How to Determine if a Vector Set is Linearly Independent


The odds favor that by the time someone has reached this article, myself included, they have spent at least a brief (possibly frustrated) moment questioning the practical applications of linear combinations, linear independence, and linear algebra in general. In a sentence: these concepts allow us to mathematically understand and represent multidimensional coordinate systems. If you're looking for a quick explanation for a homework problem, feel free to skim through the bolded topics for help in specific areas of concern. Otherwise, here's something to think about. Imagine maneuvering in three-dimensional space. An instantaneous position can be described using a three-dimensional coordinate system. When following a consistent pattern of movement, an instantaneous position can be described with a fourth dimension: time. Suppose you have just landed the snowball throw of a lifetime and hit a target that was moving across your view plane, increasing the distance between you, and climbing uphill. You have properly estimated the intersection of two moving objects in four dimensions. This is not always an easy task to execute. Now make this throw using a fifth dimension. Most people can't comprehend the existence of a fifth dimension, let alone maneuver in it. With linear algebra we can attempt to understand and represent the relationships between these dimensions.

Important Definitions

Linear Independence
A set of vectors {V_1, V_2, . . . , V_k } is linearly independent if the equation x_1 * V_1 + x_2 * V_2 + . . . + x_k * V_k = 0 has ONLY the trivial (zero) solution <x_1, x_2, . . . , x_k > = <0, 0, . . . , 0 >.

Linear Dependence
Alternatively, if the equation above has a solution in which some x_i \neq 0 (a nontrivial solution), the set of vectors is said to be linearly dependent.

Determining Linear Independence

By row reducing a coefficient matrix created from our vectors {V_1, V_2, . . . , V_k }, we can solve for <x_1, x_2, . . . , x_k >. Then, to classify the set of vectors as linearly independent or dependent, we compare the solution to the definitions above.
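This check can be sketched in plain Python with no libraries: place the vectors in the columns of a matrix, row reduce it, and count the pivots. The set is linearly independent exactly when every column has a pivot, i.e. when the rank equals the number of vectors. The helper names below (`rref`, `is_linearly_independent`) are my own, not from any particular library; the example at the bottom uses the vectors from the worked problem below.

```python
from fractions import Fraction

def rref(rows):
    """Reduce a matrix (a list of rows) to reduced row echelon form."""
    m = [[Fraction(x) for x in row] for row in rows]
    nrows, ncols = len(m), len(m[0]) if m else 0
    pivot_row = 0
    for col in range(ncols):
        # Find a row at or below pivot_row with a nonzero entry in this column.
        pr = next((r for r in range(pivot_row, nrows) if m[r][col] != 0), None)
        if pr is None:
            continue  # no pivot in this column -> free variable
        m[pivot_row], m[pr] = m[pr], m[pivot_row]
        pivot = m[pivot_row][col]
        m[pivot_row] = [x / pivot for x in m[pivot_row]]  # scale pivot to 1
        for r in range(nrows):  # clear this column in every other row
            if r != pivot_row and m[r][col] != 0:
                f = m[r][col]
                m[r] = [a - f * b for a, b in zip(m[r], m[pivot_row])]
        pivot_row += 1
    return m

def is_linearly_independent(vectors):
    """Independent iff the rank equals the number of vectors."""
    # Each vector becomes one column of the coefficient matrix.
    matrix = [list(col) for col in zip(*vectors)]
    rank = sum(1 for row in rref(matrix) if any(v != 0 for v in row))
    return rank == len(vectors)

v1, v2, v3, v4 = (1, 1, 1), (2, 2, 2), (0, 0, 5), (1, 2, 3)
print(is_linearly_independent([v1, v2, v3, v4]))  # False: the set is dependent
print(is_linearly_independent([v1, v3, v4]))      # True
```

Note that four vectors in R^3 can never be independent: the rank is at most 3, so with k = 4 the test fails before any arithmetic is needed.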

Determine whether the following set of vectors is linearly independent:

\begin{bmatrix} 1\\1\\1\end{bmatrix} , \begin{bmatrix} 2\\2\\2\end{bmatrix} , \begin{bmatrix} 0\\0\\5\end{bmatrix} , \begin{bmatrix} 1\\2\\3\end{bmatrix}

Setting up a Corresponding System of Equations and Finding its RREF Matrix

We need to understand that our vectors can be represented as a system of equations, each equal to zero, to satisfy the equation x_1 * V_1 + x_2 * V_2 + . . . + x_k * V_k = 0 from our definition of linear independence. For our four vectors, the system looks like this:

1*x_1 + 2*x_2 + 0*x_3 + 1*x_4 = 0
1*x_1 + 2*x_2 + 0*x_3 + 2*x_4 = 0
1*x_1 + 2*x_2 + 5*x_3 + 3*x_4 = 0

Notice that I have simply taken the entries of the given vectors as coefficients and multiplied them by four variables (the number of variables equals the number of vectors in the given set). The equations are set equal to zero to allow us to test for linear independence. From here, create a coefficient matrix and perform row operations to reduce the matrix to reduced row echelon form (RREF).

rref \left[ \begin{array}{cccc|c} 1&2&0&1&0\\1&2&0&2&0\\1&2&5&3&0\end{array}\right] = \left[ \begin{array}{cccc|c} 1&2&0&0&0\\0&0&1&0&0\\0&0&0&1&0\end{array}\right]
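As a sanity check, this row reduction can be reproduced step by step in plain Python (a sketch; the augmented zero column is omitted since row operations never change it):

```python
from fractions import Fraction

# The coefficient matrix built from the four vectors.
M = [[Fraction(v) for v in row]
     for row in ([1, 2, 0, 1], [1, 2, 0, 2], [1, 2, 5, 3])]

def addrow(m, dst, src, factor):
    # dst <- dst + factor * src
    m[dst] = [a + factor * b for a, b in zip(m[dst], m[src])]

addrow(M, 1, 0, -1)               # R2 <- R2 - R1        -> [0, 0, 0, 1]
addrow(M, 2, 0, -1)               # R3 <- R3 - R1        -> [0, 0, 5, 2]
M[1], M[2] = M[2], M[1]           # swap R2 and R3
M[1] = [x / 5 for x in M[1]]      # R2 <- R2 / 5         -> [0, 0, 1, 2/5]
addrow(M, 1, 2, Fraction(-2, 5))  # R2 <- R2 - (2/5) R3  -> [0, 0, 1, 0]
addrow(M, 0, 2, -1)               # R1 <- R1 - R3        -> [1, 2, 0, 0]

print([[int(x) for x in row] for row in M])
# [[1, 2, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
```

The exact sequence of row operations you choose can differ; the RREF you arrive at is unique.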

Finding the Solution of the RREF Matrix

Finding the solution of the RREF matrix may be the more difficult step in this process. However, it becomes routine after a few simple steps.

1) Identify the free variables in the matrix. Free variables correspond to columns that contain no pivot. Pivot variables correspond to the first non-zero entry in each row, and since we have taken the RREF of our matrix, every pivot entry is 1. By locating all free variables (or by eliminating all pivot variables) we find that x_2 is our only free variable.

2) Write free variables into your solution. The variable x_2 can be written into our solution vector as itself, but we will represent it with another variable name (i.e. t ) so that our solution is in parametric form. Multiple free variables are represented with multiple variable names (i.e. s, t ). After this step your solution vector should look like this: <x_1, x_2, x_3 , x_4 > = <? , t, ? , ? >.

3) Solve for the pivot variables. Each pivot variable will be either a constant (i.e. 0, 6) or a function of the free variables (i.e. 3t - 4). The first row of the RREF matrix says x_1 + 2*x_2 = 0, so x_1 = -2t; the second and third rows give x_3 = 0 and x_4 = 0.

4) Complete the solution vector. Placing the values we just calculated into our solution vector gives <x_1, x_2, x_3 , x_4 > = <-2t , t, 0 , 0 >.
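Steps 1 through 4 can be sketched in code: starting from the RREF rows, find the free columns, then build one solution vector per free variable by setting that free variable to 1 (i.e. t = 1) and reading the pivot entries off the matrix. The helper name `parametric_basis` is my own, not a library routine.

```python
def parametric_basis(R, ncols):
    """One basis vector of the solution set per free variable."""
    pivots = {}  # pivot column -> row that owns it (step 1)
    for i, row in enumerate(R):
        for j, v in enumerate(row):
            if v != 0:
                pivots[j] = i
                break
    free_cols = [j for j in range(ncols) if j not in pivots]
    basis = []
    for f in free_cols:
        vec = [0] * ncols
        vec[f] = 1  # step 2: set this free variable to 1 (t = 1)
        for col, row in pivots.items():
            vec[col] = -R[row][f]  # step 3: solve for each pivot variable
        basis.append(vec)  # step 4: the completed solution vector
    return basis

R = [[1, 2, 0, 0],  # the RREF matrix from above
     [0, 0, 1, 0],
     [0, 0, 0, 1]]
print(parametric_basis(R, 4))  # [[-2, 1, 0, 0]]
```

The single basis vector [-2, 1, 0, 0] is exactly the parametric solution <x_1, x_2, x_3, x_4> = t<-2, 1, 0, 0>.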


Since there are solutions in which not every x_i = 0, the given set of vectors is linearly dependent. The linear dependence relation is written by multiplying each entry of our solution vector by the corresponding vector from the given set: -2t * V_1 + t * V_2 + 0 * V_3 + 0 * V_4 = 0 . We can also conclude that any vectors with non-zero coefficients in this relation are linear combinations of each other. Setting t = 1 gives -2 * V_1 + V_2 = 0, i.e. V_2 = 2 * V_1, so V_2 is a linear combination of V_1.
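The dependence relation is easy to verify numerically with the vectors from the example (here with t = 1):

```python
# Verify the dependence relation -2*V_1 + 1*V_2 + 0*V_3 + 0*V_4 = 0.
V1, V2, V3, V4 = [1, 1, 1], [2, 2, 2], [0, 0, 5], [1, 2, 3]
coeffs = [-2, 1, 0, 0]  # the solution vector with t = 1

combo = [sum(c * v[i] for c, v in zip(coeffs, (V1, V2, V3, V4)))
         for i in range(3)]
print(combo)  # [0, 0, 0]  -> the combination really is the zero vector
```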


