Linear independence

Vectors in R2
Three vectors: Consider the set of vectors v1 = (1, 1), v2 = (-3, 2) and v3 = (2, 4). They are linearly dependent if there exist scalars a1, a2, a3, not all zero, such that
 * $$ a_1 \begin{Bmatrix} 1\\1\end{Bmatrix} + a_2 \begin{Bmatrix} -3\\2\end{Bmatrix} + a_3 \begin{Bmatrix} 2\\4\end{Bmatrix} =\begin{Bmatrix} 0\\0\end{Bmatrix},$$

or
 * $$ \begin{bmatrix} 1 & -3 & 2 \\ 1 & 2 & 4 \end{bmatrix}\begin{Bmatrix} a_1\\ a_2 \\ a_3 \end{Bmatrix}= \begin{Bmatrix} 0\\0\end{Bmatrix}.$$

Row reduce this matrix equation by subtracting the first equation from the second to obtain,
 * $$ \begin{bmatrix} 1 & -3 & 2 \\ 0 & 5 & 2 \end{bmatrix}\begin{Bmatrix} a_1\\ a_2 \\ a_3 \end{Bmatrix}= \begin{Bmatrix} 0\\0\end{Bmatrix}.$$

Continue the row reduction by (i) dividing the second equation by 5, and then (ii) multiplying it by 3 and adding the result to the first equation, that is
 * $$ \begin{bmatrix} 1 & 0 & 16/5 \\ 0 & 1 & 2/5 \end{bmatrix}\begin{Bmatrix} a_1\\ a_2 \\ a_3 \end{Bmatrix}= \begin{Bmatrix} 0\\0\end{Bmatrix}.$$

We can now rearrange this equation to obtain
 * $$ \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}\begin{Bmatrix} a_1\\ a_2 \end{Bmatrix}= \begin{Bmatrix} a_1\\ a_2 \end{Bmatrix}=-a_3\begin{Bmatrix} 16/5\\2/5\end{Bmatrix}.$$

This shows that nonzero scalars ai exist; in particular, v3 = (2, 4) can be written as a linear combination of v1 = (1, 1) and v2 = (-3, 2). Thus, the three vectors are linearly dependent.
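The row reduction above can be checked with exact rational arithmetic; the following is a minimal Python sketch, using only the standard library, that repeats the same three elimination steps:

```python
from fractions import Fraction

# Coefficient matrix whose columns are v1, v2, v3.
M = [[Fraction(1), Fraction(-3), Fraction(2)],
     [Fraction(1), Fraction(2), Fraction(4)]]

# (1) Subtract the first row from the second.
M[1] = [b - a for a, b in zip(M[0], M[1])]       # -> [0, 5, 2]
# (2) Divide the second row by its pivot.
M[1] = [x / M[1][1] for x in M[1]]               # -> [0, 1, 2/5]
# (3) Multiply the second row by 3 and add it to the first.
M[0] = [a + 3 * b for a, b in zip(M[0], M[1])]   # -> [1, 0, 16/5]

print(M)
```

The third column of the reduced matrix gives a1 = -(16/5) a3 and a2 = -(2/5) a3, so any nonzero a3 produces a nontrivial solution.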

Two vectors: Now consider whether the two vectors v1 = (1, 1) and v2 = (-3, 2) are linearly dependent; that is, check whether
 * $$ a_1 \begin{Bmatrix} 1\\1\end{Bmatrix} + a_2 \begin{Bmatrix} -3\\2\end{Bmatrix} =\begin{Bmatrix} 0\\0\end{Bmatrix},$$

or
 * $$ \begin{bmatrix} 1 & -3 \\ 1 & 2  \end{bmatrix}\begin{Bmatrix} a_1\\ a_2 \end{Bmatrix}= \begin{Bmatrix} 0\\0\end{Bmatrix}.$$

The same row reduction presented above yields
 * $$ \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}\begin{Bmatrix} a_1\\ a_2 \end{Bmatrix}= \begin{Bmatrix} 0\\0\end{Bmatrix}.$$

This shows that only the trivial solution a1 = a2 = 0 exists, so v1 = (1, 1) and v2 = (-3, 2) are linearly independent.

Alternative method using determinants
An alternative method uses the fact that n vectors in $$\mathbb{R}^n$$ are linearly independent if and only if the determinant of the matrix formed by taking the vectors as its columns is non-zero.

In this case, the matrix formed by the vectors is
 * $$A = \begin{bmatrix}1&-3\\1&2\end{bmatrix} . \,\!$$

We may write a linear combination of the columns as
 * $$ A \Lambda = \begin{bmatrix}1&-3\\1&2\end{bmatrix} \begin{bmatrix}\lambda_1 \\ \lambda_2 \end{bmatrix} . \,\!$$

We are interested in whether AΛ = 0 for some nonzero vector Λ. This depends on the determinant of A, which is
 * $$ \det A = 1\cdot2 - 1\cdot(-3) = 5 \ne 0 . \,\!$$

Since the determinant is non-zero, the vectors (1, 1) and (−3, 2) are linearly independent.
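The determinant test is easy to reproduce; a short Python sketch:

```python
def det2(m):
    """Determinant of a 2x2 matrix given as nested lists."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

# Columns are the vectors (1, 1) and (-3, 2).
A = [[1, -3],
     [1, 2]]

print(det2(A))  # 5: nonzero, so the columns are linearly independent
```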

Otherwise, suppose we have m vectors of n coordinates, with m < n. Then A is an n×m matrix and Λ is a column vector with m entries, and we are again interested in AΛ = 0. As we saw previously, this is equivalent to a list of n equations. Consider the first m rows of A, that is, the first m equations; any solution of the full list of equations must also satisfy the reduced list. In fact, if $$\langle i_1,\dots,i_m \rangle$$ is any list of m rows, then the equation must hold for those rows:
 * $$ A_{\langle i_1,\dots,i_m \rangle} \Lambda = \mathbf{0} . \,\!$$

Furthermore, the reverse is true. That is, we can test whether the m vectors are linearly dependent by testing whether
 * $$ \det A_{\langle i_1,\dots,i_m \rangle} = 0 \,\!$$

for all possible lists of m rows. (In case m = n, this requires only one determinant, as above. If m > n, then it is a theorem that the vectors must be linearly dependent.) This fact is valuable for theory; in practical calculations more efficient methods are available.
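The all-minors test can be sketched in Python; the two vectors in R3 below are hypothetical, chosen only to illustrate the m < n case:

```python
from itertools import combinations

def det2(m):
    """Determinant of a 2x2 matrix given as nested lists."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

# Two hypothetical vectors in R^3, stored as the columns of a 3x2 matrix.
A = [[1, 2],
     [2, 4],
     [3, 5]]

# The columns are dependent iff det A_<i1,i2> = 0 for EVERY pair of rows.
minors = [det2([A[i], A[j]]) for i, j in combinations(range(3), 2)]
print(minors)  # [0, -1, -2]: not all zero, so the columns are independent
```

Note that one minor vanishes here (the first two rows are proportional), which is why every minor must be checked before concluding dependence.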

Example II
Let V = $$\mathbb{R}^n$$ and consider the following elements in V:


 * $$\begin{matrix}

\mathbf{e}_1 & = & (1,0,0,\ldots,0) \\ \mathbf{e}_2 & = & (0,1,0,\ldots,0) \\ & \vdots \\ \mathbf{e}_n & = & (0,0,0,\ldots,1).\end{matrix}$$

Then e1, e2, ..., en are linearly independent.

Proof
Suppose that a1, a2, ..., an are elements of R such that


 * $$ a_1 \mathbf{e}_1 + a_2 \mathbf{e}_2 + \cdots + a_n \mathbf{e}_n = 0 . \,\!$$

Since
 * $$ a_1 \mathbf{e}_1 + a_2 \mathbf{e}_2 + \cdots + a_n \mathbf{e}_n = (a_1 ,a_2 ,\ldots, a_n), \,\!$$

then ai = 0 for all i in {1, ..., n}.
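The key step of the proof, that a linear combination of the ei simply reproduces the coefficients as a coordinate tuple, can be checked directly; a small Python sketch for n = 4 with arbitrary sample coefficients:

```python
n = 4
# Standard basis vectors e_1, ..., e_n as lists.
e = [[1 if j == i else 0 for j in range(n)] for i in range(n)]

a = [3, -1, 0, 7]  # arbitrary sample coefficients

# The combination a_1*e_1 + ... + a_n*e_n, computed coordinate by coordinate.
combo = [sum(a[i] * e[i][j] for i in range(n)) for j in range(n)]
print(combo)  # [3, -1, 0, 7]: equals the coefficient tuple itself
```

So the combination vanishes only when every coefficient is zero, which is exactly the independence condition.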

Example III
Let V be the vector space of all functions of a real variable t. Then the functions $$e^t$$ and $$e^{2t}$$ in V are linearly independent.

Proof
Suppose a and b are two real numbers such that


 * $$ a e^t + b e^{2t} = 0 \,\!$$

for all values of t. We need to show that a = 0 and b = 0. To do this, we divide through by $$e^t$$ (which is never zero) and subtract a to obtain
 * $$ b e^t = -a . \,\!$$

In other words, the function $$b e^t$$ must be independent of t, which occurs only when b = 0. It follows that a is also zero.
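A different route to the same conclusion, mentioned here as a supplement to the proof above: if the identity held for all t, it would hold in particular at t = 0 and t = 1, giving a 2×2 linear system for (a, b) whose determinant is nonzero. A quick numerical check in Python:

```python
from math import exp

# Sample a*e^t + b*e^(2t) = 0 at t = 0 and t = 1:
#   t = 0:  a*1 + b*1     = 0
#   t = 1:  a*e + b*e^2   = 0
M = [[exp(0), exp(0)],
     [exp(1), exp(2)]]

d = M[0][0] * M[1][1] - M[0][1] * M[1][0]
print(d)  # e^2 - e, nonzero, so the system forces a = b = 0
```

Since the determinant e^2 − e is nonzero, the only solution is a = b = 0, confirming independence.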

Example IV
The following vectors in R4 are linearly dependent.

 * $$ \begin{bmatrix}1\\4\\2\\-3\end{bmatrix}, \begin{bmatrix}7\\10\\-4\\-1\end{bmatrix} \text{ and } \begin{bmatrix}-2\\1\\5\\-4\end{bmatrix}. $$

Proof
We need to find scalars $$\lambda_1$$, $$\lambda_2$$ and $$\lambda_3$$, not all zero, such that



 * $$ \lambda_1 \begin{bmatrix}1\\4\\2\\-3\end{bmatrix}+ \lambda_2 \begin{bmatrix}7\\10\\-4\\-1\end{bmatrix}+ \lambda_3 \begin{bmatrix}-2\\1\\5\\-4\end{bmatrix}= \begin{bmatrix}0\\0\\0\\0\end{bmatrix}. $$

Forming the simultaneous equations:



 * $$ \begin{align} \lambda_1& \;+ 7\lambda_2& &- 2\lambda_3& = 0\\ 4\lambda_1& \;+ 10\lambda_2& &+ \lambda_3& = 0\\ 2\lambda_1& \;- 4\lambda_2& &+ 5\lambda_3& = 0\\ -3\lambda_1& \;- \lambda_2& &- 4\lambda_3& = 0 \end{align} $$

we can solve (using, for example, Gaussian elimination) to obtain:

 * $$ \begin{align} \lambda_1 &= -3 \lambda_3 /2 \\ \lambda_2 &= \lambda_3/2 \end{align} $$

where $$\lambda_3$$ can be chosen arbitrarily.

Since a nontrivial solution exists (for example, $$\lambda_3 = 2$$ gives $$\lambda_1 = -3$$ and $$\lambda_2 = 1$$), the vectors are linearly dependent.
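The Gaussian elimination referred to above can be carried out with exact rational arithmetic; the following is a self-contained Python sketch that reduces the 4×3 homogeneous system to reduced row echelon form:

```python
from fractions import Fraction

# The 4x3 homogeneous system from the example, one row per equation.
A = [[1, 7, -2],
     [4, 10, 1],
     [2, -4, 5],
     [-3, -1, -4]]
A = [[Fraction(x) for x in row] for row in A]

rows, cols = len(A), len(A[0])
pivot_row = 0
for col in range(cols):
    # Find a row with a nonzero entry in this column and swap it up.
    for r in range(pivot_row, rows):
        if A[r][col] != 0:
            A[pivot_row], A[r] = A[r], A[pivot_row]
            break
    else:
        continue  # free column, no pivot here
    # Scale the pivot row, then eliminate the column everywhere else.
    A[pivot_row] = [x / A[pivot_row][col] for x in A[pivot_row]]
    for r in range(rows):
        if r != pivot_row and A[r][col] != 0:
            f = A[r][col]
            A[r] = [x - f * p for x, p in zip(A[r], A[pivot_row])]
    pivot_row += 1

print(A[:2])  # rows [1, 0, 3/2] and [0, 1, -1/2]; the last two rows are zero
```

The first row says λ1 + (3/2)λ3 = 0 and the second λ2 − (1/2)λ3 = 0, which reproduces the solution above with λ3 free.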