Section 4.3 Linear Span

We have already defined the linear span of a set of vectors in \(\R^n\text{.}\) The same notion can be defined in any vector space.

Definition 4.3.1. Linear Span.

Let \(V\) be a vector space over \(\R\text{.}\) Let \(S=\{v_1,\ldots, v_k\}\) be a set of vectors in \(V\text{.}\) Then a vector \(v\) is called a linear combination of \(v_1,\ldots, v_k\) if there exist scalars \(\alpha_1,\ldots, \alpha_k\) such that
\begin{equation*} v = \alpha_1 v_1+\cdots+\alpha_kv_k\text{.} \end{equation*}
The set
\begin{equation*} L(S):=\{\alpha_1 v_1+\cdots+\alpha_kv_k:\alpha_1,\ldots,\alpha_k\in \R\} \end{equation*}
is called the linear span of \(v_1,\ldots, v_k\text{;}\) we also say that \(S\) spans \(L(S)\text{.}\) We know that \(L(S)\) is a subspace of \(V\text{.}\)
If \(S\) is any subset of \(V\) (possibly infinite), then \(L(S)\) is the set of all finite linear combinations of elements of \(S\text{.}\) That is, \(v\in L(S)\) if there exist \(k\in \N\text{,}\) vectors \(v_1,\ldots,v_k\in S\text{,}\) and scalars \(\alpha_1,\ldots,\alpha_k\in \R\) such that \(v = \alpha_1 v_1+\cdots+\alpha_kv_k\text{.}\)

Checkpoint 4.3.2.

Show that for any subset \(S\subset V\text{,}\) \(L(S)\) is a subspace of \(V\text{.}\)

Example 4.3.3.

Let \(v_1=(1,2,-1)\) and \(v_2=(3,1,2)\text{,}\) and let \(W = \{\alpha_1v_1+\alpha_2v_2:\alpha_1,\alpha_2\in \R\}\text{.}\) What is \(W\text{?}\) Can we identify it geometrically? Yes, it is a plane passing through the origin. Can \(W\) be written as \(W=\{(x,y,z):ax+by+cz=0\}\) for some \((a,b,c)\neq (0,0,0)\text{?}\) Can you find such \(a,b,c\text{?}\)
From the concept of the dot product, it is easy to identify \((a,b,c)\) as a vector which is orthogonal (perpendicular) to both \(v_1\) and \(v_2\text{.}\) In particular, we can take \((a,b,c)\) to be \(v_1\times v_2\text{,}\) the cross product of \(v_1\) and \(v_2\text{.}\)
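As a quick numerical check (a sketch using NumPy, not part of the text), we can compute the cross product and confirm it is orthogonal to both spanning vectors:

```python
import numpy as np

v1 = np.array([1, 2, -1])
v2 = np.array([3, 1, 2])

# A normal vector to the plane spanned by v1 and v2
n = np.cross(v1, v2)
print(n)  # [ 5 -5 -5], proportional to (1, -1, -1)

# Both spanning vectors are orthogonal to n
print(np.dot(n, v1), np.dot(n, v2))  # 0 0
```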
Suppose we do not want to use the cross product to find \((a,b,c)\text{.}\) What do we do then?
Suppose \((x,y,z)\in W\text{.}\) Then there exist scalars \(\alpha\) and \(\beta\) such that
\begin{equation*} \begin{pmatrix}x\\y\\z \end{pmatrix} =\alpha \begin{pmatrix}1\\2\\-1 \end{pmatrix} +\beta \begin{pmatrix}3\\1\\2 \end{pmatrix} = \begin{pmatrix}1 \amp 3\\2 \amp 1 \\-1 \amp 2 \end{pmatrix} \begin{pmatrix}\alpha\\\beta \end{pmatrix}\text{.} \end{equation*}
In particular, \(W\) is the image space of \(\begin{pmatrix}1 \amp 3\\2 \amp 1 \\-1 \amp 2 \end{pmatrix}\text{.}\)
We need to find \((a,b,c)\) such that \(ax+by+cz=0\) for any \((x,y,z)\in W\text{.}\) In particular, we have
\begin{equation*} a(\alpha+2\beta)+b(2\alpha+\beta)+c(-\alpha+2\beta)=0 \end{equation*}
for any \(\alpha,\beta\in \R\text{.}\) Note that \(\alpha\) and \(\beta\) are ours to choose, and we can choose them conveniently to find \(a, b, c\text{.}\) It is easy to see that
\begin{align*} a+2b-c \amp = 0, \text{ for \(\alpha=1,\beta=0\) }\\ 3a+b+2c \amp = 0, \text{ for \(\alpha=0,\beta=1\) } \end{align*}
This is the same as substituting \(v_1\) and \(v_2\) into the equation \(ax+by+cz=0\text{.}\)
In particular, we have \(a,b,c\) such that
\begin{equation*} \begin{pmatrix}1 \amp 2 \amp -1\\ 3 \amp 1 \amp 2 \end{pmatrix} \begin{pmatrix}a\\b\\c \end{pmatrix} =0\text{.} \end{equation*}
Thus \((a,b,c)\) lies in the kernel of \(B=\begin{pmatrix}1 \amp 2 \amp -1\\ 3 \amp 1 \amp 2 \end{pmatrix}\text{,}\) and \(W\) is the orthogonal complement of the kernel of \(B\text{.}\)
Solving the above equations, we can take \(a=1,b=-1,c=-1\) as one choice. This implies \(W\) is the plane \(x-y-z=0\text{.}\)
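The kernel computation can also be checked symbolically (a sketch assuming SymPy is available; not part of the text):

```python
from sympy import Matrix

B = Matrix([[1, 2, -1],
            [3, 1, 2]])

# A basis for the kernel of B gives the coefficients (a, b, c) of the plane
ker = B.nullspace()
print(ker[0].T)  # Matrix([[-1, 1, 1]]), proportional to (1, -1, -1)

# Both v1 and v2 satisfy the plane equation -x + y + z = 0
v1 = Matrix([1, 2, -1])
v2 = Matrix([3, 1, 2])
print(ker[0].dot(v1), ker[0].dot(v2))  # 0 0
```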

Example 4.3.4.

Let \(V=M_2(\R)\) with the usual addition and scalar multiplication, and let
\begin{equation*} S=\left\{ \begin{bmatrix}1 \amp 0 \\0 \amp 0 \end{bmatrix} , \begin{bmatrix}0 \amp 1 \\1 \amp 0 \end{bmatrix} , \begin{bmatrix}0 \amp 0 \\0 \amp 1 \end{bmatrix} \right\}\text{.} \end{equation*}
Then \(L(S)\) is the set of \(2\times 2\) symmetric matrices.
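To illustrate (a sketch with NumPy and arbitrarily chosen coefficients), a generic linear combination of the three matrices in \(S\) produces \(\begin{bmatrix}a \amp b\\ b \amp c\end{bmatrix}\text{,}\) an arbitrary symmetric matrix:

```python
import numpy as np

E1 = np.array([[1, 0], [0, 0]])
E2 = np.array([[0, 1], [1, 0]])
E3 = np.array([[0, 0], [0, 1]])

# Illustrative coefficients: a*E1 + b*E2 + c*E3 = [[a, b], [b, c]]
a, b, c = 2.0, -3.0, 5.0
M = a * E1 + b * E2 + c * E3
print(M)
print(np.array_equal(M, M.T))  # True: the combination is symmetric
```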

Example 4.3.5.

Let \(V=M_2(\R)\) with the usual addition and scalar multiplication, and let
\begin{equation*} S=\left\{ \begin{bmatrix}1 \amp 0 \\0 \amp 0 \end{bmatrix} , \begin{bmatrix}0 \amp 1 \\0 \amp 0 \end{bmatrix} , \begin{bmatrix}0 \amp 0 \\1 \amp 0 \end{bmatrix} , \begin{bmatrix}0 \amp 0 \\0 \amp 1 \end{bmatrix} \right\}\text{.} \end{equation*}
Then \(L(S)=M_2(\R)\text{.}\)

Example 4.3.6.

Let \(S =\{1,x,x^2,\ldots, x^n\}\text{.}\) Then \(L(S)\) is the set of all polynomials of degree less than or equal to \(n\text{.}\)

Checkpoint 4.3.7.

Let \(S_1,S_2\subset V\) such that \(S_1\subset S_2\text{.}\) Then show that \(L(S_1)\) is a subspace of \(L(S_2)\text{.}\)

Checkpoint 4.3.8.

Let \(u,v\in V\text{.}\) Then show that \(L(\{u,v\})=L(\{u-v,2u+3v\})\text{.}\) Can you generalize this?
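Hint (one direction, as a sketch): each of \(u\) and \(v\) is itself a linear combination of \(u-v\) and \(2u+3v\text{,}\) since
\begin{equation*} u = \tfrac{1}{5}\bigl(3(u-v)+(2u+3v)\bigr), \qquad v = \tfrac{1}{5}\bigl((2u+3v)-2(u-v)\bigr)\text{,} \end{equation*}
so \(L(\{u,v\})\subset L(\{u-v,2u+3v\})\text{.}\) The reverse inclusion is immediate, since \(u-v\) and \(2u+3v\) are linear combinations of \(u\) and \(v\text{.}\)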

Definition 4.3.9.

Let \(V\) be a vector space and \(S\subset V\text{.}\) A subspace \(W\) of \(V\) is called the smallest subspace of \(V\) containing \(S\) if (i) \(W\) is a subspace of \(V\) with \(S\subset W\text{,}\) and (ii) whenever \(W'\) is a subspace of \(V\) with \(S\subset W'\text{,}\) we have \(W\subset W'\text{.}\)

Example 4.3.10.

(i) Let \(v\in V\text{.}\) Then \(\R v=L(\{v\})\) is the smallest subspace of \(V\) containing \(v\text{.}\)
(ii) Let \(S=\{v_1,\ldots,v_k\}\subset V\text{.}\) Then \(L(S)\) is the smallest subspace of \(V\) containing \(S\text{.}\)

Checkpoint 4.3.11.

Suppose \(S\) is a line in the plane. What is \(L(S)\text{?}\)