
Section 4.5 Basis and Dimension

In this section, we define the basis and dimension of a vector space.

Subsection 4.5.1 Basis of a Vector Space

We define a basis of a vector space in the same way as a basis of a subspace of \(\R^n\text{.}\)

Definition 4.5.1. Basis of a vector space.

Let \(V\) be a vector space over \(\R\text{.}\) A set of vectors \(\beta=\{v_1,v_2,\ldots,v_n\}\subset V\) is called a basis of \(V\) if every vector \(v\in V\) can be expressed uniquely as a linear combination of \(v_1,v_2,\ldots,v_n\text{.}\)
Thus \(\beta\) is a basis of \(V\) if
  1. \(L(\beta)=V\text{,}\) that is, every vector \(v\in V\) can be expressed as a linear combination of \(v_1,v_2,\ldots,v_n\text{;}\)
  2. if \(v=\alpha_1v_1+\alpha_2v_2+\cdots +\alpha_nv_n\) and \(v=\beta_1v_1+\beta_2v_2+\cdots +\beta_nv_n\text{,}\) then \(\alpha_1=\beta_1, \alpha_2=\beta_2, \ldots, \alpha_n=\beta_n\text{.}\)

Checkpoint 4.5.2.

If \(\beta=\{v_1,\ldots,v_n\}\) is a basis of a vector space \(V\) over \(\R\text{,}\) then (i) \(L(\beta)=V\) and (ii) \(\beta\) is linearly independent.
Hint.
Follow arguments similar to Theorem 2.4.2.
We have already seen several examples of bases in \(\R^n\) and some subspaces of \(\R^n\text{.}\)

Example 4.5.3.

Let \(V={\cal P}_n(\R)\text{.}\) The set \(\{1,x,x^2,\ldots, x^n\}\) is a basis of \(V\text{,}\) called the standard basis.

Example 4.5.4.

\(\{1,i\}\) is a basis of \(\mathbb{C}\) as a vector space over \(\R\text{.}\)

Example 4.5.5.

\begin{equation*} S=\left\{ \begin{bmatrix}1 \amp 0 \\0 \amp 0 \end{bmatrix} , \begin{bmatrix}0 \amp 1 \\0 \amp 0 \end{bmatrix} , \begin{bmatrix}0 \amp 0 \\1 \amp 0 \end{bmatrix} , \begin{bmatrix}0 \amp 0 \\0 \amp 1 \end{bmatrix} \right\} \end{equation*}
is a basis of \(M_2(\R)\text{,}\) called the standard basis.
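To illustrate the uniqueness of the coefficients with respect to this basis, here is a small computational sketch (not part of the text), assuming SymPy is available; the matrix \(A\) and the symbol names are chosen only for illustration.

    # A sketch (assuming SymPy): solve for the coefficients of A in the standard basis.
    from sympy import Matrix, symbols, linsolve

    a, b, c, d = symbols('a b c d')
    E = [Matrix([[1, 0], [0, 0]]), Matrix([[0, 1], [0, 0]]),
         Matrix([[0, 0], [1, 0]]), Matrix([[0, 0], [0, 1]])]
    A = Matrix([[3, -1], [2, 5]])              # an arbitrary 2x2 matrix
    # Entrywise equations a*E1 + b*E2 + c*E3 + d*E4 = A; the solution is unique.
    equations = list(a*E[0] + b*E[1] + c*E[2] + d*E[3] - A)
    print(linsolve(equations, [a, b, c, d]))   # {(3, -1, 2, 5)}: the entries of A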

Proof.

Let \(\beta=\{v_1,\ldots, v_n\}\) be a linearly independent set. It is enough to show that \(L(\beta)=\R^n\text{.}\) Let \(v\in \R^n\text{.}\) Then we know that \(\beta\cup \{ v\}\) is linearly dependent. That is, there exist scalars \(\alpha_1,\ldots, \alpha_n,\alpha_{n+1}\text{,}\) not all zero, such that
\begin{equation*} \alpha_1v_1+\cdots+\alpha_n v_n+\alpha_{n+1}v=0. \end{equation*}
We claim that \(\alpha_{n+1}\neq 0\text{.}\) If \(\alpha_{n+1}=0\text{,}\) then we have scalars \(\alpha_1,\ldots, \alpha_n\text{,}\) not all zero, such that
\begin{equation*} \alpha_1v_1+\cdots+\alpha_n v_n=0. \end{equation*}
This is a contradiction, as \(\beta\) is linearly independent. Thus \(\alpha_{n+1}\neq0\text{.}\) Hence we have
\begin{equation*} v=-\frac{1}{\alpha_{n+1}}\left(\alpha_1v_1+\cdots+\alpha_n v_n\right). \end{equation*}
In other words, \(v\in L(\beta)\text{.}\)
Can you generalize the above lemma?

Proof.

Let \(\beta\) be a basis. Since \(\gamma\) is linearly independent, by Corollary 4.5.8 we have \(m\leq n\text{.}\) Interchanging the roles of \(\beta\) and \(\gamma\text{,}\) we have \(n\leq m\text{.}\) Hence \(m=n\text{.}\)

Definition 4.5.10. Finite Dimensional Vector Space.

A vector space \(V\) is called finite dimensional if there exists a finite subset \(S\) of \(V\) such that \(L(S)=V\text{.}\)
A vector space which is not finite dimensional is called infinite dimensional.

Definition 4.5.11.

We say a vector space \(V\) is of dimension \(n\) if it has a basis \(\beta\) consisting of \(n\) elements.

Checkpoint 4.5.12.

What is the dimension of \(V=\{0\}\text{,}\) the zero space?

Example 4.5.13.

  1. \(\R^n\) is an \(n\)-dimensional vector space over \(\R\text{.}\)
  2. \(M_n(\R)\text{,}\) the set of all \(n\times n\) matrices over \(\R\text{,}\) is an \(n^2\)-dimensional vector space over \(\R\text{.}\)
  3. \({\cal P}_n(\R)\text{,}\) the set of all polynomials of degree less than or equal to \(n\) over \(\R\text{,}\) is an \((n+1)\)-dimensional vector space over \(\R\text{.}\)

Example 4.5.14.

Let \(W\) be the set of all \(3\times 3\) real symmetric matrices. The set
\begin{align*} \beta=\left\{ \begin{bmatrix}1 \amp 0 \amp 0 \\0 \amp 0 \amp 0 \\ 0 \amp 0 \amp 0 \end{bmatrix}, \begin{bmatrix}0 \amp 0 \amp 0 \\0 \amp 1 \amp 0 \\ 0 \amp 0 \amp 0 \end{bmatrix}, \begin{bmatrix}0 \amp 0 \amp 0 \\0 \amp 0 \amp 0 \\ 0 \amp 0 \amp 1 \end{bmatrix}, \right.\\ \left. \begin{bmatrix}0 \amp 1 \amp 0 \\1 \amp 0 \amp 0 \\ 0 \amp 0 \amp 0 \end{bmatrix}, \begin{bmatrix}0 \amp 0 \amp 1 \\0 \amp 0 \amp 0 \\ 1 \amp 0 \amp 0 \end{bmatrix}, \begin{bmatrix}0 \amp 0 \amp 0 \\0 \amp 0 \amp 1 \\ 0 \amp 1 \amp 0 \end{bmatrix} \right\} \end{align*}
is a basis of \(W\text{.}\) That is, \(W\) is a 6-dimensional vector space over \(\R\text{.}\) What is the dimension of the space of \(n\times n\) real symmetric matrices, and of the space of \(n\times n\) real skew-symmetric matrices?
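The same pattern extends to \(n\times n\) symmetric matrices: take the diagonal matrices \(E_{ii}\) together with the matrices \(E_{ij}+E_{ji}\) for \(i<j\text{,}\) giving \(n(n+1)/2\) basis elements. Below is a short sketch (assuming NumPy; the function name is ours) that lists this basis.

    # A sketch (assuming NumPy): the standard basis of the n x n real symmetric
    # matrices, consisting of E_ii and E_ij + E_ji for i < j.
    import numpy as np

    def symmetric_basis(n):
        basis = []
        for i in range(n):
            for j in range(i, n):
                E = np.zeros((n, n))
                E[i, j] = E[j, i] = 1      # E_ii when i == j, E_ij + E_ji otherwise
                basis.append(E)
        return basis

    print(len(symmetric_basis(3)))         # 6 = 3*4/2, matching the example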

Checkpoint 4.5.15.

Let \(W\) be the set of all \(3\times 3\) real skew-symmetric matrices. Find a basis and hence the dimension of \(W\text{.}\)

Subsection 4.5.2 How to find a basis of a finite dimensional vector space?

First let us look at the following results.

Checkpoint 4.5.16.

Let \(u\in V\) be a nonzero vector. Suppose \(v\notin L(\{u\})\text{.}\) Show that \(\{u,v\}\) is linearly independent.
Can you generalize this result?

Proof.

Let \(v\notin L(\{v_1,\ldots, v_k\})\) and let \(\alpha_1,\ldots,\alpha_{k+1}\) be scalars such that
\begin{equation*} \alpha_1 v_1+\cdots+\alpha_kv_k+\alpha_{k+1}v=0. \end{equation*}
We claim that \(\alpha_{k+1}=0\text{.}\) For if \(\alpha_{k+1}\neq 0\text{,}\) then \(v=-\frac{1}{\alpha_{k+1}}\left(\alpha_1v_1+\cdots+\alpha_kv_k\right)\in L(\{v_1,\ldots, v_k\})\text{,}\) a contradiction. Hence \(\alpha_{k+1}=0\text{,}\) and since \(\{v_1,\ldots,v_k\}\) is linearly independent, \(\alpha_1=\cdots=\alpha_k=0\text{.}\) Thus \(\{v_1,\ldots,v_k,v\}\) is linearly independent.
Lemma 4.5.17 gives a way to construct a basis of a finite dimensional vector space, which is the content of the next theorem.

Proof.

Let \(S=\{v_1,\ldots,v_k\}\) be a linearly independent set. If \(L(S)=V\text{,}\) then we are done, as \(S\) is a basis of \(V\text{.}\) Suppose not; choose \(u_1\notin L(S)\text{.}\) Then by Lemma 4.5.17, \(\{v_1,\ldots,v_k, u_1\}\) is linearly independent in \(V\text{.}\) If \(L(\{v_1,\ldots,v_k, u_1\})=V\text{,}\) then \(\{v_1,\ldots,v_k, u_1\}\) is a basis of \(V\text{.}\) Otherwise we choose \(u_2\notin L(\{v_1,\ldots,v_k, u_1\})\text{.}\) Again by Lemma 4.5.17, \(\{v_1,\ldots,v_k, u_1,u_2\}\) is linearly independent in \(V\text{.}\) If \(L(\{v_1,\ldots,v_k, u_1,u_2\})=V\text{,}\) then \(\{v_1,\ldots,v_k, u_1,u_2\}\) is a basis of \(V\text{.}\) Otherwise, we continue this process. Since \(V\) is a finite dimensional vector space, this process must end after a finite number of steps. In fact, the process ends after exactly \(n-k\) steps, where \(n\) is the dimension of \(V\text{.}\) (Can you see why?)
The above results give a way to find a basis of a finite dimensional vector space starting with a nonzero vector in \(V\text{.}\)

Example 4.5.19.

Complete the set \(S=\{v_1=(1, 2, 1, 0), v_2=(2, 2, 1, 0)\}\) to a basis of \(\R^4\text{.}\) One way of achieving this is to find \(v_3\notin L(S)\text{.}\) Then choose \(v_4\notin L(\{v_3\}\cup S)\text{.}\) Then, in view of Lemma 4.5.17, \(\beta=\{v_1,v_2,v_3,v_4\}\) is linearly independent. Since \(\dim(\R^4)=4\text{,}\) \(\beta\) is a basis of \(\R^4\text{.}\)
Another way to achieve this is to look at the standard basis vectors \(e_i\) not in \(L(S)\text{.}\) In particular, we may take \(v_3,v_4\in\{e_1,e_2,e_3,e_4\}\text{.}\) To find them, we can apply RREF to the matrix \(\begin{bmatrix}v_1\amp v_2 \amp e_1 \amp e_2 \amp e_3 \amp e_4 \end{bmatrix}\) and choose the columns corresponding to the pivots. We have
\begin{equation*} \left[\begin{array}{rrrrrr} 1 \amp 2 \amp 1 \amp 0 \amp 0 \amp 0 \\ 2 \amp 2 \amp 0 \amp 1 \amp 0 \amp 0 \\ 1 \amp 1 \amp 0 \amp 0 \amp 1 \amp 0 \\ 0 \amp 0 \amp 0 \amp 0 \amp 0 \amp 1 \end{array} \right]\xrightarrow{RREF} \left[ \begin{array}{rrrrrr} 1 \amp 0 \amp -1 \amp 0 \amp 2 \amp 0 \\ 0 \amp 1 \amp 1 \amp 0 \amp -1 \amp 0 \\ 0 \amp 0 \amp 0 \amp 1 \amp -2 \amp 0 \\ 0 \amp 0 \amp 0 \amp 0 \amp 0 \amp 1 \end{array} \right]\text{.} \end{equation*}
Clearly the pivot columns are 1, 2, 4, 6, which correspond to the vectors \(v_1,v_2,e_2, e_4\text{.}\) Thus \(\{v_1,v_2,e_2,e_4\}\) is an extended basis of \(\R^4\text{.}\)
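The RREF computation above can be reproduced with a computer algebra system. The following is a sketch assuming SymPy is available; the names v1 and v2 match the example.

    # A sketch (assuming SymPy) of the computation in Example 4.5.19.
    from sympy import Matrix, eye

    v1 = Matrix([1, 2, 1, 0])
    v2 = Matrix([2, 2, 1, 0])
    A = Matrix.hstack(v1, v2, eye(4))      # columns: v1, v2, e1, e2, e3, e4
    R, pivots = A.rref()                   # reduced row echelon form, pivot columns
    print(pivots)                          # (0, 1, 3, 5): columns v1, v2, e2, e4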
Note that Theorem 4.5.18 gives a way to find a basis of a finite dimensional vector space starting from a linearly independent set. How about starting with a finite spanning set?

Proof.

Assume that \(0\notin S\text{;}\) otherwise replace \(S\) by \(S\setminus\{0\}\text{,}\) since \(L(S)=L(S\setminus\{0\})\text{.}\) We shall use the well-ordering principle of natural numbers, which states that any nonempty subset of the natural numbers has a least element.
Define
\begin{equation*} {\cal A}:=\{A\subset S:L(A)=L(S)=V\}. \end{equation*}
Clearly, \(S\in {\cal A}\text{.}\) Hence \(T:=\{|A|: A\in {\cal A}\}\) is a nonempty subset of the natural numbers. By the well-ordering principle, \(T\) has a least element. Let \(n\) be the least element of \(T\) and let \(B\in {\cal A}\) be such that \(|B|=n\text{.}\) We claim that \(B\) is a basis of \(V\text{.}\) By definition \(L(B)=V\text{,}\) so it remains to show that \(B\) is linearly independent. Suppose not; then there exists \(v\in B\) such that \(v\in L(B\setminus \{v\})\text{.}\) This implies \(B\setminus \{v\}\subset S\) and \(L(B\setminus \{v\})=V\text{.}\) Hence \(B\setminus \{v\}\in {\cal A}\text{,}\) but \(|B\setminus \{v\}|=n-1\text{,}\) a contradiction. This shows that \(B\) is linearly independent. Hence \(B\) is a basis of \(V\text{.}\)

Example 4.5.21.

Consider \(v_1,\ldots, v_8\) in \(\R^5\text{,}\) where
\begin{equation*} \begin{split} v_1=(2, -3, 4, -5, -2), v_2=(-6, 9, -12, 15, -6), v_3=(3, -2, 7, -9, 1),\\ v_4=(2, -8, 2, -2, 6), v_5=(-1, 1, 2, 1, -3), v_6=(0, -3, -18, 9, 12), \\ v_7=(1, 0, -2, 3, -2), v_8=(2, -1, 1, -9, 7) \end{split} \end{equation*}
We wish to find a subset of \(\{v_1,\ldots, v_8\}\) which is a basis of \(\R^5\text{.}\) We can achieve this by applying RREF to the matrix \(\begin{bmatrix}v_1\amp v_2\amp \cdots \amp v_8 \end{bmatrix}\) whose columns are the \(v_i\text{.}\) Thus
\begin{align*} \left[\begin{array}{rrrrrrrr} 2 \amp -6 \amp 3 \amp 2 \amp -1 \amp 0 \amp 1 \amp 2 \\ -3 \amp 9 \amp -2 \amp -8 \amp 1 \amp -3 \amp 0 \amp -1 \\ 4 \amp -12 \amp 7 \amp 2 \amp 2 \amp -18 \amp -2 \amp 1 \\ -5 \amp 15 \amp -9 \amp -2 \amp 1 \amp 9 \amp 3 \amp -9 \\ -2 \amp -6 \amp 1 \amp 6 \amp -3 \amp 12 \amp -2 \amp 7 \end{array}\right]\\ \xrightarrow{RREF} \left[\begin{array}{rrrrrrrr} 1 \amp 0 \amp 0 \amp 0 \amp 0 \amp 0 \amp 0 \amp 0 \\ 0 \amp 1 \amp 0 \amp -\frac{4}{3} \amp 0 \amp -\frac{1}{3} \amp 0 \amp \frac{1}{3} \\ 0 \amp 0 \amp 1 \amp -2 \amp 0 \amp -2 \amp 0 \amp 1 \\ 0 \amp 0 \amp 0 \amp 0 \amp 1 \amp -4 \amp 0 \amp -2 \\ 0 \amp 0 \amp 0 \amp 0 \amp 0 \amp 0 \amp 1 \amp -1 \end{array} \right] \end{align*}
Clearly the pivot columns are 1, 2, 3, 5, 7. Hence \(\{v_1,v_2,v_3,v_5,v_7\}\) is a basis of \(\R^5\text{.}\)
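As in the previous example, the pivot columns can be found with a computer algebra system; the following sketch (assuming SymPy) reproduces the computation.

    # A sketch (assuming SymPy) of the pivot-column computation in Example 4.5.21.
    from sympy import Matrix

    vectors = [
        (2, -3, 4, -5, -2), (-6, 9, -12, 15, -6), (3, -2, 7, -9, 1),
        (2, -8, 2, -2, 6), (-1, 1, 2, 1, -3), (0, -3, -18, 9, 12),
        (1, 0, -2, 3, -2), (2, -1, 1, -9, 7),
    ]
    A = Matrix(vectors).T                  # columns of A are v1, ..., v8
    _, pivots = A.rref()
    print([p + 1 for p in pivots])         # [1, 2, 3, 5, 7]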

Definition 4.5.22.

Let \(V\) be a vector space. A linearly independent set of vectors \(S\) of \(V\) is called a maximal linearly independent set if \(S\cup \{v\}\) is linearly dependent for every vector \(v\in V\setminus S\text{.}\)

Example 4.5.23.

(i) Any set \(S\) of two linearly independent vectors in \(\R^2\) is a maximal linearly independent set.
(ii) Any set \(S\) of three linearly independent vectors in \(\R^3\) is a maximal linearly independent set.

Definition 4.5.24.

Let \(V\) be a vector space. A set of vectors \(S\) of \(V\) is called a minimal set of generators if (i) \(L(S)=V\) and (ii) for any \(u\in S\text{,}\) \(L(S\setminus \{u\})\neq V\text{.}\)

Example 4.5.25.

  1. Any set \(S\) of two linearly independent vectors in \(\R^2\) is a minimal set of generators.
  2. Any set \(S\) of three linearly independent vectors in \(\R^3\) is a minimal set of generators.
In the following theorem we mention the equivalent conditions for a set to be a basis of a finite dimensional vector space.

Proof.

We prove this by showing \((i)\implies (ii)\implies (iii)\implies (iv)\implies (i)\text{.}\)
(β€Šβ 2Β β‡’Β 3β€Šβ )Β 
Let \(\beta=\{v_1,\ldots, v_n\}\) be a basis. Let \(v\in V\text{;}\) then by Theorem 4.5.7, \(\beta\cup \{v\}\) is linearly dependent. Hence \(\beta\) is a maximal linearly independent set in \(V\text{.}\)
(β€Šβ 3Β β‡’Β 4β€Šβ )Β 
Let \(\beta=\{v_1,\ldots,v_n\}\) be a maximal linearly independent set. To show \(\beta\) is a minimal set of generators, we need to show two things (i) \(L(\beta)=V\) and (ii) no proper subset of \(\beta\) can span \(V\text{.}\)
To prove (i), let \(v\in V\text{.}\) Since \(\beta\) is a maximal linearly independent set, \(\{v,v_1,\ldots,v_n\}\) is linearly dependent. Hence there exist scalars \(\alpha_0,\alpha_1,\ldots,\alpha_n\text{,}\) not all zero, such that
\begin{equation*} \alpha_0 v+\sum_{i=1}^n\alpha_i v_i=0. \end{equation*}
We claim that \(\alpha_0\neq 0\text{.}\) For if \(\alpha_0=0\text{,}\) then we have scalars \(\alpha_1,\ldots,\alpha_n\text{,}\) not all zero, such that
\begin{equation*} \sum_{i=1}^n\alpha_i v_i=0 \end{equation*}
which implies \(\beta\) is linearly dependent, a contradiction. Hence \(\alpha_0\neq 0\text{.}\) Thus we have
\begin{equation*} v = -\frac{1}{\alpha_0} \sum_{i=1}^n\alpha_i v_i. \end{equation*}
This implies, \(L(\beta)=V\text{.}\)
We prove (ii) by contradiction. Without loss of generality, assume that \(\{v_1,v_2,\ldots, v_{n-1}\}\) spans \(V\text{.}\) This would mean \(v_n\) lies in the span of \(\{v_1,v_2,\ldots, v_{n-1}\}\text{.}\) In particular, \(\{v_1,v_2,\ldots, v_{n-1},v_n\}\) is linearly dependent, a contradiction.
(β€Šβ 4Β β‡’Β 1β€Šβ )Β 
Let \(\beta=\{v_1,\ldots,v_n\}\) be a minimal generating set. We need to show that it is a basis.
Since \(\beta\) is a generating set, for any \(v\in V\text{,}\) there exist scalars \(\alpha_i\) such that \(v=\sum \alpha_i v_i\text{.}\) We need to show that this expression is unique. Suppose not, let \(v=\sum \beta_i v_i\) with \(\beta_i\neq \alpha_i\) for at least one \(i\in \{1,\ldots,n\}\text{.}\) Then we have
\begin{equation*} 0=v-v = (\beta_i-\alpha_i)v_i+\sum_{j\neq i} (\beta_j-\alpha_j) v_j. \end{equation*}
In particular,
\begin{equation*} v_i=-\frac{1}{(\beta_i-\alpha_i)}\sum_{j\neq i} (\beta_j-\alpha_j) v_j. \end{equation*}
This would imply \(\{v_1,\ldots, v_{i-1},v_{i+1},\ldots,v_n\}\) is a generating set for \(V\) (why?). This is a contradiction to the minimality of \(\beta\text{.}\)

Subsection 4.5.3 Lagrange Interpolation

Consider the vector space \({\cal P}_n(\R)\text{.}\) Fix \(n+1\) distinct real numbers \(c_0,c_1,\ldots, c_n\text{.}\) Define polynomials
\begin{equation} \ell_i(x)= \frac{(x-c_0)\cdots (x-c_{i-1})(x-c_{i+1})\cdots(x-c_n)}{(c_i-c_0)\cdots (c_i-c_{i-1})(c_i-c_{i+1})\cdots(c_i-c_n)}\tag{4.5.1} \end{equation}
for \(i=0,1,\ldots, n\text{.}\) The above equation can be written as
\begin{equation} \ell_i(x)=\prod_{j=0,j\neq i}^{n}\frac{x-c_j}{c_i-c_j}.\tag{4.5.2} \end{equation}
It is easy to see that \(\ell_i(c_j)=1\) if \(j=i\) and \(0\) otherwise. We claim that \(\{\ell_i\}_{i=0}^n\) is a linearly independent subset of \({\cal P}_n(\R)\text{.}\) For suppose
\begin{equation} \alpha_0\ell_0 + \alpha_1\ell_1+\cdots+\alpha_n\ell_n=\sum\alpha_i\ell_i=0\text{.}\tag{4.5.3} \end{equation}
Here the right hand side is the zero polynomial. This implies \(\sum\alpha_i\ell_i(c_j)=0\) for all \(j=0,\ldots, n\text{.}\) Since \(\sum\alpha_i\ell_i(c_j)=\alpha_j\text{,}\) it implies that \(\alpha_j=0\) for all \(j=0,\ldots, n\text{.}\) Hence the claim.
Since \({\cal P}_n(\R)\) is an \((n+1)\)-dimensional vector space, the set \(\{\ell_i\}_{i=0}^n\) is a basis. Hence every polynomial of degree at most \(n\) can be expressed uniquely as a linear combination of the \(\ell_i\text{.}\) Suppose \(g\) is the polynomial of degree at most \(n\) passing through the points \(\{(x_i,y_i)\}_{i=0}^n\) (that is, \(g(x_i)=y_i\)), where \(x_0,\ldots,x_n\) are \(n+1\) distinct real numbers. Taking \(c_i=x_i\) in (4.5.2), this unique polynomial is given by
\begin{equation} g(x)=\sum_{i=0}^n \ell_i(x)y_i\tag{4.5.4} \end{equation}
and is called the Lagrange interpolation polynomial passing through \(\{(x_i,y_i)\}_{i=0}^n\text{.}\)
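Below is a short sketch in plain Python (the function names are ours) that evaluates the Lagrange basis polynomials \(\ell_i\) of (4.5.2) and the interpolating polynomial \(g\) of (4.5.4).

    # A sketch in plain Python of (4.5.2) and (4.5.4); the function names are ours.
    def lagrange_basis(nodes, i, x):
        """Evaluate ell_i(x) for the distinct nodes c_0, ..., c_n."""
        value = 1.0
        for j, c in enumerate(nodes):
            if j != i:
                value *= (x - c) / (nodes[i] - c)
        return value

    def lagrange_interpolate(points, x):
        """Evaluate the unique polynomial of degree <= n through the (x_i, y_i)."""
        nodes = [p[0] for p in points]
        return sum(y * lagrange_basis(nodes, i, x) for i, (_, y) in enumerate(points))

    # The parabola x^2 + 1 passes through (0, 1), (1, 2), (2, 5); its value at 3 is 10.
    print(lagrange_interpolate([(0, 1), (1, 2), (2, 5)], 3))   # 10.0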

Subsection 4.5.4 Dimension Formula

Let \(V\) be a vector space over \(\R\text{.}\) Let \(W_1\) and \(W_2\) be subspaces of \(V\text{.}\) Define a subset
\begin{equation*} W_1+W_2:= \{x+y:x\in W_1,y\in W_2\}. \end{equation*}
It is easy to check that \(W_1+W_2\) is a subspace of \(V\text{.}\) (why?)
In addition, if we assume that \(V\) is finite dimensional, what can you say about the dimension of \(W_1+W_2\text{?}\)
Recall an analogous result from set theory. If \(A\) and \(B\) are finite sets, then
\begin{equation*} |A\cup B|=|A|+|B|-|A\cap B|. \end{equation*}
The following theorem gives a way to find the dimension of \(W_1+W_2\text{.}\)

Proof.

The basic idea of the proof is to start with a basis of \(W_1\cap W_2\text{,}\) extend it to bases of \(W_1\) and of \(W_2\text{,}\) and then show that the union of these bases is a basis of \(W_1+W_2\text{.}\)
Let \(\{u_1,\ldots,u_k\}\) be a basis of \(W_1\cap W_2\text{.}\) Let \(\{u_1,\ldots,u_k,v_1,\ldots, v_m\}\) and \(\{u_1,\ldots,u_k,w_1,\ldots, w_n\}\) be extended bases of \(W_1\) and \(W_2\) respectively. Then we have
\begin{align*} \dim (W_1)+\dim (W_2)-\dim (W_1\cap W_2)\amp = (k+m)+(k+n)-k \\ \amp = k+m+n. \end{align*}
We claim that \(\beta =\{u_1,\ldots,u_k,v_1,\ldots, v_m,w_1,\ldots,w_n\}\) is a basis of \(W_1+W_2\text{.}\) Since \(\beta\) has \(k+m+n\) elements, this will prove the theorem.
Let \(x\in W_1+W_2\text{.}\) Then \(x=y_1+y_2\) for some \(y_1\in W_1\) and \(y_2\in W_2\text{.}\) Let \(y_1=\sum\alpha_iu_i+\sum \beta_j v_j\) and \(y_2=\sum\gamma_i u_i+\sum \delta_jw_j\text{.}\) Then we have
\begin{equation*} x=\sum(\alpha_i+\gamma_i)u_i+\sum \beta_j v_j+\sum \delta_jw_j. \end{equation*}
This shows that \(\beta\) spans \(W_1+W_2\text{.}\)
Next, we show that \(\beta\) is linearly independent.
Let \(\sum \alpha_iu_i+\sum \beta_j v_j+\sum \gamma_rw_r=0\text{.}\) Then we have
\begin{equation} \sum \alpha_iu_i+\sum \beta_j v_j=-\sum \gamma_rw_r\tag{4.5.6} \end{equation}
Since the expression on the left hand side is in \(W_1\text{,}\) we have \(\sum \gamma_rw_r \in W_1\cap W_2\text{.}\) Hence there exist scalars \(\delta_i\) such that
\begin{equation*} -\sum \gamma_rw_r = \sum \delta_iu_i \end{equation*}
which implies
\begin{equation*} \sum \delta_iu_i+\sum \gamma_rw_r=0. \end{equation*}
Since \(\{u_1,\ldots,u_k,w_1,\ldots,w_n\}\) is linearly independent, we have \(\gamma_1=\cdots=\gamma_n=0\text{.}\) Hence by (4.5.6), we have
\begin{equation*} \sum \alpha_iu_i+\sum \beta_j v_j=0. \end{equation*}
Since \(\{u_1,\ldots,u_k,v_1,\ldots,v_m\}\) is linearly independent, we have \(\alpha_1=\cdots=\alpha_k=0\) and \(\beta_1=\cdots=\beta_m=0\text{.}\) This proves that \(\beta\) is linearly independent. This completes the proof.

Example 4.5.28.

Consider the subspaces \(W_1:=\{(x_1,x_2,x_3)\in \R^3:x_1+x_2+x_3=0\}\) and \(W_2:=\{(x_1,x_2,x_3)\in \R^3:x_1+x_2-x_3=0\}\) of \(\R^3\text{.}\) Clearly \(\dim{(W_1)}=\dim{(W_2)}=2\text{.}\) What is \(W_1\cap W_2\text{?}\) It is the line of intersection of the two planes \(x_1+x_2+x_3=0\) and \(x_1+x_2-x_3=0\text{.}\) Thus \(\dim{(W_1\cap W_2)}=1\text{.}\) It is easy to see that
\begin{equation*} W_1\cap W_2=\{\alpha(1,-1,0):\alpha\in\R\}. \end{equation*}
What is \(W_1+W_2\text{?}\) One can easily show that \(W_1+W_2=\R^3=V\text{.}\) Alternatively, by the dimension formula,
\begin{align*} \dim{(W_1+W_2)}=\amp \dim{(W_1)}+\dim{(W_2)}-\dim{(W_1\cap W_2)}\\ =\amp 2+2-1=3\text{.} \end{align*}
Since \(W_1+W_2\) is a 3-dimensional subspace of \(\R^3\text{,}\) it is in fact \(\R^3\text{.}\)
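The dimension count in this example can be checked computationally. The following sketch, assuming SymPy, computes bases of \(W_1\text{,}\) \(W_2\) and \(W_1\cap W_2\) as null spaces, and the dimension of \(W_1+W_2\) as a rank.

    # A sketch (assuming SymPy) verifying the dimension count in Example 4.5.28.
    from sympy import Matrix

    W1 = Matrix([[1, 1, 1]]).nullspace()                 # basis of x1 + x2 + x3 = 0
    W2 = Matrix([[1, 1, -1]]).nullspace()                # basis of x1 + x2 - x3 = 0
    inter = Matrix([[1, 1, 1], [1, 1, -1]]).nullspace()  # basis of W1 intersect W2
    dim_sum = Matrix.hstack(*(W1 + W2)).rank()           # dim(W1 + W2)
    print(len(W1), len(W2), len(inter), dim_sum)         # 2 2 1 3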

Definition 4.5.29.

Let \(W_1\) and \(W_2\) be subspaces of a vector space \(V\) such that \(W_1+W_2=V\) and \(W_1\cap W_2=\{0\}\text{;}\) then we say that \(V\) is the direct sum of \(W_1\) and \(W_2\text{.}\) We write this as \(V=W_1\oplus W_2\text{.}\)

Example 4.5.30.

  1. \(\R^2=\R e_1\oplus \R e_2\text{.}\)
  2. Let \(W_1=\{(x_1,x_2,x_3)\in \R^3:x_1+2x_2-x_3=0\}\) and \(W_2=\{t(1,2,-1):t\in \R\}\text{.}\) Then \(\R^3=W_1\oplus W_2\text{.}\)
  3. Let \(W_1\) be the set of all \(n\times n\) symmetric matrices and \(W_2\text{,}\) the set of all \(n\times n\) skew-symmetric matrices. Then \(M_n(\R)=W_1\oplus W_2\) (see the sketch after this example).
  4. Let \(W_1=\{(x_1,x_2,x_3)\in \R^3:x_1+2x_2-x_3=0\}\) and \(W_2=\{(x_1,x_2,x_3)\in \R^3:2x_1-x_2+x_3=0\}\text{.}\) Then \(\R^3=W_1+W_2\text{;}\) however, the sum is not direct.
  5. Let \(V\) be a finite dimensional vector space with a basis \(\{v_1,\ldots, v_n\}\text{.}\) Then
    \begin{equation*} V=\R v_1\oplus \R v_2 \oplus \cdots \oplus \R v_n. \end{equation*}
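For item 3 above, the decomposition is explicit: every \(A\in M_n(\R)\) splits as \(A=\tfrac{1}{2}(A+A^T)+\tfrac{1}{2}(A-A^T)\text{,}\) a symmetric plus a skew-symmetric matrix. Here is a quick sketch, assuming SymPy, with an arbitrarily chosen \(A\text{.}\)

    # A sketch (assuming SymPy) of item 3: A = (A + A^T)/2 + (A - A^T)/2.
    from sympy import Matrix

    A = Matrix([[1, 2, 3], [4, 5, 6], [7, 8, 10]])   # an arbitrary 3x3 matrix
    S = (A + A.T) / 2                                # symmetric part, lies in W1
    K = (A - A.T) / 2                                # skew-symmetric part, lies in W2
    assert S.T == S and K.T == -K and S + K == A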