Subsection 2.4.1 Basis
Definition 2.4.1. Basis.
A set of vectors \(\beta=\{v_1,v_2,\ldots,v_n\}\) is called a basis of \(\R^n\) if every vector \(v\in \R^n\) can be expressed uniquely as a linear combination of \(v_1,v_2,\ldots,v_n\text{.}\) Thus \(\beta\) is a basis of \(\R^n\) if
(i) \(L(\beta)=\R^n\text{,}\) that is, every vector \(v\in \R^n\) can be expressed as a linear combination of \(v_1,v_2,\ldots,v_n\text{;}\)
(ii) if \(v=\alpha_1v_1+\alpha_2v_2+\cdots +\alpha_nv_n\) and \(v=\beta_1v_1+\beta_2v_2+\cdots +\beta_nv_n\text{,}\) then \(\alpha_1=\beta_1, \alpha_2=\beta_2,\ldots,\alpha_n=\beta_n\text{.}\)
Similarly one can define a basis of any subspace of \(\R^n\text{.}\)
It is easy to prove the following theorem, which is often taken as the definition in many books.
Theorem 2.4.2.
A set of vectors \(\beta=\{v_1,v_2,\ldots,v_n\}\) is a basis of \(\R^n\) if and only if \(L(\beta)=\R^n\) and \(\beta\) is linearly independent.
Example 2.4.3.
(i) \(\{(1,0),(0,1)\}\) is a basis of \(\R^2\) called the standard basis of \(\R^2\text{.}\)
(ii) \(\{(1,-1),(2,1)\}\) is a basis of \(\R^2\text{.}\)
(iii) \(\{(1,0,0),(0,1,0),(0,0,1)\}\) is a basis of \(\R^3\) called the standard basis of \(\R^3\text{.}\)
(iv) \(\{(1,1,-1),(-1,1,1),(1,-1,1)\}\) is a basis of \(\R^3\text{.}\)
In \(\R^n\text{,}\) we define \(e_i:=(0,\ldots, 1,\ldots,0)\text{,}\) whose \(i\)-th component is 1 and all other components are zero. Then it is easy to see that \(\{e_1,\ldots, e_n\}\) is a basis of \(\R^n\text{,}\) called the standard basis.
Example 2.4.4.
Consider the plane \(W=\{(x_1,x_2,x_3)\in \R^3:x_1+2x_2-x_3=0\}\text{.}\) Note that here \(x_1\) and \(x_2\) can be thought of as free variables. For any point \((x_1,x_2,x_3)\in W\text{,}\) we have
\begin{equation*}
(x_1,x_2,x_3)=(x_1,x_2,x_1+2x_2)=(\alpha,\beta,\alpha+2\beta)=\alpha(1,0,1)+\beta(0,1,2).
\end{equation*}
Thus \(\{(1,0,1),(0,1,2)\}\) spans \(W\text{.}\) It is easy to see that \(\{(1,0,1),(0,1,2)\}\) is linearly independent. Hence \(\beta =\{(1,0,1),(0,1,2)\}\) is a basis of \(W\text{.}\) In fact, any two vectors in \(W\) which are linearly independent form a basis of \(W\text{.}\)
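Such claims can also be checked by computer. Here is a minimal sketch in Python using the SymPy library (our choice of tool; the text prescribes none), verifying that the two vectors are linearly independent and lie in \(W\text{.}\)
```python
from sympy import Matrix

# Rows are the two spanning vectors of W found above.
M = Matrix([[1, 0, 1],
            [0, 1, 2]])

# Rank 2 means the two rows are linearly independent,
# so they form a basis of the plane W.
print(M.rank())  # 2

# Each row satisfies the defining equation x1 + 2*x2 - x3 = 0 of W.
for i in range(2):
    v = M.row(i)
    print(v[0] + 2*v[1] - v[2])  # 0
```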
Theorem 2.4.5.
Any set of \(n\) linearly independent vectors forms a basis of \(\R^n\text{.}\)
Proof.
Let \(\{v_1,\ldots,v_n\}\) be a linearly independent set in \(\R^n\text{.}\) By Theorem 2.4.6 (applied with \(W=\R^n\text{,}\) which has the standard basis of \(n\) elements), any set of \(n+1\) vectors in \(\R^n\) is linearly dependent. Hence, for any \(v\in \R^n\text{,}\) the set \(\{v_1,\ldots,v_n,v\}\) is linearly dependent, and since \(v_1,\ldots,v_n\) are linearly independent, \(v\) must be a linear combination of \(v_1,\ldots,v_n\text{.}\) Thus \(L(\{v_1,\ldots,v_n\})=\R^n\text{.}\)
Theorem 2.4.6.
Let \(\beta=\{u_1,u_2,\ldots,u_k\}\) be a basis of a subspace \(W\) of \(\R^n\) with \(k\) elements. Then any set \(S=\{v_1,v_2,\ldots, v_{k+1}\}\subset W\) with \(k+1\) elements is linearly dependent.
Proof.
Let \(\alpha_1,\ldots,\alpha_{k+1}\) be scalars such that
\begin{equation}
\alpha_1 v_1+\alpha_2v_2+\cdots+\alpha_kv_k+\alpha_{k+1}v_{k+1}=0\tag{2.4.1}
\end{equation}
Since \(\beta\) is a basis of \(W\text{,}\) for each \(i=1,2,\ldots, k+1\text{,}\) there exist scalars \(a_{ij}\) such that
\begin{equation*}
v_i = \sum_{j=1}^k a_{ij}u_j
\end{equation*}
Substituting this in Equation
(2.4.1), we get
\begin{equation}
\alpha_1 \left(\sum_{j=1}^k a_{1j}u_j\right)+\cdots+
\alpha_{k+1}\left(\sum_{j=1}^k a_{k+1,j}u_j\right)=0\tag{2.4.2}
\end{equation}
Collecting the coefficients of the \(u_j\)'s in Equation
(2.4.2), we get
\begin{equation}
\left(\sum_{i=1}^{k+1} \alpha_ia_{i1}\right)u_1+\cdots +
\left(\sum_{i=1}^{k+1} \alpha_ia_{ik}\right)u_k=0.\tag{2.4.3}
\end{equation}
Since \(\beta\) is linearly independent, we have
\begin{align*}
\alpha_1 a_{11}+\alpha_2a_{21}+\cdots+\alpha_{k+1}a_{k+1,1} =\amp 0 \\
\alpha_1 a_{12}+\alpha_2a_{22}+\cdots+\alpha_{k+1}a_{k+1,2} =\amp 0 \\
\vdots \amp \\
\alpha_1 a_{1k}+\alpha_2a_{2k}+\cdots+\alpha_{k+1}a_{k+1,k} =\amp 0
\end{align*}
These are \(k\) homogeneous linear equations in the \(k+1\) variables \(\alpha_1,\ldots,\alpha_{k+1}\text{,}\) so the system has a nonzero solution. In particular, there exist scalars \(\alpha_1,\ldots,\alpha_{k+1}\text{,}\) not all zero, such that \(\alpha_1 v_1+\alpha_2v_2+\cdots+\alpha_kv_k+\alpha_{k+1}v_{k+1}=0\text{.}\) Hence \(S\) is linearly dependent.
Corollary 2.4.7.
Let \(\beta=\{v_1,v_2,\ldots,v_k\}\) be a basis of a subspace \(W\) of \(\R^n\) with \(k\) elements. If \(S\) is a linearly independent subset of \(W\text{,}\) then \(|S|\leq k\text{.}\)
Theorem 2.4.8.
Let \(\beta\) and \(\gamma\) be two bases of a subspace \(W\) of \(\R^n\text{.}\) Then \(\beta\) and \(\gamma\) have the same number of elements.
Proof.
Suppose \(|\beta|=r\) and \(|\gamma|=s\text{.}\) Since \(\beta\) is a basis and \(\gamma\) is linearly independent, by Corollary 2.4.7, \(s\leq r\text{.}\) Similarly, since \(\gamma\) is a basis and \(\beta\) is linearly independent, we have \(r\leq s\text{.}\) Hence \(r=s\text{.}\)
Subsection 2.4.2 Dimension of Subspaces
Since any two bases of a subspace have the same number of elements, the following definition of the dimension of a vector subspace makes sense.
Definition 2.4.9.
Let \(W\) be a subspace of \(\R^n\text{.}\) The number of elements in a basis of \(W\) is called the dimension of \(W\text{.}\)
Example 2.4.10.
(i) \(\R^n\) is an \(n\)-dimensional vector space over \(\R\text{.}\)
(ii) Any plane passing through the origin in \(\R^3\) is a 2-dimensional subspace.
(iii) \(W:=\{(x_1,\ldots,x_n):x_1+\cdots+x_n=0\}\) is an \((n-1)\)-dimensional subspace of \(\R^n\text{.}\) Write down a basis of \(W\text{.}\)
(iv) \(W=\{(x_1,x_2,x_3,x_4)\in \R^4:x_1=x_3,x_2=x_4\}\) is a 2-dimensional subspace of \(\R^4\text{.}\) Write down a basis of \(W\text{.}\)
How to find a basis of a subspace \(L(S)\text{?}\)
Suppose \(W\) is the subspace spanned by a set of \(k\) vectors, say \(v_1,\ldots,v_k\text{,}\) in \(\R^n\text{.}\) How do we find a basis of \(W\text{?}\) Note that \(\dim{(W)}\leq k\text{.}\) To find a basis of \(W\text{,}\) we construct the matrix \(A\) whose rows are \(v_1, \ldots, v_k\) and find the reduced row echelon form of \(A\text{.}\) Then the non-zero rows of RREF(\(A\)) form a basis of \(W\text{.}\)
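The following is a minimal sketch of this procedure in Python with SymPy (assumed available); the helper name row_space_basis is ours, not a library routine.
```python
from sympy import Matrix

def row_space_basis(vectors):
    """Return a basis of L(S): the non-zero rows of the RREF of the
    matrix whose rows are the vectors of S."""
    R, _ = Matrix(vectors).rref()
    return [R.row(i) for i in range(R.rows) if any(R.row(i))]

# A small check: (1, 1) is redundant, since (1, 1) = (1, 0) + (0, 1).
print(row_space_basis([[1, 0], [0, 1], [1, 1]]))
# [Matrix([[1, 0]]), Matrix([[0, 1]])]
```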
Example 2.4.11.
Consider the set of vectors \(v_1 =\left(1,-1,2,3,1,4\right)\text{,}\) \(v_2=\left(2,1,0,2,-3,1\right)\text{,}\) \(v_3=\left(-4,-5,4,0,11,5\right)\text{,}\) \(v_4=\left(-1,0,2,1,3,2\right)\) and \(v_5=\left(-2,-2,4,2,7,5\right)\text{.}\) Let \(W\) be the linear span of \(\{v_1,v_2,v_3,v_4,v_5\}\text{.}\) Let us find a basis and hence the dimension of \(W\text{.}\)
We construct the matrix \(A\) whose rows are \(\{v_1,v_2,v_3,v_4,v_5\}\) and apply RREF.
\begin{equation*}
RREF\left(\left[\begin{array}{rrrrrr}
1 \amp -1 \amp 2 \amp 3 \amp 1 \amp 4 \\
2 \amp 1 \amp 0 \amp 2 \amp -3 \amp 1 \\
-4 \amp -5 \amp 4 \amp 0 \amp 11 \amp 5 \\
-1 \amp 0 \amp 2 \amp 1 \amp 3 \amp 2 \\
-2 \amp -2 \amp 4 \amp 2 \amp 7 \amp 5
\end{array}\right]\right)=\left[\begin{array}{rrrrrr}
1 \amp 0 \amp 0 \amp 1 \amp -\frac{5}{4} \amp \frac{3}{4} \\
0 \amp 1 \amp 0 \amp 0 \amp -\frac{1}{2} \amp -\frac{1}{2} \\
0 \amp 0 \amp 1 \amp 1 \amp \frac{7}{8} \amp \frac{11}{8} \\
0 \amp 0 \amp 0 \amp 0 \amp 0 \amp 0 \\
0 \amp 0 \amp 0 \amp 0 \amp 0 \amp 0
\end{array}\right]
\end{equation*}
Thus \(W\) has a basis consisting of the three non-zero rows of \(RREF(A)\text{.}\) That is,
\begin{equation*}
\beta = \left\{\left(1 , 0 , 0 , 1 , -\frac{5}{4} , \frac{3}{4}\right),
\left(0 , 1 , 0 , 0 , -\frac{1}{2} , -\frac{1}{2}\right),
\left(0 , 0 , 1 , 1 , \frac{7}{8} , \frac{11}{8}\right)\right\}
\end{equation*}
is a basis of \(W\text{,}\) and \(W\) is a 3-dimensional subspace of \(\R^6\text{.}\) Note that \(W\) is also the row space of \(A\text{,}\) and each column of \(A\) is a vector in \(\R^5\text{.}\) Let us find the column space of \(A\text{.}\) To find \({\rm col}(A)\text{,}\) we take the transpose of \(A\) and apply the RREF.
\begin{equation*}
RREF\left(\left[\begin{array}{rrrrr}
1 \amp 2 \amp -4 \amp -1 \amp -2 \\
-1 \amp 1 \amp -5 \amp 0 \amp -2 \\
2 \amp 0 \amp 4 \amp 2 \amp 4 \\
3 \amp 2 \amp 0 \amp 1 \amp 2 \\
1 \amp -3 \amp 11 \amp 3 \amp 7 \\
4 \amp 1 \amp 5 \amp 2 \amp 5
\end{array}\right]\right)=
\left[\begin{array}{rrrrr}
1 \amp 0 \amp 2 \amp 0 \amp 1 \\
0 \amp 1 \amp -3 \amp 0 \amp -1 \\
0 \amp 0 \amp 0 \amp 1 \amp 1 \\
0 \amp 0 \amp 0 \amp 0 \amp 0 \\
0 \amp 0 \amp 0 \amp 0 \amp 0 \\
0 \amp 0 \amp 0 \amp 0 \amp 0
\end{array}\right]
\end{equation*}
Thus a basis \(\gamma\) of \({\rm col}(A)\) consists of the three non-zero rows of \(RREF(A^T)\text{.}\) Thus
\begin{equation*}
\gamma=\{(1,0,2,0,1),(0,1,-3,0,-1),(0,0,0,1,1)\}
\end{equation*}
is a basis of \({\rm col}(A)\text{.}\) Notice that \(\dim{({\rm col}(A))}=\dim{({\rm row}(A))}\text{.}\)
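As a sanity check, one may redo this example in SymPy (assumed available); both the row rank and the column rank come out as 3.
```python
from sympy import Matrix

A = Matrix([[ 1, -1, 2, 3,  1, 4],
            [ 2,  1, 0, 2, -3, 1],
            [-4, -5, 4, 0, 11, 5],
            [-1,  0, 2, 1,  3, 2],
            [-2, -2, 4, 2,  7, 5]])

# Non-zero rows of RREF(A) give a basis of row(A);
# non-zero rows of RREF(A^T) give a basis of col(A).
print(A.rref()[0])
print(A.T.rref()[0])
print(A.rank(), A.T.rank())  # 3 3: row rank equals column rank
```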
Definition 2.4.12.
The \(\dim{({\rm col}(A))}\) is called the column rank of \(A\) and \(\dim{({\rm row}(A))}\) is called the row rank of \(A\text{.}\)
Theorem 2.4.13.
The row rank and column rank of any matrix are the same. This common value is called the rank of the matrix.
Example 2.4.14.
Consider the matrix \(A=\left[\begin{array}{rrrrr}
1 \amp 2 \amp -4 \amp -1 \amp -2 \\
-1 \amp 1 \amp -5 \amp 0 \amp -2 \\
2 \amp 0 \amp 4 \amp 2 \amp 4 \\
3 \amp 2 \amp 0 \amp 1 \amp 2 \\
1 \amp -3 \amp 11 \amp 3 \amp 7 \\
4 \amp 1 \amp 5 \amp 2 \amp 5
\end{array}\right]\text{.}\) Let us find the null space of \(A\text{.}\) That is, find a basis of \({\cal N}(A)\text{.}\) The null space of \(A\) is given by
\begin{align*}
{\cal N}(A) =\amp \{x\in \R^5:Ax=0\}\\
=\amp \left\{\begin{bmatrix} x_1\\x_2\\\vdots\\x_5\end{bmatrix}:
\left[\begin{array}{rrrrr}
1 \amp 2 \amp -4 \amp -1 \amp -2 \\
-1 \amp 1 \amp -5 \amp 0 \amp -2 \\
2 \amp 0 \amp 4 \amp 2 \amp 4 \\
3 \amp 2 \amp 0 \amp 1 \amp 2 \\
1 \amp -3 \amp 11 \amp 3 \amp 7 \\
4 \amp 1 \amp 5 \amp 2 \amp 5
\end{array}\right]\begin{bmatrix} x_1\\x_2\\\vdots\\x_5\end{bmatrix}=\begin{bmatrix} 0\\0\\\vdots\\0\end{bmatrix}
\right\}\\
=\amp \left\{\begin{bmatrix} x_1\\x_2\\\vdots\\x_5\end{bmatrix}:
\left[\begin{array}{rrrrr}
1 \amp 0 \amp 2 \amp 0 \amp 1 \\
0 \amp 1 \amp -3 \amp 0 \amp -1 \\
0 \amp 0 \amp 0 \amp 1 \amp 1 \\
0 \amp 0 \amp 0 \amp 0 \amp 0 \\
0 \amp 0 \amp 0 \amp 0 \amp 0 \\
0 \amp 0 \amp 0 \amp 0 \amp 0
\end{array}\right]
\begin{bmatrix} x_1\\x_2\\\vdots\\x_5\end{bmatrix}=\begin{bmatrix} 0\\0\\\vdots\\0\end{bmatrix}
\right\}\quad \text{using RREF}\\
=\amp
\left\{\begin{bmatrix} x_1\\x_2\\\vdots\\x_5\end{bmatrix}:
x_1+2x_3+x_5=0, x_2-3x_3-x_5=0,x_4+x_5=0
\right\}\\
=\amp
\left\{\begin{bmatrix} \alpha\\\beta\\\alpha+\beta\\3\alpha+2\beta\\-3\alpha-2\beta\end{bmatrix}:\alpha,\beta\in\R \right\}\\
=\amp
\left\{
\alpha \begin{bmatrix}
1 \\0\\1\\3\\-3\end{bmatrix}+\beta\begin{bmatrix}
0 \\1\\1\\2\\-2\end{bmatrix}:\alpha,\beta\in\R
\right\}
\end{align*}
Thus
\(\dim{{\cal N}(A)}=2\) and
\(\delta =\{(1,0,1,3,-3),(0,1,1,2,-2)\}\) is a basis of
\({\cal N}(A)\text{.}\)
The \(\dim{{\cal N}(A)}\) is called the nullity of \(A\text{.}\) Notice that for this matrix
\begin{equation*}
{\rm nullity}(A)+{\rm rank}(A)=\text{ number of columns of }A.
\end{equation*}
This is true for any matrix \(A\text{.}\)
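The same computation can be checked in SymPy (assumed available). Note that nullspace() may return a basis of \({\cal N}(A)\) different from \(\delta\) above, spanning the same subspace.
```python
from sympy import Matrix

A = Matrix([[ 1,  2, -4, -1, -2],
            [-1,  1, -5,  0, -2],
            [ 2,  0,  4,  2,  4],
            [ 3,  2,  0,  1,  2],
            [ 1, -3, 11,  3,  7],
            [ 4,  1,  5,  2,  5]])

null_basis = A.nullspace()          # a basis of N(A)
print(len(null_basis))              # 2, the nullity of A
print(A.rank() + len(null_basis))   # 5, the number of columns of A
```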
Example 2.4.15.
Consider the matrix \(A=\left[\begin{array}{rrrrrr}
1 \amp -1 \amp 2 \amp 3 \amp 1 \amp 4 \\
2 \amp 1 \amp 0 \amp 2 \amp -3 \amp 1 \\
-4 \amp -5 \amp 4 \amp 0 \amp 11 \amp 5 \\
-1 \amp 0 \amp 2 \amp 1 \amp 3 \amp 2 \\
-2 \amp -2 \amp 4 \amp 2 \amp 7 \amp 5
\end{array}\right]\text{.}\) Let us find the image space \({\cal R}(A)\) of \(A\text{.}\) Let \(b = \begin{bmatrix} b_1\\b_2\\b_3\\b_4\\b_5\end{bmatrix}\in {\cal R}(A)\text{.}\) Then there exists \(x\in\R^6\) such that \(Ax=b\text{;}\) in particular, \(Ax=b\) has a solution. Thus, to find a solution, we apply the RREF to the augmented matrix \([A|b]\text{.}\) It is easy to see that
\begin{equation*}
RREF([A~|~b])=
\left[\begin{array}{rrrrrrr}
1 \amp 0 \amp 0 \amp 1 \amp -\frac{5}{4} \amp \frac{3}{4} \amp \frac{1}{4} b_{1} + \frac{1}{4} b_{2} - \frac{1}{4} b_{4} \\
0 \amp 1 \amp 0 \amp 0 \amp -\frac{1}{2} \amp -\frac{1}{2} \amp -\frac{1}{2} b_{1} + \frac{1}{2} b_{2} + \frac{1}{2} b_{4} \\
0 \amp 0 \amp 1 \amp 1 \amp \frac{7}{8} \amp \frac{11}{8} \amp \frac{1}{8} b_{1} + \frac{1}{8} b_{2} + \frac{3}{8} b_{4} \\
0 \amp 0 \amp 0 \amp 0 \amp 0 \amp 0 \amp -2 b_{1} + 3 b_{2} + b_{3} \\
0 \amp 0 \amp 0 \amp 0 \amp 0 \amp 0 \amp -b_{1} + b_{2} - b_{4} + b_{5}
\end{array}\right]
\end{equation*}
This means that \(Ax=b\) has a solution iff
\begin{equation*}
-2 b_{1} + 3 b_{2} + b_{3}=0 \text{ and }-b_{1} + b_{2} - b_{4} + b_{5}=0\text{.}
\end{equation*}
Solving these equations, it is easy to see that
\begin{align*}
{\cal R}(A)=\amp \left\{\begin{bmatrix}
b_1\\b_2\\b_3\\b_4\\b_5
\end{bmatrix}: -2 b_{1} + 3 b_{2} + b_{3}=0, -b_{1} + b_{2} - b_{4} + b_{5}=0\right\}\\
=\amp \left\{\begin{bmatrix}
b_1\\b_2\\2b_1-3b_2\\b_4\\b_1-b_2+b_4
\end{bmatrix}:b_1,b_2,b_4\in \R\right\}\\
=\amp \left\{b_1\begin{bmatrix}1\\0\\2\\0\\1\end{bmatrix}+
b_2\begin{bmatrix}0\\1\\-3\\0\\-1\end{bmatrix}+
b_4\begin{bmatrix}0\\0\\0\\1\\1\end{bmatrix}:b_1,b_2,b_4\in \R\right\}
\end{align*}
Thus
\begin{equation*}
\left\{\begin{bmatrix}1\\0\\2\\0\\1\end{bmatrix},
\begin{bmatrix}0\\1\\-3\\0\\-1\end{bmatrix},
\begin{bmatrix}0\\0\\0\\1\\1\end{bmatrix}
\right\}
\end{equation*}
is a basis of \({\cal R}(A)\text{,}\) which is the same as the column space of \(A\text{.}\) Note that \({\cal R}(A)\) is the null space of the matrix \(\begin{bmatrix} -2 \amp 3 \amp 1 \amp 0 \amp 0\\-1\amp 1\amp 0 \amp -1 \amp 1 \end{bmatrix}\text{.}\)
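A quick SymPy check (assumed available) confirms the last remark: each of the three basis vectors of \({\cal R}(A)\) satisfies the two constraint equations, i.e. lies in the null space of the \(2\times 5\) matrix above.
```python
from sympy import Matrix

# Rows encode the constraints -2*b1+3*b2+b3 = 0 and -b1+b2-b4+b5 = 0.
C = Matrix([[-2, 3, 1,  0, 0],
            [-1, 1, 0, -1, 1]])

basis = [Matrix([1, 0,  2, 0,  1]),
         Matrix([0, 1, -3, 0, -1]),
         Matrix([0, 0,  0, 1,  1])]
for b in basis:
    print((C * b).T)  # Matrix([[0, 0]]): b satisfies both constraints
```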
Theorem 2.4.16.
Let \(A\) be an \(m\times n\) real matrix. Then
\begin{equation*}
{\rm rank}(A)+{\rm nullity}(A)=n\text{.}
\end{equation*}
Definition 2.4.17.
Let \(\beta =\{v_1,v_2,\ldots, v_n\}\) be a basis of \(\R^n\text{.}\) Let \(x\in \R^n\text{.}\) Then we know that there exist unique scalars \(x_1,\ldots, x_n\) such that \(x=x_1v_1+x_2v_2+\cdots+x_nv_n\text{.}\) These scalars \(x_1,\ldots, x_n\) are called the coordinates of \(x\) with respect to the basis \(\beta\text{.}\)
Notice that the order in which the basis vectors appear is important. Suppose \(\beta'=\{v_2,v_1,v_3,\ldots, v_n\}\text{.}\) Then \(\beta'\) is also a basis of \(\R^n\text{.}\) However, the coordinates of \(x\) with respect to the basis \(\beta'\) are \(x_2,x_1,x_3,\ldots,x_n\text{.}\) This is why a basis of \(\R^n\) is called an ordered basis. By a basis we will always mean an ordered basis.
How to find the coordinates of a vector w.r.t. a given basis?
Let \(\beta=\{v_1,v_2,\ldots, v_n\}\) be a basis of \(\R^n\) and let \(v\in \R^n\text{.}\) How do we find the coordinates of \(v\) with respect to \(\beta\text{?}\) Let \(v =x_1v_1+x_2v_2+\cdots+x_nv_n\text{.}\) We need to find \(x_1,\ldots, x_n\text{.}\) Note that
\begin{equation*}
v = x_1v_1+x_2v_2+\cdots+x_nv_n=
\begin{bmatrix} v_1\amp v_2\amp \cdots\amp v_n\end{bmatrix}
\begin{bmatrix} x_1\\x_2\\\vdots\\x_n\end{bmatrix}=Ax.
\end{equation*}
Thus, to find \(x\text{,}\) we need to solve \(Ax=v\text{,}\) where \(A\) is the matrix whose columns are \(v_1,\ldots, v_n\text{.}\) This can be done using the RREF. Let us illustrate this with a few examples.
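Here is a minimal sketch of this method in Python with SymPy (assumed available); the helper name coordinates is our own.
```python
from sympy import Matrix

def coordinates(basis, v):
    """Coordinates of v w.r.t. an ordered basis of R^n: solve A x = v,
    where the columns of A are the basis vectors."""
    A = Matrix.hstack(*[Matrix(u) for u in basis])
    return A.solve(Matrix(v))

# Example 2.4.19 below: coordinates of (2, 3) w.r.t. {(1, -1), (2, 1)}.
print(coordinates([(1, -1), (2, 1)], (2, 3)).T)  # Matrix([[-4/3, 5/3]])
```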
Example 2.4.18.
If \(x=(x_1,\ldots,x_n)\in \R^n\text{,}\) then \(x=x_1e_1+x_2e_2+\cdots+x_ne_n\text{.}\) In particular, \(x_1,x_2,\ldots, x_n\) are the coordinates of \(x\) with respect to the standard basis \(\{e_1,e_2,\ldots,e_n\}\text{.}\)
Example 2.4.19.
Consider a basis \(\beta=\{(1,-1),(2,1)\}\text{.}\) Find the coordinates of \(v=(2,3)\) with respect to \(\beta\text{.}\) In order to find the coordinates of \(v\) with respect to \(\beta\text{,}\) we solve the system \(Ax=b\) where \(A=\begin{bmatrix} 1 \amp 2\\-1\amp 1\end{bmatrix}\) and \(b = \begin{bmatrix} 2\\3\end{bmatrix}\text{.}\) Using RREF
\begin{equation*}
[A~|~b]\xrightarrow{RREF} \left[\begin{array}{rr|r}
1 \amp 0 \amp -\frac{4}{3} \\
0 \amp 1 \amp \frac{5}{3}
\end{array}\right]
\end{equation*}
Hence the coordinates of \(v\) w.r.t. \(\beta\) are \((-4/3,5/3)\text{.}\)
Example 2.4.20.
Find the coordinates of the vector \((1,2,3)\) with respect to the basis \(\beta=\{(1,-1,1),(1,1,-1),(-1,1,1)\}\) of \(\R^3\text{.}\) Using the RREF, we have
\begin{equation*}
\left[\begin{array}{rrr|r}
1 \amp 1 \amp -1 \amp 1 \\
-1 \amp 1 \amp 1 \amp 2 \\
1 \amp -1 \amp 1 \amp 3
\end{array}\right] \xrightarrow{RREF}
\left[\begin{array}{rrr|r}
1 \amp 0 \amp 0 \amp 2 \\
0 \amp 1 \amp 0 \amp \frac{3}{2} \\
0 \amp 0 \amp 1 \amp \frac{5}{2}
\end{array}\right]
\end{equation*}
Hence the coordinates of \((1,2,3)\) with respect to the given basis are \((2,3/2,5/2)\text{.}\)
Example 2.4.21.
Find the coordinates of the vector \((1,2,3,4)\) with respect to the basis \(\beta=\{(1,-1,1,1),(1,1,-1,1),(1,1,1,-1),(-1,1,1,1)\}\) of \(\R^4\text{.}\) Using the RREF, we have
\begin{equation*}
\left[\begin{array}{rrrr|r}
1 \amp 1 \amp 1 \amp -1 \amp 1\\
-1 \amp 1 \amp 1 \amp 1 \amp 2\\
1 \amp -1 \amp 1 \amp 1 \amp 3\\
1 \amp 1 \amp -1 \amp 1 \amp 4
\end{array}\right] \xrightarrow{RREF}
\left[\begin{array}{rrrr|r}
1 \amp 0 \amp 0 \amp 0 \amp \frac{3}{2} \\
0 \amp 1 \amp 0 \amp 0 \amp 1 \\
0 \amp 0 \amp 1 \amp 0 \amp \frac{1}{2} \\
0 \amp 0 \amp 0 \amp 1 \amp 2
\end{array}
\right]
\end{equation*}
Hence the coordinates of \((1,2,3,4)\) with respect to the given basis are \((3/2,1,1/2,2)\text{.}\)
Subsection 2.4.3 Change of Bases
Let \(\beta=\{u_1,u_2,\ldots,u_n\}\) and \(\gamma=\{v_1,v_2,\ldots,v_n\}\) be two bases of \(\R^n\text{.}\) Fix a vector \(x\in \R^n\text{.}\) Let \(x_\beta=\begin{bmatrix}c_1\\\vdots \\c_n\end{bmatrix}\) and \(x_\gamma=\begin{bmatrix}d_1\\\vdots \\d_n\end{bmatrix}\) be the coordinates of \(x\) with respect to \(\beta\) and \(\gamma\text{,}\) respectively. Then we have
\begin{equation*}
x = \begin{bmatrix}u_1\amp \cdots \amp u_n\end{bmatrix}\begin{bmatrix}c_1\\\vdots \\c_n\end{bmatrix}=Ax_\beta.
\end{equation*}
Similarly
\begin{equation*}
x = \begin{bmatrix}v_1\amp \cdots \amp v_n\end{bmatrix}\begin{bmatrix}d_1\\\vdots \\d_n\end{bmatrix}=Bx_\gamma.
\end{equation*}
Thus we have
\begin{equation*}
Ax_\beta=Bx_\gamma\implies x_\beta = A^{-1}B x_\gamma \text{ and } x_\gamma = B^{-1}Ax_\beta.
\end{equation*}
The matrices \(A^{-1}B\) and \(B^{-1}A\) are called transition matrices. We denote \(A^{-1}B\) by \([I]_\gamma^\beta\) and \(B^{-1}A\) by \([I]_\beta^\gamma\text{.}\) Note that \([I]_\beta^\gamma=\left([I]_\gamma^\beta\right)^{-1}.\)
Furthermore, the transition matrix \([I]_\gamma^\beta\) can be obtained by applying the RREF to the augmented matrix \([A|B]\) and extracting the last \(n\) columns, since \(RREF([A|B])=[I|A^{-1}B]\text{.}\) Let us illustrate this with an example.
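In SymPy (assumed available) this recipe is a short function; the helper name transition_matrix is ours.
```python
from sympy import Matrix

def transition_matrix(beta, gamma):
    """Return [I]_gamma^beta = A^{-1} B, where the columns of A are the
    vectors of beta and the columns of B are the vectors of gamma."""
    A = Matrix.hstack(*[Matrix(u) for u in beta])
    B = Matrix.hstack(*[Matrix(v) for v in gamma])
    # RREF([A | B]) = [I | A^{-1} B]; extract the last n columns.
    R, _ = Matrix.hstack(A, B).rref()
    return R[:, A.cols:]
```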
Example 2.4.22.
Let \(\beta = \left\{\left(2,-1,3\right), \left(3,1,2\right), \left(1,1,1\right)\right\}\) and \(\gamma = \left\{\left(1,-1,1\right), \left(1,1,-1\right), \left(-1,1,1\right)\right\}\) be two bases of \(\R^3\text{.}\) Consider the vector \(x=(1,2,3)\text{.}\) We have
\begin{equation*}
A = \left(\begin{array}{rrr}
2 \amp 3 \amp 1 \\
-1 \amp 1 \amp 1 \\
3 \amp 2 \amp 1
\end{array}\right), B = \left(\begin{array}{rrr}
1 \amp 1 \amp -1 \\
-1 \amp 1 \amp 1 \\
1 \amp -1 \amp 1
\end{array}\right)
\end{equation*}
First we find \(x_\beta\) and \(x_\gamma\text{.}\)
\begin{equation*}
\left(\begin{array}{rrr|r}
2 \amp 3 \amp 1 \amp 1 \\
-1 \amp 1 \amp 1 \amp 2 \\
3 \amp 2 \amp 1 \amp 3
\end{array}\right)
\xrightarrow{RREF}
\left(\begin{array}{rrr|r}
1 \amp 0 \amp 0 \amp \frac{3}{5} \\
0 \amp 1 \amp 0 \amp -\frac{7}{5} \\
0 \amp 0 \amp 1 \amp 4
\end{array}\right)
\end{equation*}
\begin{equation*}
\implies x_\beta =\begin{pmatrix} 3/5\\-7/5\\4\end{pmatrix}.
\end{equation*}
Similarly
\begin{equation*}
\left(\begin{array}{rrr|r}
1 \amp 1 \amp -1 \amp 1 \\
-1 \amp 1 \amp 1 \amp 2 \\
1 \amp -1 \amp 1 \amp 3
\end{array}\right)\xrightarrow{RREF}
\left(\begin{array}{rrr|r}
1 \amp 0 \amp 0 \amp 2 \\
0 \amp 1 \amp 0 \amp \frac{3}{2} \\
0 \amp 0 \amp 1 \amp \frac{5}{2}
\end{array}\right)
\end{equation*}
\begin{equation*}
\implies x_\gamma =\begin{pmatrix} 2\\3/2\\5/2\end{pmatrix}.
\end{equation*}
Now to find the transition matrix \([I]_\gamma^\beta\text{,}\) we have
\begin{equation*}
[A~|~B]\xrightarrow{RREF} \left(\begin{array}{rrr|rrr}
1 \amp 0 \amp 0 \amp \frac{2}{5} \amp -\frac{4}{5} \amp \frac{2}{5} \\
0 \amp 1 \amp 0 \amp \frac{2}{5} \amp \frac{6}{5} \amp -\frac{8}{5} \\
0 \amp 0 \amp 1 \amp -1 \amp -1 \amp 3
\end{array}\right)
\end{equation*}
\begin{equation*}
\implies [I]_\gamma^\beta =
\left(\begin{array}{rrr}
\frac{2}{5} \amp -\frac{4}{5} \amp \frac{2}{5} \\
\frac{2}{5} \amp \frac{6}{5} \amp -\frac{8}{5} \\
-1 \amp -1 \amp 3
\end{array}\right)
\end{equation*}
It is easy to verify that \(x_\beta = [I]_\gamma^\beta x_\gamma\text{.}\) Similarly, to find the transition matrix \([I]_\beta^\gamma\text{,}\) we have
\begin{equation*}
[B~|~A]\xrightarrow{RREF} \left(\begin{array}{rrr|rrr}
1 \amp 0 \amp 0 \amp \frac{5}{2} \amp \frac{5}{2} \amp 1 \\
0 \amp 1 \amp 0 \amp \frac{1}{2} \amp 2 \amp 1 \\
0 \amp 0 \amp 1 \amp 1 \amp \frac{3}{2} \amp 1
\end{array}\right)
\end{equation*}
\begin{equation*}
\implies [I]_\beta^\gamma =\left(\begin{array}{rrr}
\frac{5}{2} \amp \frac{5}{2} \amp 1 \\
\frac{1}{2} \amp 2 \amp 1 \\
1 \amp \frac{3}{2} \amp 1
\end{array}\right)
\end{equation*}
It is easy to verify that \(x_\gamma=[I]_\beta^\gamma x_\beta\text{.}\)
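These final verifications are easily done in SymPy (assumed available):
```python
from sympy import Matrix, Rational

A = Matrix([[2, 3, 1], [-1, 1, 1], [3, 2, 1]])    # columns: beta
B = Matrix([[1, 1, -1], [-1, 1, 1], [1, -1, 1]])  # columns: gamma

P = A.inv() * B                                   # [I]_gamma^beta
x_gamma = Matrix([2, Rational(3, 2), Rational(5, 2)])
print((P * x_gamma).T)                            # (3/5, -7/5, 4) = x_beta
x_beta = Matrix([Rational(3, 5), Rational(-7, 5), 4])
print((B.inv() * A * x_beta).T)                   # (2, 3/2, 5/2) = x_gamma
```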
Checkpoint 2.4.23.
What are all subspaces of \(\R^2\) and \(\R^3\text{?}\)
Checkpoint 2.4.24.
If \(W\) is a subspace of \(\R^n\text{,}\) then it is the null space of some matrix.
We end this chapter by looking at a bigger example, which also illustrates that the RREF yields several pieces of information about a matrix.
Example 2.4.25.
Consider a set of 7 vectors \(v_1,\ldots, v_7\in \R^7\text{.}\)
\begin{equation*}
v_1=\left(\begin{array}{r}
1 \\
-1 \\
2 \\
0 \\
3 \\
2 \\
1
\end{array}\right),
v_2=\left(\begin{array}{r}
2 \\
1 \\
0 \\
2 \\
-3 \\
0 \\
1
\end{array}\right),
v_3=\left(\begin{array}{r}
-1 \\
-5 \\
6 \\
-4 \\
15 \\
6 \\
1
\end{array}\right),
v_4=\left(\begin{array}{r}
0 \\
2 \\
3 \\
1 \\
-1 \\
3 \\
-1
\end{array}\right),
v_5=\left(\begin{array}{r}
4 \\
3 \\
1 \\
2 \\
0 \\
1 \\
3
\end{array}\right),
\end{equation*}
\begin{equation*}
v_6=\left(\begin{array}{r}
-13 \\
-30 \\
4 \\
-22 \\
52 \\
4 \\
0
\end{array}\right),
v_7=\left(\begin{array}{r}
-2 \\
-1 \\
0 \\
1 \\
2 \\
3 \\
4
\end{array}\right)
\end{equation*}
Define the matrix \(A\) whose columns are \(v_1,\ldots, v_7\) and apply RREF to \(A\text{.}\)
\begin{equation*}
A=\left(\begin{array}{rrrrrrr}
1 \amp 2 \amp -1 \amp 0 \amp 4 \amp -13 \amp -2 \\
-1 \amp 1 \amp -5 \amp 2 \amp 3 \amp -30 \amp -1 \\
2 \amp 0 \amp 6 \amp 3 \amp 1 \amp 4 \amp 0 \\
0 \amp 2 \amp -4 \amp 1 \amp 2 \amp -22 \amp 1 \\
3 \amp -3 \amp 15 \amp -1 \amp 0 \amp 52 \amp 2 \\
2 \amp 0 \amp 6 \amp 3 \amp 1 \amp 4 \amp 3 \\
1 \amp 1 \amp 1 \amp -1 \amp 3 \amp 0 \amp 4
\end{array}\right)
\end{equation*}
\begin{equation*}
RREF(A)=\left(\begin{array}{rrrrrrr}
1 \amp 0 \amp 3 \amp 0 \amp 0 \amp 9 \amp 0 \\
0 \amp 1 \amp -2 \amp 0 \amp 0 \amp -7 \amp 0 \\
0 \amp 0 \amp 0 \amp 1 \amp 0 \amp -4 \amp 0 \\
0 \amp 0 \amp 0 \amp 0 \amp 1 \amp -2 \amp 0 \\
0 \amp 0 \amp 0 \amp 0 \amp 0 \amp 0 \amp 1 \\
0 \amp 0 \amp 0 \amp 0 \amp 0 \amp 0 \amp 0 \\
0 \amp 0 \amp 0 \amp 0 \amp 0 \amp 0 \amp 0
\end{array}\right)
\end{equation*}
From the RREF of
\(A\text{,}\) we have the following observations:
(i) The reduced row echelon form of \(A\) has 5 non-zero rows. This means the rank of \(A\) is 5. In particular, \(A\) is singular.
(ii) The pivot columns are 1, 2, 4, 5, and 7. In particular, \(\{v_1,v_2,v_4,v_5,v_7\}\) is linearly independent and forms a basis of the subspace spanned by \(\{v_1,\ldots,v_7\}\text{.}\)
(iii) The 3rd column of \(RREF(A)\) expresses \(v_3\) as a linear combination of \(v_1\) and \(v_2\text{:}\) namely, \(v_3=3v_1-2v_2\text{.}\) Similarly, \(v_6=9v_1-7v_2-4v_4-2v_5\text{.}\)
(iv) Since the rank of \(A\) is 5, the nullity of \(A\) is \(7-5=2\text{.}\)
(v) The first five rows of \(RREF(A)\) constitute a basis of the row space of \(A\text{.}\) These observations can also be checked by computer, as in the sketch below.
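All of the above can be read off in SymPy (assumed available) from a single rref() call:
```python
from sympy import Matrix

A = Matrix([[ 1,  2, -1,  0, 4, -13, -2],
            [-1,  1, -5,  2, 3, -30, -1],
            [ 2,  0,  6,  3, 1,   4,  0],
            [ 0,  2, -4,  1, 2, -22,  1],
            [ 3, -3, 15, -1, 0,  52,  2],
            [ 2,  0,  6,  3, 1,   4,  3],
            [ 1,  1,  1, -1, 3,   0,  4]])

R, pivots = A.rref()
print(pivots)               # (0, 1, 3, 4, 6): pivot columns 1, 2, 4, 5, 7
print(A.rank())             # 5
print(R[:, 2].T)            # (3, -2, 0, ...): so v3 = 3*v1 - 2*v2
print(len(A.nullspace()))   # 2 = nullity = 7 - rank
```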