In this section, we wish to find all linear maps from \(\R^n\) to \(\R^m\text{.}\) We shall see that these linear maps are essentially given by an \(m\times n\) matrix. We shall also see how to find the matrix of a linear transformation with respect to given bases on the domain and codomain.
Subsection3.2.1Linear maps from \(\R^n\) to \(\R^m\)
We want to describe all linear maps from \(\R^n\) to \(\R^m\text{.}\) Suppose \(T\colon \R^n\to \R^m\) is a linear map. Then for \(x\in \R^n\text{,}\) \(T(x)\in \R^m\text{.}\) In particular, \(T(x)\) has \(m\) components. Let us write these components as \(T_1(x),\ldots, T_m(x)\text{.}\) Thus \(T\) is given by
\begin{equation*}
T(x)=\begin{bmatrix}T_1(x)\\T_2(x)\\\vdots\\T_m(x) \end{bmatrix}\text{.}
\end{equation*}
Note that for each \(i=1,\ldots, m\text{,}\)\(T_i\) is a map from \(\R^n\to \R\text{.}\)
Checkpoint3.2.1.
Show that \(T\colon \R^n\to \R^m\) defined by \(T(x)=\left(T_1(x),\ldots, T_m(x)\right)\) is a linear map if and only if \(T_i\colon \R^n\to \R\) is a linear map for each \(i\text{.}\)
From Checkpoint 3.2.1, it follows that in order to know a linear map \(T\text{,}\) it is sufficient to know each component \(T_i\colon \R^n\to \R\text{.}\)
Example3.2.2.Linear map from \(\R^n\) to \(\R\).
Suppose \(T\colon \R^n\to \R\) is a linear map. Consider the standard basis \(\beta=\{e_1,e_2,\ldots,
e_n\}\text{.}\) Then for \(x\in \R^n\text{,}\) we have \(x=x_1e_1+x_2e_2+\cdots+x_n e_n\text{.}\) Since \(T\) is linear, we have
\begin{equation*}
T(x)=x_1T(e_1)+x_2T(e_2)+\cdots+x_nT(e_n)\text{.}
\end{equation*}
Thus, if \(T\colon \R^n\to \R\) is a linear map, there exist scalars \(a_1,a_2,\ldots,a_n\) such that \(T(x)=a_1x_1+a_2x_2+\cdots+a_nx_n\text{.}\) Here we have \(a_i=T(e_i)\) for \(i=1,\ldots,n\text{.}\) It is clear that to know \(T\text{,}\) it is enough to know \(T(e_1),\ldots, T(e_n)\text{.}\)
What we have proved is that any linear map \(T\) from \(\R^n\) to \(\R\) is given by
\begin{equation*}
T(x)=a_1x_1+a_2x_2+\cdots+a_nx_n\text{,}
\end{equation*}
where \(a_i=T(e_i)\) for \(1\leq i\leq n\text{.}\)
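The observation above can be sketched in code: a linear functional on \(\R^n\) is completely determined by the coefficients \(a_i=T(e_i)\text{.}\) The coefficients below are hypothetical, chosen only for illustration.

```python
# A linear functional T: R^n -> R has the form
# T(x) = a1*x1 + ... + an*xn, where a_i = T(e_i).
def functional(a):
    return lambda x: sum(ai * xi for ai, xi in zip(a, x))

T = functional((2, -1, 3))   # hypothetical coefficients: T(e_1)=2, T(e_2)=-1, T(e_3)=3
value = T((1, 1, 1))         # 2 - 1 + 3 = 4
recovered = T((1, 0, 0))     # evaluating at e_1 recovers a_1 = 2
```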
What happens if you choose a different basis (other than the standard basis)?
Let us come back to the linear map \(T(x)=\begin{bmatrix}T_1(x)\\T_2(x)\\\vdots\\T_m(x) \end{bmatrix}\text{.}\) Since each \(T_i\) is linear, there exist scalars \(a_{i1},a_{i2},\ldots,a_{in}\in \R\) such that \(T_i(x)=a_{i1}x_1+a_{i2}x_2+\cdots+a_{in}x_n\text{.}\) Thus
\begin{equation*}
T(x)=\begin{bmatrix}a_{11}x_1+a_{12}x_2+\cdots+a_{1n}x_n\\ a_{21}x_1+a_{22}x_2+\cdots+a_{2n}x_n\\ \vdots\\ a_{m1}x_1+a_{m2}x_2+\cdots+a_{mn}x_n \end{bmatrix}=\begin{bmatrix}a_{11}\amp a_{12}\amp \cdots \amp a_{1n}\\ a_{21}\amp a_{22}\amp \cdots \amp a_{2n}\\ \vdots \amp \vdots \amp \ddots \amp \vdots\\ a_{m1}\amp a_{m2}\amp \cdots \amp a_{mn} \end{bmatrix}\begin{bmatrix}x_1\\x_2\\\vdots\\x_n \end{bmatrix}=Ax\text{.}
\end{equation*}
Thus we have shown that any linear map \(T\colon \R^n\to\R^m\) is a matrix transformation \(T_A\text{,}\) where \(A=[a_{ij}]\text{.}\) Note that the matrix of \(T\) is \(A=\begin{bmatrix}T(e_1)\amp T(e_2)\amp \cdots \amp T(e_n) \end{bmatrix}\text{.}\)
Notice that the \(j\)-th column of \(A\) is the coordinate vector of \(T(e_j)\) with respect to the standard basis \(\{e_1,\ldots,e_m\}\) of \(\R^m\text{.}\) Thus, to find the matrix of \(T\text{,}\) we find the coordinates of \(T(e_j)\) with respect to the basis on the codomain and place them in the \(j\)-th column.
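This recipe — evaluate \(T\) at each standard basis vector and use the results as columns — can be sketched as follows. The map \(T\) below is a hypothetical example, not one from the text.

```python
def std_matrix(T, n):
    # Matrix of a linear map T: R^n -> R^m with respect to the
    # standard bases: the j-th column is T(e_j).
    cols = [T(tuple(1 if i == j else 0 for i in range(n))) for j in range(n)]
    return [list(row) for row in zip(*cols)]

def T(x):
    # hypothetical linear map T: R^2 -> R^3
    x1, x2 = x
    return (x1 + 2*x2, 3*x1 - x2, x2)

A = std_matrix(T, 2)   # columns are T(e_1) = (1,3,0) and T(e_2) = (2,-1,1)
```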
What happens if we change the bases on \(\R^n\) and \(\R^m\text{?}\) In order to see this, let us consider an example.
Example3.2.3.
Consider a linear map \(T\colon \R^3\to \R^2\) defined by \(T\left(\begin{bmatrix}x_1\\x_2\\x_3 \end{bmatrix} \right)=\begin{bmatrix}2x_1-x_2+x_3\\x_1+x_2-x_3 \end{bmatrix}\text{.}\) It is easy to see that \(T\) is a matrix transformation \(T_A\) where \(A=\begin{bmatrix}2 \amp -1 \amp 1\\1 \amp 1\amp -1 \end{bmatrix}\text{.}\) In particular, \(A\) is the matrix of \(T\) when we consider standard bases on the domain \(\R^3\) and codomain \(\R^2\text{.}\)
Let us consider a basis \(\beta =\{v_1=(1,1,-1),v_2=(1,-1,1),v_3=(-1,1,1)\}\) of the domain and the standard basis \(\gamma=\{(1,0),(0,1)\}\) on the codomain. In order to find the matrix \(A\) of \(T\text{,}\) we find the image \(T(v_1)\) and find its coordinates with respect to the standard basis \(\gamma\text{.}\) We have \(T(v_1)=(0,3)\text{.}\) Thus the first column of \(A\) is \(\begin{bmatrix}0\\3 \end{bmatrix}\text{.}\) Similarly \(T(v_2)=(4,-1)\) and \(T(v_3)=(-2,-1)\text{.}\) Hence the matrix \(A\) of \(T\) with respect to the bases \(\beta\) and \(\gamma\) is \(\begin{bmatrix}0\amp 4 \amp -2\\3\amp -1\amp -1 \end{bmatrix}\text{.}\) We denote this matrix as \([T]_\beta^\gamma\text{.}\)
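Since \(\gamma\) is the standard basis, the \(\gamma\)-coordinates of \(T(v_j)\) are just the entries of \(T(v_j)\) itself, so the columns of \([T]_\beta^\gamma\) can be computed directly. A quick check of the example:

```python
def T(x):
    # the map from Example 3.2.3
    x1, x2, x3 = x
    return (2*x1 - x2 + x3, x1 + x2 - x3)

beta = [(1, 1, -1), (1, -1, 1), (-1, 1, 1)]
cols = [T(v) for v in beta]             # gamma is standard, so the
M = [list(row) for row in zip(*cols)]   # column j is T(v_j) itself
```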
Checkpoint3.2.4.
Consider the linear transformation defined in the Example 3.2.3. Find the matrix of \(T\) with respect to a basis \(\beta =\{v_1=(1,1,-1),v_2=(1,-1,1),v_3=(-1,1,1)\}\) of \(\R^3\) and \(\gamma=\{w_1=(1,-1),w_2=(2,1)\}\) of \(\R^2\text{.}\)
Example3.2.5.
Consider a linear map \(T\colon \R^3\to \R^3\) given by \(T\left(\begin{bmatrix}x_1\\x_2\\x_3 \end{bmatrix} \right)=\begin{bmatrix}2x_1-x_2+x_3\\x_1+x_2-x_3\\3x_1+2x_3 \end{bmatrix}\text{.}\) Let us find the matrix of \(T\) with respect to a basis \(\beta =\{v_1=(1,1,-1),v_2=(1,-1,1),v_3=(-1,1,1)\}\) of \(\R^3\) on the domain and codomain. Note that the columns of \([T]_\beta^\beta\) are the coordinates of \(T(v_1), T(v_2), T(v_3)\) with respect to the basis \(\beta\text{.}\) These can be obtained simultaneously by applying RREF to \(\begin{bmatrix}v_1 \amp v_2 \amp v_3 \amp T(v_1)\amp T(v_2)\amp T(v_3) \end{bmatrix}\) and taking the last three columns as \([T]_\beta^\beta\text{.}\)
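The RREF computation just described can be sketched with exact rational arithmetic; the `rref` helper below is a standard textbook row reduction, written out for illustration.

```python
from fractions import Fraction

def rref(M):
    # reduced row echelon form of M (entries as Fractions), computed on a copy
    M = [row[:] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        M[r] = [x / M[r][c] for x in M[r]]     # scale pivot row to leading 1
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[r])]
        r += 1
        if r == len(M):
            break
    return M

def T(x):
    # the map from Example 3.2.5
    x1, x2, x3 = x
    return (2*x1 - x2 + x3, x1 + x2 - x3, 3*x1 + 2*x3)

beta = [(1, 1, -1), (1, -1, 1), (-1, 1, 1)]
# augmented matrix [v1 v2 v3 | T(v1) T(v2) T(v3)]
cols = [list(v) for v in beta] + [list(T(v)) for v in beta]
aug = [[Fraction(cols[j][i]) for j in range(6)] for i in range(3)]
T_beta = [row[3:] for row in rref(aug)]   # last three columns give [T]_beta^beta
```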
Let \(T,S\colon \R^n\to \R^m\) be two linear maps. Show that \(T+S\) is a linear map and that the matrix of \(T+S\) is the sum of the matrices of \(T\) and \(S\text{.}\)
Next we look at the composition of linear maps.
Subsection3.2.2Composition of linear transformations
Let \(T\colon \R^n\to \R^m\) and \(S\colon \R^m\to \R^p\) be linear transformations. Then \(S\circ T\colon \R^n\to \R^p\) defined by \((S\circ T)(x)=S(T(x))\) is a linear map.
Suppose \(T(x)=Ax\) and \(S(y)=By\) are matrix transformations. Then
\begin{equation*}
(S\circ T)(x)=S(T(x))=B(Ax)=(BA)x\text{.}
\end{equation*}
Thus \(S\circ T\) is the matrix transformation \(T_{BA}\text{.}\)
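We can check numerically that the matrix of a composition is the product of the matrices. The maps \(T\) and \(S\) below are hypothetical examples, not taken from the text.

```python
def T(x):
    # hypothetical linear map T: R^2 -> R^3
    x1, x2 = x
    return (x1, x2, x1 + x2)

def S(y):
    # hypothetical linear map S: R^3 -> R^2
    y1, y2, y3 = y
    return (y1 - y3, 2*y2)

def std_matrix(f, n):
    # matrix w.r.t. standard bases: the j-th column is f(e_j)
    cols = [f(tuple(1 if i == j else 0 for i in range(n))) for j in range(n)]
    return [list(row) for row in zip(*cols)]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

A = std_matrix(T, 2)                   # 3x2 matrix of T
B = std_matrix(S, 3)                   # 2x3 matrix of S
C = std_matrix(lambda x: S(T(x)), 2)   # 2x2 matrix of the composition
BA = matmul(B, A)                      # C should equal the product BA
```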
of \(\R^3\text{.}\) Let \(A=[T]_\beta^\gamma\text{,}\)\(B=[S]_\gamma^\beta\) and \(C=[S\circ T]_\beta^\beta\text{.}\) Then we shall show that \(C=BA\text{.}\) Note that
Let \(\beta=\{u_1,u_2,\ldots,u_n\}\) and \(\gamma=\{v_1,v_2,\ldots,v_n\}\) be two bases of \(\R^n\text{.}\) Recall the definition of the matrix of change of basis \([I]_\beta^\gamma\text{.}\) We obtained \([I]_\beta^\gamma\) by applying RREF to the matrix \([B~|~A]\text{,}\) where \(A\) and \(B\) are the matrices whose columns are the vectors of \(\beta\) and \(\gamma\) respectively, and extracting the last \(n\) columns. This is nothing but the matrix of the identity linear map \(I\colon \R^n\to \R^n\) with respect to the basis \(\beta\) on the domain and \(\gamma\) on the codomain.
Now let us consider what happens to the matrix of a linear transformation \(T\colon \R^n\to \R^m\) when we change the bases on the domain and codomain. Let \(\beta=\{u_1,u_2,\ldots,u_n\}\) and \(\gamma=\{v_1,v_2,\ldots,v_m\}\) be bases of \(\R^n\) and \(\R^m\) respectively. Let \(A =[T]_\beta^\gamma\) be the matrix of \(T\) with respect to \(\beta\) and \(\gamma\text{.}\) Let \(\beta'=\{u_1',u_2',\ldots,u_n'\}\) and \(\gamma'=\{v_1',v_2',\ldots,v_m'\}\) be other bases of \(\R^n\) and \(\R^m\) respectively. Let \(B =[T]_{\beta'}^{\gamma'}\) be the matrix of \(T\) with respect to \(\beta'\) and \(\gamma'\text{.}\) How are \(A\) and \(B\) related? The relation is given by the following commutative diagram.
Figure3.2.8.Commutative Diagram
From the above commutative diagram, we have
\begin{equation*}
\tau A = B\rho \implies B = \tau A \rho^{-1} \text{ or } A = \tau^{-1}B\rho\text{.}
\end{equation*}
Example3.2.9.
Consider a linear map \(T\colon \R^4\to \R^3\) defined in Example 3.2.7. Consider a basis \(\beta=\{u_1,u_2,u_3,u_4\}\text{,}\) where
The matrix of change of basis \(\tau=[I]_\gamma^{\gamma'}=\left(\begin{array}{rrr} \frac{3}{2} \amp \frac{1}{2} \amp -1 \\ -\frac{1}{2} \amp \frac{5}{2} \amp 2 \\ -\frac{1}{2} \amp \frac{1}{2} \amp 0 \end{array} \right)\text{.}\)
It is easy to check that \(B\rho = \tau A\text{.}\)
Let \(T\colon \R^n\to \R^n\) be a linear transformation. Let \(\beta=\{v_1,\ldots,
v_n\}\) be a basis of \(\R^n\) and \(A=[T]_\beta\text{,}\) the matrix of \(T\) with respect to \(\beta\text{.}\) Let \(\gamma=\{u_1,\ldots,
u_n\}\) be another basis of \(\R^n\) and \(B=[T]_\gamma\text{,}\) the matrix of \(T\) with respect to \(\gamma\text{.}\) Let \(\rho=[I]_\beta^{\gamma}\) be the matrix of change of basis from \(\beta\) to \(\gamma\text{.}\) Then we have \(B = \rho^{-1}A\rho\text{.}\) In this case, \(A\) and \(B\) are said to be similar matrices.
Definition3.2.10.
Let \(A\) and \(B\) be two real \(n\times n\) matrices. Then \(A\) and \(B\) are called similar if there exists a nonsingular matrix \(P\) such that \(B=P^{-1}AP\text{.}\)
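A small sketch of this definition with a hypothetical pair of matrices, using exact arithmetic via `Fraction`. Since similar matrices represent the same linear map in different bases, quantities such as the trace agree.

```python
from fractions import Fraction

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def inv2(P):
    # inverse of a 2x2 matrix via the adjugate formula
    a, b, c, d = P[0][0], P[0][1], P[1][0], P[1][1]
    det = Fraction(a * d - b * c)   # nonzero since P is nonsingular
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[2, 1], [0, 3]]   # hypothetical matrix, not from the text
P = [[1, 1], [1, 2]]   # a nonsingular matrix
B = matmul(matmul(inv2(P), A), P)   # B = P^{-1} A P is similar to A
```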
Remark3.2.11.
A linear transformation \(T\colon \R^n\to \R^m\) is completely determined once it is defined on a basis. In other words, let \(\beta=\{v_1,\ldots,
v_n\}\) be a basis of \(\R^n\text{.}\) Let \(w_1,\ldots, w_n\) be \(n\) vectors in \(\R^m\text{.}\) Then there exists a unique linear transformation \(T\colon \R^n\to \R^m\) such that \(T(v_i)=w_i\) for \(i=1,\ldots, n\text{.}\)
How is \(T\) defined if \(T(v_i)=w_i\text{?}\) For \(v\in \R^n\text{,}\) there exist scalars \(\alpha_1,\ldots, \alpha_n\) such that \(v=\sum \alpha_iv_i\text{.}\) Then \(T(v)=\sum \alpha_i T(v_i)=\sum\alpha_i w_i\text{.}\)
Reading Questions
Prove the uniqueness of the linear transformation in Remark 3.2.11.
Example3.2.12.
Fix a basis \(\beta =\{ (1,1,-1),(1,-1,1),(-1,1,1)\}\) of \(\R^3\text{.}\) Define a linear map \(T\colon \R^3\to \R^3\) such that \(T(1,1,-1)=(1,1,0), T(1,-1,1)=(1,0,1), T(-1,1,1)=(0,1,1)\text{.}\) Find \(T\left(\begin{bmatrix}x_1\\ x_2\\ x_3 \end{bmatrix} \right)\text{.}\)
Thus in order to find \(T\text{,}\) we need to know how \(T\) is defined on the standard basis vectors. First we need to find the coordinates of \(e_1,e_2,e_3\) with respect to the basis \(\beta\) using RREF.
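The computation outlined above can be carried out in code: row-reduce \([v_1~v_2~v_3~|~e_1~e_2~e_3]\) to get the \(\beta\)-coordinates of each \(e_j\text{,}\) then apply linearity. A sketch with exact arithmetic; the `rref` helper is a standard row reduction, included for illustration.

```python
from fractions import Fraction

def rref(M):
    # reduced row echelon form, computed on a copy, exact arithmetic
    M = [row[:] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        M[r] = [x / M[r][c] for x in M[r]]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[r])]
        r += 1
        if r == len(M):
            break
    return M

beta = [(1, 1, -1), (1, -1, 1), (-1, 1, 1)]
images = [(1, 1, 0), (1, 0, 1), (0, 1, 1)]   # T(v_1), T(v_2), T(v_3)

n = 3
# augmented matrix [v1 v2 v3 | e1 e2 e3]
aug = [[Fraction(beta[j][i]) for j in range(n)] +
       [Fraction(1 if i == j else 0) for j in range(n)] for i in range(n)]
coords = [row[n:] for row in rref(aug)]   # column j holds the beta-coordinates of e_j

# standard matrix of T: T(e_j) = sum_k coords[k][j] * T(v_k)
M = [[sum(coords[k][j] * images[k][i] for k in range(n))
      for j in range(n)] for i in range(n)]
```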