
Section 3.2 Linear maps from \(\R^n\) to \(\R^m\)

In this section, we wish to find all linear maps from \(\R^n\) to \(\R^m\text{.}\) We shall see that these linear maps are essentially given by \(m\times n\) matrices. We shall also see how to find the matrix of a linear transformation with respect to given bases on the domain and codomain.

Subsection 3.2.1 Linear maps from \(\R^n\) to \(\R^m\)

We want to find a linear map from \(\R^n\) to \(\R^m\text{.}\) Suppose \(T\colon \R^n\to \R^m\) is a linear map. Then for \(x\in \R^n\text{,}\) \(T(x)\in \R^m\text{.}\) In particular, \(T(x)\) has \(m\) components. Let us write these components as \(T_1(x),\ldots, T_m(x)\text{.}\) Thus \(T\) is given by
\begin{equation*} T(x)=\begin{bmatrix}T_1(x)\\T_2(x)\\\vdots\\T_m(x) \end{bmatrix}\text{.} \end{equation*}
Note that for each \(i=1,\ldots, m\text{,}\) \(T_i\) is a map from \(\R^n\to \R\text{.}\)

Checkpoint 3.2.1.

Show that \(T\colon \R^n\to \R^m\) defined by \(T(x)=\left(T_1(x),\ldots, T_m(x)\right)\) is a linear map if and only if \(T_i\colon \R^n\to \R\) is a linear map for each \(i\text{.}\)
From Checkpoint 3.2.1, it follows that in order to know the linear map \(T\text{,}\) it is sufficient to know each component \(T_i\colon \R^n\to \R\text{.}\)

Example 3.2.2. Linear map from \(\R^n\) to \(\R\).

Suppose \(T\colon \R^n\to \R\) is a linear map. Consider the standard basis \(\beta=\{e_1,e_2,\ldots, e_n\}\text{.}\) Then for \(x\in \R^n\text{,}\) we have \(x=x_1e_1+x_2e_2+\cdots+x_n e_n\text{.}\) Since \(T\) is linear, we have
\begin{align*} T(x)=\amp T(x_1e_1+x_2e_2+\cdots+x_n e_n)\\ =\amp x_1T(e_1)+x_2T(e_2)+\cdots+x_n T(e_n)\text{.} \end{align*}
Define \(T(e_1):=a_1, T(e_2):=a_2,\ldots, T(e_n):=a_n\text{.}\) Then
\begin{equation*} T(x)=a_1x_1+a_2x_2+\cdots+a_nx_n\text{.} \end{equation*}
Thus, if \(T\colon \R^n\to \R\) is a linear map, then there exist scalars \(a_1,a_2,\ldots,a_n\) such that \(T(x)=a_1x_1+a_2x_2+\cdots+a_nx_n\text{.}\) Here \(a_i=T(e_i)\) for \(i=1,\ldots,n\text{.}\) It is clear that to know \(T\) it is enough to know \(T(e_1),\ldots, T(e_n)\text{.}\)
What we have proved is that any linear map \(T\colon \R^n\to \R\) is given by
\begin{equation*} T(x)=a_1x_1+a_2x_2+\cdots+a_nx_n \end{equation*}
where \(a_i=T(e_i)\) for \(1\leq i\leq n\text{.}\)
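As a quick numerical illustration (a sketch, not part of the text; the functional below is a made-up example), the scalars \(a_i=T(e_i)\) can be recovered by evaluating \(T\) on the standard basis:

```python
import numpy as np

# A hypothetical linear functional T : R^3 -> R.
def T(x):
    x1, x2, x3 = x
    return 5*x1 - x2 + 2*x3

# a_i = T(e_i); then T(x) = a . x for every x.
a = np.array([T(e) for e in np.eye(3)])
x = np.array([2.0, 3.0, -1.0])
assert np.isclose(T(x), a @ x)
```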
What happens if you choose a different basis (other than the standard basis)?
Let us come back to the linear map \(T(x)=\begin{bmatrix}T_1(x)\\T_2(x)\\\vdots\\T_m(x) \end{bmatrix}\text{.}\) Since each \(T_i\) is linear, there exist scalars \(a_{i1},a_{i2},\ldots,a_{in}\in \R\) such that \(T_i(x)=a_{i1}x_1+a_{i2}x_2+\cdots+a_{in}x_n\text{.}\) Thus
\begin{equation*} T(x)=\begin{bmatrix}a_{11}x_1+a_{12}x_2+\cdots+a_{1n}x_n\\ a_{21}x_1+a_{22}x_2+\cdots+a_{2n}x_n\\\vdots\\ a_{m1}x_1+a_{m2}x_2+\cdots+a_{mn}x_n \end{bmatrix} = \begin{bmatrix}a_{11}\amp a_{12}\amp \cdots\amp a_{1n}\\ a_{21}\amp a_{22}\amp \cdots\amp a_{2n}\\ \vdots \amp \vdots\amp \ddots\amp \vdots\\ a_{m1}\amp a_{m2}\amp \cdots\amp a_{mn} \end{bmatrix} \begin{bmatrix}x_1\\x_2\\\vdots \\x_n \end{bmatrix}\text{.} \end{equation*}
Thus we have shown that any linear map \(T\colon \R^n\to\R^m\) is a matrix transformation \(T_A\text{,}\) where \(A=[a_{ij}]\text{.}\) Note that the matrix of \(T\)
\begin{align*} A = \begin{bmatrix}a_{11}\amp a_{12}\amp \cdots\amp a_{1n}\\ a_{21}\amp a_{22}\amp \cdots\amp a_{2n}\\ \vdots \amp \vdots\amp \ddots\amp \vdots\\ a_{m1}\amp a_{m2}\amp \cdots\amp a_{mn} \end{bmatrix} = \amp\begin{bmatrix}T_1(e_1)\amp T_1(e_2)\amp \cdots\amp T_1(e_n)\\ T_2(e_1)\amp T_2(e_2)\amp \cdots\amp T_2(e_n)\\ \vdots \amp \vdots\amp \ddots\amp \vdots\\ T_m(e_1)\amp T_m(e_2)\amp \cdots\amp T_m(e_n) \end{bmatrix} \\ =\amp \begin{bmatrix}T(e_1)\amp T(e_2)\amp \cdots \amp T(e_n) \end{bmatrix}\text{.} \end{align*}
Notice that the \(j\)-th column of \(A\) consists of the coordinates of the vector \(T(e_j)\) with respect to the standard basis \(\{e_1,\ldots,e_m\}\) of \(\R^m\text{.}\) Thus to find the matrix of \(T\text{,}\) we find the coordinates of \(T(e_j)\) with respect to the basis on the codomain and put them in the \(j\)-th column.
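Concretely, the matrix of a linear map can be assembled column by column from the images of the standard basis vectors. A small NumPy sketch (the map below is a made-up example):

```python
import numpy as np

# A hypothetical linear map T : R^3 -> R^2, given componentwise.
def T(x):
    x1, x2, x3 = x
    return np.array([x1 + 2*x2, 3*x2 - x3])

# The j-th column of the standard matrix A is T(e_j).
A = np.column_stack([T(e) for e in np.eye(3)])

# Sanity check: T(x) = A x for a sample vector.
x = np.array([1.0, -2.0, 5.0])
assert np.allclose(T(x), A @ x)
```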
What happens if we change the bases on \(\R^n\) and \(\R^m\text{?}\) In order to see this let us consider an example.

Example 3.2.3.

Consider a linear map \(T\colon \R^3\to \R^2\) defined by \(T\left(\begin{bmatrix}x_1\\x_2\\x_3 \end{bmatrix} \right)=\begin{bmatrix}2x_1-x_2+x_3\\x_1+x_2-x_3 \end{bmatrix}\text{.}\) It is easy to see that \(T\) is a matrix transformation \(T_A\) where \(A=\begin{bmatrix}2 \amp -1 \amp 1\\1 \amp 1\amp -1 \end{bmatrix}\text{.}\) In particular, \(A\) is the matrix of \(T\) when we consider standard bases on the domain \(\R^3\) and codomain \(\R^2\text{.}\)
Let us consider the basis \(\beta =\{v_1=(1,1,-1),v_2=(1,-1,1),v_3=(-1,1,1)\}\) of the domain and the standard basis \(\gamma=\{(1,0),(0,1)\}\) on the codomain. In order to find the matrix of \(T\) with respect to these bases, we compute \(T(v_1)\) and find its coordinates with respect to the standard basis \(\gamma\text{.}\) We have \(T(v_1)=(0,3)\text{.}\) Thus the first column of the matrix is \(\begin{bmatrix}0\\3 \end{bmatrix}\text{.}\) Similarly \(T(v_2)=(4,-1)\) and \(T(v_3)=(-2,-1)\text{.}\) Hence the matrix of \(T\) with respect to the bases \(\beta\) and \(\gamma\) is \(\begin{bmatrix}0\amp 4 \amp -2\\3\amp -1\amp -1 \end{bmatrix}\text{.}\) We denote this matrix by \([T]_\beta^\gamma\text{.}\)
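The computation can be verified with NumPy (a quick sketch, not part of the text). Since \(\gamma\) is the standard basis, the \(j\)-th column of \([T]_\beta^\gamma\) is simply \(Av_j\text{:}\)

```python
import numpy as np

# Standard matrix of T from Example 3.2.3.
A = np.array([[2, -1,  1],
              [1,  1, -1]])
v1, v2, v3 = [1, 1, -1], [1, -1, 1], [-1, 1, 1]
P = np.column_stack([v1, v2, v3])

# Since gamma is the standard basis, column j of [T]_beta^gamma is T(v_j) = A v_j.
M = A @ P
assert np.array_equal(M, [[0, 4, -2], [3, -1, -1]])
```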

Checkpoint 3.2.4.

Consider the linear transformation defined in Example 3.2.3. Find the matrix of \(T\) with respect to the basis \(\beta =\{v_1=(1,1,-1),v_2=(1,-1,1),v_3=(-1,1,1)\}\) of \(\R^3\) and the basis \(\gamma=\{w_1=(1,-1),w_2=(2,1)\}\) of \(\R^2\text{.}\)

Example 3.2.5.

Consider a linear map \(T\colon \R^3\to \R^3\) given by \(T\left(\begin{bmatrix}x_1\\x_2\\x_3 \end{bmatrix} \right)=\begin{bmatrix}2x_1-x_2+x_3\\x_1+x_2-x_3\\3x_1+2x_3 \end{bmatrix}\text{.}\) Let us find the matrix of \(T\) with respect to the basis \(\beta =\{v_1=(1,1,-1),v_2=(1,-1,1),v_3=(-1,1,1)\}\) of \(\R^3\) on both the domain and the codomain. Note that the columns of \([T]_\beta^\beta\) are the coordinates of \(T(v_1), T(v_2), T(v_3)\) with respect to the basis \(\beta\text{.}\) These can be obtained simultaneously by applying RREF to \(\begin{bmatrix}v_1 \amp v_2 \amp v_3 \amp T(v_1)\amp T(v_2)\amp T(v_3) \end{bmatrix}\) and taking the last three columns as \([T]_\beta^\beta\text{.}\)
\begin{align*} \amp \begin{bmatrix}v_1 \amp v_2 \amp v_3 \amp T(v_1)\amp T(v_2)\amp T(v_3) \end{bmatrix} \\ =\amp \left[\begin{array}{rrrrrr} 1 \amp 1 \amp -1 \amp 0 \amp 4 \amp -2 \\ 1 \amp -1 \amp 1 \amp 3 \amp -1 \amp -1 \\ -1 \amp 1 \amp 1 \amp 1 \amp 5 \amp -1 \end{array} \right] \xrightarrow{RREF}\left[\begin{array}{rrrrrr} 1 \amp 0 \amp 0 \amp \frac{3}{2} \amp \frac{3}{2} \amp -\frac{3}{2} \\ 0 \amp 1 \amp 0 \amp \frac{1}{2} \amp \frac{9}{2} \amp -\frac{3}{2} \\ 0 \amp 0 \amp 1 \amp 2 \amp 2 \amp -1 \end{array} \right] \end{align*}
Hence
\begin{equation*} [T]_\beta=\left[\begin{array}{rrr} \frac{3}{2} \amp \frac{3}{2} \amp -\frac{3}{2} \\ \frac{1}{2} \amp \frac{9}{2} \amp -\frac{3}{2} \\ 2 \amp 2 \amp -1 \end{array} \right]\text{.} \end{equation*}
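The same matrix can be computed numerically: column \(j\) of \([T]_\beta\) is the solution \(c\) of \(Pc=T(v_j)\text{,}\) where \(P\) has columns \(v_1,v_2,v_3\text{.}\) A NumPy sketch (not part of the text):

```python
import numpy as np

A = np.array([[2, -1,  1],
              [1,  1, -1],
              [3,  0,  2]])                                 # standard matrix of T
P = np.column_stack([[1, 1, -1], [1, -1, 1], [-1, 1, 1]])   # v1, v2, v3

# Column j of [T]_beta solves P c = T(v_j); solving for all columns at once:
M = np.linalg.solve(P, A @ P)
expected = np.array([[3/2, 3/2, -3/2],
                     [1/2, 9/2, -3/2],
                     [  2,   2,   -1]])
assert np.allclose(M, expected)
```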

Checkpoint 3.2.6.

Let \(T,S\colon \R^n\to \R^m\) be two linear maps. Then show that \(T+S\) is a linear map. Furthermore, the matrix of \(T+S\) is the sum of matrices of \(T\) and \(S\text{.}\)
Next we look at the composition of linear maps.

Subsection 3.2.2 Composition of linear transformations

Let \(T\colon \R^n\to \R^m\) and \(S\colon \R^m\to \R^p\) be linear transformations. Then \(S\circ T\colon \R^n\to \R^p\) defined by \((S\circ T)(x)=S(T(x))\) is a linear map.
Suppose \(T(x)=Ax\) and \(S(y)=By\) are matrix transformations. Then
\begin{equation*} S(T(x))=S(Ax)=B(Ax)=(BA)x\text{.} \end{equation*}
Thus the matrix of \(S\circ T\) is \(BA\text{.}\)
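The identity \(S(T(x))=(BA)x\) reduces to associativity of matrix multiplication; a quick numerical sanity check with random matrices (a sketch, not part of the text):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-3, 4, size=(3, 4))   # matrix of T : R^4 -> R^3
B = rng.integers(-3, 4, size=(2, 3))   # matrix of S : R^3 -> R^2
x = rng.integers(-5, 6, size=4)

# S(T(x)) = B(Ax) agrees with (BA)x.
assert np.array_equal(B @ (A @ x), (B @ A) @ x)
```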

Example 3.2.7.

Let \(T\colon \R^4\to \R^3\) and \(S\colon \R^3\to \R^4\) be defined by
\begin{equation*} T\left(\begin{bmatrix}x_1\\x_2\\x_3\\x_4 \end{bmatrix} \right):= \begin{bmatrix}x_{1} + x_{3} + x_{4} \\ x_{1} + x_{2} + 2 x_{3} - x_{4} \\ 2 x_{1} + x_{2} + 3 x_{3} - 2 x_{4} \end{bmatrix} \end{equation*}
and
\begin{equation*} S\left(\begin{bmatrix}y_1\\y_2\\y_3 \end{bmatrix} \right):= \begin{bmatrix}y_{1} + y_{3} \\ y_{1} + 3 y_{2} + 2 y_{3} \\ 2 y_{1} - y_{2} + 3 y_{3} \\ y_{2} - y_{3} \end{bmatrix}\text{.} \end{equation*}
Let
\begin{gather*} u_1=\left(1,-3,2,-1\right), u_2=\left(0,1,0,1\right)\\ u_3=\left(-1,2,-1,-1\right), u_4=\left(2,-8,4,-3\right) \end{gather*}
and define a basis \(\beta = \left\{u_1,u_2,u_3,u_4\right\}\) of \(\R^4\text{.}\) We take a basis
\begin{equation*} \gamma = \left\{v_1=\left(-1,1,1\right), v_2=\left(3,1,3\right), v_3=\left(2,-1,1\right)\right\} \end{equation*}
of \(\R^3\text{.}\) Let \(A=[T]_\beta^\gamma\text{,}\) \(B=[S]_\gamma^\beta\) and \(C=[S\circ T]_\beta^\beta\text{.}\) Then we shall show that \(C=BA\text{.}\) Note that
\begin{equation*} S\circ T \left(\begin{bmatrix}x_1\\x_2\\x_3\\x_4 \end{bmatrix} \right) := \begin{bmatrix}3 x_{1} + x_{2} + 4 x_{3} - x_{4}\\ 8 x_{1} + 5 x_{2} + 13 x_{3} - 6 x_{4}\\ 7 x_{1} + 2 x_{2} + 9 x_{3} - 3 x_{4}\\ -x_{1} - x_{3} + x_{4} \end{bmatrix}\text{.} \end{equation*}
First we find the matrix \(A\) using RREF
\begin{align*} \begin{bmatrix}v_1\amp v_2 \amp v_3 \amp T(u_1) \amp T(u_2) \amp T(u_3)\amp T(u_4) \end{bmatrix} =\amp\\ \begin{bmatrix} -1 \amp 3 \amp 2 \amp 2 \amp 1 \amp -3 \amp 3 \\ 1 \amp 1 \amp -1 \amp 3 \amp 0 \amp 0 \amp 5 \\ 1 \amp 3 \amp 1 \amp 7 \amp -1 \amp -1 \amp 14 \end{bmatrix}\amp \\ \xrightarrow{RREF}\begin{bmatrix} 1 \amp 0 \amp 0 \amp 3 \amp -\frac{3}{2} \amp \frac{7}{6} \amp \frac{43}{6} \\ 0 \amp 1 \amp 0 \amp 1 \amp \frac{1}{2} \amp -\frac{5}{6} \amp \frac{7}{6} \\ 0 \amp 0 \amp 1 \amp 1 \amp -1 \amp \frac{1}{3} \amp \frac{10}{3} \end{bmatrix}\amp \end{align*}
Hence \(A = \left[\begin{array}{rrrr} 3 \amp -\frac{3}{2} \amp \frac{7}{6} \amp \frac{43}{6} \\ 1 \amp \frac{1}{2} \amp -\frac{5}{6} \amp \frac{7}{6} \\ 1 \amp -1 \amp \frac{1}{3} \amp \frac{10}{3} \end{array} \right]\text{.}\) Next we find \(B\) using RREF
\begin{align*} \begin{bmatrix}u_1\amp u_2\amp u_3\amp u_4\amp S(v_1)\amp S(v_2)\amp S(v_3) \end{bmatrix} =\amp\\ \begin{bmatrix} 1 \amp 0 \amp -1 \amp 2 \amp 0 \amp 6 \amp 3 \\ -3 \amp 1 \amp 2 \amp -8 \amp 4 \amp 12 \amp 1 \\ 2 \amp 0 \amp -1 \amp 4 \amp 0 \amp 14 \amp 8 \\ -1 \amp 1 \amp -1 \amp -3 \amp 0 \amp -2 \amp -2 \end{bmatrix}\amp \\ \xrightarrow{RREF}\begin{bmatrix} 1 \amp 0 \amp 0 \amp 0 \amp 8 \amp 56 \amp 19 \\ 0 \amp 1 \amp 0 \amp 0 \amp -4 \amp -16 \amp -2 \\ 0 \amp 0 \amp 1 \amp 0 \amp 0 \amp 2 \amp 2 \\ 0 \amp 0 \amp 0 \amp 1 \amp -4 \amp -24 \amp -7 \end{bmatrix}\amp \end{align*}
Hence \(B=\left[ \begin{array}{rrr} 8 \amp 56 \amp 19 \\ -4 \amp -16 \amp -2 \\ 0 \amp 2 \amp 2 \\ -4 \amp -24 \amp -7 \end{array} \right]\text{.}\) It is easy to check that
\begin{equation*} BA= \left[\begin{array}{rrrr} 99 \amp -3 \amp -31 \amp 186 \\ -30 \amp 0 \amp 8 \amp -54 \\ 4 \amp -1 \amp -1 \amp 9 \\ -43 \amp 1 \amp 13 \amp -80 \end{array} \right]\text{.} \end{equation*}
Now we find the matrix \(C\) of the composition \(U=S\circ T\) using RREF
\begin{align*} \begin{bmatrix}u_1\amp u_2 \amp u_3 \amp u_4 \amp U(u_1) \amp U(u_2) \amp U(u_3)\amp U(u_4) \end{bmatrix} =\amp \\ \begin{bmatrix} 1 \amp 0 \amp -1 \amp 2 \amp 9 \amp 0 \amp -4 \amp 17 \\ -3 \amp 1 \amp 2 \amp -8 \amp 25 \amp -1 \amp -5 \amp 46 \\ 2 \amp 0 \amp -1 \amp 4 \amp 22 \amp -1 \amp -9 \amp 43 \\ -1 \amp 1 \amp -1 \amp -3 \amp -4 \amp 1 \amp 1 \amp -9 \end{bmatrix} \amp\\ \xrightarrow{RREF}\begin{bmatrix} 1 \amp 0 \amp 0 \amp 0 \amp 99 \amp -3 \amp -31 \amp 186 \\ 0 \amp 1 \amp 0 \amp 0 \amp -30 \amp 0 \amp 8 \amp -54 \\ 0 \amp 0 \amp 1 \amp 0 \amp 4 \amp -1 \amp -1 \amp 9 \\ 0 \amp 0 \amp 0 \amp 1 \amp -43 \amp 1 \amp 13 \amp -80 \end{bmatrix}\amp \end{align*}
Thus \(C= \left[\begin{array}{rrrr} 99 \amp -3 \amp -31 \amp 186 \\ -30 \amp 0 \amp 8 \amp -54 \\ 4 \amp -1 \amp -1 \amp 9 \\ -43 \amp 1 \amp 13 \amp -80 \end{array} \right]\text{.}\)
Hence we have \(C=BA\text{.}\)
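The whole example can be reproduced with NumPy (a sketch, not part of the text): each matrix of coordinates is obtained by solving a linear system instead of applying RREF by hand.

```python
import numpy as np

# Standard matrices of T : R^4 -> R^3 and S : R^3 -> R^4 (Example 3.2.7).
MT = np.array([[1, 0, 1,  1],
               [1, 1, 2, -1],
               [2, 1, 3, -2]])
MS = np.array([[1,  0,  1],
               [1,  3,  2],
               [2, -1,  3],
               [0,  1, -1]])
U = np.column_stack([[1, -3, 2, -1], [0, 1, 0, 1],
                     [-1, 2, -1, -1], [2, -8, 4, -3]])    # basis beta
V = np.column_stack([[-1, 1, 1], [3, 1, 3], [2, -1, 1]])  # basis gamma

# [T]_beta^gamma, [S]_gamma^beta and [S∘T]_beta^beta via coordinate solves.
A = np.linalg.solve(V, MT @ U)
B = np.linalg.solve(U, MS @ V)
C = np.linalg.solve(U, (MS @ MT) @ U)
assert np.allclose(C, B @ A)
```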

Subsection 3.2.3 Matrix of Change of basis

Let \(\beta=\{u_1,u_2,\ldots,u_n\}\) and \(\gamma=\{v_1,v_2,\ldots,v_n\}\) be two bases of \(\R^n\text{.}\) Recall the definition of the matrix of change of basis \([I]_\beta^\gamma\text{:}\) we obtained \([I]_\beta^\gamma\) by applying RREF to the matrix \(\begin{bmatrix}v_1 \amp \cdots \amp v_n \amp | \amp u_1 \amp \cdots \amp u_n \end{bmatrix}\) and extracting the last \(n\) columns. This is nothing but the matrix of the identity linear map \(I\colon \R^n\to \R^n\) with respect to the basis \(\beta\) on the domain and \(\gamma\) on the codomain.
Now let us consider what happens to the matrix of a linear transformation \(T\colon \R^n\to \R^m\) when we change the bases on the domain and codomain. Let \(\beta=\{u_1,u_2,\ldots,u_n\}\) and \(\gamma=\{v_1,v_2,\ldots,v_m\}\) be bases of \(\R^n\) and \(\R^m\) respectively. Let \(A =[T]_\beta^\gamma\) be the matrix of \(T\) with respect to \(\beta\) and \(\gamma\text{.}\) Let \(\beta'=\{u_1',u_2',\ldots,u_n'\}\) and \(\gamma'=\{v_1',v_2',\ldots,v_m'\}\) be another pair of bases of \(\R^n\) and \(\R^m\) respectively. Let \(B =[T]_{\beta'}^{\gamma'}\) be the matrix of \(T\) with respect to \(\beta'\) and \(\gamma'\text{.}\) How are \(A\) and \(B\) related? The relation is given by the following commutative diagram.
Figure 3.2.8. Commutative Diagram
From the above commutative diagram, writing \(\rho=[I]_\beta^{\beta'}\) and \(\tau=[I]_\gamma^{\gamma'}\) for the matrices of change of basis, we have
\begin{equation*} \tau A = B\rho \implies B = \tau A \rho^{-1} \text{ or } A = \tau^{-1}B\rho\text{.} \end{equation*}

Example 3.2.9.

Consider the linear map \(T\colon \R^4\to \R^3\) defined in Example 3.2.7. Consider the basis \(\beta=\{u_1,u_2,u_3,u_4\}\) where
\begin{align*} u_1=\left(1,-3,2,-1\right),\amp u_2=\left(0,1,0,1\right)\\ u_3=\left(-1,2,-1,-1\right),\amp u_4=\left(2,-8,4,-3\right) \end{align*}
of \(\R^4\) and a basis
\begin{equation*} \gamma = \left\{v_1=\left(-1,1,1\right), v_2=\left(3,1,3\right), v_3=\left(2,-1,1\right)\right\} \end{equation*}
of \(\R^3\text{.}\) From Example 3.2.7, \(A=[T]_\beta^\gamma=\left[\begin{array}{rrrr} 3 \amp -\frac{3}{2} \amp \frac{7}{6} \amp \frac{43}{6} \\ 1 \amp \frac{1}{2} \amp -\frac{5}{6} \amp \frac{7}{6} \\ 1 \amp -1 \amp \frac{1}{3} \amp \frac{10}{3} \end{array} \right]\text{.}\) Let \(\beta'=\{u_1',u_2',u_3',u_4'\}\) where
\begin{align*} u_1'=\left(1,1,1,-1\right), \amp u_2'=\left(1,1,-1,1\right)\\ u_3'=\left(1,-1,1,1\right), \amp u_4'=\left(-1,1,1,1\right) \end{align*}
be another basis of \(\R^4\text{.}\) Let
\begin{equation*} \gamma'=\{\left(0,1,1\right), \left(1,0,1\right), \left(1,1,0\right)\} \end{equation*}
be another basis of \(\R^3\text{.}\) Then the matrix \(B=[T]_{\beta'}^{\gamma'}=\left[\begin{array}{rrrr} 6 \amp -2 \amp 0 \amp 0 \\ 2 \amp 0 \amp 2 \amp 0 \\ -1 \amp 1 \amp 1 \amp 1 \end{array} \right]\text{.}\)
The matrix of change of basis \(\rho=[I]_\beta^{\beta'}=\left(\begin{array}{rrrr} \frac{1}{4} \amp 0 \amp \frac{1}{4} \amp \frac{1}{4} \\ -\frac{5}{4} \amp \frac{1}{2} \amp \frac{1}{4} \amp -\frac{13}{4} \\ \frac{5}{4} \amp 0 \amp -\frac{5}{4} \amp \frac{11}{4} \\ -\frac{3}{4} \amp \frac{1}{2} \amp \frac{1}{4} \amp -\frac{9}{4} \end{array} \right)\text{.}\)
The matrix of change of basis \(\tau=[I]_\gamma^{\gamma'}=\left(\begin{array}{rrr} \frac{3}{2} \amp \frac{1}{2} \amp -1 \\ -\frac{1}{2} \amp \frac{5}{2} \amp 2 \\ -\frac{1}{2} \amp \frac{1}{2} \amp 0 \end{array} \right)\text{.}\)
It is easy to check that \(B\rho = \tau A\text{.}\)
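These matrices, and the relation \(B\rho=\tau A\text{,}\) can be checked numerically. A NumPy sketch (not part of the text):

```python
import numpy as np

MT = np.array([[1, 0, 1, 1], [1, 1, 2, -1], [2, 1, 3, -2]])  # standard matrix of T
U  = np.column_stack([[1, -3, 2, -1], [0, 1, 0, 1],
                      [-1, 2, -1, -1], [2, -8, 4, -3]])      # beta
Up = np.column_stack([[1, 1, 1, -1], [1, 1, -1, 1],
                      [1, -1, 1, 1], [-1, 1, 1, 1]])         # beta'
V  = np.column_stack([[-1, 1, 1], [3, 1, 3], [2, -1, 1]])    # gamma
Vp = np.column_stack([[0, 1, 1], [1, 0, 1], [1, 1, 0]])      # gamma'

A   = np.linalg.solve(V,  MT @ U)    # [T]_beta^gamma
B   = np.linalg.solve(Vp, MT @ Up)   # [T]_beta'^gamma'
rho = np.linalg.solve(Up, U)         # [I]_beta^beta'
tau = np.linalg.solve(Vp, V)         # [I]_gamma^gamma'

assert np.allclose(tau @ A, B @ rho)
```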
Let \(T\colon \R^n\to \R^n\) be a linear transformation. Let \(\beta=\{v_1,\ldots, v_n\}\) be a basis of \(\R^n\) and \(A=[T]_\beta\text{,}\) the matrix of \(T\) with respect to \(\beta\text{.}\) Let \(\gamma=\{u_1,\ldots, u_n\}\) be another basis of \(\R^n\) and \(B=[T]_\gamma\text{,}\) the matrix of \(T\) with respect to \(\gamma\text{.}\) Let \(\rho=[I]_\gamma^{\beta}\) be the matrix of change of basis from \(\gamma\) to \(\beta\text{.}\) Then we have \(B = \rho^{-1}A\rho\text{.}\) In this case, \(A\) and \(B\) are said to be similar matrices.

Definition 3.2.10.

Let \(A\) and \(B\) be two real \(n\times n\) matrices. Then \(A\) and \(B\) are called similar if there exists a non-singular matrix \(P\) such that \(B=P^{-1}AP\text{.}\)

Remark 3.2.11.

A linear transformation \(T\colon \R^n\to \R^m\) is completely determined once it is defined on a basis. In other words, let \(\beta=\{v_1,\ldots, v_n\}\) be a basis of \(\R^n\) and let \(w_1,\ldots, w_n\) be \(n\) vectors in \(\R^m\text{.}\) Then there exists a unique linear transformation \(T\colon \R^n\to \R^m\) such that \(T(v_i)=w_i\) for \(i=1,\ldots, n\text{.}\)
How is \(T\) defined, if \(T(v_i)=w_i\text{?}\) For \(v\in \R^n\text{,}\) there exist scalars \(\alpha_1,\ldots, \alpha_n\) such that \(v=\sum \alpha_iv_i\text{.}\) Then \(T(v)=\sum \alpha_i T(v_i)=\sum\alpha_i w_i\text{.}\)

Reading Questions

Prove the uniqueness of the linear transformation in Remark 3.2.11.

Example 3.2.12.

Fix a basis \(\beta =\{v_1=(1,1,-1),v_2=(1,-1,1),v_3=(-1,1,1)\}\) of \(\R^3\text{.}\) Define a linear map \(T\colon \R^3\to \R^3\) such that \(T(v_1)=w_1=(1,1,0), T(v_2)=w_2=(1,0,1), T(v_3)=w_3=(0,1,1)\text{.}\) Find \(T\left(\begin{bmatrix}x_1\\ x_2\\ x_3 \end{bmatrix} \right)\text{.}\)
We have
\begin{equation*} T\left(\begin{bmatrix}x_1\\ x_2\\ x_3 \end{bmatrix} \right)=T(x_1e_1+x_2e_2+x_3e_3)=x_1T(e_1)+x_2T(e_2)+x_3T(e_3)\text{.} \end{equation*}
Thus in order to find \(T\) we need to know how \(T\) is defined on the standard basis vectors. First we find the coordinates of \(e_1,e_2,e_3\) with respect to the basis \(\beta\) using RREF.
\begin{align*} \begin{bmatrix}v_1 \amp v_2 \amp v_3 \amp | \amp e_1 \amp e_2 \amp e_3 \end{bmatrix} = \amp \left[\begin{array}{rrr|rrr} 1 \amp 1 \amp -1 \amp 1 \amp 0 \amp 0 \\ 1 \amp -1 \amp 1 \amp 0 \amp 1 \amp 0 \\ -1 \amp 1 \amp 1 \amp 0 \amp 0 \amp 1 \end{array} \right]\\ \amp \xrightarrow{RREF} \left[\begin{array}{rrr|rrr} 1 \amp 0 \amp 0 \amp \frac{1}{2} \amp \frac{1}{2} \amp 0 \\ 0 \amp 1 \amp 0 \amp \frac{1}{2} \amp 0 \amp \frac{1}{2} \\ 0 \amp 0 \amp 1 \amp 0 \amp \frac{1}{2} \amp \frac{1}{2} \end{array} \right]\text{.} \end{align*}
We have
\begin{equation*} \begin{aligned} e_1 = 1/2 v_1+1/2v_2 \implies T(e_1)=1/2 w_1+1/2 w_2=(1, 1/2, 1/2).\\ e_2 = 1/2 v_1+1/2v_3 \implies T(e_2)=1/2 w_1+1/2 w_3=(1/2, 1, 1/2).\\ e_3 = 1/2 v_2+1/2v_3 \implies T(e_3)=1/2 w_2+1/2 w_3=(1/2, 1/2, 1). \end{aligned} \end{equation*}
\begin{align*} T\left(\begin{bmatrix}x_1\\ x_2\\ x_3 \end{bmatrix} \right) =\amp x_1T(e_1)+x_2T(e_2)+x_3T(e_3)\\ =\amp x_1(1, 1/2, 1/2)+x_2(1/2, 1, 1/2)+x_3(1/2, 1/2, 1)\\ =\amp \begin{bmatrix}x_{1} + \frac{1}{2} x_{2} + \frac{1}{2} x_{3}\\ \frac{1}{2} x_{1} + x_{2} + \frac{1}{2} x_{3}\\ \frac{1}{2} x_{1} + \frac{1}{2} x_{2} + x_{3} \end{bmatrix} \text{.} \end{align*}
It is easy to check that \(T(v_i)=w_i\text{.}\)
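The standard matrix of \(T\) can also be found in one step: it is the matrix \(M\) with \(MP=W\text{,}\) where \(P\) has columns \(v_1,v_2,v_3\) and \(W\) has columns \(w_1,w_2,w_3\text{.}\) A NumPy sketch (not part of the text):

```python
import numpy as np

P = np.column_stack([[1, 1, -1], [1, -1, 1], [-1, 1, 1]])  # v1, v2, v3
W = np.column_stack([[1, 1, 0], [1, 0, 1], [0, 1, 1]])     # w1, w2, w3

# The standard matrix M of T satisfies M P = W, so M = W P^{-1}.
M = W @ np.linalg.inv(P)
expected = np.array([[  1, 1/2, 1/2],
                     [1/2,   1, 1/2],
                     [1/2, 1/2,   1]])
assert np.allclose(M, expected)
assert np.allclose(M @ P, W)   # T(v_i) = w_i
```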