\( \newcommand{\Loadedframemethod}{default}
\newcommand{\R}{\mathbb{R}}
\newcommand{\N}{\mathbb{N}}
\newcommand{\Q}{\mathbb{Q}}
\newcommand{\Z}{\mathbb{Z}}
\newcommand{\inner}[2]{\left\langle #1,#2\right\rangle}
\newcommand{\norm}[1]{\left\lVert#1\right\rVert}
\newcommand{\normx}[1]{\left\Vert#1\right\Vert}
\newcommand{\partd}[2]{\dfrac{\partial #1}{\partial #2}}
\newcommand{\innprod}[2]{\left\lt #1,#2\right>}
\def\rank{{ rank\,}}
\newcommand{\ds}{\displaystyle}
\def\diag{{ diag\,}}
\def\proj{{ proj\,}}
\newcommand{\lt}{<}
\newcommand{\gt}{>}
\newcommand{\amp}{&}
\definecolor{fillinmathshade}{gray}{0.9}
\newcommand{\fillinmath}[1]{\mathchoice{\colorbox{fillinmathshade}{$\displaystyle \phantom{\,#1\,}$}}{\colorbox{fillinmathshade}{$\textstyle \phantom{\,#1\,}$}}{\colorbox{fillinmathshade}{$\scriptstyle \phantom{\,#1\,}$}}{\colorbox{fillinmathshade}{$\scriptscriptstyle\phantom{\,#1\,}$}}}
\)
Section 6.5 QR-Factorization
Let \(A\) be an \(m\times n\) matrix whose columns are \(a_1,\ldots, a_n\text{,}\) and assume that the columns of \(A\) are linearly independent. The Gram-Schmidt orthogonalization process gives
\begin{align*}
u_1:\amp = a_1\\
u_2:\amp = a_2 -\frac{a_2\cdot u_1}{\norm{u_1}^2}u_1 \\
\vdots\amp\\
u_k: \amp=a_k-\frac{a_k\cdot u_1}{\norm{u_1}^2}u_1-\cdots -
\frac{a_k\cdot u_{k-1}}{\norm{u_{k-1}}^2}u_{k-1}\\
\vdots\amp
\end{align*}
Define \(q_k:= \frac{u_k}{\norm{u_k}}\text{,}\) \(k=1,\ldots, n\text{.}\) This implies
\begin{align*}
\norm{u_k}q_k \amp = u_k = a_k-\frac{a_k\cdot u_1}{\norm{u_1}^2}u_1-\cdots -\frac{a_k\cdot u_{k-1}}{\norm{u_{k-1}}^2}u_{k-1}\\
\amp =a_k-(a_k\cdot q_1)q_1-(a_k\cdot q_2)q_2-\cdots - (a_k\cdot q_{k-1})q_{k-1}\text{.}
\end{align*}
Hence
\begin{equation*}
a_k = (a_k\cdot q_1)q_1+(a_k\cdot q_2)q_2+\cdots + (a_k\cdot q_{k-1})q_{k-1}+\norm{u_k}q_k, k=1,2,\ldots, n\text{.}
\end{equation*}
Thus we have
\begin{align*}
a_1 = \amp \norm{u_1}q_1\\
a_2 = \amp (a_2\cdot q_1)q_1+\norm{u_2}q_2\\
\amp \vdots\\
a_k =\amp (a_k\cdot q_1)q_1+(a_k\cdot q_2)q_2+\cdots + (a_k\cdot q_{k-1})q_{k-1}+\norm{u_k}q_k\\
\amp \vdots
\end{align*}
Thus
\begin{align*}
A \amp = [a_1~~a_2~~a_3~~\cdots~~a_n]\\
\amp = [q_1~~q_2~~q_3~~\cdots~~q_n]
\begin{bmatrix}\norm{u_1} \amp a_2\cdot q_1 \amp a_3\cdot q_1\amp \cdots \amp a_n\cdot q_1\\
0\amp \norm{u_2} \amp a_3\cdot q_2 \amp \cdots \amp a_n\cdot q_2\\
0\amp 0 \amp \norm{u_3}\amp \cdots \amp a_n\cdot q_3\\
\vdots \amp \vdots \amp \vdots \amp \ddots \amp \vdots\\ 0\amp 0 \amp 0 \amp \cdots \amp \norm{u_n} \end{bmatrix}\\
\amp =QR
\end{align*}
Here
\(Q\) has orthonormal columns and
\(R\) is upper triangular with positive diagonal entries, hence non-singular. This is what is known as the
\(QR\)-factorization of \(A\text{.}\) Thus we have the following result.
Theorem 6.5.1.
Every
\(m\times n\) matrix with linearly independent columns has a
\(QR\) factorization,
\(A=QR\text{,}\) where columns of
\(Q\) are orthonormal and
\(R\) is an upper triangular matrix with positive diagonal entries.
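The construction above translates directly into code. The following is a minimal sketch in Python with NumPy (the name \(\mathtt{gram\_schmidt\_qr}\) is our choice, not a library function), valid under the standing assumption that the columns of \(A\) are linearly independent.

```python
import numpy as np

def gram_schmidt_qr(A):
    """QR-factorization via the classical Gram-Schmidt process.

    Assumes the columns of A are linearly independent, so every
    u_k is nonzero and the diagonal of R is positive.
    """
    A = np.asarray(A, dtype=float)
    m, n = A.shape
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    for k in range(n):
        u = A[:, k].copy()
        for j in range(k):
            R[j, k] = A[:, k] @ Q[:, j]   # entry a_k . q_j of R
            u -= R[j, k] * Q[:, j]        # subtract the projection onto q_j
        R[k, k] = np.linalg.norm(u)       # ||u_k||
        Q[:, k] = u / R[k, k]             # q_k = u_k / ||u_k||
    return Q, R
```

Note that the inner loop subtracts projections of the original column \(a_k\text{,}\) exactly as in the formulas above (the "classical" rather than "modified" Gram-Schmidt variant).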
Example 6.5.2.
Let
\(A = \begin{pmatrix}0 \amp 1 \amp 1\\ 1 \amp 1 \amp -2\\ 1 \amp 1 \amp 2 \end{pmatrix} = \begin{bmatrix}a_1 \amp a_2 \amp a_3 \end{bmatrix}\text{.}\) Let us find the
\(QR\) factorization of
\(A\text{.}\) Note that columns of
\(A\) are vectors
\(v_1, v_2, v_3\) in the
Example 6.2.4. We have found
\(q_1, q_2, q_3\) in this example. Hence
\begin{equation*}
Q = \begin{bmatrix}q_1 \amp q_2 \amp q_3 \end{bmatrix} = \begin{pmatrix}0 \amp 1 \amp 0 \\ \frac{1}{\sqrt{2}} \amp 0 \amp \frac{-1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} \amp 0 \amp \frac{1}{\sqrt{2}} \end{pmatrix}\text{.}
\end{equation*}
We also have
\begin{equation*}
u_1= (0, 1, 1), u_2=(1,0,0), u_3=(0,-2,2)
\end{equation*}
Hence
\begin{equation*}
R = \begin{pmatrix}\norm{u_1} \amp a_2 \cdot q_1 \amp a_3 \cdot q_1\\ 0 \amp \norm{u_2} \amp a_3 \cdot q_2\\ 0 \amp 0 \amp \norm{u_3} \end{pmatrix} = \begin{pmatrix}\sqrt{2} \amp \sqrt{2} \amp 0\\ 0 \amp 1 \amp 1 \\ 0 \amp 0 \amp 2\sqrt{2} \end{pmatrix}
\end{equation*}
Note that once we have
\(Q\text{,}\) then
\(R = Q^TA\text{.}\)
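As a quick numerical check of this example, a sketch with NumPy (the check itself is ours, added for illustration):

```python
import numpy as np

A = np.array([[0., 1,  1],
              [1., 1, -2],
              [1., 1,  2]])
s = 1 / np.sqrt(2)
# Q as computed above from q_1, q_2, q_3.
Q = np.array([[0., 1.,  0.],
              [s,  0., -s],
              [s,  0.,  s]])
# Once Q is known, R is recovered as Q^T A.
R = Q.T @ A
```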
Example 6.5.3.
Find the QR-factorization of
\(A=\left(\begin{array}{rrr} -1 \amp 1 \amp -1 \\ 1 \amp -1 \amp 0 \\ 0 \amp 0 \amp 2 \\ 1 \amp 1 \amp 1 \end{array} \right)=\begin{bmatrix}a_1\amp a_2\amp a_3 \end{bmatrix}\text{.}\) It is easy to check that columns of
\(A\) are linearly independent. In fact, columns of
\(A\) are rows of the matrix defined in Example 6.2.5. From this example, we have
\begin{equation*}
u_1 = \begin{pmatrix}-1\\ 1\\ 0\\ 1 \end{pmatrix},
q_1 = \begin{pmatrix}-\frac{1}{\sqrt{3}}\\\frac{1}{\sqrt{3}}\\0\\\frac{1}{\sqrt{3}} \end{pmatrix},
u_2 = \begin{pmatrix}2/3\\ -2/3\\ 0\\ 4/3 \end{pmatrix},
\end{equation*}
\begin{equation*}
q_2 = \begin{pmatrix}\frac{1}{\sqrt{6}}\\ -\frac{1}{\sqrt{6}}\\0\\\sqrt{\frac{2}{3}} \end{pmatrix},
u_3 = \begin{pmatrix}-1/2\\-1/2\\2\\0 \end{pmatrix},
q_3 = \begin{pmatrix}-\frac{1}{3} \, \sqrt{\frac{1}{2}}\\-\frac{1}{3} \, \sqrt{\frac{1}{2}}\\\frac{4}{3} \, \sqrt{\frac{1}{2}}\\0 \end{pmatrix}
\end{equation*}
Hence
\begin{equation*}
Q = [q_1~q_2~q_3]=\left(\begin{array}{rrr} -\frac{1}{3} \, \sqrt{3} \amp \frac{1}{2} \, \sqrt{\frac{2}{3}} \amp -\frac{1}{3} \, \sqrt{\frac{1}{2}} \\ \frac{1}{3} \, \sqrt{3} \amp -\frac{1}{2} \, \sqrt{\frac{2}{3}} \amp -\frac{1}{3} \, \sqrt{\frac{1}{2}} \\ 0 \amp 0 \amp \frac{4}{3} \, \sqrt{\frac{1}{2}} \\ \frac{1}{3} \, \sqrt{3} \amp \sqrt{\frac{2}{3}} \amp 0 \end{array} \right)\text{.}
\end{equation*}
Also
\begin{equation*}
R = Q^TA=\left(\begin{array}{rrr} \sqrt{3} \amp -\frac{1}{3} \, \sqrt{3} \amp \frac{2}{3} \, \sqrt{3} \\ 0 \amp 2 \, \sqrt{\frac{2}{3}} \amp \frac{1}{2} \, \sqrt{\frac{2}{3}} \\ 0 \amp 0 \amp 3 \, \sqrt{\frac{1}{2}} \end{array} \right)\text{.}
\end{equation*}
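The same numerical check works in the rectangular case; the sketch below (ours, for illustration) verifies that \(Q\) has orthonormal columns and that \(R = Q^TA\) matches the matrix above.

```python
import numpy as np

A = np.array([[-1.,  1., -1.],
              [ 1., -1.,  0.],
              [ 0.,  0.,  2.],
              [ 1.,  1.,  1.]])
r3, r6, r2 = np.sqrt(3), np.sqrt(6), np.sqrt(2)
# Q assembled from q_1, q_2, q_3 as computed above.
Q = np.array([[-1/r3,  1/r6, -1/(3*r2)],
              [ 1/r3, -1/r6, -1/(3*r2)],
              [ 0.,    0.,    4/(3*r2)],
              [ 1/r3,  2/r6,  0.      ]])
R = Q.T @ A
```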
Checkpoint 6.5.6.
Find the QR-factorization of the following matrices:
\begin{equation*}
\begin{bmatrix}2 \amp 1 \\ 1 \amp 1 \end{bmatrix} ,
\begin{bmatrix}1 \amp -1 \amp 1\\ 2 \amp 0 \amp 1\\2 \amp 1 \amp -2 \end{bmatrix} ,
\begin{bmatrix}1 \amp 1 \amp 0 \\-1 \amp 0 \amp 1\\0 \amp 1 \amp 1\\1 \amp -1 \amp 0 \end{bmatrix}
\end{equation*}
Activity 6.5.1. Sage Routine for QR-factorization.
Sage has an inbuilt function to find the
\(QR\)-factorization. Let us demonstrate this.
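In Sage this is the matrix method \(\mathtt{A.QR()}\) (available over suitable rings such as \(\mathtt{RDF}\)); the interactive cell is not reproduced here. Outside Sage, \(\mathtt{numpy.linalg.qr}\) plays the same role, and the sketch below uses it. Note that NumPy's sign convention need not make the diagonal of \(R\) positive, so we flip signs afterwards to match the convention of this section.

```python
import numpy as np

A = np.array([[0., 1,  1],
              [1., 1, -2],
              [1., 1,  2]])
Q, R = np.linalg.qr(A)            # reduced QR-factorization
# numpy guarantees Q R = A, but the diagonal of R may contain
# negative entries; multiplying matched columns of Q and rows of R
# by -1 recovers a positive diagonal without changing the product.
signs = np.sign(np.diag(R))
Q, R = Q * signs, signs[:, None] * R
```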
QR Algorithm for Eigenvalues
The QR-factorization has many applications; one of them is the iterative computation of eigenvalues. The algorithm is as follows:
Start with a square matrix
\(A_0 = A\text{.}\)
For
\(k = 0,1,2,\ldots\) do:
Compute the QR decomposition \(A_k = Q_k R_k\text{.}\)
Form \(A_{k+1} = R_k Q_k\text{.}\)
Continue until convergence, i.e.,
\(A_k\) is (nearly) upper triangular.
The eigenvalues are approximated by the diagonal entries of the limiting matrix
\(A_k\text{.}\)
Let us write a Sage routine to compute the eigenvalues of a square matrix iteratively.
Next let us call the Sage function to find eigenvalues.
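The interactive Sage cells are not reproduced here; a Python/NumPy sketch of the same routine follows (the name \(\mathtt{qr\_eigenvalues}\) is our choice). This is the unshifted iteration, which works well for many matrices with real eigenvalues but is not guaranteed to converge in general.

```python
import numpy as np

def qr_eigenvalues(A, iterations=500):
    """Approximate the eigenvalues of A by the (unshifted) QR algorithm.

    Repeatedly factor A_k = Q_k R_k and form A_{k+1} = R_k Q_k; for many
    matrices A_k approaches an upper triangular matrix whose diagonal
    entries approximate the eigenvalues of A.
    """
    Ak = np.asarray(A, dtype=float)
    for _ in range(iterations):
        Qk, Rk = np.linalg.qr(Ak)
        Ak = Rk @ Qk
    return np.diag(Ak)

# Calling the routine on a symmetric matrix with eigenvalues 1 and 3.
eigs = qr_eigenvalues([[2., 1.], [1., 2.]])
```

Each step is a similarity transformation, since \(A_{k+1} = R_kQ_k = Q_k^{-1}A_kQ_k\text{,}\) so every \(A_k\) has the same eigenvalues as \(A\text{.}\)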