An \(n\times n\) matrix \(A\) is called orthogonally diagonalizable if there exists an orthogonal matrix \(P\) such that \(P^{-1}AP\) is a diagonal matrix.
Example 6.4.5.
Let \(A\) be a symmetric matrix and let \(\lambda_1\) and \(\lambda_2\) be distinct eigenvalues of \(A\text{.}\) If \(v_1\) and \(v_2\) are eigenvectors corresponding to \(\lambda_1\) and \(\lambda_2\) respectively, then \(v_1\) and \(v_2\) are orthogonal.
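The orthogonality claim follows from symmetry in one line; a sketch of the computation:

```latex
\lambda_1 (v_1 \cdot v_2)
  = (A v_1) \cdot v_2
  = v_1 \cdot (A^{T} v_2)
  = v_1 \cdot (A v_2)
  = \lambda_2 (v_1 \cdot v_2)
```

Hence \((\lambda_1-\lambda_2)(v_1\cdot v_2)=0\text{,}\) and since \(\lambda_1\neq\lambda_2\text{,}\) we conclude \(v_1\cdot v_2=0\text{.}\)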
Theorem 6.4.6.
Let \(A\) be an \(n\times n\) matrix. Then the following are equivalent.
(i) \(A\) has an orthonormal set of \(n\) eigenvectors.
(ii) \(A\) is orthogonally diagonalizable.
(iii) \(A\) is symmetric.
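As a quick numerical illustration of the theorem (not part of the text), the sketch below uses NumPy's `eigh` routine, which is designed for symmetric matrices, on a hypothetical symmetric matrix chosen only for demonstration:

```python
import numpy as np

# A hypothetical symmetric matrix (any symmetric A works by the theorem)
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

# eigh returns real eigenvalues and an orthogonal matrix P whose columns
# are the corresponding orthonormal eigenvectors
eigenvalues, P = np.linalg.eigh(A)

# P is orthogonal: P^T P = I, so P^{-1} = P^T
print(np.allclose(P.T @ P, np.eye(3)))                  # True
# P^{-1} A P = P^T A P is diagonal with the eigenvalues on the diagonal
print(np.allclose(P.T @ A @ P, np.diag(eigenvalues)))   # True
```

This is exactly the equivalence (i) ⟺ (ii): the orthonormal eigenvectors assemble into the orthogonal matrix \(P\) that diagonalizes \(A\text{.}\)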
Example 6.4.7.
Consider the matrix \(A=\left(\begin{array}{rrr} 5 \amp -2 \amp -4 \\ -2 \amp 8 \amp -2 \\ -4 \amp -2 \amp 5 \end{array} \right)\text{.}\) Clearly \(A\) is symmetric and hence it is orthogonally diagonalizable. The characteristic polynomial of \(A\) is
\begin{equation*}
\det(A-\lambda I) = -\lambda(\lambda-9)^2\text{.}
\end{equation*}
Hence \(0, 9, 9\) are the eigenvalues of \(A\text{.}\) It is easy to find that \(v_1=(1, 1/2, 1)\) is an eigenvector corresponding to the eigenvalue 0, and \(v_2=(1, 0, -1)\text{,}\) \(v_3=(0, 1, -1/2)\) are eigenvectors corresponding to the eigenvalue 9. Taking these as columns, let \(P:=\left(\begin{array}{rrr} 1 \amp 1 \amp 0 \\ \frac{1}{2} \amp 0 \amp 1 \\ 1 \amp -1 \amp -\frac{1}{2} \end{array} \right)\text{.}\) Then \(P^{-1}AP=\left(\begin{array}{rrr} 0 \amp 0 \amp 0 \\ 0 \amp 9 \amp 0 \\ 0 \amp 0 \amp 9 \end{array} \right)\text{.}\) Note that this \(P\) is invertible but not orthogonal, since \(v_2\) and \(v_3\) are not orthogonal to each other; to obtain an orthogonal diagonalizing matrix, apply the Gram-Schmidt process to \(v_2, v_3\) within the eigenspace for 9 and normalize all three vectors.
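The computations in this example can be verified numerically; the sketch below (in Python/NumPy, outside the text's scope) checks the eigenvector equations, then applies Gram-Schmidt within the eigenspace for 9 and normalizes to produce a genuinely orthogonal diagonalizing matrix, here called \(Q\):

```python
import numpy as np

A = np.array([[ 5.0, -2.0, -4.0],
              [-2.0,  8.0, -2.0],
              [-4.0, -2.0,  5.0]])

# Eigenvectors from the example
v1 = np.array([1.0, 0.5,  1.0])   # eigenvalue 0
v2 = np.array([1.0, 0.0, -1.0])   # eigenvalue 9
v3 = np.array([0.0, 1.0, -0.5])   # eigenvalue 9

# Verify A v = lambda v for each pair
assert np.allclose(A @ v1, 0 * v1)
assert np.allclose(A @ v2, 9 * v2)
assert np.allclose(A @ v3, 9 * v3)

# v2 and v3 span the eigenspace for 9 but are not orthogonal to each
# other, so project v2 out of v3 (Gram-Schmidt), then normalize all three
u3 = v3 - (v3 @ v2) / (v2 @ v2) * v2
Q = np.column_stack([v1 / np.linalg.norm(v1),
                     v2 / np.linalg.norm(v2),
                     u3 / np.linalg.norm(u3)])

print(np.allclose(Q.T @ Q, np.eye(3)))                    # True: Q orthogonal
print(np.allclose(Q.T @ A @ Q, np.diag([0., 9., 9.])))    # True: diagonalized
```

Eigenvectors for the distinct eigenvalues 0 and 9 are automatically orthogonal by Example 6.4.5; Gram-Schmidt is needed only inside the repeated eigenspace.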