In the last chapter, we dealt with the notion of the dot product and geometry in \(\R^n\text{.}\) The dot product and related notions can be generalized to an arbitrary vector space over \(\R\) or \(\mathbb{C}\text{.}\) All the notions we learned in the last chapter can be generalized to an inner product space.
Note that the dot product of two vectors in \(\R^n\) is a scalar; in particular, the dot product can be thought of as a function \('\cdot' \colon \R^n\times \R^n \to\R\) satisfying the following properties:
Let \(V\) be a vector space over \(\R\text{.}\) An inner product on \(V\) is a function that assigns a real number \(\langle x, y\rangle\) to every pair \(x,y\) of vectors in \(V\) (that is, a function \(\langle \cdot, \cdot\rangle\colon V \times V \to \R\)) satisfying the following properties.
If \(V\) is a real vector space with an inner product \(\langle \cdot , \cdot\rangle\text{,}\) then \((V, \langle \cdot , \cdot\rangle)\) is called an inner product space over \(\R\text{.}\)
The last two properties make the inner product linear in the second variable. Using the symmetry property, it can also be shown that the inner product is linear in the first variable as well. That is,
\begin{equation*}
\langle x+y,z\rangle=\langle x, z\rangle+\langle y, z\rangle, \text{ and } \langle \alpha x, y\rangle=\alpha\langle x, y\rangle
\end{equation*}
Note that this inner product can be thought of as the standard dot product on \(\R^{n^2}\text{.}\) The entries of the matrix \(A\) can be arranged as a vector in \(\R^{n^2}\text{.}\) Then
\begin{equation*}
\langle A, B\rangle=\operatorname{trace}(B^TA)=\sum_{i,j=1}^n a_{ij}b_{ij},
\end{equation*}
which is precisely the dot product of the corresponding vectors in \(\R^{n^2}\text{.}\)
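As a quick sanity check, here is a small Python sketch (it runs in Sage as well) comparing \(\operatorname{trace}(B^TA)\) with the dot product of the flattened matrices; the particular matrices are arbitrary choices for illustration.

import numpy as np

A = np.array([[1., 2.], [3., 4.]])
B = np.array([[0., 1.], [-1., 2.]])

trace_form = np.trace(B.T @ A)          # inner product via the trace
dot_form = A.flatten() @ B.flatten()    # dot product in R^{n^2}

print(trace_form, dot_form)             # both print 7.0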
Let \((V, \langle \cdot , \cdot\rangle)\) be a real inner product space. The norm of a vector \(x\in V\) corresponding to the inner product \(\langle \cdot , \cdot \rangle\) is defined as
\begin{equation*}
\norm{x}:=\sqrt{\langle x, x\rangle}.
\end{equation*}
For any \(x, y\in V\text{,}\)
\begin{equation*}
\norm{x+y}^2+\norm{x-y}^2=2\left(\norm{x}^2+\norm{y}^2\right).
\end{equation*}
This is called the parallelogram identity. Geometrically, in a parallelogram the sum of the squares of the diagonals is twice the sum of the squares of the side lengths.
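A minimal numerical check of the parallelogram identity, with randomly chosen vectors in \(\R^5\) and the standard dot product:

import numpy as np

rng = np.random.default_rng(0)
x, y = rng.standard_normal(5), rng.standard_normal(5)

lhs = np.linalg.norm(x + y)**2 + np.linalg.norm(x - y)**2
rhs = 2 * (np.linalg.norm(x)**2 + np.linalg.norm(y)**2)
print(np.isclose(lhs, rhs))             # True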
Thus for any two nonzero vectors \(x\) and \(y\text{,}\) \(\frac{\inner{x}{y}}{\norm{x}\norm{y}}\) always lies between \(-1\) and \(1\text{.}\) This allows us to define the angle between two nonzero vectors. Thus, if \(\theta\) is the angle between \(x\) and \(y\text{,}\) then we have
\begin{equation*}
\cos\theta=\frac{\inner{x}{y}}{\norm{x}\norm{y}}, \qquad \theta\in[0,\pi].
\end{equation*}
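For instance, the angle under the standard dot product can be computed as follows; the function name angle and the clipping (which guards against round-off pushing the ratio slightly outside \([-1,1]\)) are our own choices.

import numpy as np

def angle(x, y):
    # cos(theta) = <x, y> / (||x|| ||y||)
    c = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))
    return np.arccos(np.clip(c, -1.0, 1.0))

x, y = np.array([1., 0.]), np.array([1., 1.])
print(angle(x, y))                      # pi/4, approximately 0.7854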
All the notions that we defined for the dot product, namely orthogonality, orthogonal projection, and the Gram-Schmidt orthogonalization process, can be defined in a similar manner. All we need to do is replace the dot product with the given inner product, as in the sketch below.
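The following Python sketch makes this concrete: a Gram-Schmidt routine that takes the inner product itself as a parameter, so the same code serves any inner product on \(\R^n\text{.}\) The name gram_schmidt and the default argument np.dot are our own choices.

import numpy as np

def gram_schmidt(vectors, inner=np.dot):
    # Orthogonalize 'vectors' with respect to the inner product 'inner'.
    basis = []
    for v in vectors:
        w = np.array(v, dtype=float)
        for u in basis:
            # subtract the orthogonal projection of w onto u
            w = w - (inner(w, u) / inner(u, u)) * u
        basis.append(w)
    return basis

Passing a different function as inner, for example one built from a positive definite matrix as in a later example, reuses the code unchanged.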
Any vector space \(V\) over \(\R\) with a function \(\norm{\cdot} \colon V \to \R\) which satisfies all the properties mentioned in Theorem 7.1.13 is called a normed linear space. Thus any inner product space is also a normed linear space.
Let \((V, \inner{.}{.})\) be a real inner product space. Define (i) orthogonality of two vectors \(x\) and \(y\) in \(V\text{,}\) (ii) the orthogonal complement of a subset \(U\) of \(V\text{,}\) (iii) the orthogonal projection of a vector \(v\) onto a nonzero vector \(u\text{,}\) (iv) orthogonal and orthonormal sets in \(V\text{,}\) and (v) the Gram-Schmidt orthogonalization process.
Let \(C([-\pi,\pi])\) be the vector space of continuous functions from \([-\pi,\pi]\) to \(\R\text{.}\) Define the inner product on \(C([-\pi,\pi])\) as
\begin{equation*}
\inner{f}{g}:=\int_{-\pi}^{\pi} f(x)g(x)\,dx.
\end{equation*}
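Under this inner product, for example, \(\sin\) and \(\cos\) are orthogonal. A quick numerical check (using scipy.integrate.quad, an implementation choice on our part):

import numpy as np
from scipy.integrate import quad

def inner(f, g):
    # <f, g> = integral of f(x) g(x) over [-pi, pi]
    value, _ = quad(lambda x: f(x) * g(x), -np.pi, np.pi)
    return value

print(inner(np.sin, np.cos))            # approximately 0 (orthogonal)
print(inner(np.sin, np.sin))            # approximately pi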
Let \(\beta=\{u_1,\ldots,
u_n\}\) be an orthogonal basis of an inner product space \(V\text{.}\) Let \(v\in V\) be nonzero and \(\theta_1,\ldots, \theta_n\) be the angles between \(v\) and \(u_1,\ldots, u_n\text{,}\) respectively. Then
\begin{equation*}
\cos^2\theta_1+\cos^2\theta_2+\cdots+\cos^2\theta_n=1.
\end{equation*}
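A numerical illustration in \(\R^3\text{,}\) with an orthogonal (but not orthonormal) basis and the standard dot product; the vector \(v\) is an arbitrary choice:

import numpy as np

basis = [np.array([1., 0., 0.]), np.array([0., 2., 0.]), np.array([0., 0., 3.])]
v = np.array([1., 2., -1.])

cos2 = sum((np.dot(v, u) / (np.linalg.norm(v) * np.linalg.norm(u)))**2
           for u in basis)
print(cos2)                             # 1.0 up to round-off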
Let \(\beta=\{u_1,\ldots,
u_n\}\) be an orthogonal basis of an inner product space \(V\text{.}\) Let \(x\) and \(y\) be two vectors such that \(x=\sum x_i u_i\) and \(y=\sum y_i u_i\text{.}\) Then
\begin{equation*}
\inner{x}{y}=\sum_{i=1}^n x_i y_i \norm{u_i}^2.
\end{equation*}
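A similar numerical check of this identity, with the same orthogonal basis and arbitrarily chosen coordinates:

import numpy as np

U = [np.array([1., 0., 0.]), np.array([0., 2., 0.]), np.array([0., 0., 3.])]
xc, yc = [1., -2., 0.5], [2., 1., 1.]   # coordinates with respect to U
x = sum(c * u for c, u in zip(xc, U))
y = sum(c * u for c, u in zip(yc, U))

lhs = np.dot(x, y)
rhs = sum(a * b * np.dot(u, u) for a, b, u in zip(xc, yc, U))
print(lhs, rhs)                         # both -1.5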
Consider \(V ={\cal P}_3(\R)\) with inner product \(\inner{p}{q}:=\int_{-1}^1 p(x)q(x)\,dx\text{.}\) Use the standard basis \(\beta =\{v_1,v_2,v_3,v_4\} = \{1,x,x^2,x^3\}\) to find an orthogonal basis of \({\cal P}_3(\R)\text{.}\)
First of all, notice that \(\beta\) is not an orthogonal basis, for \(\inner{v_1}{v_3}=\inner{1}{x^2} = \int_{-1}^1 x^2\, dx = \frac23\) and \(\inner{v_2}{v_4}=\int_{-1}^1 x^4\, dx = \frac25\text{.}\) Also note that \(\inner{v_1}{v_2}=\int_{-1}^1 x\,dx = 0\text{,}\) \(\inner{v_2}{v_3}=\int_{-1}^1 x^3\,dx = 0\text{,}\) \(\inner{v_1}{v_4}=\int_{-1}^1 x^3\, dx = 0\text{,}\) and \(\inner{v_3}{v_4}=\int_{-1}^1 x^5\,dx = 0\text{.}\)
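Applying Gram-Schmidt symbolically, say with sympy, should reproduce the Legendre-type basis \(1, x, x^2-\frac13, x^3-\frac35 x\text{;}\) a sketch:

from sympy import symbols, integrate, expand

x = symbols('x')

def inner(p, q):
    # <p, q> = integral of p(x) q(x) over [-1, 1]
    return integrate(p * q, (x, -1, 1))

basis = []
for v in [1, x, x**2, x**3]:
    w = v
    for u in basis:
        w = w - inner(w, u) / inner(u, u) * u
    basis.append(expand(w))

print(basis)                            # [1, x, x**2 - 1/3, x**3 - 3*x/5]

Changing the limits of integration to \((0,1)\) and dividing each \(w\) by its norm adapts the same sketch to the next exercise.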
Consider the standard basis \(\beta=\{1,x,x^2,x^3\}\) of \({\cal P}_3(\R)\) with inner product \(\inner{f}{g}:=\int_0^1 f(x)g(x)\,dx\text{.}\) Find an orthonormal basis starting with \(\beta\) using the Gram-Schmidt orthogonalization process.
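As a start, here is a sketch of the first two steps (the remaining two follow the same pattern). Take \(w_1 = 1\text{.}\) Then
\begin{equation*}
w_2 = x-\frac{\inner{x}{w_1}}{\inner{w_1}{w_1}}\,w_1 = x-\frac12,
\end{equation*}
and since \(\norm{w_1}^2=\int_0^1 1\,dx = 1\) and \(\norm{w_2}^2=\int_0^1\left(x-\tfrac12\right)^2 dx=\frac1{12}\text{,}\) the first two orthonormal vectors are \(1\) and \(\sqrt{3}\,(2x-1)\text{.}\)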
Let \(A=\left(\begin{array}{rrr}2 \amp -1 \amp 0 \\-1 \amp 2 \amp -1 \\0 \amp -1 \amp 2 \end{array} \right)\text{.}\) It is easy to check that \(A\) is a symmetric and positive definite matrix. (Why?) Define an inner product on \(\mathbb{R}^3\) as \(\inner{u}{v}:=v^TAu\text{.}\)
Use the Gram-Schmidt orthogonalization process to find an orthonormal basis from the standard basis vectors \(\beta=\{e_1, e_2, e_3\}\) with respect to the above inner product.
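A numerical sketch of this computation in Python (the routine below normalizes each vector with respect to the given inner product; it runs in Sage as well):

import numpy as np

A = np.array([[2., -1., 0.],
              [-1., 2., -1.],
              [0., -1., 2.]])

def inner(u, v):
    # <u, v> := v^T A u
    return v @ A @ u

basis = []
for e in np.eye(3):                     # standard basis vectors e_1, e_2, e_3
    w = e.copy()
    for u in basis:
        w = w - inner(w, u) / inner(u, u) * u   # remove components along earlier vectors
    basis.append(w / np.sqrt(inner(w, w)))      # normalize in the A-inner product

for u in basis:
    print(np.round(u, 4))

Because \(A\) is symmetric positive definite, \(\inner{w}{w}>0\) for every \(w\neq 0\text{,}\) so the normalization step is always well defined.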
Let \(V\) be an inner product space and \(W\leq V\text{,}\) a finite dimensional subspace of \(V\text{.}\) Let \(\{u_1,\ldots, u_k\}\) be an orthonormal basis of \(W\text{.}\) Suppose \(v\in V\text{.}\) Similar to Definition 6.3.5, we can define the orthogonal projection of \(v\) onto \(W\) as
\begin{equation*}
\mathrm{proj}_W(v):=\sum_{i=1}^k \inner{v}{u_i}\, u_i.
\end{equation*}
Find the orthogonal projection of the vector \(b=\begin{bmatrix}1\\2\\3\\4 \end{bmatrix}\) onto the subspace spanned by the three vectors \(\left\{\begin{bmatrix}1\\-1\\0\\1 \end{bmatrix} , \begin{bmatrix}0\\1\\1\\-1 \end{bmatrix} , \begin{bmatrix}1\\1\\-1\\0 \end{bmatrix} \right\}\text{.}\)
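The three spanning vectors are not orthogonal (the first two have dot product \(-2\)), so one can either orthonormalize them first or, equivalently, solve a least squares problem; a numerical sketch of the latter:

import numpy as np

A = np.array([[1., 0., 1.],
              [-1., 1., 1.],
              [0., 1., -1.],
              [1., -1., 0.]])           # spanning vectors as columns
b = np.array([1., 2., 3., 4.])

# The projection onto the column space of A is A c, where c minimizes ||A c - b||
c, *_ = np.linalg.lstsq(A, b, rcond=None)
print(A @ c)                            # [ 2.2 -0.4  1.8  0.4]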
Note that the concepts of Gram-Schmidt orthogonalization, orthogonal projection, and reflection can be naturally extended to an inner product space \((V, \langle \cdot, \cdot \rangle)\text{.}\) Explore how these notions generalize in such spaces, and implement solutions to related problems using Sage.