In the last section we dealt with the notion of the dot product and geometry in \(\R^n\text{.}\) The dot product and related notions can be generalized to an arbitrary vector space over \(\R\) or \(\mathbb{C}\text{.}\) All the notions we learned in the last section carry over to an inner product space.
Note that the dot product of two vectors in \(\R^n\) is a scalar; in particular, the dot product can be thought of as a function \('\cdot' \colon \R^n\times \R^n \to\R\) satisfying the following properties (illustrated numerically in the sketch after the list):
\(x \cdot x\geq 0\) for all \(x\in \R^n\text{.}\)
\(x \cdot x= 0\) if and only if \(x=0\text{.}\)
\(x \cdot y=y\cdot x\) for all \(x,y \in \R^n\text{.}\)
\(x\cdot (y+z) = x\cdot y +x\cdot z\) for all \(x,y,z\in \R^n\text{.}\)
\(x\cdot (\alpha y)=\alpha (x\cdot y) = (\alpha x)\cdot y\) for all \(x,y\in \R^n\) and \(\alpha\in\R\text{.}\)
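These properties are easy to test numerically. Here is a minimal NumPy sketch (the random vectors and the scalar are our own illustration, not part of the text) checking each property for the dot product on \(\R^4\):

```python
import numpy as np

rng = np.random.default_rng(0)
x, y, z = rng.standard_normal((3, 4))  # three random vectors in R^4
a = 2.5

print(np.dot(x, x) >= 0)                                          # positivity
print(np.isclose(np.dot(x, y), np.dot(y, x)))                     # symmetry
print(np.isclose(np.dot(x, y + z), np.dot(x, y) + np.dot(x, z)))  # additivity
print(np.isclose(np.dot(x, a * y), a * np.dot(x, y)))             # homogeneity
```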
The notion of the dot product on \(\R^n\) can be generalized to an arbitrary vector space; the resulting notion is known as an inner product. We have the following definition.
Definition 7.1.1. Inner Product.
Let \(V\) be a vector space over \(\R\text{.}\) An inner product on \(V\) is a function that assigns a real number \(\langle x, y\rangle\) to every pair \(x,y\) of vectors in \(V\) satisfying the following properties.
\(\langle x, x\rangle \geq 0\) for all \(x \in V\) and \(\langle x, x\rangle = 0\) if and only if \(x=0\text{.}\)
\(\langle x, y\rangle = \langle y, x\rangle\) for all \(x,y \in V\text{.}\) (Symmetry)
\(\langle x, (y+z)\rangle=\langle x, y\rangle+\langle x, z\rangle\) for all \(x,y,z\in V\text{.}\)
\(\langle x, (\alpha y)\rangle=\alpha\langle x, y\rangle\) for all \(x,y\in V\) and \(\alpha \in \R\text{.}\)
If \(V\) is a real vector space with an inner product \(\langle \cdot , \cdot\rangle\text{,}\) then \((V, \langle \cdot , \cdot\rangle)\) is called an inner product space over \(\R\text{.}\)
The last two properties make the inner product linear in the second variable. Using the symmetry property, it can also be shown that the inner product is linear in the first variable as well. That is,
\begin{equation*}
\langle (x+y),z\rangle=\langle x, z\rangle+\langle y, z\rangle, \text{ and } \langle (\alpha x), y\rangle=\alpha\langle x, y\rangle
\end{equation*}
Next we look at several examples of inner products on various vector spaces that we defined in Chapter 4.
Example 7.1.2.
On \(\R^n\text{,}\) the standard dot product is an inner product. Thus define
\begin{equation*}
\langle x, y\rangle:=x\cdot y
\end{equation*}
This is also called the Euclidean inner product on \(\R^n\text{.}\)
Example 7.1.3.
Let \(V=M_n(\R)\text{,}\) the set of all \(n\times n\) matrices over \(\R\text{.}\) Define
\begin{equation*}
\langle A, B\rangle:=tr(AB^T)
\end{equation*}
It is easy to show that this is an inner product on \(M_n(\R)\text{.}\)
Note that this inner product can be thought of as the standard dot product on \(\R^{n^2}\text{:}\) the entries of the matrix \(A\) can be listed as a vector in \(\R^{n^2}\text{,}\) and then
\begin{equation*}
\langle A, B\rangle = tr(AB^T)=\sum_{i=1}^n\sum_{j=1}^n a_{ij}b_{ij},
\end{equation*}
which is exactly the dot product of the corresponding vectors in \(\R^{n^2}\text{.}\)
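As a quick numerical illustration (a sketch of ours, assuming NumPy; the matrices are arbitrary), one can check that \(tr(AB^T)\) agrees with the entrywise dot product:

```python
import numpy as np

rng = np.random.default_rng(1)
A, B = rng.standard_normal((2, 3, 3))  # two random 3x3 matrices

lhs = np.trace(A @ B.T)      # <A, B> = tr(A B^T)
rhs = np.sum(A * B)          # dot product of the flattened matrices
print(np.isclose(lhs, rhs))  # True
```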
On \({\cal P}_n(\R)\text{,}\) the vector space of polynomials of degree at most \(n\text{,}\) define
\begin{equation*}
\langle p,q \rangle := \sum_{i=0}^{n} p(i)q(i).
\end{equation*}
It is easy to see that \(\langle p,q \rangle\) defines an inner product on \({\cal P}_n(\R)\text{.}\) Here the points \(0, 1, 2, \ldots, n\) are nothing special; instead, we can use any \(n+1\) distinct real numbers \(c_0,c_1,\ldots, c_n\text{.}\)
Hint.
\(\langle p,p \rangle = 0\) means that \(p\) has \(n+1\) distinct roots, which is not possible unless \(p=0\text{,}\) since a nonzero polynomial of degree at most \(n\) has at most \(n\) roots.
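Here is a small NumPy sketch of this evaluation inner product on \({\cal P}_3(\R)\) (the coefficient arrays are our own illustration), checking symmetry and positivity at the points \(0,1,2,3\):

```python
import numpy as np

n = 3
pts = np.arange(n + 1)  # evaluation points 0, 1, ..., n

def inner(p, q):
    # <p, q> = sum_{i=0}^{n} p(i) q(i); p, q given as coefficient arrays
    return np.sum(np.polyval(p, pts) * np.polyval(q, pts))

p = np.array([1, 0, -1, 2])   # x^3 - x + 2
q = np.array([0, 1, 1, 0])    # x^2 + x
print(inner(p, q) == inner(q, p))  # symmetry
print(inner(p, p) >= 0)            # positivity
```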
Definition 7.1.7.
Let \((V, \langle \cdot, \cdot\rangle)\) be a real inner product space. The norm of a vector \(x\in V\) corresponding to the inner product \(\langle \cdot, \cdot \rangle\) is defined as
\begin{equation*}
\norm{x} := \sqrt{\langle x, x\rangle}.
\end{equation*}
For all \(x, y \in V\) we have
\begin{equation*}
\norm{x+y}^2+\norm{x-y}^2 = 2\left(\norm{x}^2+\norm{y}^2\right).
\end{equation*}
This is called the parallelogram identity. Geometrically, in a parallelogram the sum of the squares of the diagonals is twice the sum of the squares of the side lengths.
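The identity is easy to test numerically; a minimal NumPy sketch for the Euclidean norm (the random vectors are our own choice):

```python
import numpy as np

rng = np.random.default_rng(2)
x, y = rng.standard_normal((2, 5))
norm = np.linalg.norm

lhs = norm(x + y)**2 + norm(x - y)**2
rhs = 2 * (norm(x)**2 + norm(y)**2)
print(np.isclose(lhs, rhs))  # True: the parallelogram identity
```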
Thus for any two nonzero vectors \(x\) and \(y\text{,}\) the quantity \(\frac{\inner{x}{y}}{\norm{x}\norm{y}}\) always lies between \(-1\) and \(1\text{.}\) This allows us to define the angle between two nonzero vectors: we set \(\cos\theta = \frac{\inner{x}{y}}{\norm{x}\norm{y}}\) with \(\theta\in[0,\pi]\text{,}\) called the angle between \(x\) and \(y\text{.}\) Thus, if \(\theta\) is the angle between \(x\) and \(y\text{,}\) then we have
\begin{equation*}
\inner{x}{y} = \norm{x}\,\norm{y}\cos\theta.
\end{equation*}
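For instance, with the Euclidean inner product the angle can be computed with arccos; a short sketch (the vectors are our own example):

```python
import numpy as np

x = np.array([1.0, 1.0, 0.0])
y = np.array([1.0, 0.0, 0.0])

cos_theta = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))
theta = np.arccos(cos_theta)     # angle in [0, pi]
print(theta, np.degrees(theta))  # pi/4 radians, i.e. 45 degrees
```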
All the notions that we defined for the dot product, namely orthogonality, orthogonal projection, and the Gram-Schmidt orthogonalization process, can be defined in a similar manner. All we need to do is replace the dot product by the given inner product.
Theorem 7.1.12. Properties of Norm.
Let \((V,\langle \cdot, \cdot\rangle)\) be an inner product space. The norm defined in Definition 7.1.7 has the following properties (a numerical check follows the list):
for all \(x\in V\text{,}\)\(\norm{x}\geq 0\) and \(\norm{x}= 0\) if and only if \(x=0\text{.}\)
for all \(\alpha \in \R\) and \(x\in V\text{,}\)\(\norm{\alpha x}=|\alpha|\norm{x}\text{.}\)
for all \(x,y \in V\text{,}\)\(\norm{x+y}\leq \norm{x}+\norm{y}\text{.}\)
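A minimal NumPy check of the three properties for the Euclidean norm (the vectors and scalar are our own illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
x, y = rng.standard_normal((2, 6))
a = -3.0
n = np.linalg.norm

print(n(x) >= 0)                            # nonnegativity
print(np.isclose(n(a * x), abs(a) * n(x)))  # absolute homogeneity
print(n(x + y) <= n(x) + n(y))              # triangle inequality
```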
Definition 7.1.13.
Any vector space \(V\) over \(\R\) with a function \(\norm{\cdot} \colon V \to \R\) satisfying all the properties mentioned in Theorem 7.1.12 is called a normed linear space. Thus any inner product space is also a normed linear space.
Note 7.1.14.
The concepts such as orthogonality, orthogonal projection, the orthogonal complement of a subset, orthogonal and orthonormal sets, and the Gram-Schmidt orthogonalization process that we defined and dealt with in the previous chapter with respect to the dot product on \(\R^n\) can all be defined on an inner product space. All we need to do is replace the dot product by the corresponding inner product. We encourage readers to define each one of them.
Checkpoint 7.1.15.
Let \((V, \inner{.}{.})\) be a real inner product space. Define (i) orthogonality of two vectors \(x\) and \(y\) in \(V\text{,}\) (ii) the orthogonal complement of a subset \(U\) of \(V\text{,}\) (iii) the orthogonal projection of a vector \(v\) onto a nonzero vector \(u\text{,}\) (iv) orthogonal and orthonormal sets in \(V\text{,}\) and (v) the Gram-Schmidt orthogonalization process.
Checkpoint 7.1.16.
Let \(x, y\) be two vectors in an inner product space \(V\text{.}\) Then show that (a numerical illustration follows the two parts):
(i) \(x\) and \(y\) are orthogonal if and only if \(\norm{x+y}=\norm{x-y}\text{.}\) (what does it mean geometrically?)
(ii) \(x+y\) and \(x-y\) are orthogonal if and only if \(\norm{x}=\norm{y}\text{.}\)
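Before proving these, one can test them numerically; a sketch with a pair of orthogonal vectors of equal norm in \(\R^2\) (our own choice):

```python
import numpy as np

x = np.array([1.0, 2.0])
y = np.array([-2.0, 1.0])   # orthogonal to x and of the same norm
n = np.linalg.norm

print(np.isclose(n(x + y), n(x - y)))         # (i): here <x,y> = 0
print(np.isclose(np.dot(x + y, x - y), 0.0))  # (ii): here ||x|| = ||y||
```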
Checkpoint 7.1.17.
Let \(C([-\pi,\pi])\) be the vector space of continuous functions from \([-\pi,\pi]\) to \(\R\text{.}\) Define the inner product on \(C([-\pi,\pi])\) as
\begin{equation*}
\inner{f}{g} := \int_{-\pi}^{\pi} f(x)g(x)\,dx.
\end{equation*}
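For example, under this inner product \(\sin\) and \(\cos\) are orthogonal. A numerical sketch (assuming SciPy for the quadrature; the functions are our own choice):

```python
import numpy as np
from scipy.integrate import quad

def inner(f, g):
    # <f, g> = integral of f(x) g(x) over [-pi, pi], computed numerically
    val, _ = quad(lambda t: f(t) * g(t), -np.pi, np.pi)
    return val

print(inner(np.sin, np.cos))  # ~0: sin and cos are orthogonal
print(inner(np.sin, np.sin))  # ~pi
```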
Let \(\beta=\{u_1,\ldots, u_n\}\) be an orthogonal basis of an inner product space \(V\text{.}\) Let \(v\in V\) and let \(\theta_1,\ldots, \theta_n\) be the angles between \(v\) and \(u_1,\ldots, u_n\text{,}\) respectively. Then
\begin{equation*}
v = \norm{v}\left(\cos\theta_1\,\frac{u_1}{\norm{u_1}}+\cdots+\cos\theta_n\,\frac{u_n}{\norm{u_n}}\right).
\end{equation*}
Here \(\cos\theta_i\) are called the direction cosines of \(v\) corresponding to \(\beta\text{.}\)
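For an orthonormal basis, the expansion above gives \(\sum_i\cos^2\theta_i = 1\text{.}\) A quick NumPy check with the standard basis of \(\R^3\) (our own illustration):

```python
import numpy as np

v = np.array([1.0, 2.0, 2.0])
basis = np.eye(3)                    # orthonormal basis e1, e2, e3
cos = basis @ v / np.linalg.norm(v)  # direction cosines cos(theta_i)

print(np.isclose(np.sum(cos**2), 1.0))                     # sum of squares is 1
print(np.allclose(np.linalg.norm(v) * (basis.T @ cos), v)) # v is recovered
```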
Checkpoint 7.1.20.
Let \(\beta=\{u_1,\ldots, u_n\}\) be an orthogonal basis of an inner product space \(V\text{.}\) Let \(x\) and \(y\) be two vectors such that \(x=\sum x_i u_i\) and \(y=\sum y_i u_i\text{.}\) Then show that
\begin{equation*}
\inner{x}{y} = \sum_{i=1}^{n} x_i y_i \norm{u_i}^2.
\end{equation*}
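A numerical instance of this identity (our own example, using the orthogonal basis \(\{(1,1),(1,-1)\}\) of \(\R^2\) with the dot product):

```python
import numpy as np

u1 = np.array([1.0, 1.0])
u2 = np.array([1.0, -1.0])        # an orthogonal (not orthonormal) basis of R^2
xc, yc = [2.0, 3.0], [-1.0, 4.0]  # coordinates of x and y in this basis
x = xc[0] * u1 + xc[1] * u2
y = yc[0] * u1 + yc[1] * u2

lhs = np.dot(x, y)
rhs = sum(a * b * np.dot(u, u) for a, b, u in zip(xc, yc, [u1, u2]))
print(np.isclose(lhs, rhs))  # True
```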
Consider \(V ={\cal P}_3(\R)\) with inner product \(\inner{p}{q}:=\int_{-1}^1 p(x)q(x)\,dx\text{.}\) Use the standard basis \(\beta =\{v_1,v_2,v_3,v_4\} = \{1,x,x^2,x^3\}\) to find an orthogonal basis of \({\cal P}_3(\R)\text{.}\)
First of all, notice that \(\beta\) is not an orthogonal basis, for \(\inner{v_1}{v_3}=\inner{1}{x^2} = \int_{-1}^1 x^2\, dx = \frac23\) and \(\inner{v_2}{v_4}=\int_{-1}^1 x^4\, dx = \frac25\text{.}\) Also note that \(\inner{v_1}{v_2}=\int_{-1}^1 x\,dx = 0\text{,}\) \(\inner{v_2}{v_3}=\int_{-1}^1 x^3\,dx = 0\text{,}\) \(\inner{v_1}{v_4}=\int_{-1}^1 x^3\, dx = 0\text{,}\) and \(\inner{v_3}{v_4}=\int_{-1}^1 x^5\,dx = 0\text{.}\)
Since \(v_1\) and \(v_2\) are already orthogonal, we can choose \(u_1=v_1=1\) and \(u_2 = v_2=x\) in the Gram-Schmidt process. Next,
\begin{equation*}
u_3 = v_3-\frac{\inner{v_3}{u_1}}{\inner{u_1}{u_1}}u_1-\frac{\inner{v_3}{u_2}}{\inner{u_2}{u_2}}u_2 = x^2-\frac{2/3}{2}\cdot 1 - 0\cdot x = x^2-\frac13,
\end{equation*}
and
\begin{equation*}
u_4 = v_4-\frac{\inner{v_4}{u_1}}{\inner{u_1}{u_1}}u_1-\frac{\inner{v_4}{u_2}}{\inner{u_2}{u_2}}u_2-\frac{\inner{v_4}{u_3}}{\inner{u_3}{u_3}}u_3 = x^3-\frac{2/5}{2/3}\,x = x^3-\frac35 x.
\end{equation*}
Thus \(\{1,\; x,\; x^2-\frac13,\; x^3-\frac35 x\}\) is an orthogonal basis of \({\cal P}_3(\R)\text{.}\)
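The computation above can be reproduced symbolically; a minimal sketch assuming SymPy (the loop is our own generic Gram-Schmidt, not the book's code):

```python
import sympy as sp

x = sp.symbols('x')

def inner(p, q):
    # <p, q> = integral of p(x) q(x) over [-1, 1]
    return sp.integrate(p * q, (x, -1, 1))

v = [sp.Integer(1), x, x**2, x**3]
u = []
for vi in v:
    ui = vi - sum(inner(vi, uj) / inner(uj, uj) * uj for uj in u)
    u.append(sp.expand(ui))

print(u)  # [1, x, x**2 - 1/3, x**3 - 3*x/5]
```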
Consider the standard basis \(\beta=\{1,x,x^2,x^3\}\) of \({\cal P}_3(\R)\) with inner product \(\inner{f}{g}:=\int_0^1 f(x)g(x)\,dx\text{.}\) Find an orthonormal basis starting with \(\beta\) using the Gram-Schmidt orthogonalization process.
Example 7.1.24.
Let \(A=\left(\begin{array}{rrr}2 & -1 & 0 \\-1 & 2 & -1 \\0 & -1 & 2 \end{array} \right)\text{.}\) It is easy to check that \(A\) is a symmetric and positive definite matrix. (Why?) Define an inner product on \(\R^3\) as \(\inner{u}{v}:=v^TAu\text{.}\)
Use the Gram-Schmidt orthogonalization process to find an orthonormal basis of \(\R^3\) from the standard basis \(\beta=\{e_1, e_2, e_3\}\) with respect to the above inner product.
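A sketch of how this exercise can be checked numerically (the Gram-Schmidt loop and names are our own, assuming NumPy):

```python
import numpy as np

A = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])

def inner(u, v):
    # <u, v> := v^T A u, with A symmetric positive definite
    return v @ A @ u

ortho = []
for e in np.eye(3):  # e1, e2, e3
    w = e - sum(inner(e, u) / inner(u, u) * u for u in ortho)
    ortho.append(w)
onb = [u / np.sqrt(inner(u, u)) for u in ortho]

# the Gram matrix of the A-inner products should be the identity
G = np.array([[inner(ui, uj) for uj in onb] for ui in onb])
print(np.allclose(G, np.eye(3)))  # True
```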
which is the Lagrange interpolation expansion of \(p(x)\text{.}\)
Definition 7.1.26. Projection onto a subspace.
Let \(V\) be an inner product space and let \(W\leq V\) be a finite dimensional subspace of \(V\text{.}\) Let \(\{u_1,\ldots, u_k\}\) be an orthonormal basis of \(W\text{.}\) Suppose \(v\in V\text{.}\) Similar to Definition 6.3.5, we can define the orthogonal projection of \(v\) onto \(W\) as
\begin{equation*}
\mathrm{proj}_W(v) := \sum_{i=1}^{k} \inner{v}{u_i}\, u_i.
\end{equation*}
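A minimal NumPy sketch of this definition (the function name `project` and the vectors are our own illustration), using the dot product on \(\R^3\):

```python
import numpy as np

def project(v, onb):
    # orthogonal projection of v onto W = span(onb),
    # where onb is an orthonormal basis of W
    return sum(np.dot(v, u) * u for u in onb)

u1 = np.array([1.0, 0.0, 0.0])
u2 = np.array([0.0, 1.0, 0.0])
v = np.array([3.0, -2.0, 7.0])
print(project(v, [u1, u2]))  # [ 3. -2.  0.]
```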
Find the orthogonal projection of the vector \(b=\begin{bmatrix}1\\2\\3\\4 \end{bmatrix}\) onto the subspace spanned by the three vectors \(\left\{\begin{bmatrix}1\\-1\\0\\1 \end{bmatrix} , \begin{bmatrix}0\\1\\1\\-1 \end{bmatrix} , \begin{bmatrix}1\\1\\-1\\0 \end{bmatrix} \right\}\text{.}\)
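Note that the spanning vectors are not orthogonal, so one would first orthonormalize them (or, equivalently, solve a least squares problem). A numerical sketch of the latter approach (our own, assuming NumPy):

```python
import numpy as np

U = np.array([[ 1.0,  0.0,  1.0],
              [-1.0,  1.0,  1.0],
              [ 0.0,  1.0, -1.0],
              [ 1.0, -1.0,  0.0]])  # the three spanning vectors as columns
b = np.array([1.0, 2.0, 3.0, 4.0])

coef, *_ = np.linalg.lstsq(U, b, rcond=None)
proj = U @ coef  # orthogonal projection of b onto col(U)
print(proj)
print(np.allclose(U.T @ (b - proj), 0))  # residual is orthogonal to the subspace
```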