PCA, as mentioned earlier, is a dimensionality reduction technique. It has numerous applications, such as visualization of high-dimensional data, facial recognition, computer vision, image compression, finding patterns in data, data mining, bioinformatics, psychology, and analyzing and forecasting stock data. Here we illustrate image compression as one of these applications.
Similar to SVD, we can also compress images using PCA. Given an image, we first separate it into its red, green, and blue (RGB) channels and apply PCA separately to each channel. Next, we project each channel onto its first \(k\) principal components and then combine the three channels to obtain the transformed image with \(k\) principal components.
After applying PCA, keeping the first 5, 20, and 50 principal components, and combining the three channels, we obtain the approximate images shown in Figures 10.2.6, 10.2.7, and 10.2.8, respectively. Each channel is of size \(600\times 800\text{.}\)
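The channel-wise procedure described above can be sketched, for instance, with NumPy and scikit-learn. The file name <c>photo.png</c> and the choice \(k=20\) below are placeholders rather than part of the text; this is only an illustrative sketch, not the exact code used to produce the figures.
\begin{verbatim}
# Sketch of channel-wise PCA image compression (file name and k are placeholders).
import numpy as np
from sklearn.decomposition import PCA
from PIL import Image

def compress_channel(channel, k):
    """Project one channel (rows as observations) onto its first k
    principal components and reconstruct the rank-k approximation."""
    pca = PCA(n_components=k)
    scores = pca.fit_transform(channel)      # coordinates in the first k components
    return pca.inverse_transform(scores)     # map back to the original pixel space

img = np.asarray(Image.open("photo.png").convert("RGB"), dtype=float)
k = 20
channels = [compress_channel(img[:, :, c], k) for c in range(3)]       # R, G, B separately
approx = np.clip(np.stack(channels, axis=2), 0, 255).astype(np.uint8)  # recombine channels
Image.fromarray(approx).save(f"photo_pca_{k}.png")
\end{verbatim}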
The covariance matrix of \(X\) is \(S=\frac{1}{n-1}X^TX\text{.}\) Thus \(S\) and \(X^TX\) differ only by the scalar factor \(\frac{1}{n-1}\text{,}\) so they have the same eigenvectors. If \(\lambda_1,\ldots, \lambda_r\) are the nonzero eigenvalues of \(S\) and \(\sigma_1,\ldots, \sigma_r\) are the nonzero singular values of \(X\text{,}\) then they are related by
\begin{equation*}
\sigma_i^2=(n-1)\lambda_i, i = 1, 2,\ldots, r\text{.}
\end{equation*}
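This relation is easy to check numerically. The sketch below uses a random mean-centered data matrix in NumPy; the sizes \(n=200\) and \(p=5\) are arbitrary choices for the illustration.
\begin{verbatim}
# Numerical check of sigma_i^2 = (n-1) * lambda_i for a centered data matrix X.
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 5
X = rng.normal(size=(n, p))
X = X - X.mean(axis=0)                         # center the columns, as PCA assumes

S = X.T @ X / (n - 1)                          # sample covariance matrix
lam = np.sort(np.linalg.eigvalsh(S))[::-1]     # eigenvalues of S, descending

sigma = np.linalg.svd(X, compute_uv=False)     # singular values of X, descending

print(np.allclose(sigma**2, (n - 1) * lam))    # True: sigma_i^2 = (n-1) * lambda_i
\end{verbatim}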
The relation \(X^TX = V\left( \Sigma^T\Sigma\right) V^T\) shows that the right singular vectors are the same as the principal components. The left singular vectors are given by