In Chapter 5, EM Algorithm and Applications, we said that any algorithm that decorrelates the input covariance matrix is performing a PCA without dimensionality reduction. Starting from this observation, Rubner and Tavan (in the paper A Self-Organizing Network for Principal-Components Analysis, Rubner J., Tavan P., Europhysics Letters, 10(7), 1989) proposed a neural model whose goal is to decorrelate the output components, thus forcing the decorrelation of the output covariance matrix (in a lower-dimensional subspace). Assuming a zero-centered dataset, so that E[y] = 0, the output covariance matrix for m principal components is as follows:
$$\Sigma_Y = E\left[\bar{y}\,\bar{y}^T\right] = E\begin{bmatrix} y_1^2 & y_1 y_2 & \cdots & y_1 y_m \\ y_2 y_1 & y_2^2 & \cdots & y_2 y_m \\ \vdots & \vdots & \ddots & \vdots \\ y_m y_1 & y_m y_2 & \cdots & y_m^2 \end{bmatrix}$$
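As a quick numerical check of this property, the following snippet is a minimal NumPy sketch (not the Rubner-Tavan network itself; the synthetic dataset, the choice m = 2, and the use of a standard eigendecomposition are assumptions made only for illustration). It projects a zero-centered dataset onto its first m principal components and shows that the off-diagonal terms of the output covariance matrix are close to zero:

```python
import numpy as np

np.random.seed(1000)
n_samples, n_features, m = 1000, 5, 2

# Synthetic zero-centered dataset with correlated features
# (A @ A.T is a valid covariance matrix)
A = np.random.uniform(-1.0, 1.0, size=(n_features, n_features))
X = np.random.multivariate_normal(np.zeros(n_features), A @ A.T, size=n_samples)

# Input covariance matrix (in general, not diagonal)
Sigma_x = np.cov(X, rowvar=False)

# Project onto the first m principal components, obtained here
# with a standard eigendecomposition for illustration only
eigvals, eigvecs = np.linalg.eigh(Sigma_x)
W = eigvecs[:, np.argsort(eigvals)[::-1][:m]]
Y = X @ W

# Output covariance matrix: the cross-terms E[y_i y_j] (i != j)
# are approximately zero
Sigma_y = np.cov(Y, rowvar=False)
print(np.round(Sigma_y, 5))
```

The printed matrix is approximately diagonal, which is exactly the condition that the model discussed in this section aims to enforce.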
Hence, it's possible to achieve an approximate decorrelation by forcing the cross-terms y_i y_j with i ≠ j to become close to zero. The main difference from a standard approach (such as whitening...