
Enhanced Principal Component Analysis under A Collaborative-Robust Framework

Principal component analysis (PCA) frequently suffers from the disturbance of outliers, and a spectrum of robust extensions and variations of PCA have therefore been developed. However, existing extensions of PCA treat all samples equally, even those corrupted by heavy noise. In this paper, we first introduce a general collaborative-robust weight learning framework that combines weight learning and a robust loss in a non-trivial way. More significantly, under the proposed framework, only a subset of well-fitting samples is activated, indicating greater importance during training, while the remaining samples, whose errors are large, are not simply discarded: the negative effects of these inactivated samples are alleviated by the robust loss function. We then develop an enhanced PCA that adopts a point-wise sigma-loss function, which interpolates between the L_{2,1}-norm and the squared Frobenius-norm while retaining the rotational invariance property. Extensive experiments are conducted on occluded datasets and evaluated from two aspects: reconstruction error and clustering accuracy. The experimental results demonstrate the superiority and effectiveness of our model.
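To make the interpolation property concrete, the sketch below uses one plausible point-wise form of such a sigma-loss, s(e) = (1 + σ)·‖e‖₂² / (‖e‖₂ + σ), applied per residual column; this specific formula is an assumption for illustration, not necessarily the paper's exact definition. As σ → 0 the sum over samples approaches the L_{2,1}-norm of the residual matrix, and as σ → ∞ it approaches the squared Frobenius-norm. Because the loss depends only on column norms, it is invariant to a rotation applied on the left, matching the rotational invariance claim.

```python
import numpy as np

def sigma_loss(E, sigma):
    """Point-wise sigma-loss summed over samples (columns of the
    residual matrix E). Assumed illustrative form:
        s(e) = (1 + sigma) * ||e||_2^2 / (||e||_2 + sigma)
    sigma -> 0   : sum approaches the L_{2,1}-norm of E.
    sigma -> inf : sum approaches the squared Frobenius-norm of E.
    """
    norms = np.linalg.norm(E, axis=0)  # per-sample residual norms
    return np.sum((1.0 + sigma) * norms**2 / (norms + sigma))

# Check the two limiting regimes and rotational invariance.
rng = np.random.default_rng(0)
E = rng.standard_normal((5, 10))

l21 = np.sum(np.linalg.norm(E, axis=0))   # L_{2,1}-norm of E
fro2 = np.sum(E**2)                       # squared Frobenius-norm of E

print(abs(sigma_loss(E, 1e-9) - l21))     # near 0: L_{2,1} limit
print(abs(sigma_loss(E, 1e9) - fro2))     # near 0: Frobenius limit

# Left rotation R leaves column norms, hence the loss, unchanged.
R, _ = np.linalg.qr(rng.standard_normal((5, 5)))
print(abs(sigma_loss(R @ E, 2.0) - sigma_loss(E, 2.0)))
```

For intermediate σ the loss behaves like a squared norm on small residuals and like an (unsquared) norm on large ones, which is why heavily corrupted samples contribute less than under plain squared-error PCA.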
