
What Objective Does Self-paced Learning Indeed Optimize?

Self-paced learning (SPL) is a recently proposed methodology designed by simulating the learning principle of humans and animals. A variety of SPL realization schemes have been designed for different computer vision and pattern recognition tasks, and have been empirically shown to be effective in these applications. However, a theoretical investigation of the approach remains largely absent. To address this issue, this study attempts to provide some new theoretical understanding of the SPL scheme. Specifically, we prove that the solution strategy of SPL corresponds to a majorization minimization algorithm applied to a latent objective function. Furthermore, we find that the loss function contained in this latent objective has a configuration similar to the non-convex regularized penalties (NSPR) known in statistics and machine learning. This connection inspires us to uncover a more intrinsic relationship between SPL regimes and NSPR forms such as SCAD, LOG and EXP. The robustness insight underlying SPL can then be clearly explained. We also analyze the capability of SPL with respect to its easy-loss-prior embedding property, and provide an insightful interpretation of the mechanism underlying the effectiveness of previous SPL variations. In addition, we design a group-partial-order loss prior, which is especially useful for weakly labeled large-scale data processing tasks. By applying SPL with this loss prior to the FCVID dataset, currently one of the largest manually annotated video datasets, our method achieves state-of-the-art performance beyond previous methods, which further supports the proposed theoretical arguments.
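For concreteness, the classical hard-weighting SPL formulation illustrates the majorization-minimization/latent-objective connection stated above; the sketch below uses our own notation (sample losses \ell_i(\mathbf{w}), weights v_i, age parameter \lambda) and is not taken verbatim from the paper:

    \min_{\mathbf{w},\, \mathbf{v}\in[0,1]^n}
        \sum_{i=1}^{n} v_i\,\ell_i(\mathbf{w}) \;-\; \lambda \sum_{i=1}^{n} v_i ,
    \qquad
    v_i^{*} \;=\; \mathbb{1}\!\left[\ell_i(\mathbf{w}) < \lambda\right].

    F_{\lambda}(\ell) \;=\; \int_{0}^{\ell} v^{*}(t)\, dt \;=\; \min(\ell, \lambda).

Here the first line is the usual alternating SPL objective with its closed-form weight update, and F_{\lambda} is the latent per-sample loss obtained by integrating the optimal weight over the loss value. The resulting capped loss \min(\ell,\lambda) is non-convex and flat for large losses, which is one way to read the robustness interpretation mentioned in the abstract: samples with large (possibly outlying) losses receive zero weight and no longer drive the model update.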
