Label Consistent Quadratic Surrogate Model for Visual Saliency Prediction

CVPR 2015 · Yan Luo, Yongkang Wong, Qi Zhao

Recently, an increasing number of works have proposed to learn visual saliency from human fixations. However, collecting human fixations is time consuming, and the existing eye tracking datasets are generally small compared with those in other domains. As a result, each dataset carries a certain degree of dataset bias due to large image variations (e.g., outdoor scenes vs. emotion-evoking images). In the learning-based saliency prediction literature, most models are trained and evaluated on the same dataset, and cross-dataset validation is not yet common practice. Instead of directly applying a model learned on another dataset in a cross-dataset fashion, it is better to transfer the prior knowledge obtained from one dataset to improve training and prediction on another. In addition, since new datasets are built and shared in the community from time to time, it is desirable to avoid retraining the entire model whenever new data are added. To address these problems, we propose a new learning-based saliency model, namely the Label Consistent Quadratic Surrogate algorithm, which employs an iterative online algorithm to learn a sparse dictionary with a label consistent constraint. The advantages of the proposed model are threefold: (1) the quadratic surrogate function guarantees convergence at each iteration, (2) the label consistent constraint enforces the predicted sparse codes to be discriminative, and (3) the online property enables the proposed algorithm to adapt an existing model to new data without retraining. As shown in this work, the proposed saliency model outperforms state-of-the-art saliency models.
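
The exact objective and update rules are not stated in the abstract, so the sketch below only illustrates the general recipe it points at: online dictionary learning driven by a quadratic surrogate (in the style of Mairal et al.), with a label consistent target stacked onto each signal (as in LC-KSVD) so that the learned sparse codes stay discriminative. All names and parameters here (`ista_sparse_code`, `online_lc_dictionary_learning`, `lam`, `beta`, the targets `Q`) are illustrative assumptions, not the paper's method or API.

```python
import numpy as np

def ista_sparse_code(x, D, lam, n_iter=100):
    """Solve min_a 0.5*||x - D a||^2 + lam*||a||_1 with plain ISTA."""
    L = np.linalg.norm(D, 2) ** 2 + 1e-8                # Lipschitz constant of the smooth part
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)
        a = a - grad / L
        a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)   # soft thresholding
    return a

def online_lc_dictionary_learning(X, Q, n_atoms, lam=0.1, beta=1.0, n_epochs=1, seed=0):
    """
    One possible online, label-consistent dictionary learner (an assumption, not the paper's):
    stack each signal x with its scaled label-consistency target q, then run
    surrogate-based online dictionary learning on the stacked data.
    X: (d, n) data matrix, Q: (k, n) discriminative sparse-code targets.
    Returns the stacked dictionary of shape (d + k, n_atoms).
    """
    rng = np.random.default_rng(seed)
    d, n = X.shape
    k = Q.shape[0]
    Z = np.vstack([X, np.sqrt(beta) * Q])                # stacked signals [x; sqrt(beta) q]
    D = rng.standard_normal((d + k, n_atoms))
    D /= np.linalg.norm(D, axis=0, keepdims=True)

    A = np.zeros((n_atoms, n_atoms))                     # surrogate statistics: sum of a a^T
    B = np.zeros((d + k, n_atoms))                       # surrogate statistics: sum of z a^T

    for _ in range(n_epochs):
        for i in rng.permutation(n):
            z = Z[:, i]
            a = ista_sparse_code(z, D, lam)
            A += np.outer(a, a)
            B += np.outer(z, a)
            # minimize the quadratic surrogate by block coordinate descent over atoms
            for j in range(n_atoms):
                if A[j, j] < 1e-12:
                    continue
                u = D[:, j] + (B[:, j] - D @ A[:, j]) / A[j, j]
                D[:, j] = u / max(np.linalg.norm(u), 1.0)
    return D

# Toy usage with synthetic data (purely illustrative).
X = np.random.default_rng(1).standard_normal((20, 200))
Q = np.abs(np.random.default_rng(2).standard_normal((5, 200)))
D_stacked = online_lc_dictionary_learning(X, Q, n_atoms=32)
```

At prediction time one would typically split the stacked dictionary back into a reconstructive part `D_stacked[:d]`, used to sparse-code a new input, and a label consistent part `D_stacked[d:] / np.sqrt(beta)`, used to map the sparse code to a discriminative (saliency) score; this split is again an assumption about how such a model is commonly used, since the statistics `A` and `B` accumulate per sample, new data can be folded in without retraining from scratch.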

