Visual Attention is Beyond One Single Saliency Map

23 Oct 2018  ·  Li Jian ·

In recent years, numerous bottom-up attention models have been proposed based on different assumptions. However, the saliency maps they produce may differ from one another even for the same input image. We also observe that the human fixation map varies greatly over time. When people freely view an image, they first allocate attention to large-scale salient regions, and then search increasingly detailed regions. In this paper, we argue that, for a given input image, visual attention cannot be described by one saliency map alone, and that this mechanism should be modeled as a dynamic process. Under the frequency-domain paradigm, we propose a global inhibition model that mimics this process by suppressing the non-saliency in the input image; we also show that the dynamic process is governed by a single parameter in the frequency domain. Experiments illustrate that the proposed model is capable of predicting the dynamic distribution of human fixations.
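The abstract gives only the outline of the method, but the family of frequency-domain saliency models it belongs to can be sketched briefly. The idea below is illustrative, not the authors' implementation: transform the image to the frequency domain, inhibit the dominant ("non-salient") amplitude components while keeping phase, transform back, and smooth. The smoothing scale `sigma` stands in for the single parameter the paper says governs the coarse-to-fine dynamic; all function names are hypothetical.

```python
import numpy as np

def _lowpass(x, sigma):
    """Gaussian low-pass filter, applied via multiplication in the frequency domain."""
    fy = np.fft.fftfreq(x.shape[0])[:, None]
    fx = np.fft.fftfreq(x.shape[1])[None, :]
    g = np.exp(-2.0 * (np.pi * sigma) ** 2 * (fy ** 2 + fx ** 2))
    return np.real(np.fft.ifft2(np.fft.fft2(x) * g))

def frequency_saliency(image, sigma=3.0):
    """Toy frequency-domain saliency map via global spectral inhibition.

    `sigma` is the illustrative stand-in for the paper's single parameter:
    a large sigma yields a coarse map (early attention), a small sigma a
    fine-grained map (later attention).
    """
    f = np.fft.fft2(image)
    log_amp = np.log(np.abs(f) + 1e-8)
    phase = np.angle(f)
    # Global inhibition (sketch): subtract a smoothed log-amplitude so that
    # frequency content shared across the whole image is suppressed and
    # rare (salient) content is emphasized.
    residual = log_amp - _lowpass(log_amp, 1.0)
    sal = np.abs(np.fft.ifft2(np.exp(residual) * np.exp(1j * phase))) ** 2
    return _lowpass(sal, sigma)

# Usage: sweeping sigma from large to small produces a sequence of maps,
# mimicking the coarse-to-fine dynamic the paper describes.
img = np.zeros((64, 64))
img[30:34, 30:34] = 1.0  # a small bright proto-object
coarse = frequency_saliency(img, sigma=4.0)
fine = frequency_saliency(img, sigma=1.0)
```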
