Saliency Prediction
87 papers with code • 3 benchmarks • 7 datasets
A saliency map is a topographic representation of a visual scene that indicates where human eyes are likely to fixate. Saliency prediction is informed by the human visual attention mechanism and estimates the probability that a given position in the scene attracts an observer's gaze.
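Ground-truth saliency maps are commonly built by blurring recorded fixation points into a continuous density map, and predictions are scored against fixations with metrics such as Normalized Scanpath Saliency (NSS). The sketch below illustrates both steps; the function names and the example fixation coordinates are illustrative, not from any specific benchmark.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fixation_density_map(fixations, shape, sigma=15.0):
    """Turn discrete (row, col) fixation points into a continuous
    ground-truth saliency map by Gaussian-blurring a binary fixation map."""
    fix_map = np.zeros(shape, dtype=float)
    for y, x in fixations:
        fix_map[y, x] = 1.0
    density = gaussian_filter(fix_map, sigma=sigma)
    return density / density.max()  # rescale to [0, 1]

def nss(saliency, fixations):
    """Normalized Scanpath Saliency: z-score the predicted map, then
    average the normalized values at the human fixation locations."""
    s = (saliency - saliency.mean()) / saliency.std()
    return float(np.mean([s[y, x] for y, x in fixations]))

fixations = [(40, 60), (42, 58), (80, 120)]  # hypothetical eye-tracking data
gt = fixation_density_map(fixations, shape=(128, 160))
print(nss(gt, fixations))  # large positive: the map peaks at the fixations
```

A chance-level prediction scores an NSS near 0, while a map that concentrates mass on the true fixations scores well above 1.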
Libraries
Use these libraries to find Saliency Prediction models and implementations.
Latest papers
Brand Visibility in Packaging: A Deep Learning Approach for Logo Detection, Saliency-Map Prediction, and Logo Placement Analysis
In the third step, by integrating logo detection with saliency-map generation, the framework provides a comprehensive brand attention score.
What Do Deep Saliency Models Learn about Visual Attention?
In recent years, deep saliency models have made significant progress in predicting human visual attention.
Spherical Vision Transformer for 360-degree Video Saliency Prediction
Growing interest in omnidirectional videos (ODVs), which capture the full field of view (FOV), has made 360-degree saliency prediction increasingly important in computer vision.
A positive feedback method based on F-measure value for Salient Object Detection
The majority of current salient object detection (SOD) models are focused on designing a series of decoders based on fully convolutional networks (FCNs) or Transformer architectures and integrating them in a skillful manner.
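The F-measure named in the title above is the standard evaluation metric in salient object detection: a precision-weighted harmonic mean of precision and recall over a thresholded saliency mask, conventionally with beta-squared set to 0.3. A minimal sketch, assuming binary ground-truth masks and a prediction in [0, 1] (the function name is illustrative):

```python
import numpy as np

def f_measure(pred, gt, threshold=0.5, beta_sq=0.3):
    """F-measure for salient object detection.

    pred    : float array in [0, 1], predicted saliency map
    gt      : boolean array, ground-truth salient-object mask
    beta_sq : 0.3 by SOD convention, weighting precision over recall
    """
    binary = pred >= threshold
    tp = np.logical_and(binary, gt).sum()
    precision = tp / max(binary.sum(), 1)
    recall = tp / max(gt.sum(), 1)
    if precision + recall == 0:
        return 0.0
    return (1 + beta_sq) * precision * recall / (beta_sq * precision + recall)
```

A perfect mask scores 1.0; because beta_sq < 1, over-segmenting the object (hurting precision) is penalized more than under-segmenting it.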
Deep Saliency Mapping for 3D Meshes and Applications
Nowadays, three-dimensional (3D) meshes are widely used in various applications in different areas (e.g., industry, education, entertainment and safety).
TinyHD: Efficient Video Saliency Prediction with Heterogeneous Decoders using Hierarchical Maps Distillation
Video saliency prediction has recently attracted attention of the research community, as it is an upstream task for several practical applications.
TempSAL -- Uncovering Temporal Information for Deep Saliency Prediction
While deep saliency prediction algorithms complement object recognition features, they typically rely on additional information, such as scene context, semantic relationships, gaze direction, and object dissimilarity.
Panoramic Vision Transformer for Saliency Detection in 360° Videos
360$^\circ$ video saliency detection is a challenging benchmark for 360$^\circ$ video understanding, since non-negligible distortion and discontinuity occur in the projection of any 360$^\circ$ video format, and the capture-worthy viewpoint on the omnidirectional sphere is inherently ambiguous.
GASP: Gated Attention For Saliency Prediction
We show that gaze direction and affective representations improve prediction-to-ground-truth correspondence by at least 5% compared to dynamic saliency models without social cues.