Gaze Prediction
13 papers with code • 0 benchmarks • 5 datasets
Most implemented papers
Understanding and Modeling the Effects of Task and Context on Drivers' Gaze Allocation
To enable analysis and modeling of these factors for drivers' gaze prediction, we propose the following: 1) we correct the data processing pipeline used in DR(eye)VE to reduce noise in the recorded gaze data; 2) we add per-frame labels for driving task and context; 3) we benchmark a number of baseline and SOTA models for saliency and driver gaze prediction and use the new annotations to analyze how their performance changes in scenarios involving different tasks; and 4) we develop a novel model that modulates drivers' gaze prediction with explicit action and context information.
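One way to picture point 4 is feature-wise gating: a task/context vector scales the channels of a bottom-up saliency representation before the final gaze map is produced. The sketch below is illustrative only; the function names, the sigmoid gating scheme, and all shapes are assumptions, not the paper's actual architecture.

```python
import numpy as np

def modulated_gaze_map(frame_features, context_vec, w_gate, w_out):
    """Modulate bottom-up frame features with task/context gates (hypothetical).

    frame_features: (C, H, W) feature maps from a saliency backbone
    context_vec:    (D,) embedded task/context labels
    w_gate:         (C, D) projects context to per-channel gains
    w_out:          (C,) combines gated channels into one map
    """
    gains = 1.0 / (1.0 + np.exp(-(w_gate @ context_vec)))  # sigmoid gates in (0, 1)
    gated = frame_features * gains[:, None, None]          # channel-wise modulation
    logits = np.tensordot(w_out, gated, axes=1)            # (H, W) saliency logits
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()                                 # normalized gaze density

rng = np.random.default_rng(0)
feats = rng.standard_normal((8, 4, 4))
ctx = np.zeros(3)
ctx[1] = 1.0                                               # e.g. a "turn" context flag
gaze = modulated_gaze_map(feats, ctx,
                          rng.standard_normal((8, 3)),
                          rng.standard_normal(8))
```

Depending on the context vector, different channels of the same bottom-up features are amplified or suppressed, shifting the predicted gaze density.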
SCOUT+: Towards Practical Task-Driven Drivers' Gaze Prediction
In this paper, we address the challenge of effective modeling of task and context with common sources of data for use in practical systems.
Data Limitations for Modeling Top-Down Effects on Drivers' Attention
The crux of the problem is the lack of public data with annotations that could be used to train top-down models and to evaluate how well models of any kind capture the effects of task on attention.