Eye Tracking

26 papers with code · Computer Vision

Eye tracking research

Benchmarks

No evaluation results yet. Help compare methods by submitting evaluation metrics.

Greatest papers with code

Eye Tracking for Everyone

CVPR 2016 CSAILVision/GazeCapture

We believe that we can put the power of eye tracking in everyone's palm by building eye tracking software that works on commodity hardware such as mobile phones and tablets, without the need for additional sensors or devices.

EYE TRACKING · GAZE ESTIMATION
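
As a rough illustration of the appearance-based gaze-estimation setup that this line of work builds on (a CNN maps an eye-region crop to a 2D on-screen gaze point), the sketch below uses a small hypothetical PyTorch model; it is not the iTracker architecture from the paper.

    # Hypothetical sketch: regress a 2D on-screen gaze point from an eye crop.
    # This is NOT the network from "Eye Tracking for Everyone"; it only
    # illustrates the appearance-based gaze-regression setup.
    import torch
    import torch.nn as nn

    class TinyGazeNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
                nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.head = nn.Linear(32, 2)  # (x, y) gaze location on the screen

        def forward(self, eye_crop):
            f = self.features(eye_crop).flatten(1)
            return self.head(f)

    model = TinyGazeNet()
    eye_crop = torch.rand(1, 3, 64, 64)   # a single RGB eye-region crop
    gaze_xy = model(eye_crop)             # predicted 2D gaze point
    print(gaze_xy.shape)                  # torch.Size([1, 2])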

Predicting the Driver's Focus of Attention: the DR(eye)VE Project

10 May 2017 ndrplz/dreyeve

In this work we aim to predict the driver's focus of attention.

EYE TRACKING
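
Driver focus-of-attention prediction is typically scored like saliency prediction, by comparing a predicted attention map against a ground-truth fixation map. The sketch below computes Pearson's correlation coefficient (CC), a common metric for this; the maps are random placeholders, not DR(eye)VE data.

    # Pearson's correlation coefficient (CC) between a predicted attention map
    # and a ground-truth fixation map for one frame. Placeholder arrays only.
    import numpy as np

    def correlation_coefficient(pred, gt):
        pred = (pred - pred.mean()) / (pred.std() + 1e-8)
        gt = (gt - gt.mean()) / (gt.std() + 1e-8)
        return float((pred * gt).mean())

    pred_map = np.random.rand(112, 112)   # predicted focus-of-attention map
    gt_map = np.random.rand(112, 112)     # ground-truth fixation map
    print(correlation_coefficient(pred_map, gt_map))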

Attention Based Glaucoma Detection: A Large-scale Database and CNN Model

CVPR 2019 smilell/AG-CNN

The attention maps of the ophthalmologists are also collected in LAG database through a simulated eye-tracking experiment.

EYE TRACKING
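
A gaze-derived attention map like the ones collected in the LAG database can be used to re-weight CNN features. The minimal sketch below resizes an attention map to a feature map's spatial size and multiplies the two; shapes and names are illustrative and this is not the AG-CNN implementation.

    # Attention-guided feature weighting: a gaze-based attention map (e.g. from
    # a simulated eye-tracking experiment) re-weights CNN feature maps.
    # Illustrative only; not the AG-CNN code.
    import torch
    import torch.nn.functional as F

    features = torch.rand(1, 64, 28, 28)       # CNN feature maps for one fundus image
    attention = torch.rand(1, 1, 224, 224)     # gaze-based attention map in image space

    attention_small = F.interpolate(attention, size=features.shape[-2:],
                                    mode="bilinear", align_corners=False)
    guided = features * attention_small        # broadcast over the channel dimension
    print(guided.shape)                        # torch.Size([1, 64, 28, 28])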

AiR: Attention with Reasoning Capability

ECCV 2020 szzexpoi/AiR

In this work, we propose an Attention with Reasoning capability (AiR) framework that uses attention to understand and improve the process leading to task outcomes.

EYE TRACKING
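
One simple way to check whether attention supports the task, in the spirit of evaluating attention against the reasoning process, is to measure how much attention mass falls inside a task-relevant region. The sketch below does exactly that with placeholder data; it is not the AiR metric itself.

    # Fraction of attention mass inside a ground-truth region of interest.
    # Illustrative placeholder only; not the metric defined by AiR.
    import numpy as np

    attention = np.random.rand(14, 14)
    attention /= attention.sum()               # normalise to a distribution
    roi_mask = np.zeros((14, 14), dtype=bool)
    roi_mask[4:9, 6:11] = True                 # hypothetical task-relevant region

    mass_in_roi = attention[roi_mask].sum()
    print(f"attention mass inside ROI: {mass_in_roi:.3f}")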

Realtime and Accurate 3D Eye Gaze Capture with DCNN-based Iris and Pupil Segmentation

IEEE Transactions on Visualization and Computer Graphics (Early Access) 2019 1996scarlet/Laser-Eye

A comparison against Wang et al. [3] shows that our method advances the state of the art in 3D eye tracking using a single RGB camera.

EYE TRACKING
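
Once a DCNN has segmented the pupil, a pupil centre can be estimated from the mask (for example as its centroid) and fed into a 3D gaze model. The sketch below uses a synthetic circular mask, not the output of the paper's network.

    # Estimate the pupil centre as the centroid of a segmentation mask.
    # Synthetic placeholder mask; not the paper's DCNN output.
    import numpy as np

    pupil_mask = np.zeros((128, 128), dtype=bool)
    yy, xx = np.mgrid[0:128, 0:128]
    pupil_mask[(yy - 70) ** 2 + (xx - 60) ** 2 < 15 ** 2] = True  # fake circular pupil

    ys, xs = np.nonzero(pupil_mask)
    center = (xs.mean(), ys.mean())            # (x, y) pupil centre in pixels
    print(center)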

Advancing NLP with Cognitive Language Processing Signals

4 Apr 2019 DS3Lab/zuco-nlp

Cognitive language processing data such as eye-tracking features have shown improvements on single NLP tasks.

EEG · EYE TRACKING · NAMED ENTITY RECOGNITION · RELATION CLASSIFICATION · SENTIMENT ANALYSIS
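
The general recipe for using cognitive signals in NLP is to attach per-token eye-tracking features (such as first fixation duration, total reading time, or number of fixations) to word embeddings before a sequence model. The feature names and dimensions below are illustrative, not the exact ZuCo feature set.

    # Augment per-token word embeddings with per-token eye-tracking features.
    # Placeholder embeddings and gaze values; feature set is illustrative.
    import numpy as np

    tokens = ["the", "driver", "braked"]
    word_emb = np.random.rand(len(tokens), 300)     # pretrained embeddings (placeholder)
    gaze_feats = np.array([                         # [first fixation, total time, n fixations]
        [120.0, 180.0, 1.0],
        [210.0, 420.0, 2.0],
        [190.0, 350.0, 2.0],
    ])
    augmented = np.concatenate([word_emb, gaze_feats], axis=1)
    print(augmented.shape)                          # (3, 303)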

DAVE: A Deep Audio-Visual Embedding for Dynamic Saliency Prediction

25 May 2019hrtavakoli/DAVE

Our results suggest that (1) audio is a strong contributing cue for saliency prediction, (2) a salient, visible sound source is the natural cause of the superiority of our Audio-Visual model, (3) richer feature representations for the input space lead to more powerful predictions even in the absence of more sophisticated saliency decoders, and (4) the Audio-Visual model improves over 53.54% of the frames predicted by the best Visual model (our baseline).

EYE TRACKING · SALIENCY PREDICTION
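
A per-frame comparison like the "improves over 53.54% of the frames" claim simply counts how often the audio-visual model's per-frame score beats the visual-only baseline's. The sketch below shows that counting step with random placeholder scores.

    # Count the fraction of frames on which the audio-visual model scores higher
    # than the visual-only baseline. Random placeholder scores, not DAVE results.
    import numpy as np

    av_scores = np.random.rand(1000)   # per-frame metric for the audio-visual model
    v_scores = np.random.rand(1000)    # per-frame metric for the visual-only baseline

    improved = float((av_scores > v_scores).mean() * 100)
    print(f"audio-visual model better on {improved:.2f}% of frames")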

STAViS: Spatio-Temporal AudioVisual Saliency Network

CVPR 2020 atsiami/STAViS

We introduce STAViS, a spatio-temporal audiovisual saliency network that combines spatio-temporal visual and auditory information in order to efficiently address the problem of saliency estimation in videos.

EYE TRACKING · SALIENCY PREDICTION
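
Combining visual and auditory information for saliency can be as simple as fusing the two feature streams on a common spatial grid. The sketch below does a minimal 1x1-convolution late fusion as an illustration; it is not the STAViS architecture.

    # Minimal audiovisual fusion sketch, not the STAViS network: concatenate a
    # visual saliency feature map with an audio-derived map and fuse with a 1x1
    # convolution into a single audiovisual saliency map.
    import torch
    import torch.nn as nn

    visual_feat = torch.rand(1, 1, 56, 56)   # spatio-temporal visual saliency features
    audio_feat = torch.rand(1, 1, 56, 56)    # audio features projected to the same grid

    fuse = nn.Sequential(nn.Conv2d(2, 1, kernel_size=1), nn.Sigmoid())
    saliency = fuse(torch.cat([visual_feat, audio_feat], dim=1))
    print(saliency.shape)                    # torch.Size([1, 1, 56, 56])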