Gaze Estimation
73 papers with code • 9 benchmarks • 16 datasets
Gaze Estimation is the task of predicting where a person is looking, given an image of the person's face. The task comprises two subtasks: 3D gaze vector estimation and 2D gaze position estimation. 3D gaze vector estimation predicts the direction of gaze as a vector, which is commonly used in automotive safety. 2D gaze position estimation predicts the horizontal and vertical coordinates of the gaze point on a 2D screen, which allows the gaze point to control a cursor for human-machine interaction.
Source: A Generalized and Robust Method Towards Practical Gaze Estimation on Smart Phone
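As a minimal sketch of the 3D representation: gaze direction is typically encoded either as a unit 3D vector or as a (pitch, yaw) angle pair, and accuracy is reported as the angular error between predicted and ground-truth directions. The coordinate convention below (x right, y down, z forward, with gaze pointing toward the camera along negative z) is a common choice, but it is an assumption here; individual datasets may define axes differently.

```python
import numpy as np

def vector_to_pitch_yaw(gaze):
    """Convert a 3D gaze vector to (pitch, yaw) in radians.

    Assumes a camera frame with x right, y down, z forward
    (a common convention; check your dataset's definition).
    """
    g = np.asarray(gaze, dtype=float)
    g = g / np.linalg.norm(g)
    pitch = np.arcsin(-g[1])        # vertical angle
    yaw = np.arctan2(-g[0], -g[2])  # horizontal angle
    return pitch, yaw

def pitch_yaw_to_vector(pitch, yaw):
    """Inverse conversion: (pitch, yaw) back to a unit 3D gaze vector."""
    return np.array([
        -np.cos(pitch) * np.sin(yaw),
        -np.sin(pitch),
        -np.cos(pitch) * np.cos(yaw),
    ])

def angular_error_deg(g_pred, g_true):
    """Angular error in degrees, the standard 3D gaze metric."""
    a = np.asarray(g_pred, dtype=float)
    b = np.asarray(g_true, dtype=float)
    a, b = a / np.linalg.norm(a), b / np.linalg.norm(b)
    cos_sim = np.clip(np.dot(a, b), -1.0, 1.0)
    return np.degrees(np.arccos(cos_sim))
```

For 2D gaze position estimation, the predicted direction is instead intersected with the screen plane (given a camera-to-screen calibration) to obtain on-screen coordinates, and error is usually reported in centimeters or pixels.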
Latest papers with no code
What Do You See in Vehicle? Comprehensive Vision Solution for In-Vehicle Gaze Estimation
GazeDPTR shows state-of-the-art performance on the IVGaze dataset.
CLIP-Gaze: Towards General Gaze Estimation via Visual-Linguistic Model
To overcome these limitations, we propose a novel framework called CLIP-Gaze that utilizes a pre-trained vision-language model to leverage its transferable knowledge.
PrivatEyes: Appearance-based Gaze Estimation Using Federated Secure Multi-Party Computation
Latest gaze estimation methods require large-scale training data but their collection and exchange pose significant privacy risks.
TransGOP: Transformer-Based Gaze Object Prediction
Gaze object prediction aims to predict the location and category of the object that is watched by a human.
CrossGaze: A Strong Method for 3D Gaze Estimation in the Wild
Gaze estimation, the task of predicting where an individual is looking, is a critical task with direct applications in areas such as human-computer interaction and virtual reality.
Towards mitigating uncann(eye)ness in face swaps via gaze-centric loss terms
We additionally propose a novel loss equation for the training of face swapping models, leveraging a pretrained gaze estimation network to directly improve representation of the eyes.
Comparative Analysis of Kinect-Based and Oculus-Based Gaze Region Estimation Methods in a Driving Simulator
A driver's gaze information can be crucial in driving research because of its relation to driver attention.
SLYKLatent, a Learning Framework for Facial Features Estimation
In this research, we present SLYKLatent, a novel approach for enhancing gaze estimation by addressing appearance instability challenges in datasets due to aleatoric uncertainties, covariant shifts, and test domain generalization.
Appearance Debiased Gaze Estimation via Stochastic Subject-Wise Adversarial Learning
In this paper, we address these challenges and propose a novel framework: Stochastic subject-wise Adversarial gaZE learning (SAZE), which trains a network to generalize the appearance of subjects.
Low-cost Geometry-based Eye Gaze Detection using Facial Landmarks Generated through Deep Learning
In the realm of human-computer interaction and behavioral research, accurate real-time gaze estimation is critical.