Driver distraction detection and recognition using RGB-D sensor

1 Feb 2015  ·  Céline Craye, Fakhri Karray

Driver inattention assessment has become a very active field in intelligent transportation systems. Using the Kinect active sensor and computer vision tools, we have built an efficient module for detecting driver distraction and recognizing the type of distraction. Based on color and depth-map data from the Kinect, our system is composed of four sub-modules: eye behavior (gaze and blink detection), arm position (whether the right arm is up, down, to the right, or forward), head orientation, and facial expressions. Each module produces information relevant to assessing driver inattention. Their outputs are then fused using two different classification strategies: an AdaBoost classifier and a Hidden Markov Model. Evaluation is done on a driving simulator with 8 drivers of different genders, ages, and nationalities, for a total of more than 8 hours of recordings. Qualitative and quantitative results show strong and accurate detection and recognition capacity (85% accuracy for the type of distraction and 90% for distraction detection). Moreover, each module is built independently, could be reused for other types of inference such as fatigue detection, and could be implemented in real car systems.
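The abstract describes fusing the four sub-module outputs with an AdaBoost classifier (alongside an HMM). Below is a minimal sketch of that kind of fusion step, not the authors' code: it assumes each time window is summarized as a fixed-length feature vector drawn from the four sub-modules and labeled with a distraction type; all feature names, dimensions, and data here are hypothetical stand-ins.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for per-window features from the four sub-modules, e.g.
# [gaze_x, gaze_y, blink_rate, arm_position_id, head_yaw, head_pitch, expression_id]
X = rng.normal(size=(1000, 7))
# Hypothetical distraction-type labels (e.g. none, phone, drinking, reaching)
y = rng.integers(0, 4, size=1000)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# AdaBoost over the concatenated sub-module features, as one possible fusion strategy
clf = AdaBoostClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```

In this setup the alternative fusion strategy mentioned in the abstract, a Hidden Markov Model, would instead treat the sequence of per-window features as observations and model temporal transitions between distraction states.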
