How Do Drivers Allocate Their Potential Attention? Driving Fixation Prediction via Convolutional Neural Networks

The traffic driving environment is a complex, dynamically changing scene in which drivers must pay close attention to salient and important targets or regions for safe driving. Modeling drivers' eye movements and attention allocation while driving can also help guide unmanned intelligent vehicles. Until now, however, few studies have modeled drivers' true fixations and attention allocation during driving. To this end, we collect an eye-tracking dataset from 28 experienced drivers viewing 16 traffic driving videos. Based on this multi-driver attention allocation dataset, we propose a convolutional-deconvolutional neural network (CDNN) to predict drivers' eye fixations. The experimental results indicate that the proposed CDNN outperforms state-of-the-art saliency models and predicts drivers' attentional locations more accurately. The CDNN not only predicts the primary fixation location but also, when present, detects secondary important regions that cannot be ignored during driving. Compared with current object detection models in autonomous and assisted driving systems, our human-like driving model does not detect every object appearing in the driving scene; instead, it provides the most relevant regions or targets, which can largely reduce interference from irrelevant scene information.
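The paper does not include code here, but the convolutional-deconvolutional (encoder-decoder) idea can be illustrated with a minimal, hypothetical numpy sketch: a strided convolution compresses a frame into a feature map, and a transposed ("deconvolution") layer upsamples it back toward input resolution as a per-pixel fixation-probability map. The kernel sizes, strides, and random weights below are illustrative assumptions, not the paper's actual architecture or trained parameters.

```python
import numpy as np

def conv2d(x, k, stride):
    """Valid 2-D convolution of a single-channel map x with kernel k."""
    kh, kw = k.shape
    oh = (x.shape[0] - kh) // stride + 1
    ow = (x.shape[1] - kw) // stride + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            patch = x[i * stride:i * stride + kh, j * stride:j * stride + kw]
            out[i, j] = np.sum(patch * k)
    return out

def deconv2d(x, k, stride):
    """Transposed convolution: scatters weighted copies of k to upsample."""
    kh, kw = k.shape
    out = np.zeros(((x.shape[0] - 1) * stride + kh,
                    (x.shape[1] - 1) * stride + kw))
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i * stride:i * stride + kh,
                j * stride:j * stride + kw] += x[i, j] * k
    return out

rng = np.random.default_rng(0)
frame = rng.random((8, 8))           # toy stand-in for a video frame
k_enc = rng.standard_normal((3, 3))  # illustrative, untrained weights
k_dec = rng.standard_normal((3, 3))

feat = np.maximum(conv2d(frame, k_enc, stride=2), 0.0)  # encoder: 8x8 -> 3x3, ReLU
logits = deconv2d(feat, k_dec, stride=2)                # decoder: 3x3 -> 7x7
saliency = 1.0 / (1.0 + np.exp(-logits))                # sigmoid -> fixation probability map
```

A real model of this kind would stack several such layers with learned multi-channel filters and train against ground-truth fixation maps; the sketch only shows how the down/up-sampling path produces a dense saliency output near input resolution.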
