Self-driving cars: the task of making a car that can drive itself without human guidance.
(Image credit: Learning a Driving Simulator)
In this paper, we propose a method that generates contrastive explanations for such data: we highlight not only aspects that are in themselves sufficient to justify the deep model's classification, but also new aspects which, if added, would change the classification.
We propose a new bottom-up method for multi-person 2D human pose estimation that is particularly well suited for urban mobility such as self-driving cars and delivery robots.
Understanding human motion behavior is critical for autonomous moving platforms (like self-driving cars and social robots) if they are to navigate human-centric environments.
The basis for most vision-based applications, such as robotics, self-driving cars, and potentially augmented and virtual reality, is a robust, continuous estimate of the position and orientation of a camera system with respect to the observed environment (scene).
First, we introduce neuron coverage for systematically measuring the parts of a DL system exercised by test inputs.
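Neuron coverage is typically defined as the fraction of neurons whose activation exceeds a threshold for at least one test input. The sketch below is a minimal, hypothetical illustration of that idea over precomputed activation matrices; the function name, data layout, and threshold are assumptions, not the paper's implementation.

```python
import numpy as np

def neuron_coverage(activations, threshold=0.0):
    """Fraction of neurons activated above `threshold` by at least one
    test input.

    activations: list of per-layer arrays, each of shape
    (num_test_inputs, num_neurons_in_layer). (Illustrative layout.)
    """
    covered = 0
    total = 0
    for layer in activations:
        # A neuron counts as covered if any test input drives its
        # activation above the threshold.
        covered += int(np.sum((layer > threshold).any(axis=0)))
        total += layer.shape[1]
    return covered / total

# Toy example: two layers, three test inputs.
rng = np.random.default_rng(0)
acts = [rng.normal(size=(3, 4)), rng.normal(size=(3, 2))]
cov = neuron_coverage(acts, threshold=0.0)
```

In practice the activations would be captured from a real network (e.g. via framework hooks) rather than sampled, and the threshold is a tunable parameter of the metric.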
Despite the relevance of semantic scene understanding for this application, there is a lack of large datasets for this task based on automotive LiDAR.