Self-driving cars: the task of making a car that can drive itself without human guidance.
(Image credit: Learning a Driving Simulator)
In this paper, we propose a method that generates contrastive explanations for such data: we highlight not only aspects that are in themselves sufficient to justify the classification by the deep model, but also new aspects which, if added, would change the classification.
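As a minimal sketch of the second half of this idea (the "contrast" side), the code below searches for the smallest non-negative addition to an input that flips a classifier's decision. A toy NumPy linear classifier stands in for the paper's deep model; the function name `pertinent_negative`, the hinge-style loss, and the penalty weight `lam` are illustrative assumptions, not the authors' algorithm.

```python
# Sketch: smallest addition to an input that changes the predicted class.
# Toy linear classifier and optimisation loop; all names are illustrative.
import numpy as np

# Toy linear classifier: predict class 1 if w @ x + b > 0.
w = np.array([1.5, -2.0])
b = -0.5

def predict(x):
    return int(w @ x + b > 0)

def pertinent_negative(x, lam=0.1, lr=0.05, steps=500):
    """Find a small, non-negative delta so that predict(x + delta) flips."""
    target = 1 - predict(x)                       # the contrast class
    sign = 1.0 if target == 1 else -1.0
    delta = np.zeros_like(x)
    for _ in range(steps):
        margin = sign * (w @ (x + delta) + b)
        # Hinge-style term pushes the point across the decision boundary;
        # the L2 term keeps the added evidence minimal.
        grad = (-sign * w if margin < 1.0 else 0.0) + 2 * lam * delta
        delta = np.maximum(delta - lr * grad, 0.0)  # allow additions only
    return delta

x = np.array([0.2, 0.4])
d = pertinent_negative(x)
print("original class:", predict(x), "-> new class:", predict(x + d))
print("minimal addition:", d.round(3))
```

Restricting `delta` to non-negative values mirrors the "aspects which if added" framing: the explanation is expressed purely as evidence introduced into the input, never as evidence removed.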
We propose a new bottom-up method for multi-person 2D human pose estimation that is particularly well suited for urban mobility applications such as self-driving cars and delivery robots.
Understanding human motion behavior is critical for autonomous moving platforms (like self-driving cars and social robots) if they are to navigate human-centric environments.
Although cameras are ubiquitous, robotic platforms typically rely on active sensors like LiDAR for direct 3D perception.
The basis for most vision-based applications, such as robotics, self-driving cars, and potentially augmented and virtual reality, is a robust, continuous estimation of the position and orientation of a camera system w.r.t. the observed environment (scene).
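One concrete instance of such an estimate is recovering a camera's rotation and translation from known 2D-3D correspondences. The sketch below does this with OpenCV's solvePnP; the intrinsics, landmark coordinates, and ground-truth pose are synthetic and purely illustrative, not tied to the system described here.

```python
# Sketch: camera pose w.r.t. a scene from 2D-3D correspondences (PnP).
import numpy as np
import cv2

# Synthetic scene: six known 3D landmarks in the world frame (metres).
object_points = np.array([
    [0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0],
    [1.0, 1.0, 0.0], [0.5, 0.5, 0.5], [0.2, 0.8, 0.3],
], dtype=np.float64)

# Assumed pinhole intrinsics (fx, fy, cx, cy) with zero distortion.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
dist = np.zeros(5)

# Ground-truth pose, used only to synthesise the 2D observations.
rvec_true = np.array([0.1, -0.2, 0.05])
tvec_true = np.array([0.3, -0.1, 3.0])
image_points, _ = cv2.projectPoints(object_points, rvec_true, tvec_true, K, dist)

# Recover the camera's rotation and translation w.r.t. the scene.
ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist)
print("success:", ok)
print("rotation vector:", rvec.ravel())
print("translation:", tvec.ravel())
```

A continuous localisation system would run an estimate like this every frame, typically with tracked features and outlier rejection (e.g. RANSAC) rather than hand-picked correspondences.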
First, we introduce neuron coverage for systematically measuring the parts of a DL system exercised by test inputs.
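A minimal sketch of how neuron coverage can be computed follows, assuming a tiny NumPy MLP in place of a real DL system; the layer sizes, activation threshold, and function names are illustrative assumptions. The metric is the fraction of neurons driven above a threshold by at least one test input.

```python
# Sketch: neuron coverage of a tiny NumPy MLP over a set of test inputs.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-layer MLP: 4 inputs -> 8 hidden units -> 3 outputs.
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 3)), np.zeros(3)

def forward(x):
    """Return the activations of each layer for an input batch x."""
    h1 = np.maximum(0, x @ W1 + b1)           # ReLU layer 1
    h2 = np.maximum(0, h1 @ W2 + b2)          # ReLU layer 2
    return [h1, h2]

def neuron_coverage(test_inputs, threshold=0.25):
    """Fraction of neurons activated above `threshold` by any test input."""
    activated = None
    for x in test_inputs:
        acts = forward(x[None, :])            # batch of one input
        flat = np.concatenate([a.ravel() > threshold for a in acts])
        activated = flat if activated is None else (activated | flat)
    return activated.mean()

tests = rng.normal(size=(20, 4))
print(f"neuron coverage: {neuron_coverage(tests):.2%}")
```

Because coverage is an OR across the whole test set, adding inputs can only hold it steady or raise it, which is what makes it usable as a systematic measure of how much of the network a test suite exercises.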