Search Results for author: Hédi Ben-Younes

Found 5 papers, 4 papers with code

OCTET: Object-aware Counterfactual Explanations

1 code implementation · CVPR 2023 · Mehdi Zemni, Mickaël Chen, Éloi Zablocki, Hédi Ben-Younes, Patrick Pérez, Matthieu Cord

We conduct a set of experiments on counterfactual explanation benchmarks for driving scenes, and we show that our method can be adapted beyond classification, e.g., to explain semantic segmentation models.

Autonomous Driving · counterfactual · +4

STEEX: Steering Counterfactual Explanations with Semantics

1 code implementation · 17 Nov 2021 · Paul Jacob, Éloi Zablocki, Hédi Ben-Younes, Mickaël Chen, Patrick Pérez, Matthieu Cord

In this work, we address the problem of producing counterfactual explanations for high-quality images and complex scenes.

counterfactual · Counterfactual Explanation

Raising context awareness in motion forecasting

1 code implementation · 16 Sep 2021 · Hédi Ben-Younes, Éloi Zablocki, Mickaël Chen, Patrick Pérez, Matthieu Cord

Learning-based trajectory prediction models have achieved great success, with the promise of leveraging contextual information in addition to motion history.

Motion Forecasting · Trajectory Prediction

Explainability of deep vision-based autonomous driving systems: Review and challenges

No code implementations · 13 Jan 2021 · Éloi Zablocki, Hédi Ben-Younes, Patrick Pérez, Matthieu Cord

The concept of explainability has several facets, and the need for it is especially strong in driving, a safety-critical application.

Autonomous Driving · Explainable artificial intelligence

Driving Behavior Explanation with Multi-level Fusion

1 code implementation · 9 Dec 2020 · Hédi Ben-Younes, Éloi Zablocki, Patrick Pérez, Matthieu Cord

In this era of active development of autonomous vehicles, it becomes crucial to provide driving systems with the capacity to explain their decisions.

Explainable artificial intelligence · Trajectory Prediction
